**Title:** (Almost-)Quantum Bell Inequalities and Device-Independent Applications
**Authors:** Yuan Liu, Ho Yiu Chung, Ravishankar Ramanathan
**Published:** 2023-09-12 | **Link:** http://arxiv.org/abs/2309.06304v4

**arXiv abstract:** Investigations of the boundary of the quantum correlation set through the derivation of quantum Bell inequalities have gained increased attention in recent years; such inequalities are related to Tsirelson's problem and have significant applications in device-independent (DI) information processing. However, determining quantum Bell inequalities is a notoriously difficult task and only isolated examples are known. In this paper, we present families of (almost-)quantum Bell inequalities and highlight three foundational and DI applications. Firstly, quantum correlations on the non-signaling boundary are crucial in DI randomness extraction from weak sources. In the practical Bell scenario of two players with two k-outcome measurements, we derive quantum Bell inequalities that show a separation of the quantum boundary from certain portions of the no-signaling boundary of dimension up to 4k-8, extending previous results. As an immediate by-product, we give a general proof of Aumann's agreement theorem for quantum systems and the almost-quantum correlations, which implies that Aumann's agreement theorem is a reasonable physical principle, in the context of epistemics, to pick out both quantum theory and almost-quantum correlations from general no-signaling theories. Secondly, we present a family of quantum Bell inequalities in the scenario of two players with m binary measurements each, which serve to self-test the two-qubit singlet and the 2m measurements. Interestingly, this claim generalizes the result for m=2 discovered by Tsirelson-Landau-Masanes and shows an improvement over the state-of-the-art DI randomness amplification (DIRA). Lastly, we use our quantum Bell inequalities to derive the general form of the principle of no advantage in nonlocal computation, an information-theoretic principle that serves to characterize the quantum correlation set. With this, we provide the most precise characterization of the quantum boundary known so far.

# Investigations of the boundary of quantum correlations and device-independent applications
###### Abstract
The set of correlations between measurement outcomes observed by separated parties in a Bell test is of vital importance in Device-Independent (DI) information processing. However, characterising this set of quantum correlations is a hard problem, with a number of open questions. Here, we present families of quantum Bell inequalities that approximate this set in Bell scenarios with an arbitrary number of players, settings and outcomes, and study their applications to device-independent information processing. Firstly, it is known that quantum correlations on the non-signaling boundary are of crucial importance in the task of DI randomness extraction from weak sources. In the Bell scenario of two players with two \(k\)-outcome measurements, we derive inequalities that show a separation of the quantum boundary from classes of non-local faces of the non-signaling polytope of dimension \(d\leq 4k-4\), extending previous results from nonlocality distillation and the collapse of communication complexity. Secondly, in the scenario of two players with \(m\) binary measurements, we consider a non-trivial portion of the quantum boundary that generalizes the boundary for \(m=2\) discovered by Tsirelson-Landau-Masanes. We prove that all points on this generalized boundary serve to self-test the two-qubit singlet and the corresponding \(m\) measurements. In this scenario, we also derive a low-dimensional region of the quantum boundary that coincides with the boundary of the set of classical correlations.
## I Introduction
One of the most striking features of quantum mechanics is non-locality, the phenomenon of violation of Bell inequalities by separated physical systems. The correlations between local measurement outcomes on such systems show, in a fully device-independent manner, that quantum theory differs fundamentally from all classical theories that are constrained by the principle of local causality [1; 2]. Besides their foundational interest, in recent years, the quantum correlations have been shown to be a vital resource in device-independent (DI) information processing applications, such as quantum key distribution [3; 4], randomness extraction and expansion [5; 6], self-testing of quantum states and measurements [7; 8], and reduction of communication complexity [9].
The Bell inequalities delineate the boundary of the set of classical correlations, and any violation of a Bell inequality indicates that the observed distribution of conditional measurement outcomes is nonlocal. Moreover, the verification of nonlocal correlations (and of the correct execution of DI tasks built upon them) relies only on simple statistical tests of the measurement devices together with a fundamental rule of nature, viz. the no-superluminal-signaling principle of relativity. While the classification of the entire set of Bell inequalities for an arbitrary number of measurement systems, inputs and outputs is a challenge, at least a systematic method for the identification of novel Bell inequalities has been known since the work of Pitowsky [10].
On the other hand, the set of behaviors (conditional probability distributions for outcomes conditioned on the different inputs) obtainable in quantum theory, denoted Q, is known to lie in between the classical set L and the general no-signaling set NS [11]. The set Q is convex but is in general not a polytope unlike L and NS. The characterisation of the boundary of Q via the derivation of (in general, non-linear) quantum Bell inequalities has proven to be a much more challenging task [13] and only a few examples have been found so far [14; 15; 16; 17; 18; 19; 20; 12]. For fundamental reasons as well as to identify the optimal quantum correlations for different applications, it is of importance to characterize the set of quantum correlations, and understand how it fits in between the polytopes of classical and general non-signaling correlations.
Specific DI applications demand quantum correlations that exhibit particular properties. For instance, the task of randomness amplification [21; 22; 23; 24; 25; 26; 27; 28] requires the use of quantum correlations that lie on the no-signaling boundary to allow extraction of randomness from arbitrarily weak seeds. As such, the quantum correlations exhibiting pseudo-telepathy [29; 30] or demonstrating the Hardy paradox [31; 32] have found use in this task. Similarly, another important task that has gained prominence in recent years is self-testing, namely the unique identification (up to local isometries) of a quantum state and measurements, solely from the observed correlations in a Bell test. As such, this task requires the identification of quantum correlations that can be generated in such a unique manner. Finally, the study of the boundary of the quantum set is also important from a fundamental viewpoint in the problem of identifying appropriate information-theoretic principles that single out the set of quantum correlations from amongst general no-signaling ones. Of particular importance are the principle of information causality [33], macroscopic locality [34], local orthogonality [35], no advantage in non-local computation [19], and the collapse of communication complexity [36], all of which have been shown to lead to non-trivial bounds on the set of quantum correlations. The identification of non-local
no-signaling boxes that are excluded from the quantum set serves as a useful testing ground and points towards the ultimate principle picking out the quantum set. Other fundamental questions regarding the boundary of the quantum set include 2 out of the 29 open problems in quantum information listed in [37].
In this paper, we explore the boundary of the quantum set with specific regard to regions coinciding with a no-signaling or a local boundary, and non-trivial regions leading to self-testing. To do this, we expand on a class of (non-linear) inequalities defining the boundary of the Almost Quantum Set [38]. Such inequalities were used to exclude all non-local vertices of the no-signaling polytope (for an arbitrary number of parties, inputs and outputs) by one of us in [39]. Here, we explore these inequalities to exclude further non-trivial regions of the no-signaling polytope. Specifically, in the \((2,2,k)\) Bell scenario (with two players performing two measurements with \(k\) outcomes each), we derive optimal inequalities that show the exclusion of all non-local faces of the no-signaling polytope of dimension up to \(4k-4\). This extends the known region of excluded boxes from the no-signaling boundary obtained in [17], and through the procedure of non-locality distillation [16; 40] and the collapse of communication complexity [41]. Secondly, we derive a class of tight quantum Bell inequalities in the \((2,m,2)\) Bell scenario (with two players performing \(m\) binary measurements) and show their usefulness in self-testing the two-qubit singlet state. In this regard, we generalise the results regarding the self-testing of the singlet in the \((2,2,2)\) scenario obtained in [42] and the self-testing of the correlations leading to the optimal violation of the chained Bell inequality in [43]. Finally, we study the faces of the \((2,m,2)\) correlation set (excluding the local marginals), and identify low-dimensional regions in which the quantum correlation set coincides with the classical correlation polytope. In this regard, we generalise the results obtained by Linden et al. in [19].
## II Quantum Bell Inequalities
#### ii.0.1 Bell scenario and the set of quantum correlations
We label a Bell scenario as \((n,m,k)\), in which \(n\) space-like separated parties perform measurements on a shared physical system, each party has \(m\) choices of local measurement and each measurement yields \(k\) possible outcomes. Specifically, we focus on the bipartite scenario \((2,m,k)\) (experimenters Alice and Bob), in which we label the local measurements as \(x,y\in[m]\) (where \([m]=\{1,2,\ldots,m\}\)) and the possible outcomes of each measurement as \(a,b\in[k]\) (where \([k]=\{1,2,\ldots,k\}\)) respectively. Let \(P(a,b|x,y)\) denote the probability of obtaining the outcomes \(a,b\) given that Alice and Bob chose measurement settings \(x,y\). These probabilities obey non-negativity \(P(a,b|x,y)\geq 0,\forall x,y,a,b\) and are normalized \(\sum_{a,b}P(a,b|x,y)=1,\forall x,y\). A box then refers to a collection of conditional probability distributions \(\mathrm{P}:=\{P(a,b|x,y)\}_{x,y\in[m];\,a,b\in[k]}\) (we will also write a box in terms of a vector \(\mathrm{\tilde{P}}\) or \(|\mathrm{P}\rangle\)). There are three different physical models of interest, which translate into three different types of constraints on the boxes. The first set of interest consists of general no-signaling boxes. This set is defined by the requirement
Figure 1: Relationship of the quantum set Q, the classical set L and the no-signaling set NS. The classical set L and the no-signaling set NS are convex polytopes. The quantum set Q is a convex set; it may saturate no-signaling boundaries with (2) or without (3) a local deterministic vertex on the boundary, and it may also coincide with classical boundaries with (4) or without (5) a non-local no-signaling vertex on the boundary.
that the local marginal probabilities of one party are not affected by the input of the other party, i.e. the experimenters cannot signal their inputs to each other.
\[\begin{array}{ll}\sum\limits_{a}P(a,b|x,y)=\sum\limits_{a}P(a,b|x^{\prime},y) &\forall b,x,x^{\prime},y\\ \sum\limits_{b}P(a,b|x,y)=\sum\limits_{b}P(a,b|x,y^{\prime})&\forall a,x,y,y^{ \prime}\end{array} \tag{1}\]
The set of all boxes satisfying the above no-signaling constraints forms the no-signaling convex polytope \(\mathrm{NS}(2,m,k)\). The second set of interest consists of the quantum boxes. In this case, there exists a quantum state \(\rho\) in a Hilbert space of arbitrary dimension and a set of measurement operators (positive operator valued measure (POVM) elements) \(\{A_{x}^{a}\}\) and \(\{B_{y}^{b}\}\) such that the joint conditional probabilities obey Born's rule:
\[P(a,b|x,y)=\mathrm{Tr}(\rho A_{x}^{a}\otimes B_{y}^{b})\qquad\forall a,b,x,y \tag{2}\]
All such quantum boxes form the quantum convex set \(\mathrm{Q}(2,m,k)\). The third set of interest consists of the Local Causal (Classical) boxes. In this case, there exists a hidden variable \(\lambda\) with distribution \(p(\lambda)\), and corresponding local response functions \(p(a|x,\lambda),p(b|y,\lambda)\) such that
\[P(a,b|x,y)=\sum\limits_{\lambda}p(\lambda)p(a|x,\lambda)p(b|y,\lambda)\qquad \forall a,b,x,y \tag{3}\]
We denote by \(\mathrm{L}(2,m,k)\) the local convex polytope, and the extreme points of this set \(\mathrm{L}(2,m,k)\) are the local deterministic boxes.
In general, the relationship of these three sets is \(\mathrm{L}(n,m,k)\subsetneq\mathrm{Q}(n,m,k)\subsetneq\mathrm{NS}(n,m,k)\).
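As a concrete illustration of the constraints in Eq. (1), the following minimal sketch (Python with numpy; the function name and the array convention `P[a, b, x, y]` are our own choices, not from the paper) checks normalization and no-signaling for a box given as an array of conditional probabilities, using the PR box of the \((2,2,2)\) scenario as an example.

```python
import numpy as np

def is_no_signaling(P, tol=1e-9):
    """P[a, b, x, y] = P(a,b|x,y). Checks normalisation and the no-signaling constraints of Eq. (1)."""
    if not np.allclose(P.sum(axis=(0, 1)), 1.0, atol=tol):
        return False
    pB = P.sum(axis=0)   # Bob's marginals P(b|x,y): must be independent of Alice's input x
    pA = P.sum(axis=1)   # Alice's marginals P(a|x,y): must be independent of Bob's input y
    return (np.allclose(pB, pB[:, :1, :], atol=tol) and
            np.allclose(pA, pA[:, :, :1], atol=tol))

# PR box of the (2,2,2) scenario: P(a,b|x,y) = 1/2 if a XOR b = x.y, and 0 otherwise.
PR = np.zeros((2, 2, 2, 2))
for a in range(2):
    for b in range(2):
        for x in range(2):
            for y in range(2):
                if a ^ b == x * y:
                    PR[a, b, x, y] = 0.5
print(is_no_signaling(PR))   # True: the PR box is no-signaling (although, as shown below, not quantum)
```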
#### ii.1.2 Almost quantum set and the Theta body of the Orthogonality Graph
In [45; 46], Navascues, Pironio, and Acin introduced a hierarchy of semi-definite programs (SDPs) that form an outer approximation of the quantum set. In the SDPs, the requirement of tensor product structure of measurement operators of space-like separated parties is replaced with the commutation requirement of the operators, thus the conditional probabilities are expressed as
\[P(a,b|x,y)=\langle\psi|A_{x}^{a}B_{y}^{b}|\psi\rangle\qquad\forall a,b,x,y \tag{4}\]
with projection operators \(\{A_{x}^{a}\},\{B_{y}^{b}\}\) and the commutation requirement \([A_{x}^{a},B_{y}^{b}]=0,\forall a,b,x,y\). In the hierarchy, let \(\mathrm{S}_{0}:=\{\mathbb{I}\}\cup\{A_{x}^{a}\}\cup\{B_{y}^{b}\}\), and the set \(\mathrm{S}_{k},k\geq 1\) contains \(S_{k-1}\) and products of \(k\) elements of \(\mathrm{S}_{0}\). For example, \(\mathrm{S}_{1}=\{\mathbb{I}\}\cup\{A_{x}^{a}\}\cup\{B_{y}^{b}\}\), \(\mathrm{S}_{2}=\mathrm{S}_{1}\cup\{A_{x}^{a}A_{x^{\prime}}^{a^{\prime}}\}\cup\{B_{y}^{b}B_{y^{\prime}}^{b^{\prime}}\}\cup\{A_{x}^{a}B_{y}^{b}\}\), etc. The moment matrix \(\Gamma^{(k)}\), which is associated with the set \(\mathrm{S}_{k}\), corresponds to the \(k\)-th level of the hierarchy, with entries \(\Gamma^{(k)}_{i,j}=\langle\psi|s_{i}^{(k)\dagger}s_{j}^{(k)}|\psi\rangle\) where \(s_{i}^{(k)}\in\mathrm{S}_{k},\forall i\in[|\mathrm{S}_{k}|]\) and the size of \(\Gamma^{(k)}\) being \(|\mathrm{S}_{k}|\times|\mathrm{S}_{k}|\). The box \(\mathrm{P}\) is in the convex set \(\mathrm{Q}_{k}\) of level \(k\) of the hierarchy if the elements of the box \(\mathrm{P}\) are the entries of the moment matrix at level \(k\) and such a moment matrix \(\Gamma^{(k)}\) is positive semi-definite, \(\Gamma^{(k)}\geq 0\). The convex sets \(\mathrm{Q}_{k}\) of the hierarchy converge towards the quantum set \(\mathrm{Q}\) from outside, in the sense that \(\mathrm{Q}_{1}\supsetneq\mathrm{Q}_{2}\supsetneq\mathrm{Q}_{3}\supsetneq\cdots \supseteq\mathrm{Q}\). Thus, for a box \(\mathrm{P}\), the non-existence of \(\Gamma^{(k)}\geq 0\) at any level \(k\) of the hierarchy means the exclusion of the box from the quantum set. A particular level of the hierarchy between levels \(1\) and \(2\), labeled as level \(1+ab\), is of great interest. This set has been highlighted as the _Almost Quantum_ set \(\widetilde{\mathrm{Q}}\)[38], in the sense that the set \(\widetilde{\mathrm{Q}}\) satisfies almost all the reasonable information-theoretic principles that pick out quantum correlations from general no-signaling ones (with the exception of the principle of information causality, for which a proof is not known).
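For concreteness, a minimal sketch of a moment-matrix membership test at the first level \(\mathrm{Q}_{1}\) of the hierarchy is given below (the almost-quantum level \(1+ab\) would additionally include the \(A_{x}^{a}B_{y}^{b}\) products). It is written in Python with the cvxpy modelling library; the operator ordering, function name, and the choice of a real symmetric moment matrix are our own conventions for this illustration, and the expected outputs assume the installed solver certifies (in)feasibility.

```python
import numpy as np
import cvxpy as cp

def npa_level1_feasible(E, mA=(0.0, 0.0), mB=(0.0, 0.0)):
    """Level-1 test for a (2,2,2) box given by correlators E[x][y] = <A_x B_y> and marginals <A_x>, <B_y>.
    Searches for a positive-semidefinite moment matrix over {1, A_0, A_1, B_0, B_1} (+/-1 observables),
    with the unknown entries <A_0 A_1> and <B_0 B_1> left free."""
    G = cp.Variable((5, 5), symmetric=True)
    cons = [G >> 0]
    cons += [G[i, i] == 1 for i in range(5)]
    cons += [G[0, 1] == mA[0], G[0, 2] == mA[1], G[0, 3] == mB[0], G[0, 4] == mB[1]]
    for x in range(2):
        for y in range(2):
            cons += [G[1 + x, 3 + y] == E[x][y]]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

v = 0.5
noisy_pr = [[v, v], [v, -v]]     # isotropic box with CHSH value 4v = 2: inside the quantum set
pr_box   = [[1, 1], [1, -1]]     # PR box with CHSH value 4: post-quantum
print(npa_level1_feasible(noisy_pr))  # expected: True
print(npa_level1_feasible(pr_box))    # expected: False (no PSD moment matrix exists at level 1)
```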
To the Bell scenario \((2,m,k)\), there corresponds an orthogonality graph \(G_{B}\)[47; 48]. Each event \((a,b|x,y)\) in the Bell scenario \((2,m,k)\) corresponds to a distinct vertex of the graph, and two vertices are connected by an edge if the two events involve different outcomes for the same local measurement by at least one of the parties (such events are termed _locally orthogonal_). The Theta body \(\mathrm{TH}(G)\) of the graph \(G=(V,E)\) is the convex set defined as follows.
**Definition 1**.: _[_49_]_ _For any graph \(G=(V,E)\), \(\mathit{TH}(G):=\{\tilde{\mathrm{P}}=(|\langle\psi|u_{i}\rangle|^{2}:i\in V)\in\mathbb{R}_{+}^{|V|}:\||\psi\rangle\|=\||u_{i}\rangle\|=1,\{|u_{i}\rangle\}\) is an orthonormal representation of \(G\}\)._
We consider the orthogonality graph \(G_{B}\) for a Bell scenario, and define a subset of the Theta body \(\mathrm{TH}^{(c)}(G_{B})\subsetneq\mathrm{TH}(G_{B})\), in which additional clique constraints are applied to the maximum cliques \(\{c\}\) of the orthogonality graph \(G_{B}\) as:
**Definition 2**.: _[_48_]_ _For the orthogonality graph \(G_{B}\) corresponding to the Bell scenario \((n,m,k)\), define the set \(\mathit{TH}^{(c)}(G_{B}):=\left\{\tilde{\mathrm{P}}=\left(|\langle\psi|u_{i}\rangle|^{2}\right)\in\mathit{TH}(G_{B}):\forall c\in C_{n,ns},\quad\sum_{i\in c}|\langle\psi|u_{i}\rangle|^{2}=1\right\}\). Here, \(C_{n,ns}\) denotes the set of maximum cliques of the orthogonality graph \(G_{B}\)._
It has been shown that the set \(\mathrm{TH}^{(c)}(G_{B})\) is equivalent to the Almost Quantum set.
**Theorem**.: _[_50_]_ _For any Bell scenario \((n,m,k)\), the almost quantum set is equivalent to the Theta body with maximum clique constraints, i.e. \(\widetilde{\mathbb{Q}}=\mathrm{TH}^{(c)}(G_{B})\)._
In summary, for a Bell scenario \((2,m,k)\), we get the following important relationships:
\[\mathrm{TH}(G_{B})\supseteq\mathrm{TH}^{(c)}(G_{B})=\widetilde{\mathrm{Q}}=\mathrm{Q}_{1+ab}\supseteq\mathrm{Q} \tag{5}\]
Thus, the exclusion of a box \(\mathrm{P}\) from the set \(\mathrm{TH}(G_{B})\) implies that this box has no quantum realization, i.e. \(\mathrm{P}\notin\mathrm{Q}(2,m,k)\). Now, we utilise the following dual characterisation of the set \(\mathrm{TH}(G_{B})\)[51]:
\[\mathrm{TH}(G_{B})=\{|P\rangle\in\mathbb{R}^{|V|}:\langle P|M|P\rangle-\sum_{i=1}^{|V|}M_{i,i}|P\rangle_{i}\leq 0,\ \forall M\in\mathbb{M}\} \tag{6}\]
with
\[\mathbb{M}:=\{M\in\mathbb{S}^{|V|}:M_{u,v}=0\ \text{for}\ u\neq v,\,\{u,v\}\notin E,\ M\succeq 0\}. \tag{7}\]
The set of inequalities \(\langle P|M|P\rangle-\sum_{i=1}^{|V|}M_{i,i}|P\rangle_{i}\leq 0\) therefore defines a set of (almost) quantum Bell inequalities for any number of players, inputs and outputs, separating the (almost) quantum set from the post-quantum set. Choosing an appropriate \(M\) satisfying (7) allows one to identify non-trivial boundaries of the Almost Quantum set (and recovers some of the boundaries identified by principles such as macroscopic locality [52; 53] or local orthogonality [35]). In certain Bell scenarios, such as when two players perform binary measurements, this class of inequalities also allows one to identify tight boundary regions of the quantum set.
## III Excluding non-local no-signaling boxes from the quantum set
We have seen a class of inequalities that provide a bound on the quantum set. In [39], the authors used these inequalities to exclude, from the quantum set, all non-local vertices of the no-signaling polytope for any number of players, inputs and outputs. In this section, we generalise this statement and identify optimal quantum Bell inequalities to exclude specific non-local regions of the no-signaling polytope. We thereby extend known results regarding the excluded region of the no-signaling polytope, that were previously obtained using the method of nonlocality distillation (any box that can be distilled to a PR box through local operations and shared randomness must be excluded from the quantum set) [40; 16] and the collapse of communication complexity [41]. The purpose of this section is more generally to illustrate the utility of the class of inequalities in (6) in easily excluding non-local regions of the no-signaling polytope from the quantum set.
To identify the optimal \(M\) satisfying (7) that serves to exclude a non-local box \(\mathrm{P}\), we solve the following SDP:
\[\max_{M}\ \langle\mathrm{P}|M|\mathrm{P}\rangle-\sum_{i=1}^{|V|}M_{i,i}|\mathrm{P}\rangle_{i} \tag{8}\] \[s.t.\ \ M\succeq 0\] \[M_{u,v}=0\ \text{for}\ u\neq v,\,\{u,v\}\notin E\] \[M\in\mathbb{S}^{|V|}\]
where \(|\mathrm{P}\rangle\) is the vector form of the box \(\mathrm{P}\). If the solution to the SDP is positive, then the corresponding quantum Bell inequality certifies that the box \(\mathrm{P}\) is not in the quantum set. In the following subsections, for specific regions of the no-signaling boundary of dimension up to \(4k-4\) in the \((2,2,2)\) and \((2,2,k)\) Bell scenarios, we analytically present psd matrices \(\widetilde{M}\) satisfying (7) such that \(\langle\mathrm{P}|\widetilde{M}|\mathrm{P}\rangle-\sum_{i}\widetilde{M}_{i,i}|\mathrm{P}\rangle_{i}>0\), thereby excluding these regions from the quantum set.
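A minimal sketch of the SDP (8) for the \((2,2,2)\) scenario is given below, in Python with cvxpy. Since the feasible cone in (8) is invariant under rescaling of \(M\), the sketch adds a trace normalisation (our own choice, not part of (8)) so that the solver returns a finite optimum; a strictly positive value still certifies that the box lies outside \(\mathrm{TH}(G_{B})\) and hence outside the quantum set. The event ordering and function names are illustrative.

```python
import itertools
import numpy as np
import cvxpy as cp

# Events (a, b | x, y) of the (2,2,2) scenario and their local-orthogonality relation.
events = list(itertools.product(range(2), range(2), range(2), range(2)))
n = len(events)

def locally_orthogonal(e, f):
    a, b, x, y = e
    ap, bp, xp, yp = f
    return (x == xp and a != ap) or (y == yp and b != bp)

# Box to be tested: here the PR box, P(a,b|x,y) = 1/2 if a XOR b = x.y.
P = np.array([0.5 if (a ^ b) == x * y else 0.0 for (a, b, x, y) in events])

M = cp.Variable((n, n), PSD=True)
cons = []
for i, j in itertools.combinations(range(n), 2):
    if not locally_orthogonal(events[i], events[j]):
        cons += [M[i, j] == 0, M[j, i] == 0]   # support only on edges of the orthogonality graph, Eq. (7)
cons += [cp.trace(M) <= n]                     # added normalisation: the cone in (8) is scale-invariant

objective = cp.Maximize(cp.sum(cp.multiply(M, np.outer(P, P))) - cp.diag(M) @ P)
prob = cp.Problem(objective, cons)
prob.solve()
print(prob.value)   # a strictly positive optimum certifies that P is not in the quantum set
```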
### The excluded no-signaling boundary in the \((2,2,2)\) Bell scenario
**Theorem 1**.: _In the \((2,2,2)\) Bell scenario, let \(\mathrm{P}\) be a point on a face of the no-signaling polytope \(\mathrm{NS}(2,2,2)\) of dimension \(d\leq 4\), such that \(\mathrm{P}\notin\mathrm{L}(2,2,2)\), then \(\mathrm{P}\notin\mathrm{Q}(2,2,2)\)._
It is well-known that in this simplest \((2,2,2)\) Bell scenario, there are \(24\) extreme vertices of \(\mathrm{NS}(2,2,2)\): \(8\) of them are non-local, denoted as \(\mathrm{PR}\) boxes, and the other \(16\) are local deterministic boxes. The \(\mathrm{PR}\) boxes [11] are equivalent up to relabelings of the inputs and outputs. Due to the symmetries of the \((2,2,2)\) scenario, each \(\mathrm{PR}\) box neighbors \(8\) local deterministic boxes and together they form an \(8\)-dimensional simplex [54]. To understand the behavior of boxes on the boundary of the no-signaling polytope, we focus on the boxes on the faces of this simplex.
Proof.: To prove this result (Theorem 1), we introduce a useful tool that we term a _chained sequence_.
**Definition 3**.: _Chained sequence. A chained sequence is a set of elements \(C=\{p_{i}\}\) in a box that satisfies the following conditions: 1. Orthogonality: Every pair of adjacent elements \(p_{i},p_{i+1},\forall i\in\{1,2,\ldots,|C|\}\) (\(p_{|C|+1}:=p_{1}\)) correspond to locally orthogonal events. 2. Normalization: Every pair of adjacent elements \(p_{i},p_{i+1},\forall i\in\{1,2,\ldots,|C|\}\) satisfy the normalization condition \(p_{i}+p_{i+1}=1\)._
We will also need to use some variants of the sequence to construct the corresponding matrices \(M\), viz. unsaturated and composite chained sequences.
**Definition 4**.: _Unsaturated chained sequence. An \(n\)-unsaturated chained sequence is a chained sequence in which the **normalization condition** between \(n\) adjacent pairs is unsaturated._
That is, there exists \(n\) indices \(i\) such that \(p_{i}+p_{i+1}<1\).
**Definition 5**.: _Composite chained sequence. A composite chained sequence is one in which some element \(p_{i}\) is composed of more than one event, i.e. \(p_{i}=\{p_{i,1},p_{i,2},\ldots\}\) for some \(i\). If \(p_{i}\) is a composite element, the orthogonality and normalization conditions are extended to all sub-elements \(p_{i,j}\), i.e. (1) Orthogonality condition for adjacent pairs \(p_{i},p_{i+1}\): any two elements in \(\{\{p_{i,1},p_{i,2},\ldots\},p_{i+1}\}\) are locally orthogonal events. (2) Normalization condition for adjacent pairs \(p_{i},p_{i+1}\): \(\sum_{j}p_{i,j}+p_{i+1}=1\).
1. 0-dimensional faces (non-local vertices). Since the 8 PR boxes in the \((2,2,2)\) scenario are equivalent (up to relabeling the indices of inputs and outputs), we will just analyze the one in Fig. 2(a). Select a chained sequence with 5 elements on the PR box, and label the elements as \(p_{1},p_{2},\ldots,p_{5}\). Since for every neighboring pair we have \(p_{i}+p_{i+1}=1\), we can construct the matrix \(M_{0}\) based on this sequence as: \[M_{0}=\sum_{i=1}^{5}|i,i+1\rangle\langle i,i+1|=|1,2\rangle\langle 1,2|+|2,3\rangle\langle 2,3|+|3,4\rangle\langle 3,4|+|4,5\rangle\langle 4,5|+|5,1\rangle\langle 5,1|>0\] (9) where \(|i,i+1\rangle\) is a \(0/1\) vector of length 5, with support on the two entries \(i,i+1\). \(M_{0}\) is positive definite, so there exists an \(\epsilon>0\) such that \(\widetilde{M}_{0}=M_{0}-\epsilon I_{(1,1)}\geq 0\), where \(I_{(1,1)}\) is the indicator matrix of size \(5\times 5\) whose only non-zero entry, equal to 1, is at position \((1,1)\). And
\[\langle\mathrm{P}|\widetilde{M}_{0}|\mathrm{P}\rangle-\sum_{i=1}^{5}\widetilde{M}_{0(i,i)}|\mathrm{P}\rangle_{i}=-\epsilon p_{1}(p_{1}-1)>0. \tag{10}\]
Figure 2: (a) The 5 red elements in the PR box form a chained sequence; it is easy to verify that every pair of adjacent elements \(p_{i}\) and \(p_{i+1}\) satisfies the orthogonality and normalization conditions. (b) After mixing the PR box with some local deterministic boxes, the normalization condition between some adjacent pairs of elements might be unsaturated. Here we show the structures of 4 types of unsaturated chained sequences that we will use in the proof of the theorem.
Note that here the box \(|\mathrm{P}\rangle\) is a vector of length \(16\) and the \(M\) matrix in the definition of the Theta body \(TH(G)\) in this scenario is of size \(16\times 16\), but we only pick five events and construct the matrix \(\widetilde{M}_{0}\) of size \(5\times 5\). Thus, we implicitly extend the matrix \(\widetilde{M}_{0}\) with suitable \(0\) entries to obtain (10).
2. 1-dimensional faces. These are the convex combination of a PR box and one neighboring Local deterministic box. \[\mathrm{P}=c_{NS}\cdot PR+(1-c_{NS})\cdot L_{i}\] (11) where \(i\in\{1,2\ldots,8\}\) and \(0<c_{NS}<1\). The non-zero probabilities of each Local deterministic box \(L_{i}\) correspond to three non-zero and one (unique) zero probability event of the PR box, so the mixture of the PR box with the Local deterministic box \(L_{i}\) will invalidate the normalization property of one pair of adjacent elements in the chained sequence. Relabeling the sequence such that \(p_{1}+p_{5}<1\), we construct the matrix \(M_{1}\) as: \[M_{1}=\sum_{i=1}^{4}|i,i+1\rangle\langle i,i+1|=|1,2\rangle\langle 1,2|+|2,3 \rangle\langle 2,3|+|3,4\rangle\langle 3,4|+|4,5\rangle\langle 4,5|\geq 0,\] (12) where we have excluded the \(|1,5\rangle\langle 1,5|\) term owing to the fact that \(p_{1}+p_{5}<1\). We thereby construct the following matrix: \[\widetilde{M}_{1}=4\cdot M_{1}+I_{(1,5)}+I_{(5,1)}=\left(\begin{array}{ ccccc}4&4&0&0&1\\ 4&8&4&0&0\\ 0&4&8&4&0\\ 0&0&4&8&4\\ 1&0&0&4&4\end{array}\right)\] (13) which can be verified to be positive definite by applying elementary row operations to transform \(\widetilde{M}_{1}\) to an upper triangular matrix with all the diagonal entries positive, and \[\langle\mathrm{P}|\widetilde{M}_{1}|\mathrm{P}\rangle-\sum_{i=1}^{5} \widetilde{M}_{1(i,i)}|\mathrm{P}\rangle_{i}=2p_{1}\cdot p_{5}>0.\] (14) Thus all the boxes on the 1-dimensional faces of the no-signaling polytope \(\mathrm{NS}(2,2,2)\) are excluded from quantum set \(\mathrm{Q}(2,2,2)\).
3. 2-dimensional faces. These are the convex combinations of the PR box and two neighboring Local deterministic boxes. \[\mathrm{P}=c_{NS}\cdot PR+c_{1}\cdot L_{i}+c_{2}\cdot L_{j}\] (15) where \(i,j\in\{1,2\ldots,8\},i\neq j\) and \(c_{NS}>0,c_{1}>0,c_{2}>0,c_{NS}+c_{1}+c_{2}=1\). In a particular subset of cases, we can still find a 1-unsaturated sequence with the normalization condition unsaturated between only one pair \(p_{i},p_{i+1}\). This happens when the non-zero events of the two local deterministic boxes which are the zero probability events of the PR box, are in the same setting \((x,y)\). As discussed above, there must exist a 1-unsaturated sequence and we can construct a matrix as in the previous case to exclude such boxes. The remaining cases are (up to relabeling) the following. (1) \(p_{4}+p_{5}<1\) and \(p_{1}+p_{5}<1\). The matrix \(M_{2,1}\) in this case is constructed as: \[M_{2,1}=|1,2\rangle\langle 1,2|+|2,3\rangle\langle 2,3|+|3,4\rangle\langle 3,4|\geq 0,\] (16) where we have excluded the \(|1,5\rangle\langle 1,5|\) and \(|4,5\rangle\langle 4,5|\) terms owing to the fact that the normalization condition is not saturated for these pairs. And the corresponding matrix \(\widetilde{M}_{2,1}\) is then constructed as: \[\widetilde{M}_{2,1}=8M_{2,1}+c_{NS}^{2}\big{(}I_{(1,1)}+I_{(4,4)}+2I_{(5,5)} \big{)}+2c_{NS}\big{(}I_{(1,5)}+I_{(5,1)}+I_{(4,5)}+I_{(5,4)}\big{)}=\left( \begin{array}{cccc}8+c_{NS}^{2}&8&0&0&2c_{NS}\\ 8&16&8&0&0\\ 0&8&16&8&0\\ 0&0&8&8+c_{NS}^{2}&2c_{NS}\\ 2c_{NS}&0&0&2c_{NS}&2c_{NS}^{2}\end{array}\right)\] (17) \(\widetilde{M}_{2,1}\) can be verified to be positive definite for all \(0<c_{NS}<1\) by applying elementary row operations to transform the matrix to an upper triangular matrix with all the diagonal entries positive. And we see that \[\begin{split}&\langle\mathrm{P}|\widetilde{M}_{2,1}|\mathrm{P} \rangle-\sum_{i=1}^{5}\widetilde{M}_{2,1(i,i)}|\mathrm{P}\rangle_{i}=c_{NS}^{ 2}\big{(}p_{1}^{2}-p_{1}+p_{4}^{2}-p_{4}+2p_{5}^{2}-2p_{5}\big{)}+4c_{NS} \big{(}p_{1}p_{5}+p_{4}p_{5}\big{)}\\ &=c_{NS}^{2}\big{(}p_{1}^{2}+p_{4}^{2}+2p_{5}^{2}\big{)}+c_{NS}^{2} \bigg{(}p_{1}\left(\frac{2}{c_{NS}}p_{5}-1\right)+p_{5}\left(\frac{2}{c_{NS}} p_{1}-1\right)+p_{4}\left(\frac{2}{c_{NS}}p_{5}-1\right)+p_{5}\left(\frac{2}{c_{NS}} p_{4}-1\right)\bigg{)}.\end{split}\] (18)
Since \(p_{i}>\frac{c_{NS}}{2},\forall i\in\{1,\ldots,5\}\), we see that (18) is positive.
(2) \(p_{3}+p_{4}<1\) and \(p_{5}+p_{1}<1\). The matrix \(M_{2,2}\) in this case is constructed as:
\[M_{2,2}=|1,2\rangle\langle 1,2|+|2,3\rangle\langle 2,3|+|4,5\rangle\langle 4,5| \geq 0, \tag{19}\]
where as before we have excluded the terms corresponding to the unsaturated normalization conditions. And the corresponding matrix \(\widetilde{M}_{2,2}\) is:
\[\widetilde{M}_{2,2}=8M_{2,2}+c_{NS}^{2}(I_{(1,1)}+I_{(3,3)}+I_{(4,4)}+I_{(5,5 )})+2c_{NS}(I_{1,4}+I_{4,1}+I_{3,5}+I_{5,3})=\left(\begin{array}{cccc}8+c_{NS }^{2}&8&0&2c_{NS}&0\\ 8&16&8&0&0\\ 0&8&8+c_{NS}^{2}&0&2c_{NS}\\ 2c_{NS}&0&0&8+c_{NS}^{2}&8\\ 0&0&2c_{NS}&8&8+c_{NS}^{2}\end{array}\right) \tag{20}\]
\(\widetilde{M}_{2,2}\) can be verified to be positive definite for all \(0<c_{NS}<1\) by applying elementary row operations to transform the matrix to an upper triangular matrix with all the diagonal entries positive. And
\[\begin{split}&\langle\mathrm{P}|\widetilde{M}_{2,2}|\mathrm{P} \rangle-\sum_{i=1}^{5}\widetilde{M}_{2,2(i,j)}|\mathrm{P}\rangle_{i}=c_{NS}^ {2}(p_{1}^{2}-p_{1}+p_{3}^{2}-p_{3}+p_{4}^{2}-p_{4}+p_{5}^{2}-p_{5})+4c_{NS}( p_{1}p_{4}+p_{3}p_{5})\\ &=c_{NS}^{2}\big{(}p_{1}^{2}+p_{3}^{2}+p_{4}^{2}+p_{5}^{2}\big{)}+c_{NS}^ {2}\left(p_{1}\left(\frac{2}{c_{NS}}p_{4}-1\right)+p_{4}\left(\frac{2}{c_{NS}} p_{1}-1\right)+p_{3}\left(\frac{2}{c_{NS}}p_{5}-1\right)+p_{5}\left(\frac{2}{c_{NS}} p_{3}-1\right)\right)\end{split} \tag{21}\]
Since \(p_{i}>\frac{c_{NS}}{2},\forall i\in\{1,\ldots,5\}\), we see that (21) is positive.
4. 3-dimensional faces. These are convex combinations of a PR box and three neighboring local deterministic boxes. \[\mathrm{P}=c_{NS}\cdot PR+c_{1}\cdot L_{i}+c_{2}\cdot L_{j}+c_{3}\cdot L_{k}\] (22) where \(i,j,k\in\{1,2,\ldots,8\}\) are distinct and \(c_{NS},c_{1},c_{2},c_{3}\) are positive with \(c_{NS}+c_{1}+c_{2}+c_{3}=1\).
Apart from the cases which are handled by the previous analysis, there is one further case to consider. This happens when the non-zero probability events of the three local deterministic boxes which are zero probability events in the PR box come from at most three different settings \((x,y)\). Let us denote by \((\widetilde{x},\widetilde{y})\) the setting in which the zero events of the PR box are not touched by the three local deterministic boxes. Moreover, at least one of the four zero probability events of the PR box in the settings \((\widetilde{x}+1,\widetilde{y})\), \((\widetilde{x},\widetilde{y}+1)\) is not touched by the local deterministic boxes. Up to relabeling, in this case we have \(p_{3}+p_{4}<1,p_{4}+p_{5}<1,p_{5}+p_{1}<1\). The matrix \(M_{3}\) in this case is constructed as:
\[M_{3}=|1,2\rangle\langle 1,2|+|2,3\rangle\langle 2,3|\geq 0 \tag{23}\]
And the corresponding matrix \(\widetilde{M}\) can be constructed as:
\[\begin{split}\widetilde{M}_{3}&=8\cdot M_{3}+c_{NS}\big{(}I_{(1,1)}+I_{(3,3)}\big{)}+\big{(}2c_{NS}+c_{NS}^{2}\big{)}\big{(}I_{(4,4)}+I_{(5,5)}\big{)}+2c_{NS}\big{(}I_{(1,5)}+I_{(5,1)}+I_{(3,4)}+I_{(4,3)}+I_{(4,5)}+I_{(5,4)}\big{)}\\ &=\left(\begin{array}{ccccc}8+c_{NS}&8&0&0&2c_{NS}\\ 8&16&8&0&0\\ 0&8&8+c_{NS}&2c_{NS}&0\\ 0&0&2c_{NS}&2c_{NS}+c_{NS}^{2}&2c_{NS}\\ 2c_{NS}&0&0&2c_{NS}&2c_{NS}+c_{NS}^{2}\end{array}\right)\end{split} \tag{24}\]
\(\widetilde{M}_{3}\) can be verified to be positive definite for all \(0<c_{NS}<1\) by applying elementary row operations to transform the matrix to an upper triangular matrix with all the diagonal entries positive. And
\[\begin{split}\langle\mathrm{P}|\widetilde{M}_{3}|\mathrm{P} \rangle-\sum_{i=1}^{5}\widetilde{M}_{3(i,i)}|\mathrm{P}\rangle_{i}&=c_{NS}(p_ {1}^{2}-p_{1}+p_{3}^{2}-p_{3}+2p_{4}^{2}-2p_{4}+2p_{5}^{2}-2p_{5})+c_{NS}^{2} (p_{4}^{2}-p_{4}+p_{5}^{2}-p_{5})\\ &+4c_{NS}\big{(}p_{1}p_{5}+p_{3}p_{4}+p_{4}p_{5})\\ &=2c_{NS}\big{(}p_{1}+p_{4}+p_{5}\big{)}(p_{1}+p_{4}+p_{5}-1)+c_{NS}^{2} (p_{4}^{2}+p_{5}^{2})-c_{NS}^{2}(p_{4}+p_{5}).\end{split} \tag{25}\]
Note that \(p_{1}=p_{3}\) and \(p_{1}+p_{4}+p_{5}=\frac{3}{2}c_{NS}+c_{1}+c_{2}+c_{3}=1+\frac{1}{2}c_{NS}\), so that (25) is positive.
5. 4-dimensional faces. These are convex combinations of the PR box and four neighboring local deterministic boxes. We find that there are only two types of boxes on the 4-dimensional faces (up to relabeling) for which one cannot find an unsaturated sequence of the kinds discussed so far.
(1) Case 1.
\[\begin{array}{c|c|c|c}&B_{0}&B_{1}\\ &0&1&0&1\\ \hline\hline A_{0}&0&\times&\times&\times\\ &1&0&\times&0&\times\\ \hline A_{1}&0&\times&0&\times\\ &1&\times&\times&0\\ \end{array} \tag{26}\]
In this case, we choose the following six events from this box and label them as: \(p_{1}:=p(11|A_{0}B_{0}),p_{2,1}:=p(00|A_{0}B_{0}),p_{2,2}:=p(01|A_{0}B_{0}),p_ {3}:=p(11|A_{0}B_{1}),p_{4}:=p(10|A_{1}B_{1}),p_{5}:=p(00|A_{1}B_{0}).\) Under this labeling, we see that these six events form a 2-unsaturated sequence where \(p_{2}\) is composed of \(p_{2,1},p_{2,2}\). Thus this box doesn't have quantum realizations.
(2) Case 2.
\[\begin{array}{c|c|c|c}&B_{0}&B_{1}\\ &0&1&0&1\\ \hline\hline A_{0}&0&\times&\times&\times\\ &1&0&\times&0&\times\\ \hline A_{1}&0&\times&\times&\times\\ &1&0&\times&0\\ \end{array} \tag{27}\]
In this case, we choose the six events from this box and label them as: \(p_{1}:=p(11|A_{0}B_{0}),p_{2,1}:=p(00|A_{0}B_{0}),p_{2,2}:=p(01|A_{0}B_{0}),p_ {3}:=p(11|A_{0}B_{1}),p_{4}:=p(10|A_{1}B_{1}),p_{5}:=p(00|A_{1}B_{0}).\) Under this labeling, we see that these six events form a 3-unsaturated sequence where \(p_{2}\) is composed of \(p_{2,1},p_{2,2}\). Thus this box doesn't have quantum realizations. In summary, all the boxes on the 4-dimensional faces of the no-signaling polytope \(\mathrm{NS}(2,2,2)\) are excluded from the quantum set \(\mathrm{Q}(2,2,2)\).
As we have seen, all non-local boxes on the faces of the no-signaling polytope \(\mathrm{NS}(2,2,2)\) up to dimension four are excluded from the quantum set. The optimal quantum point for the well-known Hardy paradox shows that the quantum set reaches a five-dimensional non-local boundary of the non-signaling set, so that the theorem is optimal.
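The \(5\times 5\) matrices used in the proof above can also be checked numerically. The following minimal sketch (Python with numpy) verifies that \(\widetilde{M}_{0}\) of Eqs. (9)-(10) and \(\widetilde{M}_{1}\) of Eq. (13) are positive (semi)definite and that the corresponding Bell functionals are strictly positive on the chained-sequence probabilities of the PR box and of a box on a one-dimensional face; the values \(\epsilon=0.1\) and \(c_{NS}=0.6\) are illustrative choices, not taken from the paper.

```python
import numpy as np

n = 5
# M_0 of Eq. (9): five cyclically adjacent pairs of the chained sequence on the PR box.
M0 = np.zeros((n, n))
for i in range(n):
    v = np.zeros(n)
    v[i] = v[(i + 1) % n] = 1.0
    M0 += np.outer(v, v)
Mt0 = M0.copy()
Mt0[0, 0] -= 0.1                               # M~_0 = M_0 - eps * I_(1,1) with eps = 0.1

p_pr = np.full(n, 0.5)                         # chained-sequence probabilities of the PR box
val0 = p_pr @ Mt0 @ p_pr - np.diag(Mt0) @ p_pr
print(np.linalg.eigvalsh(Mt0).min() >= 0, np.isclose(val0, 0.1 / 4))   # PSD, and value = eps/4 > 0

# M~_1 of Eq. (13); the four saturated pairs of the chained sequence force
# p = (t, 1-t, t, 1-t, t) with t = c_NS/2, so that p_1 + p_5 = 2t < 1.
Mt1 = np.array([[4, 4, 0, 0, 1],
                [4, 8, 4, 0, 0],
                [0, 4, 8, 4, 0],
                [0, 0, 4, 8, 4],
                [1, 0, 0, 4, 4]], dtype=float)
t = 0.3                                        # i.e. c_NS = 0.6
p1d = np.array([t, 1 - t, t, 1 - t, t])
val1 = p1d @ Mt1 @ p1d - np.diag(Mt1) @ p1d
print(np.linalg.eigvalsh(Mt1).min() > 0, np.isclose(val1, 2 * t * t))  # Eq. (14): value = 2 p_1 p_5 > 0
```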
### Exclusion of non-local non-signaling boxes in \((2,2,k)\) Bell scenarios for \(k\geq 2\)
In this section, we extend Theorem 1 to the more general Bell scenarios \((2,2,k),k\geq 2\), and show how to rule out non-local no-signaling regions in this more general case.
**Theorem 2**.: _In the \((2,2,k)\) Bell scenario with \(k\geq 2\), let \(\mathrm{P}\) be a point on a face of the no-signaling polytope \(\mathrm{NS}(2,2,k)\) of dimension \(d\leq 4k-4\), such that \(\mathrm{P}\notin\mathrm{L}(2,2,k)\), then \(\mathrm{P}\notin\mathrm{Q}(2,2,k)\)._
Before proving Theorem 2, we recall some preliminary knowledge regarding the structure of the no-signaling polytope \(\mathrm{NS}(2,2,k)\).
**Lemma 1**.: _[_54_]_ _The non-local vertices of \(\mathrm{NS}(2,2,k)\) for two inputs \(x,y\in\{0,1\}\) and outputs \(a\in\{0,\ldots,k-1\}\) and \(b\in\{0,\ldots,k-1\}\) are equivalent under local relabelling to_
\[p(a,b|x,y)=\begin{cases}1/d:&(b-a)\bmod d=x\cdot y\\ &a,b\in\{0,\ldots,d-1\}\\ 0:&otherwise\end{cases} \tag{28}\]
_for each \(d\in\{2,\ldots,k\}\)._
In our proof for each \(k\), we ignore the non-local extremal points for which \(d\neq k\), because these vertices are already handled by the \((2,2,k^{\prime})\) scenarios with \(k^{\prime}<k\). In other words, in the \((2,2,k)\) Bell scenario, we only consider the non-local vertex in the above lemma with \(d=k\), and denote it by \(PR^{(k)}\). \(PR^{(k)}\) is equivalent to the box shown in Fig. 3 up to relabeling, and we will only focus on this non-local vertex in the following proof.
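The vertex family of Lemma 1 is easy to construct explicitly; the following minimal sketch (Python with numpy; the function name and array convention are our own) builds \(PR^{(k)}\) from Eq. (28) for \(k=3\) and checks that it is normalised, no-signaling, and has exactly \(k\) non-zero entries of value \(1/k\) in each setting pair, as depicted in Fig. 3.

```python
import numpy as np

def pr_box_k(k, d=None):
    """Non-local vertex of NS(2,2,k) from Eq. (28): p(a,b|x,y) = 1/d iff (b - a) mod d = x*y."""
    d = k if d is None else d
    P = np.zeros((k, k, 2, 2))
    for a in range(d):
        for b in range(d):
            for x in range(2):
                for y in range(2):
                    if (b - a) % d == x * y:
                        P[a, b, x, y] = 1.0 / d
    return P

P = pr_box_k(3)                                   # PR^(3) in the (2,2,3) scenario
assert np.allclose(P.sum(axis=(0, 1)), 1.0)       # normalisation
assert np.allclose(P.sum(axis=0), P.sum(axis=0)[:, :1, :])   # Bob's marginals independent of x
assert np.allclose(P.sum(axis=1), P.sum(axis=1)[:, :, :1])   # Alice's marginals independent of y
print((P > 0).sum(axis=(0, 1)))                   # k non-zero entries of value 1/k per setting pair
```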
**Lemma 2**.: _In the \((2,2,k)\) Bell scenario with \(k\geq 2\), the box \(PR^{(k)}\) is adjacent to \(4k\) local deterministic boxes \(L^{(k)}\) and they form a \(4k\)-dimensional simplex._
Proof.: By [39], the no-signaling graphs corresponding to \(L^{(k)}\) and \(PR^{(k)}\) are both connected graphs. We denote the no-signaling graphs corresponding to \(L^{k}\) and \(PR^{k}\) to be \(G_{l}\) and \(G_{nl}\) respectively. If \(L^{k}\) is a neighbour of \(PR^{k}\), then the intersection of their no-signaling graphs is a path consisting of \(3\) vertices and \(2\) edges, simply because the local deterministic box satisfies three out of the four winning constraints of the unique game defined by \(PR^{k}\). Thus given \(G_{l}\), we can define a unique \(2\)-edge path in \(G_{nl}\). On the other hand, given any \(2\)-edge path in \(G_{nl}\), we can construct a unique local box. Thus there is a one-to-one correspondence between the local neighbours of \(PR^{(k)}\) and \(2\)-edge paths in \(G_{nl}\). So that the number of adjacent local boxes is the same as the number of \(2\)-edge paths of \(G_{nl}\), which is the same as the number of vertices in \(G_{nl}\), which is \(4k\). And since the graph \(G_{nl}\) is a cycle of length at least \(8\), a point that is a uniform mixture of two local boxes \(L^{(k)}\) (two \(2\)-edge paths in \(G_{nl}\)) cannot be obtained as a convex combination involving any other local box \(L^{(k)}\) (any other \(2\)-edge path) on this face. So that every pair of local boxes on this face is adjacent to each other, so that the box \(PR^{k}\) and the \(4k\) local deterministic boxes \(L^{(k)}\) form a \(4k\)-dimensional simplex.
We now proceed to the proof of Theorem 2.
Proof.: To begin with, we partition the box \(\{p(a,b|x,y)\}\) into sub-blocks which follow the structure depicted in Fig. 3. By doing so, we ensure that all the non-zero entries of the \(PR^{(k)}\) fall within the sub-blocks of size \((k-1)\times(k-1)\) and \(1\times 1\). This partition enables us to relabel the inputs and outputs of any boxes lying on the faces of the no-signaling polytope \(\operatorname{NS}(2,2,k)\) of dimension \(d=4k-(8-i)\), \(\forall i\in\{1,2\ldots,8\}\), in such a way that the non-zero events of the \(PR^{(k)}\) box are in the \((k-1)\times(k-1)\) and \(1\times 1\) blocks, and at most \(i\) non-zero events lie outside the \((k-1)\times(k-1)\) and \(1\times 1\) blocks. As before, we construct a composite chained sequence comprising five elements \(p_{1},\ldots,p_{5}\), as shown in Fig. 3. Within each \((k-1)\times(k-1)\) block, we treat all the events as sub-elements and view the entire block as a single element in the composite chained sequence. In this case, we have naturally \(p_{1}+p_{5}<1\) as seen from the Fig. 3.
Figure 3: A non-local extreme vertex \(PR^{(k)}\) of the no-signaling polytope in the \((2,2,k)\) Bell scenario. The probability of each labeled event in this box is \(\frac{1}{k}\) and the others are \(0\). We divide the box (see the black blocks) such that there is a block of size \((k-1)\times(k-1)\) and a block of size \(1\times 1\) in each setting pair. One can see that all the non-zero entries of \(PR^{(k)}\) are in the blocks of size \((k-1)\times(k-1)\) and \(1\times 1\). The \(5\) red dashed blocks form a composite chained sequence with \(5\) elements, in which the elements \(p_{2},p_{4}\) are the composite ones with \((k-1)^{2}\) sub-elements, and the normalization condition between \(p_{1},p_{5}\) is "naturally unsaturated" when \(k>2\).
1. \(d\leq 4k-7\)-dimensional faces. By construction, we see that for any box lying on the faces of the no-signaling polytope \(\mathrm{NS}(2,2,k)\) of dimension \(d\leq 4k-7\), at most one non-zero probability is located outside the \((k-1)\times(k-1)\) and \(1\times 1\) blocks. Up to relabeling, in this case we can construct the matrix \(\widetilde{M}_{1}^{k}\) as: \[\widetilde{M}_{1}^{(k)}=4\cdot M_{1}^{(k)}+I_{(1,5)}+I_{(5,1)}=\left(\begin{array} []{cccc}4&4_{1\times(k-1)^{2}}&0&0&1\\ 4_{(k-1)^{2}\times 1}&8_{(k-1)^{2}\times(k-1)^{2}}&4_{(k-1)^{2}\times 1}&0&0\\ 0&4_{1\times(k-1)^{2}}&8&4_{1\times(k-1)^{2}}&0\\ 0&0&4_{1(k-1)^{2}\times 1}&8_{(k-1)^{2}\times(k-1)^{2}}&4_{(k-1)^{2}\times 1}\\ 1&0&0&4_{1\times(k-1)^{2}}&4\end{array}\right)\] (29) Note that \(4_{1\times(k-1)^{2}}\) refers to a \(1\times(k-1)^{2}\) submatrix with all entries equal to \(4\). We thus obtain the following inequality: \[\langle\mathrm{P}|\widetilde{M}_{1}^{(k)}|\mathrm{P}\rangle-\sum_{i=1}^{5} \widetilde{M}_{1(i,i)}^{(k)}|\mathrm{P}\rangle_{i}=2p_{1}\cdot p_{5}>0.\] (30)
2. \(d=4k-6\)-dimensional faces. Given that \(p_{5}+p_{1}<1\), we have two cases: (1) \(p_{4}+p_{5}<1\). Following the construction of the matrix (17), we can construct \(\widetilde{M}_{2,1}^{(k)}\) as \[\widetilde{M}_{2,1}^{(k)}= f(k)\cdot M_{2,1}^{(k)}+c_{NS}^{2}(I_{(1,1)}+I_{(4,4)}+2I_{(5,5)}) +k\cdot c_{NS}(I_{(1,5)}+I_{(5,1)}+I_{(4,5)}+I_{(5,4)})\] \[= \left(\begin{array}{cccc}f(k)+c_{NS}^{2}&f(k)_{1\times(k-1)^{2 }}&0&0&k\cdot c_{NS}\\ f(k)_{(k-1)^{2}\times 1}&2f(k)_{(k-1)^{2}\times(k-1)^{2}}&f(k)_{(k-1)^{2}\times 1}&0&0 \\ 0&f(k)_{1\times(k-1)^{2}}&2f(k)&f(k)_{1\times(k-1)^{2}}&0\\ 0&0&f(k)_{(k-1)^{2}\times 1}&\left(f(k)+c_{NS}^{2}\right)_{(k-1)^{2}\times(k-1)^{2 }}&(k\cdot c_{NS})_{(k-1)^{2}\times 1}\\ k\cdot c_{NS}&0&0&(k\cdot c_{NS})_{1\times(k-1)^{2}}&2c_{NS}^{2}\end{array}\right)\] (31)
\(\widetilde{M}_{2,1}\) can be verified to be positive definite for all \(0<c_{NS}<1\) with \(f(k)=k^{3}\). And we have
\[\langle\mathrm{P}|\widetilde{M}_{2,1}^{(d)}|\mathrm{P}\rangle- \sum_{i=1}^{5}\widetilde{M}_{2,1(i,i)}^{(d)}|\mathrm{P}\rangle_{i}=c_{NS}^{2} \left(p_{1}^{2}-p_{1}+p_{4}^{2}-p_{4}+2p_{5}^{2}-2p_{5}\right)+2kc_{NS}\left(p _{1}p_{5}+p_{4}p_{5}\right)\] \[=c_{NS}^{2}\left(p_{1}^{2}+p_{4}^{2}+2p_{5}^{2}\right)+c_{NS}^{2} \left(p_{1}\left(\frac{k}{c_{NS}}p_{5}-1\right)+p_{5}\left(\frac{k}{c_{NS}}p _{1}-1\right)\right) \tag{32}\] \[+c_{NS}^{2}\left(p_{4}\left(\frac{k}{c_{NS}}p_{5}-1\right)+p_{5} \left(\frac{k}{c_{NS}}p_{4}-1\right)\right)\]
where \(p_{4}=\left(\sum_{i=1}^{(k-1)^{2}}p_{4,i}\right)\). Since the \(p_{1},p_{5}\) are the non-zero events of \(PR^{(k)}\), their values are larger than \(\frac{c_{NS}}{k}\). There are \(k-1\) events in each \((k-1)\times(k-1)\) block that have this value. Therefore we have \(p_{4}\geq\frac{k-1}{k}c_{NS}\) and the expression in Eq.(32) is positive.
(2) \(p_{3}+p_{4}<1\). Following the construction of the matrix (20), we can construct \(\widetilde{M}_{2,2}^{(k)}\) as
\[\widetilde{M}_{2,2}^{(k)}= f(k)\cdot M_{2,2}^{(k)}+c_{NS}^{2}(I_{(1,1)}+I_{(3,3)}+I_{(4,4)}+I_{(5,5)})+k\cdot c_{NS}(I_{1,4}+I_{4,1}+I_{3,5}+I_{3,3})\] \[=\left(\begin{array}{cccc}f(k)+c_{NS}^{2}&f(k)_{1\times(k-1)^{2 }}&0&kc_{NS}&0\\ f(k)_{(k-1)^{2}\times 1}&2f(k)_{(k-1)^{2}\times(k-1)^{2}}&f(k)_{(k-1)^{2}\times 1}&0&0 \\ 0&f(k)_{1\times(k-1)^{2}}&f(k)+c_{NS}^{2}&0_{1\times(k-1)^{2}}&kc_{NS}\\ kc_{NS}&0&0_{(k-1)^{2}\times 1}&\left(f(k)+c_{NS}^{2}\right)_{(k-1)^{2}\times(k-1)^{2}}&f(k) _{(k-1)^{2}\times 1}\\ 0&0&kc_{NS}&f(k)_{1\times(k-1)^{2}}&f(k)+c_{NS}^{2}\end{array}\right) \tag{33}\]
\(\widetilde{M}_{2,2}\) can be verified to be positive definite for all \(0<c_{NS}<1\) with \(f(k)=k^{3}\). And we have
\[\begin{split}&\langle{\rm P}|\widetilde{M}_{2,2}^{(k)}|{\rm P} \rangle-\sum_{i=1}^{5}\widetilde{M}_{2,2(i,i)}^{(k)}|{\rm P}\rangle_{i}\\ &=c_{NS}^{2}\left(p_{1}^{2}-p_{1}+p_{3}^{2}-p_{3}+p_{4}^{2}-p_{4 }+p_{5}^{2}-p_{5}\right)+2kc_{NS}\left(p_{1}p_{4}+p_{3}p_{5}\right)\\ &=c_{NS}^{2}\left(p_{1}^{2}+p_{3}^{2}+p_{4}^{2}+p_{5}^{2}\right) +c_{NS}^{2}\left(p_{3}\left(\frac{k}{c_{NS}}p_{5}-1\right)+p_{5}\left(\frac{k} {c_{NS}}p_{3}-1\right)\right)\\ &+c_{NS}^{2}\left(p_{1}\left(\frac{k}{c_{NS}}p_{4}-1\right)+p_{4 }\left(\frac{k}{c_{NS}}p_{1}-1\right)\right)\end{split} \tag{34}\]
where \(p_{4}=\left(\sum_{i=1}^{(k-1)^{2}}p_{4,i}\right)\). Since the \(p_{1},p_{3},p_{5}\) are the non-zero events of the PR box \(PR^{(k)}\), their values are larger than \(\frac{c_{NS}}{k}\). And there are \(k-1\) events in each \((k-1)\times(k-1)\) block that have this value. Therefore, we have \(p_{4}\geq\frac{k-1}{k}c_{NS}\) and the expression in Eq.(34) is positive.
In summary, all the boxes on the \(4k-6\)-dimensional faces of the no-signaling polytope \(\text{NS}(2,2,k)\) have no quantum realizations.
1. \(d=4k-5\)-dimensional faces. There are two cases to consider, similar to the cases considered in the \((2,2,2)\) scenario. (1) In the first case we have that in addition to \(p_{1}+p_{5}<1\), also \(p_{3}+p_{4}<1\) and \(p_{4}+p_{5}<1\). The matrix \(M_{3,1}^{(k)}\) in this case is constructed as: \[M_{3,1}^{(k)}=|1,2\rangle\langle 1,2|+|2,3\rangle\langle 2,3|\succeq 0.\] (35) And the corresponding matrix \(\widetilde{M}_{3,1}^{(k)}\) is then constructed as: \[\begin{split}\widetilde{M}_{3,1}^{(k)}&=f(k) \cdot M_{3,1}^{(k)}+\frac{k}{2}c_{NS}\big{(}I_{(1,1)}+I_{(3,3)}\big{)}+(kc_{NS }+c_{NS}^{2})\big{(}I_{(4,4)}+I_{(5,5)}\big{)}+kc_{NS}\big{(}I_{(1,5)}+I_{(5,1 )}+I_{(3,4)}+I_{(4,3)}+I_{(4,3)}+I_{(4,5)}+I_{(5,4)}\big{)}\\ &=\left(\begin{array}{cccc}f(k)+\frac{k}{2}c_{NS}&f(k)_{1\times (k-1)^{2}}&0&0&kc_{NS}\\ f(k)_{(k-1)^{2}\times 1}&2f(k)_{(k-1)^{2}\times(k-1)^{2}}&f(k)_{(k-1)^{2}\times 1}&0&0\\ 0&f(k)_{1\times(k-1)^{2}}&f(k)+\frac{k}{2}c_{NS}&(kc_{NS})_{1\times(k-1)^{2}} &0\\ 0&0&(kc_{NS})_{(k-1)^{2}\times 1}&(kc_{NS}+c_{NS}^{2})_{(k-1)^{2}\times(k-1)^{2}}&(kc_{NS})_{(k -1)^{2}\times 1}\\ kc_{NS}&0&0&(kc_{NS})_{1\times(k-1)^{2}}&kc_{NS}+c_{NS}^{2}\end{array}\right) \end{split}\] (36) \(\widetilde{M}_{3,1}^{(k)}\) can be verified to be positive definite for all \(0<c_{NS}<1\) with \(f(k)=k^{3}\) by applying elementary row operations to transform the matrix to an upper triangular matrix with all the diagonal entries positive. And we have \[\begin{split}\langle{\rm P}|\widetilde{M}_{3,1}^{(k)}|{\rm P} \rangle-\sum_{i=1}^{5}\widetilde{M}_{3,1(i,j)}^{(k)}|{\rm P}\rangle_{i}&= \frac{k}{2}c_{NS}\left(p_{1}^{2}-p_{1}+p_{3}^{2}-p_{3}+2p_{4}^{2}-2p_{4}+2p_{5 }^{2}-2p_{5}\right)\\ &+c_{NS}^{2}\left(p_{4}^{2}-p_{4}+p_{5}^{2}-p_{5}\right)+2kc_{NS} \left(p_{1}p_{5}+p_{3}p_{4}+p_{4}p_{5}\right)\\ &=kc_{NS}\left(p_{1}+p_{4}+p_{5}\right)\left(p_{1}+p_{4}+p_{5}-1 \right)+c_{NS}^{2}\left(p_{4}^{2}+p_{5}^{2}\right)-c_{NS}^{2}\left(p_{4}+p_{5} \right)\end{split}\] (37) Note that \(p_{4}=\left(\sum_{i=1}^{(k-1)^{2}}p_{4,i}\right)\geq\frac{k-1}{k}c_{NS}\), \(p_{1}=p_{3}\) and \(p_{1}+p_{4}+p_{5}=1+\frac{1}{k}c_{NS}\), thus (37) is positive. (2) In the second case, we have that in addition to \(p_{1}+p_{5}<1\), also \(p_{1}+p_{2}<1\) and \(p_{4}+p_{5}<1\). The matrix \(M_{3,2}^{(k)}\) in this case is constructed as: \[M_{3,2}^{(k)}=|2,3\rangle\langle 2,3|+|3,4\rangle\langle 3,4|\succeq 0.\] (38)
And the corresponding matrix \(\widehat{M}_{3,2}^{(k)}\) is then constructed as:
\[\widehat{M}_{3,2}^{(k)} =f(k)\cdot M_{3,2}^{(k)}+\frac{k}{2}c_{NS}\big{(}I_{(2,2)}+I_{(4,4)} \big{)}+(kc_{NS}+c_{NS}^{2})\big{(}I_{(5,5)}+I_{(1,1)}\big{)}+kc_{NS}\big{(}I_{ (2,1)}+I_{(1,2)}+I_{(4,5)}+I_{(5,4)}+I_{(5,1)}+I_{(1,5)}\big{)} \tag{39}\] \[=\left(\begin{array}{cccc}kc_{NS}+c_{NS}^{2}&(kc_{NS})_{1\times( k-1)^{2}}&0&0&kc_{NS}\\ (kc_{NS})_{(k-1)^{2}\times 1}&\big{(}f(k)+\frac{k}{2}c_{NS}\big{)}_{(k-1)^{2} \times(k-1)^{2}}&f(k)_{(k-1)^{2}\times 1}&0&0\\ 0&f(k)_{1\times(k-1)^{2}}&2f(k)&f(k)_{1\times(k-1)^{2}}&0\\ 0&0&f(k)_{(k-1)^{2}\times 1}&\big{(}f(k)+\frac{k}{2}c_{NS}\big{)}_{(k-1)^{2} \times(k-1)^{2}}&(kc_{NS})_{(k-1)^{2}\times 1}\\ kc_{NS}&0&0&(kc_{NS})_{1\times(k-1)^{2}}&kc_{NS}+c_{NS}^{2}\end{array}\right)\]
\(\widehat{M}_{3,2}^{(k)}\) can be verified to be positive definite for all \(0<c_{NS}<1\) with \(f(k)=k^{3}\) by applying elementary row operations to transform the matrix to an upper triangular matrix with all the diagonal entries positive. And we have
\[\begin{split}\big{(}\mathrm{P}|\widehat{M}_{3,2}^{(k)}|\mathrm{P }\big{)}-\sum_{i=1}^{5}\widehat{M}_{3,2(i,i)}^{(k)}|\mathrm{P}\rangle_{i}& =\frac{k}{2}c_{NS}\big{(}p_{2}^{2}-p_{2}+p_{4}^{2}-p_{4}+2p_{5}^{2 }-2p_{5}+2p_{1}^{2}-2p_{1}\big{)}\\ &+c_{NS}^{2}\big{(}p_{5}^{2}-p_{5}+p_{1}^{2}-p_{1}\big{)}+2kc_{NS} \big{(}p_{2}p_{1}+p_{4}p_{5}+p_{5}p_{1}\big{)}\\ &=kc_{NS}\big{(}p_{4}+p_{5}+p_{1}\big{)}\left(p_{4}+p_{5}+p_{1}-1 \right)+c_{NS}^{2}\big{(}p_{5}^{2}+p_{1}^{2}\big{)}-c_{NS}^{2}\big{(}p_{5}+p_{ 1}\big{)}\end{split} \tag{40}\]
Again \(p_{2}=\left(\sum_{i=1}^{(k-1)^{2}}p_{2,i}\right)\geq\frac{k-1}{k}c_{NS},p_{4} =\left(\sum_{i=1}^{(k-1)^{2}}p_{4,i}\right)\geq\frac{k-1}{k}c_{NS}\), \(p_{2}=p_{4}\) and \(p_{1}+p_{4}+p_{5}=1+\frac{1}{k}c_{NS}\), thus (40) is positive. In summary, all the boxes on the \(4k-5\)-dimensional faces of the no-signaling polytope \(\mathrm{NS}(2,2,k)\) are not in the quantum set \(\mathrm{Q}(2,2,k)\)
4. \(d=4k-4\)-dimensional faces. The proof for the boxes on the \(d=4k-4\)-dimensional faces of no-signaling polytope \(\mathrm{NS}(2,2,k)\) follows analogously to the one previously presented for the \(4\)-dimensional face of the no-signaling polytope \(\mathrm{NS}(2,2,2)\) in the \((2,2,2)\) scenario.
To summarize, we have demonstrated that any non-local boxes located on the faces of the no-signaling polytope \(\mathrm{NS}(2,2,k)\), \(\forall k\geq 2\) with a dimension of \(d\leq 4k-4\) are not in the quantum set \(\mathrm{Q}(2,2,k)\).
## IV Quantum Bell inequalities and self-testing
In the previous sections, we proved that there are no quantum realizations for non-local boxes on the faces of \(\mathrm{NS}(2,2,k)\) of dimension \(d\leq 4k-4\). As stated earlier, the well-known Hardy paradox shows that quantum theory allows for a realization of a point on a five-dimensional face of the no-signaling set \(\mathrm{NS}(2,2,2)\). In [28], we extended this point to a whole region on the five-dimensional face that admits quantum realization (in an effort to find an optimal point for DI randomness extraction). Furthermore, we showed that these points allow one to self-test all pure two-qubit entangled states except the maximally entangled state.
In this section, we present a class of tight quantum Bell inequalities in the \((2,m,2)\) scenario where each player performs \(m\) binary measurements. Let us denote by \(A_{i}\) and \(B_{i}\left(i=1,\ldots,m\right)\) the binary observables (outcomes \(\pm 1\)) of Alice and Bob respectively. From the Tsirelson characterization of the set of quantum correlations in this scenario and by considering the positive semidefinite completion problem for the set of correlators in this cycle scenario, we obtain one boundary of the set of quantum correlators \(\langle A_{x}B_{y}\rangle\) as follows.
**Proposition 1**.: _In the \((2,m,2)\) scenario, consider the \(2m\) operators \(\{A_{1},\ldots,A_{m},B_{1},\ldots,B_{m}\}\) with binary outcomes \(\pm 1\), assumed to fulfill \([A_{x},B_{y}]=0\), and an unknown state \(|\psi\rangle\). Define the unit vectors \(\tilde{A}_{x}=A_{x}|\psi\rangle\) and \(\tilde{B}_{y}=B_{y}|\psi\rangle\), and the observed correlations \(E_{x,y}\equiv\langle\psi|A_{x}B_{y}|\psi\rangle=\tilde{A}_{x}^{\dagger}\tilde{B}_{y}=\cos\alpha_{x,y}\), where \(\alpha_{x,y}\geq 0\) is the angle between \(\tilde{A}_{x}\) and \(\tilde{B}_{y}\). The set of correlations \(E_{x,y}\) achievable in quantum theory (up to relabeling the indices of operators) has the boundary:_
\[\sum_{i=1}^{m-1}\left(\alpha_{i,i}+\alpha_{i+1,i}\right)=\alpha_{1,m}-\alpha_{m,m} \tag{41}\]
This statement generalises the well-known boundary of the quantum correlation set in the \((2,2,2)\) scenario characterised by Tsirelson-Landau-Masanes as [55; 56; 57]:
\[\sum_{(x,y)\neq(i,j)}\arcsin\left(\langle A_{x}B_{y}\rangle\right)-\arcsin \left(\langle A_{i}B_{j}\rangle\right)=\xi\pi, \tag{42}\]
where \(i,j\in\{1,2\}\) and \(\xi=\pm 1\). One can also see that the corresponding points on the \((2,m,2)\) boundary optimally violate a weighted Braunstein-Caves chained inequality expression of the form
\[I_{ch}^{m}\coloneqq\sum_{i=1}^{m-1}\left(c_{i,i}\langle A_{i}B_{i}\rangle+c_{ i+1,i}\langle A_{i+1}B_{i}\rangle\right)+c_{m,m}\langle A_{m}B_{m}\rangle-c_{1,m} \langle A_{1}B_{m}\rangle \tag{43}\]
with appropriate weights \(c_{i,j}\). Here, we show that points on this boundary serve to self-test the singlet state (along with suitable measurements). This generalises the characterisation of singlet self-testing boundary points in the \((2,2,2)\) scenario obtained in [42] and the self-testing of the chain inequality proven in [43] (where a robust result was derived).
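The geometric content of the boundary (41) can be checked directly: whenever the \(2m\) unit vectors lie in a single plane in the interleaved order \(\bar{A}_{1},\bar{B}_{1},\bar{A}_{2},\bar{B}_{2},\ldots,\bar{A}_{m},\bar{B}_{m}\) (see Fig. 5), the angles \(\alpha_{x,y}\) saturate Eq. (41), and for \(m=2\) this reduces to the Tsirelson-Landau-Masanes condition (42). A minimal numerical sketch (Python with numpy; the value \(m=4\) and the random angles are illustrative assumptions) is:

```python
import numpy as np

m = 4                                            # illustrative number of settings per party
rng = np.random.default_rng(7)
theta = np.sort(rng.uniform(0.0, np.pi, 2 * m))  # interleaved planar angles A_1 < B_1 < ... < A_m < B_m
thA, thB = theta[0::2], theta[1::2]
alpha = np.abs(thA[:, None] - thB[None, :])      # alpha[x, y] = angle between A~_{x+1} and B~_{y+1}
E = np.cos(alpha)                                # the corresponding boundary correlators E_{x,y}

lhs = sum(alpha[i, i] + alpha[i + 1, i] for i in range(m - 1))
rhs = alpha[0, m - 1] - alpha[m - 1, m - 1]
print(np.isclose(lhs, rhs))                      # Eq. (41) is saturated for every such planar configuration
```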
It is easy to verify that the singlet is self-tested by the circuit in Fig. 4 if the control operators are unitary (if they are not unitary, one can perform a regularization on the non-unitary operators) and satisfy [44]:
\[\begin{split} Z_{A}|\psi\rangle&=Z_{B}|\psi\rangle \\ X_{A}|\psi\rangle&=X_{B}|\psi\rangle\\ X_{A}Z_{A}|\psi\rangle&=-Z_{A}X_{A}|\psi\rangle\\ X_{B}Z_{B}|\psi\rangle&=-Z_{B}X_{B}|\psi\rangle \end{split} \tag{44}\]
We will see how to construct these control operators \(Z_{A/B},X_{A/B}\) from the binary measurements \(A_{x},B_{y}\) in the weighted chain inequality.
Let us denote by \(\theta_{i,j}(>0)\) the angle between \(\bar{A}_{i}\) and \(\bar{A}_{j}\). Given the scalar products of \(\bar{A}_{i},\bar{A}_{i+1}\) with \(\bar{B}_{i}\), the angle \(\theta_{i,i+1}\,\forall i=1,\ldots,m-1\) must satisfy \(|\alpha_{i,i}-\alpha_{i+1,i}|\leq\theta_{i,i+1}\leq\alpha_{i,i}+\alpha_{i+1,i}\). The lower and upper bounds are reached when \(\bar{B}_{i}\) lies in the plane spanned by \(\bar{A}_{i},\bar{A}_{i+1}\).
\[\begin{split}|\alpha_{1,1}-\alpha_{2,1}|&\leq \theta_{1,2}\leq\alpha_{1,1}+\alpha_{2,1}\\ |\alpha_{2,2}-\alpha_{3,2}|&\leq\theta_{2,3}\leq \alpha_{2,2}+\alpha_{3,2}\\ |\alpha_{3,3}-\alpha_{4,3}|&\leq\theta_{3,4}\leq \alpha_{3,3}+\alpha_{4,3}\end{split} \tag{45}\]
And the angle \(\theta_{m,1}\) satisfies:
\[|\alpha_{m,m}-\alpha_{1,m}|\leq\theta_{m,1}\leq\alpha_{m,m}+\alpha_{1,m} \tag{46}\]
Together with the boundary condition Eq. (41), this implies that all vectors \(\bar{A}_{i},\bar{B}_{i}\,\forall i=1,\ldots,m\) lie in the same plane, that the inequalities in Eq. (45) all reach their upper bounds, and that Eq. (46) reaches its lower bound. See Fig. 5. In particular, we have
Figure 4: Local isometry \(\Phi\) used to self-test the singlet. \(H\) is the Hadamard gate.
\[\begin{split}\bar{B}_{1}&=\frac{\sin\left(\alpha_{1,1} \right)\bar{A}_{2}+\sin\left(\alpha_{2,1}\right)\bar{A}_{1}}{\sin\left(\alpha_ {1,1}+\alpha_{2,1}\right)}\\ \bar{A}_{2}&=\frac{\sin\left(\alpha_{2,2}\right) \bar{B}_{1}+\sin\left(\alpha_{2,1}\right)\bar{B}_{2}}{\sin\left(\alpha_{2,1}+ \alpha_{2,2}\right)}.\end{split} \tag{47}\]
Using \(\left[A_{x},B_{y}\right]=0\), and \(A_{x}^{2}=B_{y}^{2}=I\), we obtain:
\[\begin{split}\left(A_{1}A_{2}+A_{2}A_{1}\right)\left|\psi \right\rangle&=2\cos\left(\alpha_{1,1}+\alpha_{2,1}\right)\left| \psi\right\rangle\\ \left(B_{1}B_{2}+B_{2}B_{1}\right)\left|\psi\right\rangle& =2\cos\left(\alpha_{2,2}+\alpha_{2,1}\right)\left|\psi\right\rangle \end{split} \tag{48}\]
Now, we can construct the control operators:
\[\begin{split} Z_{A}&=A_{1}\\ X_{A}&=\frac{A_{2}-\cos\left(\alpha_{1,1}+\alpha_{ 2,1}\right)A_{1}}{\sin\left(\alpha_{1,1}+\alpha_{2,1}\right)}\\ Z_{B}&=\frac{\sin\left(\alpha_{1,2}\right)B_{1}- \sin\left(\alpha_{1,1}\right)B_{2}}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right) }\\ X_{B}&=\frac{\cos\left(\alpha_{1,1}\right)B_{2}- \cos\left(\alpha_{1,2}\right)B_{1}}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right) }\end{split} \tag{49}\]
We obtain that:
\[\begin{split}\left(Z_{A}X_{A}+X_{A}Z_{A}\right)\left|\psi\right\rangle &=\frac{A_{1}A_{2}+A_{2}A_{1}-2\cos\left(\alpha_{1,1}+ \alpha_{2,1}\right)}{\sin\left(\alpha_{1,1}+\alpha_{2,1}\right)}\left|\psi \right\rangle=0\\ \left(Z_{B}X_{B}+X_{B}Z_{B}\right)\left|\psi\right\rangle& =\frac{\sin\left(\alpha_{1,2}+\alpha_{1,1}\right)\left(B_{1}B_{2}+B_{2}B_{1} \right)-\left(\sin\left(2\alpha_{1,1}\right)+\sin\left(2\alpha_{1,2}\right) \right)}{\sin^{2}\left(\alpha_{1,2}-\alpha_{1,1}\right)}\left|\psi\right\rangle \\ &=\frac{\sin\left(\alpha_{1,2}+\alpha_{1,1}\right)\left(B_{1}B_{2} +B_{2}B_{1}-2\cos\left(\alpha_{1,2}-\alpha_{1,1}\right)\right)}{\sin^{2}\left( \alpha_{1,2}-\alpha_{1,1}\right)}\left|\psi\right\rangle\\ &=\frac{2\sin\left(\alpha_{1,2}+\alpha_{1,1}\right)\left(\cos \left(\alpha_{2,2}+\alpha_{2,1}\right)-\cos\left(\alpha_{1,2}-\alpha_{1,1} \right)\right)}{\sin^{2}\left(\alpha_{1,2}-\alpha_{1,1}\right)}\left|\psi \right\rangle=0\end{split} \tag{50}\]
\[\begin{split}\langle\psi|Z_{A}Z_{B}|\psi\rangle&=\langle\psi| \frac{\sin\left(\alpha_{1,2}\right)A_{1}B_{1}-\sin\left(\alpha_{1,1}\right)A_{ 1}B_{2}}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)}|\psi\rangle\\ &=\langle\psi|\left(\frac{\sin\left(\alpha_{1,2}\right)A_{1}B_{1 }}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)}-\frac{\sin\left(\alpha_{1,1} \right)\sin\left(\alpha_{2,1}+\alpha_{2,2}\right)A_{1}A_{2}+\sin\left(\alpha_ {1,1}\right)\sin\left(\alpha_{2,2}\right)A_{1}B_{1}}{\sin\left(\alpha_{1,2}- \alpha_{1,1}\right)\sin\left(\alpha_{2,1}\right)}\right)|\psi\rangle\\ &=\langle\psi|\left(\frac{\sin\left(\alpha_{1,2}\right)\sin\left( \alpha_{2,1}\right)-\sin\left(\alpha_{1,1}\right)\sin\left(\alpha_{2,2}\right) }{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)\sin\left(\alpha_{2,1}\right)}A_ {1}B_{1}-\frac{\sin\left(\alpha_{1,1}\right)\sin\left(\alpha_{2,1}+\alpha_{2, 2}\right)}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)\sin\left(\alpha_{2,1} \right)}A_{1}A_{2}\right)|\psi\rangle\\ &=\langle\psi|\left(\frac{\sin\left(\alpha_{1,2}\right)\sin\left( \alpha_{2,1}\right)-\sin\left(\alpha_{1,1}\right)\sin\left(\alpha_{2,2}\right) }{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)\sin\left(\alpha_{2,1}\right)\sin \left(\alpha_{1,1}+\alpha_{2,1}\right)}\sin\left(\alpha_{1,1}\right)A_{1}A_{ 2}\right.\right.\right.\\ &\left.\left.+\frac{\sin\left(\alpha_{1,2}\right)\sin\left(\alpha _{2,1}\right)-\sin\left(\alpha_{1,1}\right)\sin\left(\alpha_{2,2}\right)}{ \sin\left(\alpha_{1,2}-\alpha_{1,1}\right)\sin\left(\alpha_{1,1}+\alpha_{2,1} \right)}-\frac{\sin\left(\alpha_{1,1}\right)\sin\left(\alpha_{2,1}+\alpha_{2, 2}\right)}{\sin\left(\alpha_{1,2}-\alpha_{1,1}\right)\sin\left(\alpha_{2,1} \right)}A_{1}A_{2}\right)|\psi\rangle\\ &=\cdots\cdots=1\\ \langle\psi|X_{A}X_{B}|\psi\rangle=1\text{(similarly as above equation)}\end{split} \tag{51}\]
which imply the self-testing conditions in Eq. (44); the control operators are unitary (if they are not, one can perform a regularization of the non-unitary operators). Any pair of vectors \(\tilde{A}_{i},\tilde{B}_{j}\) can be written as
\[\begin{split}\tilde{A}_{i}&=a_{z,i}Z_{A}|\psi\rangle+a_{x,i}X_{A}|\psi\rangle\\ \tilde{B}_{j}&=b_{z,j}Z_{B}|\psi\rangle+b_{x,j}X_{B}|\psi\rangle\end{split} \tag{52}\]
where \(a_{z,i}=\tilde{A}_{i}\cdot Z_{A}|\psi\rangle,a_{x,i}=\tilde{A}_{i}\cdot X_{A}| \psi\rangle,b_{z,j}=\tilde{B}_{j}\cdot Z_{B}|\psi\rangle,b_{x,j}=\tilde{B}_{j} \cdot X_{B}|\psi\rangle\). Thus when the isometry is applied to any pair of operators, we get
\[\begin{split}\Phi\left(A_{i}B_{j}|\psi\rangle\left|00\right\rangle\right)&=a_{z,i}b_{z,j}\Phi\left(Z_{A}Z_{B}|\psi\rangle\left|00\right\rangle\right)+a_{z,i}b_{x,j}\Phi\left(Z_{A}X_{B}|\psi\rangle\left|00\right\rangle\right)\\ &+a_{x,i}b_{z,j}\Phi\left(X_{A}Z_{B}|\psi\rangle\left|00\right\rangle\right)+a_{x,i}b_{x,j}\Phi\left(X_{A}X_{B}|\psi\rangle\left|00\right\rangle\right)\\ &=\left(a_{z,i}b_{z,j}+a_{x,i}b_{x,j}\right)\Phi\left(|\psi\rangle\left|00\right\rangle\right)+\left(a_{x,i}b_{z,j}-a_{z,i}b_{x,j}\right)\Phi\left(X_{A}Z_{B}|\psi\rangle\left|00\right\rangle\right)\end{split} \tag{53}\]
And the first term in Eq. (53) is
\[\begin{split}\Phi\left(|\psi\rangle|00\rangle\right)&=\frac{1}{4}\Big(\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)|\psi\rangle|00\rangle+X_{A}\left(\mathbb{I}-Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)|\psi\rangle|10\rangle\\ &+X_{B}\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}-Z_{B}\right)|\psi\rangle|01\rangle+X_{A}X_{B}\left(\mathbb{I}-Z_{A}\right)\left(\mathbb{I}-Z_{B}\right)|\psi\rangle|11\rangle\Big)\\ &=\frac{1}{2}\Big(\left(\mathbb{I}+Z_{A}\right)|\psi\rangle|00\rangle+X_{A}X_{B}\left(\mathbb{I}-Z_{A}\right)|\psi\rangle|11\rangle+0\cdot|\psi\rangle|10\rangle+0\cdot|\psi\rangle|01\rangle\Big)\\ &=\frac{1}{\sqrt{2}}\left(\mathbb{I}+Z_{A}\right)|\psi\rangle\,\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right)=|junk\rangle|\phi^{+}\rangle\end{split} \tag{54}\]
The second term in Eq. (53) is
\[\begin{split}\Phi\left(X_{A}Z_{B}|\psi\rangle|00\rangle\right)&=\frac{1}{4}\Big(\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)X_{A}Z_{B}|\psi\rangle|00\rangle+X_{A}\left(\mathbb{I}-Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)X_{A}Z_{B}|\psi\rangle|10\rangle\\ &+X_{B}\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}-Z_{B}\right)X_{A}Z_{B}|\psi\rangle|01\rangle+X_{A}X_{B}\left(\mathbb{I}-Z_{A}\right)\left(\mathbb{I}-Z_{B}\right)X_{A}Z_{B}|\psi\rangle|11\rangle\Big)\\ &=\frac{1}{4}\Big(X_{A}\left(\mathbb{I}-Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)Z_{B}|\psi\rangle|00\rangle+\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)Z_{B}|\psi\rangle|10\rangle\\ &+\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)X_{A}X_{B}Z_{B}|\psi\rangle|01\rangle+X_{B}\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}-Z_{B}\right)Z_{B}|\psi\rangle|11\rangle\Big)\\ &=\frac{1}{4}\Big(\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)|\psi\rangle|10\rangle+\left(\mathbb{I}+Z_{A}\right)\left(\mathbb{I}+Z_{B}\right)X_{A}X_{B}Z_{B}|\psi\rangle|01\rangle+0\cdot|\psi\rangle|00\rangle+0\cdot|\psi\rangle|11\rangle\Big)\\ &=\frac{1}{\sqrt{2}}\left(\mathbb{I}+Z_{A}\right)|\psi\rangle\,\frac{1}{\sqrt{2}}\left(|10\rangle-|01\rangle\right)=|junk\rangle\sigma_{x,A}\sigma_{z,B}|\phi^{+}\rangle\end{split} \tag{55}\]
So Eq. (53) can be written as:
\[\begin{split}\Phi\left(A_{i}B_{j}|\psi\rangle\left|00\right\rangle\right)&=\left(a_{z,i}b_{z,j}+a_{x,i}b_{x,j}\right)\Phi\left(|\psi\rangle\left|00\right\rangle\right)+\left(a_{x,i}b_{z,j}-a_{z,i}b_{x,j}\right)\Phi\left(X_{A}Z_{B}|\psi\rangle\left|00\right\rangle\right)\\ &=\left(a_{z,i}b_{z,j}+a_{x,i}b_{x,j}\right)|junk\rangle|\phi^{+}\rangle+\left(a_{x,i}b_{z,j}-a_{z,i}b_{x,j}\right)|junk\rangle\sigma_{x,A}\sigma_{z,B}|\phi^{+}\rangle\\ &=|junk\rangle\left(a_{z,i}\sigma_{z,A}+a_{x,i}\sigma_{x,A}\right)\left(b_{z,j}\sigma_{z,B}+b_{x,j}\sigma_{x,B}\right)|\phi^{+}\rangle\\ &=|junk\rangle\widetilde{A}_{i}\widetilde{B}_{j}|\phi^{+}\rangle\end{split} \tag{56}\]
where \(\widetilde{A}_{i}=a_{z,i}\sigma_{z,A}+a_{x,i}\sigma_{x,A}\) and \(\widetilde{B}_{j}=b_{z,j}\sigma_{z,B}+b_{x,j}\sigma_{x,B}\).
Note that \(\sigma_{x,A}\sigma_{x,B}|\phi^{+}\rangle=\sigma_{z,A}\sigma_{z,B}|\phi^{+} \rangle=|\phi^{+}\rangle\). Similarly for the local operators \(\Phi\left(A_{i}|\psi\rangle\left|00\right\rangle\right)=|junk\rangle\widetilde{A }_{i}|\phi^{+}\rangle\) and \(\Phi\left(B_{j}|\psi\rangle\left|00\right\rangle\right)=|junk\rangle\widetilde{B }_{j}|\phi^{+}\rangle\).
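The algebra above can be sanity-checked numerically for an ideal realization. The following sketch (our own illustration, assuming NumPy; the angles are arbitrary choices, not values from the text) prepares \(|\phi^{+}\rangle\), which is local-unitarily equivalent to the singlet, takes coplanar measurements at relative angles \(\alpha_{1,1},\alpha_{2,1},\alpha_{2,2}\), builds the control operators of Eq. (49), and verifies the relations in Eqs. (44) and (51).

```python
import numpy as np

I2 = np.eye(2)
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)

def obs(theta):                     # qubit observable at angle theta in the z-x plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def on_A(M):                        # operator acting on Alice's qubit
    return np.kron(M, I2)

def on_B(M):                        # operator acting on Bob's qubit
    return np.kron(I2, M)

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |phi+>, local-unitarily equivalent to the singlet

# illustrative angles: alpha_{1,1}, alpha_{2,1}, alpha_{2,2}; boundary (coplanar) configuration
a11, a21, a22 = 0.4, 0.3, 0.5
a12 = a11 + a21 + a22                        # angle between A_1 and B_2

A1, A2 = on_A(obs(0.0)), on_A(obs(a11 + a21))   # Alice's observables
B1, B2 = on_B(obs(a11)), on_B(obs(a12))         # Bob's observables

# control operators of Eq. (49)
ZA = A1
XA = (A2 - np.cos(a11 + a21) * A1) / np.sin(a11 + a21)
ZB = (np.sin(a12) * B1 - np.sin(a11) * B2) / np.sin(a12 - a11)
XB = (np.cos(a11) * B2 - np.cos(a12) * B1) / np.sin(a12 - a11)

# self-testing conditions, Eq. (44)
assert np.allclose(ZA @ phi, ZB @ phi)
assert np.allclose(XA @ phi, XB @ phi)
assert np.allclose((XA @ ZA + ZA @ XA) @ phi, 0)
assert np.allclose((XB @ ZB + ZB @ XB) @ phi, 0)
# correlations, Eq. (51)
assert np.isclose(phi @ ZA @ ZB @ phi, 1.0)
assert np.isclose(phi @ XA @ XB @ phi, 1.0)
print("control operators satisfy the self-testing relations")
```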
## V Common Faces of the Quantum and Classical Correlation Sets
In the previous sections, we have studied the boundary of the quantum set with specific regard to its relation with the no-signaling boundary and to self-testing applications. In this section, we explore the region of the boundary of the quantum set that also serves as the boundary of the classical set. As such, these regions serve as testing grounds and pointers towards information-theoretic means of constraining general non-signaling correlations and picking out quantum theory from amongst non-signaling theories.
Specifically, we focus on the set of correlations alone, i.e., the correlators \(\langle A_{x}B_{y}\rangle\) for binary observables \(A_{x},B_{y}\) excluding the local marginals \(\langle A_{x}\rangle,\langle B_{y}\rangle\). Such correlation Bell inequalities are also termed XOR games, and we present systematic constructions of nontrivial XOR games with \(m_{a},m_{b}\) inputs for Alice and Bob respectively such that the quantum value of the game equals the classical value.
In this bipartite \(m_{a}\times m_{b}\)-inputs, \(2\times 2\)-outputs Bell scenario, we can represent each extreme point of the local set as a column vector with \(m_{a}+m_{b}\) rows and all entries equal to either plus or minus one. Using the well-known Tsirelson characterisation [55, 59] of the set of quantum correlations in this scenario as an elliptope, and leveraging results on the facial structures of the set of correlation matrices [60], we obtain the following statement.
**Theorem 3** ([60]).: _If there exists a subset \(R=\{v_{1},...,v_{|R|}\}\subseteq L\), with cardinality \(r=|R|\leq\log_{2}(m_{a}+m_{b})\), such that for any subset \(I\subseteq\{1,...,|R|\}\) we have_
\[\odot_{j\in\{1,...,r\}}v_{I,j}\neq 0\]
_where_
\[v_{I,j}=\begin{cases}I_{m_{a}+m_{b}}+v_{j}&\text{if }j\in I\\ I_{m_{a}+m_{b}}-v_{j}&\text{if }j\notin I\end{cases}\]
_where \(I_{m_{a}+m_{b}}\) is a column vector with \(m_{a}+m_{b}\) rows and all entries equal to one and \(\odot\) is the Hadamard product. Then \(R\) spans a face of the local set which the quantum set saturates. In such a case, the subset \(R=\{v_{1},...,v_{|R|}\}\) is said to be in general position._
We utilise Theorem 3 to construct a class of low-dimensional faces of the set of classical correlations that also serves as the boundary of the quantum set. In this respect, we recover the class of games discovered by Linden et al. in [19] for the special case that \(m_{a}=m_{b}=2^{k}\) for \(k\geq 2\).
**Lemma 3**.: _Let \(m_{a},m_{b}\in\mathbb{N}\) and \(\mathbb{N}\ni r\leq\log_{2}(m_{a}+m_{b})\). Let \(\{v_{1},...,v_{r}\}\) be a set of vectors with \(m_{a}+m_{b}\) rows in general position. Then, up to swapping rows and re-indexing, the first \(2^{r}\) rows are unique. In particular, the unique choice for the first \(2^{r}\) rows is the lexicographical ordering. Moreover, if \(m_{a}=m_{b}=2^{k}\), \(r=\log_{2}(m_{a}+m_{b})=k+1\) and \(v_{1},...,v_{r}\) are in general position, then \(v_{1},...,v_{r}\) are unique up to swapping rows and re-indexing._
Proof.: Denote by \((v_{j})_{i}\) the \(i^{th}\) row of the vector \(v_{j}\). Fix \(I\subseteq\{1,...,r\}\). Since \(\odot_{j}v_{I,j}\neq 0\), there exists a row index \(i\) such that \(\prod_{j}(v_{I,j})_{i}\neq 0\). Thus \((v_{I,j})_{i}\neq 0\) for all \(j\in\{1,...,r\}\). Therefore, we have \((v_{j})_{i}=1\) if \(j\in I\) and \((v_{j})_{i}=-1\) if \(j\notin I\). Thus for every choice of \(I\), there is a unique row setting that ensures \(\odot_{j}v_{I,j}\neq 0\). Since we have \(r\) vectors and we need \(2^{r}\) different row settings, up to swapping rows and re-indexing the \(v_{i}\)'s, the first \(2^{r}\) rows are in lexicographical ordering. In other words, up to swapping rows and re-indexing the \(v_{i}\)'s, the first \(2^{r}\) rows are unique and are ordered lexicographically.
We will consider the case where \(m_{a}=m_{b}=2^{k}\) and \(r=k+1=\log_{2}(m_{a}+m_{b})\), where \(k\geq 2\). By Lemma 3, we have a unique set of vectors \(\{v_{1},...,v_{r}\}\), which is in general position. For convenience, we first concatenate \(v_{1},...,v_{r}\) into one matrix, denoted by \(G_{2^{k}}\). Notice that \(G_{2^{k}}\) is a \(2^{k+1}\) by \(r\) matrix. We define the \(2^{k}\) by \(2^{k}\) game matrix \(\mathcal{G}_{2^{k}}\) as follows: the \((i,j)\) position of \(\mathcal{G}_{2^{k}}\) corresponds to Alice's \(i^{th}\) input and Bob's \(j^{th}\) input. In other words, the \((i,j)\) position of \(\mathcal{G}_{2^{k}}\) corresponds to the \(i^{th}\) row and the \((2^{k}+j)^{th}\) row of \(G_{2^{k}}\) respectively. Let \(r_{i}=(x_{1},...,x_{r})\) and \(r_{j}=(y_{1},...,y_{r})\) be the \(i^{th}\) and the \((2^{k}+j)^{th}\) row of \(G_{2^{k}}\) respectively. We define the \((i,j)\) entry of \(\mathcal{G}_{2^{k}}\) by applying an operator \(\star\) to the \(i^{th}\) and the \((2^{k}+j)^{th}\) rows of \(G_{2^{k}}\), defined as follows
\[(\mathcal{G}_{2^{k}})_{i,j}=r_{i}\star r_{j}=\begin{cases}1&\text{if }x_{1}y_{1}+...+x_{r}y_{r}>0\\ -1&\text{if }x_{1}y_{1}+...+x_{r}y_{r}<0\\ 0&\text{if }x_{1}y_{1}+...+x_{r}y_{r}=0\end{cases}\]
**Example 1**.: _If \(k=2\) (thus \(m_{a}=m_{b}=2^{2}=4\) and \(r=3\)), we have_
\[v_{1}=\begin{pmatrix}1\\ 1\\ 1\\ 1\\ -1\\ -1\\ -1\\ -1\end{pmatrix}\quad v_{2}=\begin{pmatrix}1\\ 1\\ -1\\ -1\\ 1\\ 1\\ -1\\ -1\end{pmatrix}\quad v_{3}=\begin{pmatrix}1\\ -1\\ 1\\ -1\\ 1\\ -1\\ 1\\ -1\end{pmatrix}\quad G_{2^{2}}=\begin{pmatrix}1&1&1\\ 1&1&-1\\ 1&-1&1\\ 1&-1&-1\\ -1&1&1\\ -1&1&-1\\ -1&-1&1\\ -1&-1&-1\end{pmatrix}\]
_for example, we have \((\mathcal{G}_{2^{2}})_{1,1}=(1,1,1)\star(-1,1,1)=1\) and \((\mathcal{G}_{2^{2}})_{1,2}=(1,1,1)\star(-1,1,-1)=-1\)._
If \(k=3\) (thus \(m_{a}=m_{b}=2^{3}=8\) and \(r=4\)), we have
\[\begin{split}v_{1}&=(1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1)^{\top}\\ v_{2}&=(1,1,1,1,-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1)^{\top}\\ v_{3}&=(1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1)^{\top}\\ v_{4}&=(1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1)^{\top}\end{split}\qquad G_{2^{3}}=\begin{pmatrix}1&1&1&1\\ 1&1&1&-1\\ 1&1&-1&1\\ 1&1&-1&-1\\ 1&-1&1&1\\ 1&-1&1&-1\\ 1&-1&-1&1\\ 1&-1&-1&-1\\ -1&1&1&1\\ -1&1&1&-1\\ -1&1&-1&1\\ -1&1&-1&-1\\ -1&-1&1&1\\ -1&-1&1&-1\\ -1&-1&-1&1\\ -1&-1&-1&-1\end{pmatrix}\]
and
\[\mathcal{G}_{2^{3}}=\begin{pmatrix}1&0&0&-1&0&-1&-1&-1\\ 0&1&-1&0&-1&0&-1&-1\\ 0&-1&1&0&-1&-1&0&-1\\ -1&0&0&1&-1&-1&-1&0\\ 0&-1&-1&-1&1&0&0&-1\\ -1&0&-1&-1&0&1&-1&0\\ -1&-1&0&-1&0&-1&1&0\\ -1&-1&-1&0&-1&0&0&1\end{pmatrix}\]
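The construction of the game matrix is easy to reproduce numerically. Below is a small sketch (assuming NumPy; the helper names are ours) that lists the first \(2^{r}\) rows of \(G_{2^{k}}\) in lexicographic order and evaluates \(\mathcal{G}_{2^{k}}\) entrywise via the \(\star\) operation; it reproduces the entries quoted in Example 1 and the matrix \(\mathcal{G}_{2^{3}}\) displayed above.

```python
import numpy as np

def rows_G(k):
    """First 2^(k+1) rows of G_{2^k}: all +/-1 patterns of length r = k+1, lexicographic order."""
    r = k + 1
    return np.array([[1 - 2 * ((idx >> (r - 1 - b)) & 1) for b in range(r)]
                     for idx in range(2 ** r)])

def star(x, y):
    """The operation r_i * r_j used to define the entries of the game matrix."""
    s = int(np.dot(x, y))
    return 0 if s == 0 else (1 if s > 0 else -1)

def game_matrix(k):
    """Entry (i,j) pairs Alice's i-th row with Bob's (2^k + j)-th row of G_{2^k}."""
    G, m = rows_G(k), 2 ** k
    return np.array([[star(G[i], G[m + j]) for j in range(m)] for i in range(m)])

M4 = game_matrix(2)
print(M4[0, 0], M4[0, 1])    # entries quoted in Example 1: 1 and -1
print(game_matrix(3))         # the 8 x 8 matrix displayed above
```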
**Definition 6**.: _Define \(g_{k}:\{1,...,2^{k}\}\to\{0,1\}^{k}\) by sending \(i\in\{1,...,2^{k}\}\) to the binary representation of \(i-1\). Define \(f_{k}:\{0,1\}^{k}\to\{\pm 1\}^{k}\) by sending \(0\) to \(1\) and \(1\) to \(-1\). For \(i\in\{1,...,2^{k}\}\), we denote \(\vec{i}_{k}=f_{k}\circ g_{k}(i)\)._
_For example, \(g_{3}(2)=(0,0,1)\), \(f_{3}(0,0,1)=(1,1,-1)\) and \(\vec{2}_{3}=(1,1,-1)\)._
By the above definition, we can observe that \((\mathcal{G}_{2^{k}})_{i,j}=(1,\vec{i}_{k})\star(-1,\vec{j}_{k})\).
**Definition 7**.: _Let \(k,x\) be positive integers. Define \(\mathcal{G}_{2^{k},x}\) to be the top-left corner square block of \(\mathcal{G}_{2^{k}}\) of dimension \(x\) by \(x\). Define \(\overline{\mathcal{G}_{2^{k},x}}\) to be the top-right corner square block of \(\mathcal{G}_{2^{k}}\) of dimension \(x\) by \(x\). In particular, \(\mathcal{G}_{2^{k},2^{k}}=\overline{\mathcal{G}_{2^{k},2^{k}}}=\mathcal{G}_{2^{k}}\)._
_For example, \(\mathcal{G}_{2^{2},1}=(1)\), \(\overline{\mathcal{G}_{2^{2},1}}=(-1)\), \(\mathcal{G}_{2^{2},2}=\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}\) and \(\overline{\mathcal{G}_{2^{2},2}}=\begin{pmatrix}-1&-1\\ -1&-1\end{pmatrix}\)_
**Lemma 4**.: _Consider the game matrix \(\mathcal{G}_{2^{k}}\), where \(k\geq 4\), and denote \(d=\dim(\mathcal{G}_{2^{k}})=2^{k}\). The game matrix \(\mathcal{G}_{2^{k}}\) has the form_
\[\mathcal{G}_{2^{k}}=\begin{pmatrix}A_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}&\mathcal{G}_{2^{k-2}}&B_{2^{k-2}}^{(d)}\\ \mathcal{G}_{2^{k-2}}&A_{2^{k-2}}^{(d)}&B_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}\\ \mathcal{G}_{2^{k-2}}&B_{2^{k-2}}^{(d)}&A_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}\\ B_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}&\mathcal{G}_{2^{k-2}}&A_{2^{k-2}}^{(d)}\end{pmatrix}\]
_where \(A_{2^{k-2}}^{(d)}=\begin{pmatrix}A_{2^{k-3}}^{(d)}&\mathcal{G}_{\frac{d}{2},2^{k-3}}\\ \mathcal{G}_{\frac{d}{2},2^{k-3}}&A_{2^{k-3}}^{(d)}\end{pmatrix}\), \(B_{2^{k-2}}^{(d)}=\begin{pmatrix}\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}&B_{2^{k-3}}^{(d)}\\ B_{2^{k-3}}^{(d)}&\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}\end{pmatrix}\), \(A_{1}^{(d)}=1\) and \(B_{1}^{(d)}=-1\)._
Proof.: We first partition \(\mathcal{G}_{2^{k}}\) into 16 sub square block matrices and label them as
\[\mathcal{G}_{2^{k}}=\begin{pmatrix}M_{1,1}&M_{1,2}&M_{1,3}&M_{1,4}\\ M_{2,1}&M_{2,2}&M_{2,3}&M_{2,4}\\ M_{3,1}&M_{3,2}&M_{3,3}&M_{3,4}\\ M_{4,1}&M_{4,2}&M_{4,3}&M_{4,4}\end{pmatrix}\]
First, we want to show that \(M_{1,2}=M_{1,3}=M_{2,1}=M_{2,4}=M_{3,1}=M_{3,4}=M_{4,2}=M_{4,3}=\mathcal{G}_{ 2^{k-2}}\). For \(i,j\in\{1,...,2^{k-2}\}\), we have
\[\begin{split}(M_{1,2})_{i,j}&=(1,1,1,\vec{i}_{k-2})\star(-1,1,-1,\vec{j}_{k-2})=(1,\vec{i}_{k-2})\star(-1,\vec{j}_{k-2})=(\mathcal{G}_{2^{k-2}})_{i,j}\\ (M_{1,3})_{i,j}&=(1,1,1,\vec{i}_{k-2})\star(-1,-1,1,\vec{j}_{k-2})=(1,\vec{i}_{k-2})\star(-1,\vec{j}_{k-2})=(\mathcal{G}_{2^{k-2}})_{i,j}\\ (M_{2,1})_{i,j}&=(1,1,-1,\vec{i}_{k-2})\star(-1,1,1,\vec{j}_{k-2})=(1,\vec{i}_{k-2})\star(-1,\vec{j}_{k-2})=(\mathcal{G}_{2^{k-2}})_{i,j}\\ (M_{2,4})_{i,j}&=(1,1,-1,\vec{i}_{k-2})\star(-1,-1,-1,\vec{j}_{k-2})=(1,\vec{i}_{k-2})\star(-1,\vec{j}_{k-2})=(\mathcal{G}_{2^{k-2}})_{i,j}\end{split}\]
and in the same way \((M_{3,1})_{i,j}=(M_{3,4})_{i,j}=(M_{4,2})_{i,j}=(M_{4,3})_{i,j}=(\mathcal{G}_{2^{k-2}})_{i,j}\). Moreover,
\[\begin{split}(M_{1,1})_{i,j}&=(1,1,1,\vec{i}_{k-2})\star(-1,1,1,\vec{j}_{k-2})=(1,1,-1,\vec{i}_{k-2})\star(-1,1,-1,\vec{j}_{k-2})=(M_{2,2})_{i,j}=(M_{3,3})_{i,j}=(M_{4,4})_{i,j}\\ (M_{1,4})_{i,j}&=(1,1,1,\vec{i}_{k-2})\star(-1,-1,-1,\vec{j}_{k-2})=(1,1,-1,\vec{i}_{k-2})\star(-1,-1,1,\vec{j}_{k-2})=(M_{2,3})_{i,j}=(M_{3,2})_{i,j}=(M_{4,1})_{i,j}\end{split}\]
Therefore we can write
\[\mathcal{G}_{2^{k}}=\begin{pmatrix}A_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}&\mathcal{G}_{2^{k-2}}&B_{2^{k-2}}^{(d)}\\ \mathcal{G}_{2^{k-2}}&A_{2^{k-2}}^{(d)}&B_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}\\ \mathcal{G}_{2^{k-2}}&B_{2^{k-2}}^{(d)}&A_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}\\ B_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}&\mathcal{G}_{2^{k-2}}&A_{2^{k-2}}^{(d)}\end{pmatrix}\]
for some \(2^{k-2}\times 2^{k-2}\) matrices \(A^{(d)}_{2^{k-2}}\) and \(B^{(d)}_{2^{k-2}}\). It remains to consider the matrices \(A^{(d)}_{2^{k-2}}\) and \(B^{(d)}_{2^{k-2}}\).
First, we consider \(A^{(d)}_{2^{k-2}}\) and partition it into four sub square block matrices as \(A^{(d)}_{2^{k-2}}=\begin{pmatrix}N_{1,1}&N_{1,2}\\ N_{2,1}&N_{2,2}\end{pmatrix}\). For \(i,j\in\{1,...,2^{k-3}\}\), we have
\[\begin{split}(N_{1,2})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,1,1,-1,\vec{j}_{k-3})=(1,1,1,-1,\vec{i}_{k-3})\star(-1,1,1,1,\vec{j}_{k-3})=(N_{2,1})_{i,j}\\ (N_{1,1})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,1,1,1,\vec{j}_{k-3})=(1,1,1,-1,\vec{i}_{k-3})\star(-1,1,1,-1,\vec{j}_{k-3})=(N_{2,2})_{i,j}\\ (N_{1,2})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,1,1,-1,\vec{j}_{k-3})=(1,1,\vec{i}_{k-3})\star(-1,1,\vec{j}_{k-3})=(\mathcal{G}_{\frac{d}{2},2^{k-3}})_{i,j}\end{split}\]
Hence \(N_{1,1}=N_{2,2}\) and \(N_{1,2}=N_{2,1}=\mathcal{G}_{\frac{d}{2},2^{k-3}}\). Therefore we have \(A^{(d)}_{2^{k-2}}=\begin{pmatrix}A^{(d)}_{2^{k-3}}&\mathcal{G}_{\frac{d}{2},2^ {k-3}}\\ \mathcal{G}_{\frac{d}{2},2^{k-3}}&A^{(d)}_{2^{k-3}}\end{pmatrix}\) for some \((2^{k-3}\times 2^{k-3})\) matrix \(A^{(d)}_{2^{k-3}}\). Next, we continue to partition \(A^{(d)}_{2^{k-3}}\) into four sub square blocks as \(A^{(d)}_{2^{k-3}}=\begin{pmatrix}N^{\prime}_{1,1}&N^{\prime}_{1,2}\\ N^{\prime}_{2,1}&N^{\prime}_{2,2}\end{pmatrix}\). For \(i,j\in\{1,...,2^{k-4}\}\), we have
\[\begin{split}(N^{\prime}_{1,2})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,1,1,1,-1,\vec{j}_{k-4})=(1,1,1,1,-1,\vec{i}_{k-4})\star(-1,1,1,1,1,\vec{j}_{k-4})=(N^{\prime}_{2,1})_{i,j}\\ (N^{\prime}_{1,1})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,1,1,1,1,\vec{j}_{k-4})=(1,1,1,1,-1,\vec{i}_{k-4})\star(-1,1,1,1,-1,\vec{j}_{k-4})=(N^{\prime}_{2,2})_{i,j}\\ (N^{\prime}_{1,2})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,1,1,1,-1,\vec{j}_{k-4})=(1,1,1,\vec{i}_{k-4})\star(-1,1,1,\vec{j}_{k-4})=(\mathcal{G}_{\frac{d}{2},2^{k-4}})_{i,j}\end{split}\]
Hence \(N^{\prime}_{1,1}=N^{\prime}_{2,2}\) and \(N^{\prime}_{1,2}=N^{\prime}_{2,1}=\mathcal{G}_{\frac{d}{2},2^{k-4}}\). Therefore we have \(A^{(d)}_{2^{k-3}}=\begin{pmatrix}A^{(d)}_{2^{k-4}}&\mathcal{G}_{\frac{d}{2},2^ {k-4}}\\ \mathcal{G}_{\frac{d}{2},2^{k-4}}&A^{(d)}_{2^{k-4}}\end{pmatrix}\) for some \((2^{k-4}\times 2^{k-4})\) matrix \(A^{(d)}_{2^{k-4}}\). We will continue the partition procedure (i.e. partition \(A^{(d)}_{2^{k-i}}\) for \(i\in\{2,...,k-1\}\)) and apply the same argument to obtain
\[A^{(d)}_{2^{k-i}}=\begin{pmatrix}A^{(d)}_{2^{k-i-1}}&\mathcal{G}_{\frac{d}{2},2^{k-i-1}}\\ \mathcal{G}_{\frac{d}{2},2^{k-i-1}}&A^{(d)}_{2^{k-i-1}}\end{pmatrix}\]
where \(i\in\{2,...,k-1\}\). For \(A^{(d)}_{1}\), its entry equals \((\mathcal{G}_{2^{k}})_{1,1}=(1,...,1)\star(-1,1,...,1)=1\).
Next, we consider \(B^{(d)}_{2^{k-2}}\). First, we partition \(B^{(d)}_{2^{k-2}}\) into four sub blocks as \(B^{(d)}_{2^{k-2}}=\begin{pmatrix}N_{1,1}&N_{1,2}\\ N_{2,1}&N_{2,2}\end{pmatrix}\). For \(i,j\in\{1,...,2^{k-3}\}\), we have
\[\begin{split}(N_{1,2})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,-1,-1,-1,\vec{j}_{k-3})=(1,1,1,-1,\vec{i}_{k-3})\star(-1,-1,-1,1,\vec{j}_{k-3})=(N_{2,1})_{i,j}\\ (N_{1,1})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,-1,-1,1,\vec{j}_{k-3})=(1,1,1,-1,\vec{i}_{k-3})\star(-1,-1,-1,-1,\vec{j}_{k-3})=(N_{2,2})_{i,j}\\ (N_{1,1})_{i,j}&=(1,1,1,1,\vec{i}_{k-3})\star(-1,-1,-1,1,\vec{j}_{k-3})=(1,1,\vec{i}_{k-3})\star(-1,-1,\vec{j}_{k-3})=(\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}})_{i,j}\end{split}\]
Thus we have \(N_{1,1}=N_{2,2}=\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}\) and \(N_{1,2}=N_{2,1}\). Hence we can conclude that \(B^{(d)}_{2^{k-2}}=\begin{pmatrix}\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}&B^{(d )}_{2^{k-3}}\\ B^{(d)}_{2^{k-3}}&\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}\end{pmatrix}\) for some \((2^{k-3}\times 2^{k-3})\)
matrix \(B_{2^{k-3}}^{(d)}\). Next, we continue to partition \(B_{2^{k-3}}^{(d)}\) into four blocks as \(B_{2^{k-3}}^{(d)}=\begin{pmatrix}N_{1,1}^{\prime}&N_{1,2}^{\prime}\\ N_{2,1}^{\prime}&N_{2,2}^{\prime}\end{pmatrix}\). For \(i,j\in\{1,...,2^{k-4}\}\), we have
\[\begin{split}(N_{1,2}^{\prime})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,-1,-1,-1,-1,\vec{j}_{k-4})\\ &=(1,1,1,1,-1,\vec{i}_{k-4})\star(-1,-1,-1,-1,1,\vec{j}_{k-4})=(N_{2,1}^{\prime})_{i,j}\\ (N_{1,1}^{\prime})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,-1,-1,-1,1,\vec{j}_{k-4})\\ &=(1,1,1,1,-1,\vec{i}_{k-4})\star(-1,-1,-1,-1,-1,\vec{j}_{k-4})=(N_{2,2}^{\prime})_{i,j}\\ (N_{1,1}^{\prime})_{i,j}&=(1,1,1,1,1,\vec{i}_{k-4})\star(-1,-1,-1,-1,1,\vec{j}_{k-4})=(1,1,1,\vec{i}_{k-4})\star(-1,-1,-1,\vec{j}_{k-4})=(\overline{\mathcal{G}_{\frac{d}{2},2^{k-4}}})_{i,j}\end{split}\]
Thus \(N_{1,1}^{\prime}=N_{2,2}^{\prime}=\overline{\mathcal{G}_{\frac{d}{2},2^{k-4}}}\) and \(N_{1,2}^{\prime}=N_{2,1}^{\prime}\), so that \(B_{2^{k-3}}^{(d)}=\begin{pmatrix}\overline{\mathcal{G}_{\frac{d}{2},2^{k-4}}}&B_{2^{k-4}}^{(d)}\\ B_{2^{k-4}}^{(d)}&\overline{\mathcal{G}_{\frac{d}{2},2^{k-4}}}\end{pmatrix}\) for some \((2^{k-4}\times 2^{k-4})\) matrix \(B_{2^{k-4}}^{(d)}\). Continuing the partition procedure as for \(A^{(d)}\) (i.e. partitioning \(B_{2^{k-i}}^{(d)}\) for \(i\in\{2,...,k-1\}\)) and applying the same argument, we obtain
\[B_{2^{k-i}}^{(d)}=\begin{pmatrix}\overline{\mathcal{G}_{\frac{d}{2},2^{k-i-1}}}&B_{2^{k-i-1}}^{(d)}\\ B_{2^{k-i-1}}^{(d)}&\overline{\mathcal{G}_{\frac{d}{2},2^{k-i-1}}}\end{pmatrix}\]
for \(i\in\{2,...,k-1\}\). For \(B_{1}^{(d)}\), its entry equals \((\mathcal{G}_{2^{k}})_{1,2^{k}}=(1,...,1)\star(-1,-1,...,-1)=-1\), which completes the proof.
The next two results record the facts about Hadamard diagonalisation used below.
**Lemma 5**.: _Let \(M\) and \(N\) be \(x\times x\) matrices that are diagonal in the Hadamard basis. Then the matrix \(\begin{pmatrix}M&N\\ N&M\end{pmatrix}\) is also diagonal in the Hadamard basis._
**Lemma 6**.: _Let \(k\geq 4\) and suppose that \(\mathcal{G}_{2^{k-2},2^{j}}\) and \(\overline{\mathcal{G}_{2^{k-2},2^{j}}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k-2\}\). Then (1) \(\mathcal{G}_{2^{k},2^{j}}\) is diagonal in the Hadamard basis for all \(j\in\{0,...,k-1\}\), and (2) \(\overline{\mathcal{G}_{2^{k},2^{j}}}\) is diagonal in the Hadamard basis for all \(j\in\{0,...,k-1\}\)._
Proof.: Let \(d=dim(\mathcal{G}_{2^{k}})=2^{k}\). Recall that
\[A_{2^{k-2}}^{(d)}=\begin{pmatrix}A_{2^{k-3}}^{(d)}&\mathcal{G}_{\frac{d}{2},2^{k-3}}\\ \mathcal{G}_{\frac{d}{2},2^{k-3}}&A_{2^{k-3}}^{(d)}\end{pmatrix}\]
Observe that \(\mathcal{G}_{2^{k},2^{j}}=A_{2^{j}}^{(d)}\) for all \(j\in\{0,...,k-2\}\). By Lemma 5, \(A_{2}^{(2^{k})}=\begin{pmatrix}1&\mathcal{G}_{2^{k-2},1}\\ \mathcal{G}_{2^{k-2},1}&1\end{pmatrix}\) is diagonal in the Hadamard basis. By Lemma 5 again, since \(A_{2}^{(2^{k})}\) and \(\mathcal{G}_{2^{k-2},2}\) are diagonal in the Hadamard basis, matrix \(A_{2^{2}}^{(2^{k})}=\begin{pmatrix}A_{2}^{(2^{k})}&\mathcal{G}_{2^{k-2},2}\\ \mathcal{G}_{2^{k-2},2}&A_{2}^{(2^{k})}\end{pmatrix}\) is also diagonal in the Hadamard basis. Inductively, we can conclude that \(A_{2^{j}}^{(2^{k})}=\mathcal{G}_{2^{k},2^{j}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k-2\}\). Since \(A_{2^{k-2}}^{(2^{k})}\) and \(\mathcal{G}_{2^{k-2}}\) are diagonal in the Hadamard basis, by Lemma 5, matrix \(\mathcal{G}_{2^{k},2^{k-1}}=\begin{pmatrix}A_{2^{k-2}}^{(2^{k})}&\mathcal{G}_ {2^{k-2}}\\ \mathcal{G}_{2^{k-2}}&A_{2^{k-2}}^{(2^{k})}\end{pmatrix}\) is also diagonal in the Hadamard basis. Thus statement (1) is proved.
Recall that
\[B_{2^{k-2}}^{(d)}=\begin{pmatrix}\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}&B_{2^{k-3}}^{(d)}\\ B_{2^{k-3}}^{(d)}&\overline{\mathcal{G}_{\frac{d}{2},2^{k-3}}}\end{pmatrix}\]
Observe that \(\overline{\mathcal{G}_{2^{k},2^{j}}}=B_{2^{j}}^{(d)}\) for all \(j\in\{0,...,k-2\}\). By Lemma 5, \(B_{2}^{(2^{k})}=\begin{pmatrix}\overline{\mathcal{G}_{2^{k-2},1}}&-1\\ -1&\overline{\mathcal{G}_{2^{k-2},1}}\end{pmatrix}\) is diagonal in the Hadamard basis. By Lemma 5 again, since \(\overline{\mathcal{G}_{2^{k-2},2}}\) and \(B_{2}^{(2^{k})}\) are diagonal in the Hadamard basis, matrix \(B_{2^{2}}^{(2^{k})}=\begin{pmatrix}\overline{\mathcal{G}_{2^{k-2},2}}&B_{2}^{(2^{k})}\\ B_{2}^{(2^{k})}&\overline{\mathcal{G}_{2^{k-2},2}}\end{pmatrix}\) is also diagonal in the Hadamard basis. Inductively, we can conclude that \(\overline{\mathcal{G}_{2^{k},2^{j}}}=B_{2^{j}}^{(d)}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k-2\}\). Since \(\mathcal{G}_{2^{k-2}}\) and \(B_{2^{k-2}}^{(d)}\) are diagonal in the Hadamard basis, by Lemma 5, matrix \(\overline{\mathcal{G}_{2^{k},2^{k-1}}}=\begin{pmatrix}\mathcal{G}_{2^{k-2}}&B_{2^{k-2}}^{(d)}\\ B_{2^{k-2}}^{(d)}&\mathcal{G}_{2^{k-2}}\end{pmatrix}\) is also diagonal in the Hadamard basis. Thus statement (2) is proved.
**Corollary 1**.: _Let \(k\geq 2\). Then \(\mathcal{G}_{2^{k},2^{j}}\) and \(\overline{\mathcal{G}_{2^{k},2^{j}}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k\}\). In particular, the game matrix \(\mathcal{G}_{2^{k}}\) is diagonal in the Hadamard basis._
Proof.: We will proceed by induction. For the induction step, assume that \(\mathcal{G}_{2^{k},2^{j}}\) and \(\overline{\mathcal{G}_{2^{k},2^{j}}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k\}\). By Lemma 6, \(\mathcal{G}_{2^{k+2},2^{j}}\) and \(\overline{\mathcal{G}_{2^{k+2},2^{j}}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k+1\}\). Observe that we can express \(\mathcal{G}_{2^{k+2}}\) as
\[\mathcal{G}_{2^{k+2}}=\begin{pmatrix}\mathcal{G}_{2^{k+2},2^{k+1}}&\overline{ \mathcal{G}_{2^{k+2},2^{k+1}}}\\ \overline{\mathcal{G}_{2^{k+2},2^{k+1}}}&\mathcal{G}_{2^{k+2},2^{k+1}}\end{pmatrix}\]
By Lemma 5, since \(\mathcal{G}_{2^{k+2},2^{k+1}}\) and \(\overline{\mathcal{G}_{2^{k+2},2^{k+1}}}\) are diagonal in the Hadamard basis, we can conclude that \(\mathcal{G}_{2^{k+2}}\) is also diagonal in the Hadamard basis, which finishes the induction step. The base cases are \(k=2\) and \(k=3\). Recall that the matrices \(\mathcal{G}_{4}\) and \(\mathcal{G}_{8}\) are given in Example 1. We can check that for \(k\in\{2,3\}\), the matrices \(\mathcal{G}_{2^{k},2^{j}}\) and \(\overline{\mathcal{G}_{2^{k},2^{j}}}\) are diagonal in the Hadamard basis for all \(j\in\{0,...,k\}\).
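Corollary 1 can also be checked directly on small instances. The sketch below (assuming NumPy; the helper names are ours) rebuilds \(\mathcal{G}_{2^{k}}\) from the entrywise formula noted after Definition 6, conjugates each corner block by the normalized Hadamard matrix, and confirms that the off-diagonal part vanishes.

```python
import numpy as np
from numpy.linalg import norm

def pm_encoding(idx, k):
    """+/-1 encoding of idx-1 (k binary digits, 0 -> +1, 1 -> -1), as in Definition 6."""
    return np.array([1 - 2 * (((idx - 1) >> (k - 1 - b)) & 1) for b in range(k)])

def game_matrix(k):
    """(G_{2^k})_{i,j} = sign( (1, i_k) . (-1, j_k) ), with 0 kept as 0."""
    m = 2 ** k
    M = np.zeros((m, m), dtype=int)
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            s = -1 + int(pm_encoding(i, k) @ pm_encoding(j, k))
            M[i - 1, j - 1] = 0 if s == 0 else (1 if s > 0 else -1)
    return M

def hadamard(m):
    """Normalized m x m Hadamard matrix (m a power of two); it is symmetric and orthogonal."""
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.kron(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2))
    return H

for k in (2, 3, 4, 5):
    M = game_matrix(k)
    for x in (2 ** j for j in range(k + 1)):
        for block in (M[:x, :x], M[:x, -x:]):       # G_{2^k, x} and its top-right analogue
            H = hadamard(x)
            D = H @ block @ H
            assert norm(D - np.diag(np.diag(D))) < 1e-9, (k, x)
print("all corner blocks are diagonal in the Hadamard basis")
```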
## VI Conclusions and open problems
In this work, we studied the boundary of the quantum set Q through its relationship with the no-signaling set NS and classical set L. Specifically, we explored a class of quantum Bell inequalities that serve to exclude non-local faces of the no-signaling set up to dimension \(4k-4\) in \((2,2,k)\) Bell scenarios. Quantum realizations of the no-signaling boundary being crucial for device-independent randomness amplification, this study of the experimentally friendly two-input inequalities serves to pick out the optimal quantum correlations suited for this task. It would also be very interesting to derive a class of quantum Bell inequalities from SDP approximations of the quantum set going beyond the almost quantum class. The Braunstein-Caves chained Bell
inequality has also found applications in device-independent protocols, specifically in the scenario of achieving security against non-signaling adversaries. We have explored the entire boundary of the quantum set in this scenario and demonstrated the usefulness of these boundary points in self-testing the two-qubit singlet along with appropriate measurements. In the future, it would be interesting to see if such weighted chain inequalities achieve better performance in specific DI tasks [25; 27]. Finally, we have studied a region of the quantum set where it shares a boundary with the classical set. In this respect, we have generalised the class of low-dimensional faces found by Linden et al. in [19] and further studied in [18].
## VII Acknowledgments
The authors thank Pawel Horodecki for useful discussions. The authors acknowledge support from the Early Career Scheme (ECS) grant "Device-Independent Random Number Generation and Quantum Key Distribution with Weak Random Seeds" (Grant No. 27210620), the General Research Fund (GRF) grant "Semi-device-independent cryptographic applications of a single trusted quantum system" (Grant No. 17211122) and the Research Impact Fund (RIF) "Trustworthy quantum gadgets for secure online communication" (Grant No. R7035-21).
|
2303.18161 | Nash equilibria for relative investors with (non)linear price impact | We consider the strategic interaction of $n$ investors who are able to influence a stock price process and at the same time measure their utilities relative to the other investors. Our main aim is to find Nash equilibrium investment strategies in this setting in a financial market driven by a Brownian motion and investigate the influence the price impact has on the equilibrium. We consider both CRRA and CARA utility functions. Our findings show that the problem is well-posed as long as the price impact is at most linear. Moreover, numerical results reveal that the investors behave very aggressively when the price impact is beyond a critical parameter. | Nicole Bäuerle, Tamara Göll | 2023-03-31T15:54:36Z | http://arxiv.org/abs/2303.18161v2 |
# Nash equilibria for relative investors with (non)linear price impact
###### Abstract.
We consider the strategic interaction of \(n\) investors who are able to influence a stock price process and at the same time measure their utilities relative to the other investors. Our main aim is to find Nash equilibrium investment strategies in this setting in a financial market driven by a Brownian motion and investigate the influence the price impact has on the equilibrium. We consider both CRRA and CARA utility functions. Our findings show that the problem is well-posed as long as the price impact is at most linear. Moreover, numerical results reveal that the investors behave very aggressively when the price impact is beyond a critical parameter.
Key words: Portfolio optimization; Price Impact; Nash equilibrium; Relative investor
JEL subject classifications: C61, C73, G11
## 1. Introduction
In this paper, we determine the optimal investment strategies of \(n\) investors in a common financial market who interact strategically. The strategic interaction is caused by two different factors: a relative component inside the objective function of each investor and by the fact that the stock price dynamic is affected by the arithmetic mean of the \(n\) agents' investments.
We contribute to two strands of literature. The first one is the literature on strategic interaction between agents. Strategic interaction in portfolio optimization problems has been motivated for example by [8] and [25] through competition between agents. Since then, portfolio choice problems including strategic interaction between investors have been widely studied. [3] consider two agents in a continuous-time model which includes stocks following geometric Brownian motions. They use power utility functions and maximize the ratio of the two investors' wealth. [17] also consider stocks driven by geometric Brownian motions and \(n\) agents maximizing a weighted difference of their own wealth and the arithmetic mean of the other agents' wealth. Structurally similar objective functions including the arithmetic mean have been used by [4]. There, the unique Nash equilibrium for \(n\) agents is derived in a very general financial market using the unique solution to some auxiliary classical portfolio optimization problem. [29] consider the case of asset specialization for \(n\) agents. They derive the unique constant Nash equilibrium using both the arithmetic mean under CARA utility and the geometric mean under CRRA utility. Later, their work has been extended by [28] to consumption-investment problems
including relative concerns. [14, 15] use forward utilities of both CARA and CRRA type with and without consumption. More general financial markets (including e.g. stochastic volatility and incomplete information) were, for example, used in [27], [19] and [21].
The second strand of literature focuses on (large) investors whose trades affect the price processes of certain assets. [7] gives an overview of different reasons and methods to incorporate price impact. [22, 23] consider a discrete time market model in which a single large trader affects the price of the risky asset. He finds conditions under which there are no arbitrage opportunities for small traders while the large trader is able to achieve riskless profit using some market manipulation strategy. [1] introduce a discrete-time financial market in which the price process of the risky stock is affected by the investment of a large investor. The impact is divided into _temporary_ and _permanent_ price impact. They minimize risk and transaction costs arising from the price impact simultaneously. In [5], the problem of minimizing the expected cost of liquidating a block of shares over a fixed time interval is solved in a discrete time financial market. Here, the number of shares held by a large trader impacts the stock price process linearly.
[13] assume that the investment of a single large investor affects the interest rate of a riskless asset and the drift and volatility of stock price processes, which are modeled by Ito-diffusions, simultaneously. They allow for general square integrable strategies and extend classical results of hedging contingent claims to their setting. A similar model including stocks paying dividends was used by [10]. In their setting, the volatility of the stock prices does not depend on the large investors portfolio and they determine the optimal consumption strategy of the large investor. [2] use a more general continuous-time model for the stock prices, but only allow for constant portfolio processes. They prove necessary and sufficient conditions for the absence of arbitrage for both small and large investors. [30] consider a Black-Scholes-type stock price dynamic where the investor's impact is modeled by a general price impact function integrated with respect to an Ito process which models the investment of the large agent. After introducing their market model they show how to price European options defined therein. [16] also consider a Black-Scholes-type price process in which the drift is (possibly nonlinearly) affected by the large investors' trades and also contains a stochastic component which depends on the current market state. They maximize expected utility of the large investor under both complete and incomplete information. A problem of optimal liquidation in another Black-Scholes-type market is treated in [20]. Here, the stock price depends linearly on the dynamics of the large investors selling process. [26] maximize expected utility in a financial market similar to the one treated in this paper. They model the price process as a geometric Brownian motion by adding a multiple of the large traders investment to the constant drift.
The majority of the literature considers the case of a single large trader. [33], however, consider a continuous time financial market where the price impact - both temporary and permanent - results from the investment of \(n+1\) 'strategic players'. Moreover, [12] considers two agents who interact strategically through their linear impact on the return of the risk free asset. Maximizing their terminal wealth under CRRA utility, he derives the unique constant pure-strategy Nash equilibrium.
In the following, we solve an \(n\)-agent portfolio problem with relative performance concerns where we allow that the agents are jointly able to influence the asset dynamics which is reasonable if \(n\) is large and which has not been done before.
This paper is organized as follows. In the next section we introduce the linear price impact financial market. In Section 3 we explicitly solve the problem of maximizing expected utility of exponential type which results in the unique constant Nash equilibrium. The argument of the utility function consists of the difference of some agents' wealth and a weighted arithmetic mean of the other agents' wealth. We also examine the influence of the price impact parameter \(\alpha\) on the Nash equilibrium and on the stock price attained by inserting the arithmetic mean of the components of the Nash equilibrium. In Section 4 we substitute the linear impact of the agents' arithmetic mean on the stock price process by a nonlinear one. We prove that the problem
of maximizing CARA utility is well-posed as long as the influence is sublinear and does not have an optimal solution if the influence is superlinear. In Section 5 we assume that agents use CRRA utility functions (power and logarithmic utility) and insert the product of some agent's wealth and a weighted geometric mean of the other agents' wealth into the expected utility criterion. Similar to the CARA case we are able to explicitly determine the unique constant Nash equilibrium.
## 2. Price impact market
Let \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in[0,T]},\mathbb{P})\) be a filtered probability space and \(T>0\) a finite time horizon. Moreover, let \(W\) be a standard Brownian motion therein.
The underlying financial market consists of one riskless bond which will for simplicity be assumed to be identical to \(1\), and one risky asset (a stock). Note that it is straightforward to extend the results below to the case of \(d>1\) stocks instead of just one. However, to keep calculations simple, we only consider one stock.
The price process of the stock, denoted by \((S_{t})_{t\in[0,T]}\), is the solution to the SDE
\[\mathrm{d}S_{t}=S_{t}\left((\mu+\alpha\bar{\pi}_{t})\,\mathrm{d}t+\sigma \mathrm{d}W_{t}\right),\,S(0)=1. \tag{2.1}\]
Here, the drift \(\mu>0\) and volatility \(\sigma>0\) are assumed to be deterministic and constant in time. Our model describes a special case of the models considered by [13], [10] and [26]. Note that, instead of just one large investor, we consider the case of \(n\) agents who collectively act like one large investor.
The expression \(\bar{\pi}_{t}\) will describe the arithmetic mean of the investment of \(n\) investors into the stock at time \(t\in[0,T]\), i.e.
\[\bar{\pi}_{t}\coloneqq\frac{1}{n}\sum_{i=1}^{n}\pi_{t}^{i},\]
where \(\pi_{t}^{i}\) describes either the amount or the fraction of wealth agent \(i\) invests into the stock at some time \(t\in[0,T]\). The strategies \(\pi^{i}\) of the \(n\) investors are assumed to belong to the set \(\mathcal{A}\) of \((\mathcal{F}_{t})_{t\in[0,T]}\)-progressively measurable, square-integrable processes, i.e.
\[\mathcal{A}\coloneqq\left\{\pi:\Omega\times[0,T]\to\mathbb{R}:\,\pi\text{ is }(\mathcal{F}_{t})_{t\in[0,T]}\text{-progressively measurable, }\int_{0}^{T}\pi_{t}^{2}\mathrm{d}t<\infty\,\mathbb{P}\text{-a.s.}\right\}.\]
This assumption ensures that the SDE (2.1) has a unique solution (see e.g. [24]). Let the initial capital of agent \(i\) be given by \(x_{0}^{i}\).
Finally, \(\alpha\in\mathbb{R}\) is some constant that describes the impact of the investment of the \(n\) investors into the stock.
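For intuition about the model, the impacted price process (2.1) is straightforward to simulate for constant strategies. The following sketch (assuming NumPy; all parameter values are illustrative and not taken from the text) uses the explicit solution of the geometric SDE together with an Euler discretization of the wealth equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not taken from the text)
mu, sigma, alpha, T, n_steps = 0.05, 0.2, 0.1, 1.0, 250
pi = np.array([0.8, 1.2, 1.0])        # constant strategies of n = 3 investors
pi_bar = pi.mean()                     # arithmetic mean investment

dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
W = np.concatenate(([0.0], np.cumsum(dW)))

drift = mu + alpha * pi_bar                               # impacted drift in (2.1)
S = np.exp((drift - 0.5 * sigma**2) * t + sigma * W)      # S_0 = 1

# wealth processes (amount pi^i invested, initial capital 0), Euler discretization
X = np.zeros((len(pi), n_steps + 1))
X[:, 1:] = np.cumsum(pi[:, None] * (drift * dt + sigma * dW), axis=1)

print("terminal stock price S_T:", S[-1])
print("terminal wealth X_T^i:  ", X[:, -1])
```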
**Remark 2.1**.:
* Some authors argue that \(\alpha\) should take both positive and negative values due to the fact that (large) investors may have both positive and negative impact on stock returns (see e.g. [9], [11]). On the other hand, [2] prove in a more general setting that stock prices need to be increasing in terms of some large investor's investment. Otherwise it would be possible to construct some 'In & Out' arbitrage strategy. Since the optimization problems considered in this paper have finite optimal solutions, our model appears to be free of arbitrage. Hence, we allow for both positive and negative values for \(\alpha\).
* Assuming that the drift of the risky stock depends linearly on the agents' investment makes the model mathematically tractable. However, empirical data suggests that price impact is concave in order size (see [31] and references therein). Thus, we also consider the case of nonlinear price impact if investors use exponential utility functions (see Section 4).
## 3. Optimization under CARA utility with linear price impact
At first, we assume that investors use exponential utility (CARA) functions to measure their preferences. Hence, define
\[U_{i}:\mathbb{R}\to\mathbb{R},\,x\mapsto-\exp\Big{(}-\frac{1}{\delta_{i}}x\Big{)}\]
for some parameters \(\delta_{i}>0\), \(i=1,\ldots,n.\) While using CARA utility functions, it is more convenient to consider the amount invested into the risky stock instead of the fraction of wealth or number of shares. Hence, we interpret \(\pi_{t}^{i}\) as the amount of money agent \(i\) invests into the risky stock at some \(t\in[0,T]\), \(i=1,\ldots,n.\) Thus, the wealth process of agent \(i\) is given by
\[X_{t}^{i,\pi^{i}}=x_{0}^{i}+\int_{0}^{t}\pi_{s}^{i}\left(\left(\mu+\alpha\bar{ \pi}_{s}\right)\mathrm{d}s+\sigma\mathrm{d}W_{s}\right),\ t\in[0,T].\]
In this paper, we want to examine the strategic interaction created by the price impact introduced earlier and a modification of the classical objective function used in expected utility maximization. Hence, we substitute the terminal wealth of a single investor inside the expected utility criterion by a relative quantity which captures the fact that agent \(i\) wants to maximize her terminal wealth while also considering her performance with respect to the other agents. Similar to Section 2 in [29], we use the difference of agent \(i\)'s terminal wealth and a weighted arithmetic mean of the other agents' terminal wealth. Hence, we insert
\[X_{T}^{i,\pi}-\frac{\theta_{i}}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\]
into the argument of the utility function of investor \(i\). The parameter \(\theta_{i}\in[0,1]\) measures how much agent \(i\) cares about her performance with respect to the other agents.
Our goal will therefore be to find all Nash equilibria to the multi-objective optimization problem
\[\left\{\begin{aligned} &\sup_{\pi^{i}\in\mathcal{A}}\mathbb{E} \left[-\exp\left(-\frac{1}{\delta_{i}}\left(X_{T}^{i,\pi^{i}}-\frac{\theta_{i }}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\right)\right)\right],\\ &\mathrm{s.t.}\ \ \ \ X_{T}^{i,\pi^{i}}=x_{0}^{i}+\int_{0}^{T}\pi_{t}^{ i}\left(\left(\mu+\alpha\bar{\pi}_{t}\right)\mathrm{d}t+\sigma\mathrm{d}W_{t} \right),\end{aligned}\right. \tag{3.1}\]
\(i=1,\ldots,n\). A Nash equilibrium for general objective functions \(J_{i}\), \(i=1,\ldots,n,\) is defined as follows.
**Definition 3.1**.: Let \(J_{i}:\mathcal{A}^{n}\to\mathbb{R}\) be the objective function of agent \(i\). A vector \(\left(\pi^{1,*},\ldots,\pi^{n,*}\right)\) of strategies is called a _Nash equilibrium_, if, for all admissible \(\pi^{i}\in\mathcal{A}\) and \(i=1,\ldots,n,\)
\[J_{i}(\pi^{1,*},\ldots,\pi^{i,*},\ldots,\pi^{n,*})\geq J_{i}(\pi^{1,*},\ldots,\pi^{i-1,*},\pi^{i},\pi^{i+1,*},\ldots,\pi^{n,*}).\]
I.e. deviating from \(\pi^{i,*}\) does not increase agent \(i\)'s objective function.
### Solution
In order to solve the best response problem (3.1), we fix some investor \(i\) and assume that the strategies \(\pi^{j}\), \(j\neq i\), of the other agents are given. Under these conditions we can rewrite the optimization problem (3.1) into a classical portfolio optimization problem in a similar (but not identical) price impact market. Afterwards, the Nash equilibria can be determined using the solution to the classical problem.
Define the process \(\left(Y_{t}^{i,\varphi^{i}}\right)_{t\in[0,T]}\) by
\[Y_{t}^{i,\varphi^{i}}\coloneqq X_{t}^{i,\pi^{i}}-\frac{\theta_{i}}{n}\sum_{j \neq i}X_{t}^{j,\pi^{j}},\,t\in[0,T],\,i=1,\ldots,n,\]
where we further define the strategy \(\varphi^{i}\) by
\[\varphi^{i}_{t}\coloneqq\pi^{i}_{t}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}_ {t},\,t\in[0,T],\,i=1,\ldots,n,\]
which is still square integrable and progressively measurable (\(\varphi^{i}\in\mathcal{A}\)).
Then we can write \(Y_{T}^{i,\varphi^{i}}\) as
\[\begin{split} Y_{T}^{i,\varphi^{i}}&=X_{T}^{i,\pi^{i}}-\frac{\theta_{i}}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\\ &\eqqcolon y_{0}^{i}+\int_{0}^{T}\varphi_{t}^{i}\left(\left(\tilde{\mu}_{t}^{-i}+\frac{\alpha}{n}\varphi_{t}^{i}\right)\mathrm{d}t+\sigma\mathrm{d}W_{t}\right),\end{split}\]
where we introduced \(y_{0}^{i}\coloneqq x_{0}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}x_{0}^{j}\), \(\bar{\pi}_{t}^{-i}\coloneqq\frac{1}{n}\sum_{j\neq i}\pi_{t}^{j}\) and \(\tilde{\mu}_{t}^{-i}\coloneqq\mu+\alpha\frac{n+\theta_{i}}{n}\bar{\pi}_{t}^{-i}\).
Hence, in order to solve the best response problem associated to (3.1), we can equivalently solve the following single investor portfolio optimization problem due to the one-to-one relation between \(\pi^{i}\) and \(\varphi^{i}\)
\[\begin{cases}&\sup_{\varphi^{i}\in\mathcal{A}}\mathbb{E}\left[-\exp\left(- \frac{1}{\delta_{i}}Y_{T}^{i,\varphi^{i}}\right)\right],\\ \text{s.t.}&Y_{T}^{i,\varphi^{i}}=y_{0}^{i}+\int_{0}^{T}\varphi_{t}^{i}\left( \left(\tilde{\mu}_{t}^{-i}+\frac{\alpha}{n}\varphi_{t}^{i}\right)\mathrm{d}t+ \sigma\mathrm{d}W_{t}\right),\,t\in[0,T],\end{cases} \tag{3.2}\]
in a financial market with corrected price impact.
Now assume that \(\varphi^{i,*}=\varphi^{i,*}(\tilde{\mu}^{-i})\) is an optimal solution to (3.2) depending on the drift process \(\tilde{\mu}^{-i}\). Then the optimal solution to the best response problem with respect to (3.1) is uniquely determined by
\[\pi^{i}=\varphi^{i,*}(\tilde{\mu}^{-i})+\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^ {j},\,i=1,\ldots,n. \tag{3.3}\]
Note that we can find a unique Nash equilibrium if and only if problem (3.2) and the fixed point problem for \(\pi^{i}\), given in terms of the system of equations (3.3), are uniquely solvable.
Using the described technique, we are able to find the unique constant Nash equilibrium. Note that the restriction to constant Nash equilibria is necessary since otherwise we would not be able to solve the auxiliary problem explicitly.
At first, we solve the auxiliary problem (3.2) for investor \(i\) under the assumption that the strategies of the other investors are constant in time and deterministic.
**Lemma 3.2**.: _Let \(\theta_{i}\in[0,1]\) and \(\delta_{i}>0\), \(i=1,\ldots,n\). Moreover, assume that \(n\sigma^{2}-2\delta_{i}\alpha>0\) for all \(i=1,\ldots,n\). If, for some \(i\in\{1,\ldots,n\}\), the strategies \(\pi^{j}\), \(j\neq i\), are constant in time and deterministic, the unique optimal solution to (3.2) is given by_
\[\varphi_{t}^{i,*}\equiv\frac{n\delta_{i}\tilde{\mu}^{-i}}{n\sigma^{2}-2\delta_ {i}\alpha},\,t\in[0,T].\]
Proof.: Since \(\pi^{j}\), \(j\neq i\), are constant, the drift \(\widetilde{\mu}^{-i}\) is also constant. The dynamics of the wealth process \(Y^{i,\varphi^{i}}\) are therefore given by
\[\mathrm{d}Y_{t}^{i,\varphi^{i}}=\varphi_{t}^{i}\left(\left(\widetilde{\mu}^{-i }+\frac{\alpha}{n}\varphi_{t}^{i}\right)\mathrm{d}t+\sigma\mathrm{d}W_{t} \right),\,t\in[0,T].\]
To derive the associated HJB equation used to solve the portfolio optimization problem, we define the value function
\[v(t,y):=\sup_{\varphi^{i}\in\mathcal{A}}\mathbb{E}\Big{[}-\exp\Big{(}-\frac{1 }{\delta_{i}}Y_{T}^{i,\varphi^{i}}\Big{)}\Big{|}Y_{t}^{i,\varphi^{i}}=y\Big{]}, \,\,t\in[0,T],\,y\in\mathbb{R}.\]
The maximum value in (3.2) is thus given by \(v(0,y_{0}^{i}).\) The Hamilton Jacobi Bellman (HJB) equation for this problem reads
\[0=v_{t}+\max_{a\in\mathbb{R}}\left\{v_{y}\widetilde{\mu}^{-i}a+\left(\frac{ \alpha}{n}v_{y}+\frac{\sigma^{2}}{2}v_{yy}\right)a^{2}\right\} \tag{3.4}\]
for \(y\in\mathbb{R}\), \(t\in[0,T]\), with terminal condition \(v(T,y)=-\exp(-\frac{1}{\delta_{i}}y).\) Note that we omitted the arguments of \(v\) to keep notation simple. The maximum in (3.4) is attained at
\[a^{*}=-\frac{n\widetilde{\mu}^{-i}v_{y}}{n\sigma^{2}v_{yy}+2\alpha v_{y}}. \tag{3.5}\]
Inserting the maximum point into (3.4) yields
\[0=v_{t}-\frac{1}{2}\frac{n(\widetilde{\mu}^{-i})^{2}v_{y}^{2}}{n\sigma^{2}v_{ yy}+2\alpha v_{y}}. \tag{3.6}\]
We use the ansatz \(v(t,y)=-f(t)\exp(-\frac{1}{\delta_{i}}y)\) for some continuously differentiable function \(f:[0,T]\to\mathbb{R}\) satisfying \(f(T)=1\). Then (3.6) simplifies to the ODE
\[f^{\prime}(t)-\rho f(t)=0,\,f(T)=1,\]
where \(\rho=\frac{1}{2}\frac{n(\widetilde{\mu}^{-i})^{2}}{n\sigma^{2}-2\alpha\delta_{i}}\). The unique solution to this ODE is given by \(f(t)=e^{-\rho(T-t)},\,t\in[0,T]\). Finally, \(v(t,y)=-\exp(-\rho(T-t)-\frac{1}{\delta_{i}}y),\,t\in[0,T],\,y\in\mathbb{R},\) solves the HJB equation. Inserting \(v\) into (3.5) yields
\[\varphi^{i,*}\equiv\frac{n\delta_{i}\widetilde{\mu}^{-i}}{n\sigma^{2}-2\delta _{i}\alpha}.\]
A standard verification theorem (see for example [6], pp.280-282, [32], [18] for similar versions) concludes our proof.
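The computation can also be double-checked symbolically. The following sketch (assuming SymPy; \(\widetilde{\mu}^{-i}\) is treated as a constant symbol \(\mu\), and the index \(i\) is suppressed) verifies that the candidate value function solves (3.6) and that the maximizer (3.5) reduces to the stated strategy.

```python
import sympy as sp

t, y, T = sp.symbols('t y T')
mu, sigma, alpha, delta, n = sp.symbols('mu sigma alpha delta n', positive=True)

rho = sp.Rational(1, 2) * n * mu**2 / (n * sigma**2 - 2 * delta * alpha)
v = -sp.exp(-rho * (T - t) - y / delta)          # candidate value function
v_t, v_y, v_yy = sp.diff(v, t), sp.diff(v, y), sp.diff(v, y, 2)

# HJB equation (3.6): v_t - (1/2) n mu^2 v_y^2 / (n sigma^2 v_yy + 2 alpha v_y) = 0
residual = v_t - sp.Rational(1, 2) * n * mu**2 * v_y**2 / (n * sigma**2 * v_yy + 2 * alpha * v_y)
print(sp.simplify(residual))                     # -> 0

# maximizer (3.5) should equal n * delta * mu / (n sigma^2 - 2 delta alpha)
a_star = -n * mu * v_y / (n * sigma**2 * v_yy + 2 * alpha * v_y)
print(sp.simplify(a_star - n * delta * mu / (n * sigma**2 - 2 * delta * alpha)))   # -> 0
```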
Lemma 3.2 together with (3.3) introduces a system of linear equations whose solutions constitute Nash equilibrium strategies. The next theorem displays the unique solution to this system and thus, the unique constant Nash equilibrium.
**Theorem 3.3**.: _Assume that \(n\sigma^{2}-2\delta_{j}\alpha>0\) for all \(j=1,\ldots,n.\) If \(1-\hat{\theta}\neq\sum_{j=1}^{n}\frac{n\alpha\delta_{j}}{(n+\theta_{j})(n \sigma^{2}-\delta_{j}\alpha)},\) the unique constant Nash equilibrium to (3.1) is given by_
\[\pi^{i,*}=\frac{n}{n+\theta_{i}}\frac{n\delta_{i}\mu}{n\sigma^{2}-\delta_{i} \alpha}+\left(\frac{\theta_{i}}{n+\theta_{i}}+\frac{n\alpha\delta_{i}}{(n+ \theta_{i})(n\sigma^{2}-\delta_{i}\alpha)}\right)\cdot\frac{\sum_{j=1}^{n} \frac{n}{n+\theta_{j}}\frac{n\delta_{j}}{(n\sigma^{2}-\delta_{j}\alpha)}\cdot \mu}{1-\hat{\theta}-\sum_{j=1}^{n}\frac{n\alpha\delta_{j}}{(n+\theta_{j})(n \sigma^{2}-\delta_{j}\alpha)}},\]
\(i=1,\ldots,n,\) _where \(\hat{\theta}=\sum_{j=1}^{n}\frac{\theta_{j}}{n+\theta_{j}}.\) If \(1-\hat{\theta}=\sum_{j=1}^{n}\frac{n\alpha\delta_{j}}{(n+\theta_{j})(n\sigma^{ 2}-\delta_{j}\alpha)},\) there is no constant Nash equilibrium._
Proof.: Using Lemma 3.2, the unique optimal solution to the auxiliary problem (3.2) is given by
\[\varphi^{i,*}=\frac{n\delta_{i}\widetilde{\mu}^{-i}}{n\sigma^{2}-2\delta_{i} \alpha}.\]
Note that this is obviously a constant and deterministic strategy. Moreover, we defined \(\varphi^{i,*}=\pi^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\) and \(\widetilde{\mu}^{-i}=\mu+\frac{n+\theta_{i}}{n^{2}}\alpha\sum_{j\neq i}\pi^{j}.\) Hence, we need to solve the following system of linear equations to determine the unique constant Nash equilibrium
\[\pi^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}=\frac{n\delta_{i}}{n\sigma^{ 2}-2\delta_{i}\alpha}\mu+\frac{\delta_{i}\alpha}{n\sigma^{2}-2\delta_{i} \alpha}\frac{n+\theta_{i}}{n}\sum_{j\neq i}\pi^{j}. \tag{3.7}\]
Rearranging (3.7) and adding \(\pi^{i}\) in the sum yields
\[\pi^{i}=\frac{n}{n+\theta_{i}}\frac{n\delta_{i}}{n\sigma^{2}-\delta_{i} \alpha}\mu+\left(\frac{\theta_{i}}{n+\theta_{i}}+\frac{n\delta_{i}\alpha}{(n+ \theta_{i})(n\sigma^{2}-\delta_{i}\alpha)}\right)\sum_{j=1}^{n}\pi^{j}. \tag{3.8}\]
Summing over all \(i\in\{1,\ldots,n\}\) on both sides then yields
\[\sum_{j=1}^{n}\pi^{j}=\sum_{j=1}^{n}\frac{n}{n+\theta_{j}}\frac{n\delta_{j}}{n \sigma^{2}-\delta_{j}\alpha}\mu+\left(\hat{\theta}+\sum_{j=1}^{n}\frac{n}{n+ \theta_{j}}\frac{\delta_{j}\alpha}{n\sigma^{2}-\delta_{j}\alpha}\right)\sum _{j=1}^{n}\pi^{j}.\]
Solving for \(\sum_{j=1}^{n}\pi^{j}\) (which is possible if and only if \(\sum_{j=1}^{n}\frac{n\alpha\delta_{j}}{(n+\theta_{j})(n\sigma^{2}-\delta_{j} \alpha)}\neq 1-\hat{\theta}\)) yields
\[\sum_{j=1}^{n}\pi^{j}=\frac{\sum_{j=1}^{n}\frac{n}{n+\theta_{j}}\frac{n\delta_{ j}}{(n\sigma^{2}-\delta_{j}\alpha)}\cdot\mu}{1-\hat{\theta}-\sum_{j=1}^{n}\frac{n \alpha\delta_{j}}{(n+\theta_{j})(n\sigma^{2}-\delta_{j}\alpha)}}. \tag{3.9}\]
Finally, we can insert (3.9) into (3.8) to obtain the claimed representation of \(\pi^{i,*}\) which concludes our proof.
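The closed-form equilibrium is easy to evaluate and to cross-check against the first-order conditions (3.7). A small sketch (assuming NumPy; the parameter values are illustrative only, and the function names are ours):

```python
import numpy as np

def nash_equilibrium(mu, sigma, alpha, delta, theta):
    """Constant Nash equilibrium of Theorem 3.3 (CARA utilities, linear price impact)."""
    delta, theta = np.asarray(delta, float), np.asarray(theta, float)
    n = len(delta)
    assert np.all(n * sigma**2 - 2 * delta * alpha > 0)
    theta_hat = np.sum(theta / (n + theta))
    c = n * alpha * delta / ((n + theta) * (n * sigma**2 - delta * alpha))
    S = np.sum(n / (n + theta) * n * delta / (n * sigma**2 - delta * alpha)) * mu \
        / (1 - theta_hat - np.sum(c))
    return n / (n + theta) * n * delta * mu / (n * sigma**2 - delta * alpha) \
        + (theta / (n + theta) + c) * S

def best_response_residual(pi, mu, sigma, alpha, delta, theta):
    """Residuals of the best-response equations (3.7); they vanish at a Nash equilibrium."""
    pi = np.asarray(pi, float)
    delta, theta = np.asarray(delta, float), np.asarray(theta, float)
    n = len(pi)
    others = pi.sum() - pi                         # sum_{j != i} pi^j
    lhs = pi - theta / n * others
    rhs = (n * delta * mu / (n * sigma**2 - 2 * delta * alpha)
           + delta * alpha / (n * sigma**2 - 2 * delta * alpha) * (n + theta) / n * others)
    return lhs - rhs

mu, sigma, alpha = 0.05, 0.2, 0.01                 # illustrative parameters
delta = [1.0, 2.0, 4.0]
theta = [0.5, 0.3, 0.8]

pi_star = nash_equilibrium(mu, sigma, alpha, delta, theta)
print("pi* =", pi_star)
print("max |residual| =",
      np.max(np.abs(best_response_residual(pi_star, mu, sigma, alpha, delta, theta))))
```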
**Remark 3.4**.: Theorem 3.3 contains the two special cases \(\alpha=0\) (no price impact) and \(\theta_{i}=0\) for all \(i=1,\ldots,n\) (no relative concerns in the objective function). For \(\alpha=0\) we obtain
\[\pi^{i,*}=\left(\frac{n\delta_{i}}{n+\theta_{i}}+\frac{\theta_{i}}{(1-\hat{ \theta})(n+\theta_{i})}\sum_{j=1}^{n}\frac{n\delta_{j}}{n+\theta_{j}}\right) \cdot\frac{\mu}{\sigma^{2}}>0\]
for \(i=1,\ldots,n\) which coincides (as expected) with the Nash equilibrium in [4] (Remark 4.1). If \(\theta_{i}=0\) for all \(i=1,\ldots,n\), we deduce
\[\pi^{i,*}=\frac{n\delta_{i}\mu}{n\sigma^{2}-\delta_{i}\alpha}+\frac{\alpha \delta_{i}}{n\sigma^{2}-\delta_{i}\alpha}\cdot\frac{n\sum_{j=1}^{n}\frac{ \delta_{j}}{n\sigma^{2}-\delta_{j}\alpha}}{1-\alpha\sum_{j=1}^{n}\frac{ \delta_{j}}{n\sigma^{2}-\delta_{j}\alpha}}\cdot\mu,\]
\(i=1,\ldots,n\).
### Influence of the parameter \(\alpha\)
We consider two different features of our solution that are affected by the choice of \(\alpha\) which regulates the price impact. Throughout this subsection, we assume that \(\alpha\) satisfies the conditions of Theorem 3.3, i.e. \(\alpha<\frac{n\sigma^{2}}{2\delta_{\max}}\eqqcolon\alpha_{\max}\), where \(\delta_{\max}\coloneqq\max\{\delta_{1},\ldots,\delta_{n}\}\) and
\[\hat{s}(\alpha)\coloneqq\sum_{j=1}^{n}\frac{n\alpha\delta_{j}}{(n+\theta_{j}) (n\sigma^{2}-\delta_{j}\alpha)}+\hat{\theta}\neq 1.\]
Indeed, it is possible to show that there exists a unique \(\alpha_{0}\in(0,\alpha_{\max})\) such that \(\hat{s}(\alpha_{0})=1\). This can be seen as follows: First \(\alpha\mapsto\hat{s}(\alpha)\) is strictly increasing and continuous on \((-\infty,\alpha_{\max}]\). Further, we have \(\hat{s}(0)=\hat{\theta}<1\) and \(\hat{s}(\alpha_{\max})>1.\) Thus, the intermediate value theorem implies the statement. We have to exclude this \(\alpha_{0}\) from our considerations. The specific value of \(\alpha_{0}\) does not depend on the type of the agent. It is the same for all investors.
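This argument also suggests a simple numerical procedure: since \(\hat{s}\) is strictly increasing with \(\hat{s}(0)<1<\hat{s}(\alpha_{\max})\), the root \(\alpha_{0}\) can be bracketed on \((0,\alpha_{\max})\) and found by bisection. A small sketch (assuming NumPy; the parameter values are illustrative only):

```python
import numpy as np

def s_hat(alpha, sigma, delta, theta):
    """hat s(alpha) = sum_j n*alpha*delta_j / ((n+theta_j)(n*sigma^2 - delta_j*alpha)) + hat theta."""
    delta, theta = np.asarray(delta, float), np.asarray(theta, float)
    n = len(delta)
    return np.sum(n * alpha * delta / ((n + theta) * (n * sigma**2 - delta * alpha))
                  + theta / (n + theta))

def alpha_zero(sigma, delta, theta, tol=1e-12):
    """Unique alpha_0 in (0, alpha_max) with hat s(alpha_0) = 1, found by bisection."""
    delta = np.asarray(delta, float)
    n = len(delta)
    lo, hi = 0.0, n * sigma**2 / (2 * np.max(delta))      # alpha_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if s_hat(mid, sigma, delta, theta) < 1 else (lo, mid)
    return 0.5 * (lo + hi)

sigma, delta, theta = 0.2, [1.0, 2.0, 4.0], [0.5, 0.3, 0.8]   # illustrative values
a0 = alpha_zero(sigma, delta, theta)
print("alpha_0 =", a0, " hat s(alpha_0) =", s_hat(a0, sigma, delta, theta))
```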
First, we consider the impact of the choice of \(\alpha\) on the optimal strategy of agent \(i\), i.e. the \(i\)-th entry \(\pi^{i,*}\) of the Nash equilibrium. It can easily be shown that \(\pi^{i,*}>0\), \(i=1,\ldots,n\), if and only if \(\alpha<\alpha_{0}\). Moreover, we can compute the derivative of \(\pi^{i,*}\) with respect to \(\alpha\) and deduce that it is strictly positive on \((-\infty,\alpha_{\max})\setminus\{\alpha_{0}\}.\) Note however that \(\pi^{i,*}\) is only piecewise increasing on \((-\infty,\alpha_{0})\) and \((\alpha_{0},\alpha_{\max})\) due to the discontinuity located at \(\alpha_{0}\).
The second property of \(\alpha\) we want to consider is the influence on the equilibrium stock price \((S_{t}^{*})_{t\in[0,T]}\) that is obtained by inserting the Nash equilibrium from Theorem 3.3 into the stock price dynamic. At first, it is not clear whether \(S_{t}^{*}\) is smaller or larger than the stock price with drift \(\mu\) and volatility \(\sigma\) without the \(n\) investors' impact. It obviously suffices to consider the drift of \(S_{t}^{*}\) compared to \(\mu\) since the volatility does not depend on the \(n\) agents' investments.
From the proof of Theorem 3.3, we know that the arithmetic mean of the components of the Nash equilibrium is given by
\[\frac{1}{n}\sum_{j=1}^{n}\pi^{j,*}=\frac{(\hat{s}(\alpha)-\hat{\theta})\cdot\mu/ \alpha}{1-\hat{s}(\alpha)}.\]
Therefore, the drift of \(S_{t}^{*}\) is equal to
\[\mu+\frac{\alpha}{n}\sum_{j=1}^{n}\pi^{j,*}=\mu\cdot\frac{\hat{s}(\alpha)-\hat {\theta}}{1-\hat{s}(\alpha)}.\]
Since the constant \(\frac{\dot{s}(\alpha)-\hat{\theta}}{1-\dot{s}(\alpha)}\) is strictly positive if and only if \(\alpha\in(0,\alpha_{0})\), we deduce that the drift of \(S_{t}^{*}\) is larger (smaller) than \(\mu\) if and only if \(\alpha\in(0,\alpha_{0})\) (\(\alpha\in(-\infty,0)\cup(\alpha_{0},\alpha_{\max})\)). Moreover, since we already saw that \(\sum_{j=1}^{n}\pi^{j,*}\) is piecewise increasing in terms of \(\alpha\), we infer that \(S_{t}^{*}\) is also piecewise increasing in terms of \(\alpha\) on \((-\infty,\alpha_{0})\) and \((\alpha_{0},\alpha_{\max})\).
Figure 1 shows the behavior of \(\pi^{1,*}\) (cf. Theorem 3.3) in terms of \(\alpha\) for the two different risk aversion parameters \(\delta=1\) and \(\delta=4\). The vertical lines (dotted) show the discontinuity \(\alpha_{0}\) for the different parameter choices. The gray horizontal line (dashed) marks the value zero while the orange and blue horizontal lines (dashed) display the optimal solution to the classical problem of maximizing expected terminal wealth under CARA utility without price impact and relative concerns given by \(\delta\mu\sigma^{-2}\) (Merton ratio). There are two ways the agents may try to influence the stock price to their advantage. By buying the stock they may jointly increase the stock value and thus raise their utility or by jointly short-selling the stock and thus decrease its value. Our analysis shows that in case of a small price impact the agents go for the first option and in case of a larger price impact they go for the latter option. Of course, this is only true under the exponential utility where short-selling is no problem. Under an increasingly negative price impact, the investors engage less in the financial market.
## 4. Optimization under CARA utility with nonlinear price impact
At the beginning of Section 2, we assumed that the price impact of the \(n\) investors in our financial market is given as a linear function of the arithmetic mean of the \(n\) investors' strategies. While the use of the arithmetic mean seems intuitive and reasonable since we assumed that investors are 'small', one could ask whether using a different function than a linear one would lead to a different optimization problem and hence also a different optimal investment.
In Theorem 3.3 we were able to find an explicit solution to the associated multi-objective portfolio optimization problem under exponential utility (provided the parameters are chosen accordingly). The proof relies heavily on the linearity of the price impact, so we will not be able to give an explicit solution to the resulting optimization problem in general. However, we will show that a function \(g\) that grows superlinearly yields a problem without a finite optimal solution, while a function \(g\) that grows sublinearly yields a finite optimal solution. If \(g\) is a linear function, it depends on the parameter choices whether or not there exists a finite optimal solution (cf. Theorem 3.3). Since, in the linear case, the optimally invested amount is close to zero for decreasing price impact (i.e. if \(\alpha<0\), see Theorem 3.3 and Figure 1), we only consider price impact which is increasing in the order size.
More explicitly, the price impact will now be modeled by some _strictly increasing and continuous function_\(g:\mathbb{R}\to\mathbb{R}\) with \(g(0)=0\). Therefore, the stock price process will be given as the solution to the SDE
\[\mathrm{d}S_{t}=S_{t}\left(\left(\mu+g\left(\bar{\pi}_{t}\right)\right) \mathrm{d}t+\sigma\mathrm{d}W_{t}\right),\,S_{0}=1,\]
which is, of course, still just a stochastic exponential.
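For intuition, the price process can be simulated directly from its stochastic-exponential form once the agents' strategies are held constant. A minimal sketch in Python follows; the choice \(g(x)=0.1\arctan(x)\) and all parameter values are purely illustrative and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T, steps = 0.05, 0.2, 1.0, 252
pi = np.array([1.0, 0.5, -0.3, 2.0, 0.8])      # constant strategies of the n agents
pi_bar = pi.mean()                              # arithmetic mean \bar{pi}

def g(x):                                       # strictly increasing, continuous, g(0) = 0
    return 0.1 * np.arctan(x)

dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), steps)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, steps + 1)

# Stochastic exponential: S_t = exp((mu + g(pi_bar) - sigma^2/2) t + sigma W_t), S_0 = 1.
S = np.exp((mu + g(pi_bar) - 0.5 * sigma**2) * t + sigma * W)
print(f"impact-adjusted drift: {mu + g(pi_bar):.4f}, terminal price S_T = {S[-1]:.4f}")
```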
As before, we have to restrict ourselves to constant Nash equilibria. Therefore, from the view of investor \(i\), we can rewrite the expression \(g(\bar{\pi}_{t})\) in the previous SDE as follows
\[g(\bar{\pi}_{t})=g\left(\frac{1}{n}\sum_{j=1}^{n}\pi_{t}^{j}\right)=g\left( \frac{1}{n}\pi_{t}^{i}+\frac{1}{n}\sum_{j\neq i}\pi^{j}\right)\eqqcolon\widetilde {g}(\pi_{t}^{i}),\]
where \(\widetilde{g}(p)\coloneqq g\left(\frac{p}{n}+\frac{1}{n}\sum_{j\neq i}\pi^{j} \right),\,p\in\mathbb{R}\). Of course, we assumed that the strategies \(\pi^{j},\,j\neq i\), of the other investors are fixed, constant and deterministic. It also follows that \(\widetilde{g}\) is still strictly increasing and satisfies \(\widetilde{g}\left(-\sum_{j\neq i}\pi^{j}\right)=0\).
Again, strategies \(\pi^{i}\) are restricted to the set \(\mathcal{A}\) of admissible strategies. In the following, we will prove that
\[\left\{\begin{aligned} &\sup_{\pi^{i}\in\mathcal{A}}\mathbb{E} \left[-\exp\left(-\frac{1}{\delta_{i}}\left(X_{T}^{i,\pi^{i}}-\frac{\theta_{i }}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\right)\right)\right],\\ &\mathrm{s.t.}\quad\,X_{T}^{i,\pi^{i}}=x_{0}^{i}+\int_{0}^{T}\pi_ {t}^{i}\left(\left(\mu+g(\bar{\pi}_{t})\right)\mathrm{d}t+\sigma\mathrm{d}W_ {t}\right),\end{aligned}\right. \tag{4.1}\]
has an optimal solution if \(g\) grows sublinearly and there exists no optimal strategy if \(g\) grows superlinearly. Here, \(\pi^{j},\,j\neq i\), are assumed to be constant.
The following proposition summarizes the first assertion of this section, which treats the case that \(g\) grows superlinearly.
**Proposition 4.1**.: _If \(\lim_{x\to\pm\infty}\frac{g(x)}{x}=\infty\), (4.1) does not have an optimal solution._
Proof.: In order to prove that (4.1) does not have an optimal solution, we show that, even if we only consider constant strategies for agent \(i\), the supremum of the objective equals zero and is approached only as the strategy tends to \(\pm\infty\). If \(\pi^{j}\) is constant for all \(j\in\{1,\ldots,n\}\), we obtain
\[X_{T}^{i,\pi^{i}}-\frac{\theta_{i}}{n}\sum_{j\neq i}X_{T}^{j, \pi^{j}}\] \[= x_{0}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}x_{0}^{j}+\Big{(}\pi^ {i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}(\mu+g(\bar{\pi}))T+\Big{(} \pi^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}\sigma W_{T}\] \[=: y_{0}^{i}+\mu(\pi^{i})T+\sigma(\pi^{i})W_{T}.\]
Hence, for fixed \(\pi^{j}\), \(j=1,\ldots,n\), the value of the objective function in (4.1) is given by
\[\mathbb{E}\Big{[}-\exp\Big{(}-\frac{1}{\delta_{i}}\Big{(}y_{0}^{i}+ \mu(\pi^{i})T+\sigma(\pi^{i})W_{T}\Big{)}\Big{)}\Big{]}\] \[= -\exp\Big{(}-\frac{1}{\delta_{i}}y_{0}^{i}\Big{)}\cdot\exp\Big{(} -\frac{1}{\delta_{i}}\Big{(}\mu(\pi^{i})-\frac{\sigma(\pi^{i})^{2}}{2\delta_{ i}}\Big{)}T\Big{)}.\]
Thus, maximizing the objective function of (4.1) with respect to constant strategies \(\pi^{i}\) is equivalent to maximizing \(\mu(\pi^{i})-\frac{\sigma(\pi^{i})^{2}}{2\delta_{i}}.\) Reinserting the definition of \(\mu(\pi^{i})\) and \(\sigma(\pi^{i})\) yields
\[\mu(\pi^{i})-\frac{\sigma(\pi^{i})^{2}}{2\delta_{i}}\] \[= \pi^{i}g(\bar{\pi})-\frac{\sigma^{2}}{2\delta_{i}}(\pi^{i})^{2}+ \pi^{i}\Big{(}\mu+\frac{\sigma^{2}\theta_{i}}{n\delta_{i}}\sum_{j\neq i}\pi^{ j}\Big{)}-\frac{\theta_{i}}{n}g(\bar{\pi})\sum_{j\neq i}\pi^{j}-\frac{\theta_{i}}{n} \sum_{j\neq i}\pi^{j}\Big{(}\mu+\frac{\sigma^{2}\theta_{i}}{2n\delta_{i}}\sum _{j\neq i}\pi^{j}\Big{)}\]
which converges to \(\infty\) if \(\pi^{i}\) converges to \(\pm\infty\) using the assumption that \(g\) grows superlinearly. Therefore,
\[0 \geq\sup_{\pi^{i}\in\mathcal{A}}\mathbb{E}\Big{[}-\exp\Big{(}- \frac{1}{\delta_{i}}\Big{(}X_{T}^{i,\pi^{i}}-\frac{\theta_{i}}{n}\sum_{j\neq i }X_{T}^{j,\pi^{j}}\Big{)}\Big{)}\Big{]}\] \[\geq\sup_{\pi^{i}\in\mathcal{A}\atop\pi^{i}\text{ constant}}\mathbb{E} \Big{[}-\exp\Big{(}-\frac{1}{\delta_{i}}\Big{(}X_{T}^{i,\pi^{i}}-\frac{\theta_{ i}}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\Big{)}\Big{)}\Big{]}=0.\]
Hence, the supremum in (4.1) equals zero. Since the exponential of any finite argument is strictly positive, this value cannot be attained by an admissible strategy, so the problem does not have an optimal solution.
As a result, we cannot hope for a Nash equilibrium in this case. We now turn to the case of sublinear growth of \(g\), i.e. we assume that
\[\lim_{x\to\pm\infty}\frac{g(x)}{x}=0.\]
Then we can prove that there exists an optimal policy for (4.1). In order to do so, let \(a^{*}\) be a maximum point of
\[a\mapsto\Big{(}a-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}\Big{(}\mu+ \widetilde{g}(a)\Big{)}-\frac{\sigma^{2}}{2\delta_{i}}\Big{(}a-\frac{\theta_{i }}{n}\sum_{j\neq i}\pi^{j}\Big{)}^{2}. \tag{4.2}\]
Due to our assumption on \(g\), a maximum point \(a^{*}\) exists and is finite. Then we obtain the following result.
**Proposition 4.2**.: _If \(\lim_{x\to\pm\infty}\frac{g(x)}{x}=0\), an optimal strategy for (4.1) is given by \(\pi_{t}^{i}\equiv a^{*}\), where \(a^{*}\) is the maximum point from (4.2)._
Proof.: For the moment we restrict to bounded strategies \((\pi_{t}^{i})\), i.e. there exists a constant \(K>0\) such that \(|\pi_{t}^{i}|\leq K\) for all \(t\in[0,T].\) For constants \(\pi^{j}\), we obtain
\[-\frac{1}{\delta_{i}}\Big{(}X_{T}^{i,\pi^{i}}-\frac{\theta_{i}}{ n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\Big{)}\] \[= -\frac{1}{\delta_{i}}\Big{(}x_{0}^{i}-\frac{\theta_{i}}{n}\sum_{ j\neq i}x_{0}^{j}\Big{)}-\frac{1}{\delta_{i}}\Bigg{(}\int_{0}^{T}\Big{(}\pi_{t}^{i}- \frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}(\mu+g(\bar{\pi}_{t}))\text{d} t+\sigma\int_{0}^{T}\Big{(}\pi_{t}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j} \Big{)}\text{d}W_{t}\Bigg{)}\] \[-\frac{\sigma^{2}}{2\delta_{i}^{2}}\int_{0}^{T}\Big{(}\pi_{t}^{i} -\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\text{d}t+\frac{\sigma^{2 }}{2\delta_{i}^{2}}\int_{0}^{T}\Big{(}\pi_{t}^{i}-\frac{\theta_{i}}{n}\sum_{j \neq i}\pi^{j}\Big{)}^{2}\text{d}t.\]
Now define a new probability measure \(\mathbb{Q}\) by
\[\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}=\exp\Bigg{(}-\frac{\sigma^{2}}{2 \delta_{i}^{2}}\int_{0}^{T}\Big{(}\pi_{t}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i }\pi^{j}\Big{)}^{2}\mathrm{d}t-\frac{\sigma}{\delta_{i}}\int_{0}^{T}\Big{(} \pi_{t}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}\mathrm{d}W_{t} \Bigg{)}.\]
Note that this expression defines a probability density (e.g. by Novikov's condition) since \(\pi_{t}^{i}\) is bounded. Thus, we can write the (negative) objective function of (4.1) as
\[\mathbb{E}\left[\exp\left(-\frac{1}{\delta_{i}}\left(X_{T}^{i,\pi ^{i}}-\frac{\theta_{i}}{n}\sum_{j\neq i}X_{T}^{j,\pi^{j}}\right)\right)\right] =\exp\Bigg{(}-\frac{1}{\delta_{i}}\Big{(}x_{0}^{i}-\frac{\theta_{i}}{n}\sum_{ j\neq i}x_{0}^{j}\Big{)}\Bigg{)}\] \[\cdot\mathbb{E}_{\mathbb{Q}}\left[\exp\Bigg{(}-\frac{1}{\delta_{i }}\Bigg{(}\int_{0}^{T}\Big{(}\pi_{t}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i} \pi^{j}\Big{)}(\mu+g(\bar{\pi}_{t}))-\frac{\sigma^{2}}{2\delta_{i}}\Big{(}\pi_ {t}^{i}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\mathrm{d}t\Bigg{)} \Bigg{)}\right].\]
In order to minimize the expectation, we can maximize the integrand pointwise under the integral, which leads precisely to the maximization of the map (4.2). Since the maximum point \(a^{*}\) is finite, it lies in the interior of the set of bounded strategies for \(K\) large enough, so the restriction to bounded policies is not binding. Thus, the constant strategy \(\pi_{t}^{i}\equiv a^{*}\) is optimal.
Whether or not a Nash equilibrium exists in this case depends now on the precise choice of \(g\).
**Remark 4.3**.: The structure of the function (4.2) considered in the proof of Proposition 4.2 implies that there exist at least one and at most two global maxima.
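To make the sublinear case concrete, the map (4.2) can be maximised numerically from agent \(i\)'s perspective with the other agents' constant strategies held fixed. The sketch below is in Python; the sublinear choice \(g(x)=0.1\,\mathrm{sign}(x)\sqrt{|x|}\) and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (not from the paper); agent i's view, the other strategies held fixed.
n, mu, sigma, delta_i, theta_i = 5, 0.05, 0.2, 2.0, 0.3
pi_others = np.array([1.0, -0.5, 2.0, 0.8])            # constant strategies pi^j, j != i
c = theta_i / n * pi_others.sum()                       # benchmark term theta_i/n * sum_{j != i} pi^j

def g(x):                                               # strictly increasing, g(0) = 0, sublinear growth
    return 0.1 * np.sign(x) * np.sqrt(np.abs(x))

def g_tilde(a):                                         # \tilde{g}(a) = g((a + sum_{j != i} pi^j)/n)
    return g((a + pi_others.sum()) / n)

def objective(a):                                       # the map (4.2)
    return (a - c) * (mu + g_tilde(a)) - sigma**2 / (2.0 * delta_i) * (a - c) ** 2

grid = np.linspace(-300.0, 300.0, 600001)
vals = objective(grid)
a_star = grid[np.argmax(vals)]
print(f"approximate maximiser a* = {a_star:.2f}, value = {vals.max():.5f}")
# The quadratic penalty dominates the sublinear gain for large |a|, so a finite maximiser exists
# (and, as noted in Remark 4.3, there are at most two global maxima).
```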
## 5. Optimization under CRRA utility with linear price impact
In this section, we assume that agents use CRRA utility functions (power or logarithmic) to measure their preferences. Hence, we let
\[U_{i}:(0,\infty)\to\mathbb{R},\,x\mapsto\begin{cases}\left(1-\frac{1}{\delta_ {i}}\right)^{-1}x^{1-\frac{1}{\delta_{i}}},&\delta_{i}\neq 1,\\ \ln(x),&\delta_{i}=1\end{cases}\]
for some preference parameter \(\delta_{i}>0\), \(i=1,\ldots,n\). By \(\ln(\cdot)\) we denote the natural logarithm.
While using CRRA utility functions, it is mathematically more convenient to optimize the invested fraction of wealth instead of the amount or number of shares. Thus, throughout this subsection, \(\pi_{t}^{i}\), \(i=1,\ldots,n\), denotes the fraction of agent \(i\)'s wealth invested into the risky stock at some time \(t\in[0,T]\). The wealth process of agent \(i\) is therefore given as the solution to the SDE
\[\mathrm{d}X_{t}^{i,\pi^{i}}=X_{t}^{i,\pi^{i}}\pi_{t}^{i}\left((\mu+\alpha\bar {\pi}_{t})\,\mathrm{d}t+\sigma\mathrm{d}W_{t}\right),\,X_{0}^{i,\pi^{i}}=x_{0} ^{i}.\]
Similar to Section 3 in [29], we include the strategic interaction component in our problem by inserting the product of agent \(i\)'s terminal wealth and a weighted geometric mean of the other agents' terminal wealth into the expected utility criterion of the portfolio optimization problem. Therefore, the portfolio optimization problem of agent \(i\) is given by
\[\sup_{\pi^{i}\in\mathcal{A}}\mathbb{E}\left[U_{i}\left(X_{T}^{i,\pi^{i}}\cdot\Big{(}\prod_{j\neq i}X_{T}^{j,\pi^{j}}\Big{)}^{-\frac{\theta_{i}}{n}}\right)\right]. \tag{5.1}\]
In order to find an explicit solution for the Nash equilibrium we need to restrict ourselves to constant strategies. Since the reduction to some auxiliary problem containing only one instead of all \(n\) agents is not possible in this setting, we need to directly solve the best response problem in order to determine the Nash equilibrium. Then the unique constant Nash equilibrium is given in the following theorem.
**Theorem 5.1**.: _Assume that the following assumptions hold_
1. \((n+\theta_{i})\left(n\sigma^{2}-\delta_{i}\alpha\right)-n\theta_{i}\delta_{i}\sigma^{2}>0\) _for all_ \(i=1,\ldots,n\)_,_
2. \(n\sigma^{2}-2\delta_{i}\alpha>0\) _for all_ \(i=1,\ldots,n\)_,_
3. \(1-\sum_{j=1}^{n}\frac{(n-\theta_{j})\alpha\delta_{j}-n\theta_{j}(\delta_{j}-1) \sigma^{2}}{(n+\theta_{j})(n\sigma^{2}-\alpha\delta_{j})-n\theta_{j}\delta_{j }\sigma^{2}}\neq 0\)_._
_Then the unique (up to modifications) constant Nash equilibrium to (5.1) in terms of invested fractions is given by_
\[\pi^{i,*}= \frac{n^{2}\delta_{i}\mu}{(n+\theta_{i})(n\sigma^{2}-\delta_{i} \alpha)-n\theta_{i}\delta_{i}\sigma^{2}}+\frac{(n-\theta_{i})\alpha\delta_{i} -n\theta_{i}(\delta_{i}-1)\sigma^{2}}{(n+\theta_{i})(n\sigma^{2}-\delta_{i} \alpha)-n\theta_{i}\delta_{i}\sigma^{2}}\] \[\cdot\left(1-\sum_{j=1}^{n}\frac{(n-\theta_{j})\alpha\delta_{j}- n\theta_{j}(\delta_{j}-1)\sigma^{2}}{(n+\theta_{j})(n\sigma^{2}-\alpha\delta_{j} )-n\theta_{j}\delta_{j}\sigma^{2}}\right)^{-1}\sum_{j=1}^{n}\frac{n^{2}\delta_ {j}\mu}{(n+\theta_{j})(n\sigma^{2}-\delta_{j}\alpha)-n\theta_{j}\delta_{j} \sigma^{2}}.\]
Proof.: Let \(i\in\{1,\ldots,n\}\) be arbitrary but fixed and assume that the other agents use constant strategies \(\pi^{j}\), \(j\neq i\), which will also be assumed to be arbitrary but fixed. Now define the stochastic process \((Y_{t}^{-i})_{t\in[0,T]}\) by \(Y_{t}^{-i}=\prod_{j\neq i}X_{t}^{j,\pi^{j}}\), \(t\in[0,T]\).
We begin by determining the dynamics of the process \(\left((Y_{t}^{-i})^{-\frac{\theta_{i}}{n}}\right)_{t\in[0,T]}\). To simplify the calculations, we first consider the logarithm of this process. We obtain
\[\ln\left((Y_{t}^{-i})^{-\frac{\theta_{i}}{n}}\right)=-\frac{\theta_{i}}{n} \sum_{j\neq i}\ln\left(X_{t}^{j,\pi^{j}}\right) \tag{5.2}\]
for \(t\in[0,T]\). The Ito-Doeblin formula implies
\[\mathrm{d}\ln\left(X_{t}^{j,\pi^{j}}\right)=\pi^{j}((\mu+\alpha\bar{\pi}_{t}) \mathrm{d}t+\sigma\mathrm{d}W_{t})-\frac{\sigma^{2}}{2}(\pi^{j})^{2}\mathrm{d }t.\]
Hence, using (5.2),
\[\mathrm{d}\Big{(}\ln\left((Y_{t}^{-i})^{-\frac{\theta_{i}}{n}}\right)\Big{)}=- \frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}((\mu+\alpha\bar{\pi}_{t})\mathrm{d} t+\sigma\mathrm{d}W_{t})+\frac{\theta_{i}}{n}\frac{\sigma^{2}}{2}\sum_{j\neq i }(\pi^{j})^{2}\mathrm{d}t.\]
Using the Ito-Doeblin formula a second time then yields
\[\mathrm{d}\Big{(}(Y_{t}^{-i})^{-\frac{\theta_{i}}{n}}\Big{)}= \mathrm{d}\Big{(}\exp\Big{(}\ln\left((Y_{t}^{-i})^{-\frac{\theta_{i}}{n}} \right)\Big{)}\Big{)}\] \[= (Y_{t}^{-i})^{-\frac{\theta_{i}}{n}}\left(-\frac{\theta_{i}}{n} \sum_{j\neq i}\pi^{j}((\mu+\alpha\bar{\pi}_{t})\mathrm{d}t+\sigma\mathrm{d}W_{ t})+\frac{\sigma^{2}}{2}\frac{\theta_{i}}{n}\sum_{j\neq i}(\pi^{j})^{2} \mathrm{d}t+\frac{\sigma^{2}}{2}\Big{(}\frac{\theta_{i}}{n}\Big{)}^{2}\Big{(} \sum_{j\neq i}\pi^{j}\Big{)}^{2}\mathrm{d}t\right).\]
Hence, we can use partial integration to find the dynamics of the process associated to the argument of the utility function in (5.1):
\[\mathrm{d}\Big{(}X_{t}^{i,\pi^{i}}(Y_{t}^{-i})^{-\frac{\theta_{i}} {n}}\Big{)}\] \[= X_{t}^{i,\pi^{i}}\left(Y_{t}^{-i}\right)^{-\frac{\theta_{i}}{n}} \Bigg{(}-\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}((\mu+\alpha\bar{\pi}_{t}) \mathrm{d}t+\sigma\mathrm{d}W_{t})+\frac{\sigma^{2}}{2}\frac{\theta_{i}}{n} \sum_{j\neq i}(\pi^{j})^{2}\mathrm{d}t+\frac{\sigma^{2}}{2}\Big{(}\frac{\theta _{i}}{n}\Big{)}^{2}\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\mathrm{d}t\] \[+\pi_{t}^{i}((\mu+\alpha\bar{\pi}_{t})\mathrm{d}t+\sigma\mathrm{d }W_{t})-\frac{\theta_{i}}{n}\sigma^{2}\pi_{t}^{i}\sum_{j\neq i}\pi^{j}\mathrm{d }t\Bigg{)}\] \[= X_{t}^{i,\pi^{i}}\left(Y_{t}^{-i}\right)^{-\frac{\theta_{i}}{n}} \Bigg{(}\pi_{t}^{i}(\mu\mathrm{d}t+\sigma\mathrm{d}W_{t})+\frac{\alpha}{n}(\pi_{ t}^{i})^{2}\mathrm{d}t+\Big{(}\frac{\alpha}{n}-\frac{\alpha\theta_{i}}{n^{2}}- \frac{\theta_{i}}{n}\sigma^{2}\Big{)}\pi_{t}^{i}\sum_{j\neq i}\pi^{j}\mathrm{d}t\] \[+\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\frac{\theta_{i}}{n}\Big{(} \frac{\theta_{i}}{2n}\sigma^{2}-\frac{\alpha}{n}\Big{)}\mathrm{d}t-\frac{\theta_{i }}{n}\sum_{j\neq i}\pi^{j}(\mu\mathrm{d}t+\sigma\mathrm{d}W_{t})+\frac{\theta_{i }}{2n}\sigma^{2}\sum_{j\neq i}(\pi^{j})^{2}\mathrm{d}t\Bigg{)},\]
where in the last step we separated the summands depending on \(\pi^{i}\) from those that do not depend on \(\pi^{i}\). Now a simple calculation shows that we can rewrite
\[X_{t}^{i,\pi^{i}}\cdot\big{(}Y_{t}^{-i}\big{)}^{-\frac{\theta_{i}}{n}}=\widetilde {X}_{t}^{i,\pi^{i}}\cdot\big{(}\widetilde{Y}_{t}^{-i}\big{)}^{-\frac{\theta_{i} }{n}},\]
where the process \(\widetilde{Y}^{-i}\) does not depend on \(\pi^{i}\). More specifically, the dynamics of \(\widetilde{X}^{i,\pi^{i}}\) and \(\widetilde{Y}^{-i}\) are given by
\[\mathrm{d}\widetilde{X}_{t}^{i,\pi^{i}} =\widetilde{X}_{t}^{i,\pi^{i}}\pi_{t}^{i}\Bigg{(}\Big{(}\mu+\frac{ \alpha}{n}\pi_{t}^{i}+\frac{\alpha}{n}\Big{(}1-\frac{\theta_{i}}{n}\Big{)}\sum _{j\neq i}\pi^{j}\Big{)}\mathrm{d}t+\sigma\mathrm{d}W_{t}\Bigg{)},\] \[\mathrm{d}\widetilde{Y}_{t}^{-i} =\widetilde{Y}_{t}^{-i}\Bigg{(}\sum_{j\neq i}\pi^{j}\Big{(} \Big{(}\mu+\frac{\alpha}{n}\sum_{j\neq i}\pi^{j}\Big{)}\mathrm{d}t+\sigma \mathrm{d}W_{t}\Big{)}+\frac{\sigma^{2}}{2}\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^ {2}\mathrm{d}t-\frac{\sigma^{2}}{2}\sum_{j\neq i}(\pi^{j})^{2}\mathrm{d}t \Bigg{)}\]
with \(\widetilde{X}_{0}^{i,\pi^{i}}=x_{0}^{i}\), \(\widetilde{Y}_{0}^{-i}=\prod_{j\neq i}x_{0}^{j}\).
The previously introduced processes \(\widetilde{X}^{i,\pi^{i}}\) and \(\widetilde{Y}^{-i}\) simplify the derivation of the HJB-equation in this setting. In order to derive a HJB-equation we define the following value function (\(t\in[0,T]\), \(x,y\in(0,\infty)\))
\[v(t,x,y)\coloneqq\sup_{\pi^{i}\in\mathcal{A}}\mathbb{E}\left[\frac{\delta_{i}} {\delta_{i}-1}\left(\widetilde{X}_{T}^{i,\pi^{i}}\left(\widetilde{Y}_{T}^{-i} \right)^{-\frac{\theta_{i}}{n}}\right)^{\frac{\delta_{i}-1}{\delta_{i}}} \Bigg{|}\widetilde{X}_{t}^{i,\pi^{i}}=x,\,\widetilde{Y}_{t}^{-i}=y\right].\]
We can derive a HJB equation using classical arguments (see e.g. [32], [6], [18]) and obtain
\[0= v_{t}+yv_{y}\Bigg{\{}\sum_{j\neq i}\pi^{j}\Big{(}\mu+\frac{\alpha}{n} \sum_{j\neq i}\pi^{j}\Big{)}+\frac{\sigma^{2}}{2}\Big{(}\sum_{j\neq i}\pi^{j} \Big{)}^{2}-\frac{\sigma^{2}}{2}\sum_{j=1}^{n}(\pi^{j})^{2}\Bigg{\}}+\frac{ \sigma^{2}}{2}y^{2}v_{yy}\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\] \[+\sup_{\pi^{i}\in\mathbb{R}}\Bigg{\{}xv_{x}\pi^{i}\mu+\left(\frac{ \alpha}{n}\left(1-\frac{\theta_{i}}{n}\right)xv_{x}+\sigma^{2}xyv_{xy}\right) \pi^{i}\sum_{j\neq i}\pi^{j}+\Big{(}\frac{\alpha}{n}xv_{x}+\frac{\sigma^{2}}{2 }x^{2}v_{xx}\Big{)}(\pi^{i})^{2}\Bigg{\}},\]
where we omitted the arguments of \(v\) and its derivatives for notational convenience. The supremum is attained at
\[\pi^{i,*}=-\frac{xv_{x}\mu+\Big{(}\frac{\alpha}{n}\Big{(}1-\frac{\theta_{i}}{ n}\Big{)}xv_{x}+\sigma^{2}xyv_{xy}\Big{)}\sum_{j\neq i}\pi^{j}}{2\Big{(}\frac{ \alpha}{n}xv_{x}+\frac{\sigma^{2}}{2}x^{2}v_{xx}\Big{)}},\]
which reduces the HJB equation to the PDE
\[0= v_{t}+yv_{y}\Bigg{\{}\sum_{j\neq i}\pi^{j}\Big{(}\mu+\frac{\alpha}{n} \sum_{j\neq i}\pi^{j}\Big{)}+\frac{\sigma^{2}}{2}\Big{(}\sum_{j\neq i}\pi^{j} \Big{)}^{2}-\frac{\sigma^{2}}{2}\sum_{j=1}^{n}(\pi^{j})^{2}\Bigg{\}}+\frac{ \sigma^{2}}{2}y^{2}v_{yy}\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}\] \[-\frac{\Big{(}xv_{x}\mu+\Big{(}\frac{\alpha}{n}\Big{(}1-\frac{ \theta_{i}}{n}\Big{)}xv_{x}+\sigma^{2}xyv_{xy}\Big{)}\sum_{j\neq i}\pi^{j}\Big{)} ^{2}}{4\Big{(}\frac{\alpha}{n}xv_{x}+\frac{\sigma^{2}}{2}x^{2}v_{xx}\Big{)}}\]
with terminal condition
\[v(T,x,y)=\frac{\delta_{i}}{\delta_{i}-1}\Big{(}xy^{-\frac{\theta_{i}}{n}} \Big{)}^{\frac{\delta_{i}-1}{\delta_{i}}},\,x,y>0.\]
For the solution, we make the following ansatz for \(v\)
\[v(t,x,y)=f(t)\frac{\delta_{i}}{\delta_{i}-1}\Big{(}xy^{-\frac{\theta_{i}}{n}} \Big{)}^{\frac{\delta_{i}-1}{\delta_{i}}}\]
for some continuously differentiable function \(f:[0,T]\to(0,\infty)\) with \(f(T)=1\). Hence, inserting the ansatz for \(v\) reduces the HJB equation to the ODE
\[0=f^{\prime}(t)+\rho f(t) \tag{5.3}\]
with terminal condition \(f(T)=1\), where we defined the constant
\[\rho= -\frac{\theta_{i}}{n}\frac{\delta_{i}-1}{\delta_{i}}\Bigg{(}\sum_ {j\neq i}\pi^{j}\Big{(}\mu+\frac{\alpha}{n}\sum_{j\neq i}\pi^{j}\Big{)}+\frac{ \sigma^{2}}{2}\Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}-\frac{\sigma^{2}}{2}\sum _{j=1}^{n}(\pi^{j})^{2}\Bigg{)}\] \[+\frac{\sigma^{2}}{2}\frac{\theta_{i}}{n}\frac{\delta_{i}-1}{ \delta_{i}}\Big{(}1+\frac{\theta_{i}}{n}\frac{\delta_{i}-1}{\delta_{i}}\Big{)} \Big{(}\sum_{j\neq i}\pi^{j}\Big{)}^{2}-\frac{\Big{(}n\delta_{i}\mu+\Big{(} \alpha\delta_{i}\Big{(}1-\frac{\theta_{i}}{n}\Big{)}-\sigma^{2}\theta_{i}( \delta_{i}-1)\Big{)}\sum_{j\neq i}\pi^{j}\Big{)}^{2}}{4\Big{(}n\sigma^{2}-2 \alpha\delta_{i}\Big{)}^{2}}.\]
The unique solution to (5.3) is given by
\[f(t)=e^{\rho(T-t)},\,t\in[0,T].\]
Inserting the solution \(v\) of the HJB equation into the maximizer \(\pi^{i,*}\) yields
\[\pi^{i,*}=\frac{n\delta_{i}\mu+\Big{(}\alpha\delta_{i}\Big{(}1-\frac{\theta_{i }}{n}\Big{)}-\sigma^{2}\theta_{i}(\delta_{i}-1)\Big{)}\sum_{j\neq i}\pi^{j}}{n \sigma^{2}-2\alpha\delta_{i}}. \tag{5.4}\]
Application of a standard verification theorem (see for example [32], [18], [6] for similar arguments) implies that \(\pi^{i,*}\) is the unique solution to the best response problem. Moreover, since \(\pi^{j}\) were assumed to be constant, \(\pi^{i,*}\) is constant, as well. To conclude the proof, we need to solve the system of linear equations defined by (5.4) for \(i=1,\dots,n\). By adding an appropriate multiple of \(\pi^{i}\) on both sides and simplifying the equation, we obtain
\[\pi^{i}=\frac{n\delta_{i}\mu}{(n+\theta_{i})\Big{(}\sigma^{2}-\frac{\delta_{i }\alpha}{n}\Big{)}-\sigma^{2}\theta_{i}\delta_{i}}+\frac{\alpha\delta_{i} \Big{(}1-\frac{\theta_{i}}{n}\Big{)}-\sigma^{2}\theta_{i}(\delta_{i}-1)}{(n+ \theta_{i})\Big{(}\sigma^{2}-\frac{\delta_{i}\alpha}{n}\Big{)}-\sigma^{2} \theta_{i}\delta_{i}}\sum_{j=1}^{n}\pi^{j}. \tag{5.5}\]
Summing over all \(i\in\{1,\dots,n\}\) on both sides and solving for \(\sum_{j=1}^{n}\pi^{j}\) then yields
\[\sum_{j=1}^{n}\pi^{j}=\Bigg{(}1-\sum_{j=1}^{n}\frac{\alpha\delta_{j}\Big{(}1- \frac{\theta_{j}}{n}\Big{)}-\sigma^{2}\theta_{j}(\delta_{j}-1)}{(n+\theta_{j} )\Big{(}\sigma^{2}-\frac{\delta_{j}\alpha}{n}\Big{)}-\sigma^{2}\theta_{j} \delta_{j}}\Bigg{)}^{-1}\sum_{j=1}^{n}\frac{n\delta_{j}\mu}{(n+\theta_{j}) \Big{(}\sigma^{2}-\frac{\delta_{j}\alpha}{n}\Big{)}-\sigma^{2}\theta_{j} \delta_{j}} \tag{5.6}\]
Finally, inserting (5.6) into (5.5) yields the unique constant Nash equilibrium given by (\(i=1,\dots,n\))
\[\pi^{i,*}= \frac{n\delta_{i}\mu}{(n+\theta_{i})\Big{(}\sigma^{2}-\frac{ \delta_{i}\alpha}{n}\Big{)}-\sigma^{2}\theta_{i}\delta_{i}}+\frac{\alpha\delta _{i}\Big{(}1-\frac{\theta_{i}}{n}\Big{)}-\sigma^{2}\theta_{i}(\delta_{i}-1)}{ (n+\theta_{i})\Big{(}\sigma^{2}-\frac{\delta_{i}\alpha}{n}\Big{)}-\sigma^{2} \theta_{i}\delta_{i}}\] \[\cdot\Bigg{(}1-\sum_{j=1}^{n}\frac{\alpha\delta_{j}\Big{(}1-\frac {\theta_{j}}{n}\Big{)}-\sigma^{2}\theta_{j}(\delta_{j}-1)}{(n+\theta_{j}) \Big{(}\sigma^{2}-\frac{\delta_{j}\alpha}{n}\Big{)}-\sigma^{2}\theta_{j} \delta_{j}}\Bigg{)}^{-1}\sum_{j=1}^{n}\frac{n\delta_{j}\mu}{(n+\theta_{j}) \Big{(}\sigma^{2}-\frac{\delta_{j}\alpha}{n}\Big{)}-\sigma^{2}\theta_{j} \delta_{j}}.\]
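As a quick numerical sanity check of this closed-form expression, one can verify that it solves the best-response equations (5.4) for every agent. The sketch below is in Python with illustrative parameter values chosen to satisfy assumptions (1)-(3) of Theorem 5.1; none of the numbers are taken from the paper:

```python
import numpy as np

# Illustrative parameters satisfying assumptions (1)-(3) of Theorem 5.1 (not from the paper).
n, mu, sigma, alpha = 4, 0.05, 0.2, 0.02
delta = np.array([0.8, 1.5, 2.0, 1.2])
theta = np.array([0.3, 0.5, 0.2, 0.4])

D = (n + theta) * (n * sigma**2 - delta * alpha) - n * theta * delta * sigma**2   # common denominators
A = n**2 * delta * mu / D                                                          # first summand of pi^{i,*}
B = ((n - theta) * alpha * delta - n * theta * (delta - 1) * sigma**2) / D         # coefficient of sum_j pi^j

pi_star = A + B * A.sum() / (1.0 - B.sum())          # the Nash equilibrium of Theorem 5.1

# Best response (5.4): pi^i = [n delta_i mu + (alpha delta_i (1 - theta_i/n)
#                              - sigma^2 theta_i (delta_i - 1)) * sum_{j != i} pi^j] / (n sigma^2 - 2 alpha delta_i)
sum_others = pi_star.sum() - pi_star
best_resp = (n * delta * mu
             + (alpha * delta * (1.0 - theta / n) - sigma**2 * theta * (delta - 1.0)) * sum_others) \
            / (n * sigma**2 - 2.0 * alpha * delta)

print(np.allclose(pi_star, best_resp))               # True: every component is a best response
```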
**Remark 5.2**.: Similar to Remark 3.4, Theorem 5.1 contains the special cases \(\alpha=0\) and \(\theta_{i}=0\), \(i=1,\dots,n\). For \(\alpha=0\) (no price impact), we deduce
\[\pi^{i,*}=\Bigg{(}\frac{n\delta_{i}}{n+\theta_{i}(1-\delta_{i})}+\frac{\theta_{ i}(1-\delta_{i})}{n+\theta_{i}(1-\delta_{i})}\cdot\frac{\sum_{j=1}^{n}\frac{n \delta_{j}}{n+\theta_{j}(1-\delta_{j})}}{1-\sum_{j=1}^{n}\frac{\theta_{j}(1- \delta_{j})}{n+\theta_{j}(1-\delta_{j})}}\Bigg{)}\cdot\frac{\mu}{\sigma^{2}},\]
\(i=1,\ldots,n\). In the special case without relative concerns inside the objective function, we obtain
\[\pi^{i,*}=\frac{n\delta_{i}\mu}{n\sigma^{2}-\delta_{i}\alpha}+\frac{\alpha \delta_{i}}{n\sigma^{2}-\delta_{i}\alpha}\cdot\frac{\sum_{j=1}^{n}\frac{n \delta_{j}\mu}{n\sigma^{2}-\alpha\delta_{j}}}{1-\sum_{j=1}^{n}\frac{\alpha \delta_{j}}{n\sigma^{2}-\alpha\delta_{j}}},\]
\(i=1,\ldots,n.\) A comparison with Remark 3.4 shows that the Nash equilibria in the special case of \(\theta_{i}=0\) for all \(i=1,\ldots,n\) are actually the same, although \(\pi^{i,*}\) represents the invested amount for exponential and the invested fraction for power utility.
## 6. Conclusion
In this paper, we derive Nash equilibria for agents with relative performance measures in financial markets with price impact. We show that as long as the price impact grows at most linearly, the individual optimization problems are well-posed (in the linear case under suitable parameter restrictions). Whereas without price impact the agents would always invest a positive amount in the stock in our model, the situation changes dramatically once the price impact exceeds a certain threshold: the agents then massively short-sell the stock and try to take advantage of falling prices. Thus, a larger price impact leads to more aggressive behavior of the agents in our model.
_Acknowledgement:_ The authors would like to thank Dirk Becherer and Johannes Muhle-Karbe for helpful discussions and hints to the literature.
_Statements and Declarations:_ The authors have no relevant financial or non-financial interests to disclose.
|
2309.04513 | Old and new anomalies in charm | The recent LHCb determination of the direct CP asymmetries in the decays $D^0
\to K^+ K^-, \pi^+ \pi^-$ hints at a sizeable breaking of two approximate
symmetries of the SM: CP and U-spin. We aim at explaining the data with BSM
physics and use the framework of flavorful $Z^\prime$ models. Interestingly,
experimental and theoretical constraints very much narrow down the shape of
viable models: Viable, anomaly-free models are electron- and muon-phobic and
feature a light $Z^\prime$ of 10-20 GeV coupling only to right-handed fermions.
The $Z^\prime$ can be searched for in low mass dijets or at the LHC as well as
dark photon searches. A light $Z^\prime$ of $\sim$ 3 GeV or $\sim$ 5-7 GeV can
moreover resolve the longstanding discrepancy in the $J/\psi, \psi^\prime$
branching ratios with pion form factors from fits to $e^+ e^- \to \pi^+ \pi^-$
data, and simultaneously explain the charm CP asymmetries. Smoking gun
signatures for this scenario are $\Upsilon$ and charmonium decays into pions,
taus or invisbles. | Rigo Bause, Hector Gisbert, Gudrun Hiller, Tim Höhne, Daniel F. Litim, Tom Steudtner | 2023-09-08T16:19:11Z | http://arxiv.org/abs/2309.04513v1 | # Old and new anomalies in charm
###### Abstract:
The recent LHCb determination of the direct \(CP\) asymmetries in the decays \(D^{0}\to K^{+}K^{-},\pi^{+}\pi^{-}\) hints at a sizeable breaking of two approximate symmetries of the SM: \(CP\) and U-spin. We aim at explaining the data with BSM physics and use the framework of flavorful \(Z^{\prime}\) models. Interestingly, experimental and theoretical constraints very much narrow down the shape of viable models: Viable, anomaly-free models are electron- and muon-phobic and feature a light \(Z^{\prime}\) of 10-20 GeV coupling only to right-handed fermions. The \(Z^{\prime}\) can be searched for in low mass dijets or at the LHC as well as dark photon searches. A light \(Z^{\prime}\) of \(\sim\) 3 GeV or \(\sim\) 5-7 GeV can moreover resolve the longstanding discrepancy in the \(J/\psi,\psi^{\prime}\) branching ratios with pion form factors from fits to \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) data, and simultaneously explain the charm \(CP\) asymmetries. Smoking gun signatures for this scenario are Y and charmonium decays into pions, taus or invisibles.
Report number: DO-TH 23/14
## 1 Introduction
Recently, LHCb determined the direct \(CP\) asymmetries in \(D^{0}\to\pi^{+}\pi^{-}\), \(K^{+}K^{-}\) decays to [2]
\[a_{K^{+}K^{-}}^{d}=(7.7\pm 5.7)\cdot 10^{-4},\qquad a_{\pi^{+}\pi^{-}}^{d}=(23.2 \pm 6.1)\cdot 10^{-4}. \tag{1}\]
These results are puzzling for two reasons. Firstly, a SM explanation of \(a_{\pi^{+}\pi^{-}}^{d}\) requires higher-order contributions \(h\) to be enhanced over the tree-level amplitude \(t\) by \(\frac{h}{t}\sim 2\). This is significantly larger than the estimations from [3, 4]. Secondly, the fit implies a \(2.7\sigma\) violation of the approximate SM U-spin symmetry [5], that is \(a_{\pi^{+}\pi^{-}}^{d}=-a_{K^{+}K^{-}}^{d}\), which is a factor \(\sim\!4\) larger than the naively expected U-spin breaking in the SM of \(\mathcal{O}\left(\frac{m_{s}-m_{d}}{\Lambda_{QCD}}\right)\sim 30\%\), see Fig. 2. This constitutes the U-spin-\(CP\) anomaly in charm, which we aim to explain with a flavorful \(Z^{\prime}\) model [1].
## 2 Explaining the anomaly with a flavorful \(Z^{\prime}\)
The \(Z^{\prime}\) contributes to the \(CP\) asymmetries in \(D^{0}\to\pi^{+}\pi^{-}\), \(K^{+}K^{-}\) decays via Fig. 2 as
\[a_{\pi^{+}\pi^{-},K^{+}K^{-}}^{d}=\frac{g_{4}^{2}}{M_{Z^{\prime}}^{2}}\Delta \widetilde{F}_{R}\left[c_{\pi,K}F_{Q_{1,2}}+d_{\pi,K}F_{d_{1,2}}\right] \tag{2}\]
where \(g_{4}\) and \(M_{Z}^{\prime}\) are the \(U(1)^{\prime}\) gauge coupling and \(Z^{\prime}\) mass, respectively, and \(c_{\pi,K},d_{\pi,K}\) are hadronic parameters. Moreover, \(\Delta\widetilde{F}_{R}=\sin\theta_{u}\cos\theta_{u}(F_{u_{2}}-F_{u_{1}})\) contains the right-handed \(c\)-\(u\) mixing angle \(\theta_{u}\). Explaining the \(CP\) data (1) requires the \(U(1)^{\prime}\)-quark charges to obey \(F_{u_{2}}\neq F_{u_{1}}\) and \(|F_{d_{1}}|\gg|F_{d_{2}}|\) due to the hierarchy \(a_{\pi^{+}\pi^{-}}^{d}\gg a_{K^{+}K^{-}}^{d}\) (1), along with \(\theta_{u}\neq 0\) and sizeable relative weak and strong phases.
The shape of viable benchmark models in Tab. 2 is further narrowed down by demanding anomaly cancellation which might require adding \(U(1)^{\prime}\) charged right-handed neutrinos \(\nu_{R}\). Additional constraints arise from Kaon FCNCs, (semi-)leptonic and (semi-)invisible \(D\to(\pi)\ell^{+}\ell^{-},\nu\nu\) decays as well as Drell-Yan searches. Viable models also predict \(A_{CP}(\pi^{0}\pi^{+})\simeq A_{CP}(\pi^{0}\pi^{0})\simeq+10^{-3}\).
Moreover, strong constraints from \(D\)-mixing combined with \(CP\) data (1) surpisingly point to a sub-electroweak \(Z^{\prime}\) mass of a few \(\times 10\) GeV. The \(Z^{\prime}\) coupling to \(d\)-quarks leads to collider signals in low mass dijets with initial state radiation, resulting in a mass bound of \(M_{Z^{\prime}}\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}}\raise 1.0pt \hbox{$<$}}20\) GeV [6].
## 3 A hadrophilic \(Z^{\prime}\) of \(\mathcal{O}\)(10 GeV)?
Light \(Z^{\prime}\) models are also constrained by dark photon searches [7], which imply a strict bound on the lepton charges of \(|F_{L_{1,2},e_{1,2}}|\lesssim 10^{-3}|F_{d_{1}}|\). Hence, the \(Z^{\prime}\) has to be leptophobic.
The \(Z^{\prime}\) can also mediate quarkonia decays. In BM IV, mass bounds arise from \(\Upsilon(1s)\) decays via the \(b_{R}\)-coupling. Moreover, the \(Z^{\prime}\) contributes to charmonium decays \(\psi_{i}\to Z^{\prime*}\to\pi^{+}\pi^{-},(\tau^{+}\tau^{-},\nu\nu)\) with \(\psi_{i}=J/\psi,\psi^{\prime}\), see Fig. 4. In particular, the electrophobic \(Z^{\prime}\) enhances \(\mathcal{B}(\psi_{i}\to\pi^{+}\pi^{-})\) with respect to \(\mathcal{B}(\psi_{i}\to e^{+}e^{-})\), see Fig. 4. Thereby, for \(M_{Z^{\prime}}\simeq 3\) GeV (5-7 GeV) in BM III (BM IV) the model is able to resolve the longstanding discrepancy between the pion form factor \(F_{\pi}(q^{2})\) extracted from \(J/\psi\to\pi^{+}\pi^{-}\)[8] and \(e^{+}e^{-}\to\pi^{+}\pi^{-}\)[9]. In this case, in BM III the \(Z^{\prime}\) mass range of \(M_{Z^{\prime}}\lesssim 2.2\) GeV allowed by \(\mathcal{B}(\psi^{\prime}\to\tau^{+}\tau^{-})=(3.1\pm 0.4)\cdot 10^{-3}\)[10] almost coincides with the pion form factor window, whereas \(\mathcal{B}(J/\psi\to\text{nothing})<7\cdot 10^{-4}\)[10] implies that the decay to BSM neutrinos should be kinematically forbidden by \(M_{\nu}>M_{J/\psi}/2\) which can be achieved e.g. via the Dirac inverse see-saw mechanism.
## 4 Conclusion
Recent charm data (1) imply a sizeable violation of \(CP\) and U-spin, possibly hinting at new physics. We obtain a viable explanation from a flavorful \(Z^{\prime}\) boson which is light, of \(\mathcal{O}\)(10 GeV), leptophobic, and couples only to \(SU(2)_{L}\) singlets. Moreover, a \(Z^{\prime}\) of a few GeV can resolve the pion form factor discrepancy between \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) and \(J/\psi\to\pi^{+}\pi^{-}\) extractions. In this scenario, hadronic, tauonic and invisible quarkonia decays are smoking gun signatures of the model. Longstanding anomalies such as the large isospin breaking between \(\psi(3770)\to D^{+}D^{-}\) and \(D^{0}\bar{D}^{0}\)[10] could potentially also be addressed.
## Acknowledgments
TH would like to thank the organizers for the invitation to such a stimulating conference. This work is supported by the _Studienstiftung des deutschen Volkes_ (TH), the _Bundesministerium fur Bildung und Forschung (BMBF)_ under project number 05H21PECL2 (HG), and the Science and Technology Research Council (STFC) under the Consolidated Grant ST/T00102X/1 (DFL).
|
2309.05619 | Effective Proxy for Human Labeling: Ensemble Disagreement Scores in
Large Language Models for Industrial NLP | Large language models (LLMs) have demonstrated significant capability to
generalize across a large number of NLP tasks. For industry applications, it is
imperative to assess the performance of the LLM on unlabeled production data
from time to time to validate for a real-world setting. Human labeling to
assess model error requires considerable expense and time delay. Here we
demonstrate that ensemble disagreement scores work well as a proxy for human
labeling for language models in zero-shot, few-shot, and fine-tuned settings,
per our evaluation on keyphrase extraction (KPE) task. We measure fidelity of
the results by comparing to true error measured from human labeled ground
truth. We contrast with the alternative of using another LLM as a source of
machine labels, or silver labels. Results across various languages and domains
show disagreement scores provide a better estimation of model performance with
mean average error (MAE) as low as 0.4% and on average 13.8% better than using
silver labels. | Wei Du, Laksh Advani, Yashmeet Gambhir, Daniel J Perry, Prashant Shiralkar, Zhengzheng Xing, Aaron Colak | 2023-09-11T17:07:01Z | http://arxiv.org/abs/2309.05619v2 | Effective Proxy for Human Labeling: Ensemble Disagreement Scores in Large Language Models for Industrial NLP
###### Abstract
Large language models (LLMs) have demonstrated significant capability to generalize across a large number of NLP tasks. For industry applications, it is imperative to assess the performance of the LLM on unlabeled production data from time to time to validate for a real-world setting. Human labeling to assess model error requires considerable expense and time delay. Here we demonstrate that ensemble disagreement scores work well as a proxy for human labeling for language models in zero-shot, few-shot, and fine-tuned settings, per our evaluation on keyphrase extraction (KPE) task. We measure fidelity of the results by comparing to true error measured from human labeled ground truth. We contrast with the alternative of using another LLM as a source of machine labels, or'silver labels'. Results across various languages and domains show disagreement scores with a mean average error (MAE) as low as 0.4% and on average 13.8% better than using silver labels to measure performance.
## 1 Introduction
We have recently seen significant progress on many natural language processing (NLP) tasks using the latest generative pretrained models such as GPT (OpenAI, 2023; Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and many others (Touvron et al., 2023; Bai et al., 2022; Penedo et al., 2023; Taori et al., 2023). This new generation of models opens up many new possibilities including competitive performance in zero-shot and few-shot settings for tasks that have typically been modeled using a supervised setting (OpenAI, 2023). More established language models (BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLM-Roberta (Conneau et al., 2020), etc.) provide a strong balance of inference cost and task performance for such systems. This broad class of large language models (LLMs) used for complex supervised NLP tasks share the problem of how to effectively assess performance in production settings where we don't yet have human labels due to cost or urgency.
The ability to judge model capability becomes important for production settings where we often have to decide whether to launch a model in a new domain or for a new language where we have few or no labels ready. This is also known as few-shot and zero-shot performance, respectively. Scaling models up to new domains and new languages quickly becomes an expensive proposition in terms of labeling. For example, if we have two new domains and ten languages, this results in twenty new label sets that need to be generated. Having the capability to guide that investment or possibly eliminate the need for extensive human labeling for some subset of those domains/languages becomes very valuable.
There have been many approaches to assess the performance of LLMs without human labels, including efforts to assess the performance of task-specific models. (Kamath et al., 2020) explored evaluating fine-tuned question answering models on out of domain data, relevant to question answering problems. More recently, (Fu et al., 2023) creates a meta-model responsible for predicting the accuracy of the LLM model using the model's confidence scores as features. Methods from the computer vision (CV) domain to assess unlabeled data more generally have, for example, proposed the average threshold confidence method that learns a threshold over the model's confidence, predicting accuracy as the fraction of unlabeled examples exceeding that threshold (Garg et al., 2022), or iteratively learn an ensemble of models to identify misclassified data points and perform self-training to improve the ensemble with the identified points (Chen et al., 2021). However, the metrics and hyperparameters in previous works are specifically for classification tasks and cannot be easily extended to more complex tasks.
We propose adapting _disagreement scores_ in
(Jiang et al., 2022; Kirsch and Gal, 2022), also from the CV domain, to assess model quality for these supervised NLP tasks. A _disagreement score_ is computed by first training a _well-calibrated_ ensemble of models and then measuring how similar their respective predictions are on the same input. The intuition is that models will agree on highly confident (likely correct) predictions and disagree on less confident (likely wrong) predictions. One way to develop a _well calibrated_ ensemble is to train the same model on the same dataset but changing initial random seed among the ensemble members, as proposed in (Jiang et al., 2022) for the CV domain.
In this paper, we adapt the same approach to NLP tasks to understand the prediction performance across different domains (survey responses, conversation text, and social media chats) and languages. Inspired by the latest work on LLMs, as another alternative to human labeling, we explore leveraging a few-shot GPT-4 as an oracle model to provide a 'silver label'. We find that disagreement scores of a well-calibrated ensemble work better at predicting a single model's performance for a complex keyphrase extraction (KPE) task than GPT-4 as an oracle model. Our evaluation comparing XLM-Roberta (Conneau et al., 2020), GPT-3 (Brown et al., 2020), and GPT-4 models (OpenAI, 2023) shows that disagreement scores provide an estimation of model performance with a mean absolute error (MAE) as low as 0.4%, on average 13.8% better than using silver labels.
## 2 Approach: Assessing error without human labels
### 2.1 Adapting Disagreement for Natural Language Tasks
Let \(\mathcal{D}\) be a distribution over \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is the space of input features to the model and \(\mathcal{Y}\) the space of output values from the model. Let \((X,Y)\) denote a random variable with distribution \(\mathcal{D}\) and \((x,y)\) a sampled value from \(\mathcal{D}\). Let \(h:\mathcal{X}\rightarrow\mathcal{Y}\) denote a hypothesis from a hypothesis space \(\mathcal{H}\). We assume \(\mathcal{A}\) is a stochastic training algorithm that induces a distribution \(\mathcal{H_{A}}\) over \(\mathcal{H}\). Let \(h\in\mathcal{H_{A}}\) and \(h^{\prime}\in\mathcal{H_{A}}\) be two random hypotheses output by two independent runs of the training algorithm \(\mathcal{A}\). We define the test error and the disagreement score for \(h,h^{\prime}\in\mathcal{H_{A}}\) over \(\mathcal{D}\) as follows:
\[Test^{err}_{\mathcal{D}}(h)=\mathbb{E}_{\mathcal{D}}[h(X)\neq Y] \tag{1}\]
\[Dis_{\mathcal{D}}(h,h^{\prime})=\mathbb{E}_{\mathcal{D}}[h(X)\neq h^{\prime}( X)] \tag{2}\]
The relationship between \(Test^{err}_{\mathcal{D}}(h)\) and \(Dis_{\mathcal{D}}(h,h^{\prime})\) is described as the following Theorem 1 (Jiang et al., 2022).
**Theorem 1**: _Given a stochastic learning algorithm \(\mathcal{A}\), if its corresponding ensemble satisfies class-wise calibration, then we have:_
\[\mathbb{E}_{h,h^{\prime}\sim\mathcal{H_{A}}}[Dis_{\mathcal{D}}(h,h^{\prime})]=\mathbb{E}_{h\sim\mathcal{H_{A}}}[Test^{err}_{\mathcal{D}}(h)]. \tag{3}\]
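As a toy illustration of Theorem 1 (our own sketch, not from the paper or its references), one can simulate a perfectly class-wise-calibrated ensemble on synthetic binary data and check that the expected disagreement matches the expected test error:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_runs = 2000, 200
p = rng.uniform(0.0, 1.0, n_points)            # true P(Y = 1 | x) for each test point x

# Each training run outputs a hypothesis that predicts 1 on x with probability p(x),
# so the induced ensemble is class-wise calibrated by construction.
preds = rng.random((n_runs, n_points)) < p     # preds[r, k] = h_r(x_k)
labels = rng.random(n_points) < p              # sampled ground-truth labels

test_err = (preds != labels).mean()                  # average test error over hypotheses
disagreement = (preds[0::2] != preds[1::2]).mean()   # average disagreement over independent pairs
print(f"test error ~ {test_err:.3f}, disagreement ~ {disagreement:.3f}")   # both ~ E[2p(1-p)] = 1/3
```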
In this paper, we focus on a sequence-to-sequence task, keyphrase extraction (KPE). We use the F1 score instead of test error to measure model quality and agreement instead of disagreement to measure model disparity. These choices are justified due to the mathematical relationship of model error to F1 score and agreement to disagreement (see Appendix A). For the computation of KPE agreement, for each sentence we extract the keyphrases using the two models and compute the agreement score as the ratio of common keyphrases extracted to the total number of keyphrases extracted. The disagreement score is simply \(1-\alpha\), where \(\alpha\) is the agreement score.
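For concreteness, a minimal sketch of the agreement computation follows (Python); reading the "total number of keyphrases extracted" as the set of distinct keyphrases extracted by either model is our interpretation of the description above:

```python
from typing import List, Set

def agreement_score(kp_a: Set[str], kp_b: Set[str]) -> float:
    """Share of keyphrases both models agree on among all keyphrases extracted by either model."""
    union = kp_a | kp_b
    if not union:                      # neither model extracted anything: count as full agreement
        return 1.0
    return len(kp_a & kp_b) / len(union)

def corpus_disagreement(preds_a: List[Set[str]], preds_b: List[Set[str]]) -> float:
    """Average disagreement (1 - agreement) of two ensemble members over a corpus of sentences."""
    scores = [1.0 - agreement_score(a, b) for a, b in zip(preds_a, preds_b)]
    return sum(scores) / len(scores)

# Toy example: per-sentence keyphrase sets predicted by two models trained with different seeds.
model_a = [{"battery life", "screen"}, {"refund"}]
model_b = [{"battery life"}, {"refund", "shipping"}]
print(corpus_disagreement(model_a, model_b))   # 0.5
```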
To estimate model error on unlabeled data, we first train a set of KPE models on the training set using different random seeds. Then we compute both the pairwise agreement scores and the F1 scores on a labeled test set to collect data pairs (F1 score, agreement score). Based on these data pairs, we fit a simple linear regression model for performance prediction, similar to that employed in (Jiang et al., 2022).
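The final regression step is straightforward. A minimal sketch with scikit-learn follows; the numbers are invented purely for illustration and are not results from the paper:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# (agreement, F1) pairs collected from ensembles evaluated on languages with labelled test sets.
agreement = np.array([[0.62], [0.70], [0.55], [0.81], [0.74], [0.66]])
f1 = np.array([0.58, 0.66, 0.49, 0.77, 0.70, 0.61])

reg = LinearRegression().fit(agreement, f1)
print(f"fitted line: F1 = {reg.coef_[0]:.3f} * agreement + {reg.intercept_:.3f}")

# Predict F1 for a new, unlabelled language or domain from the ensemble's agreement alone.
print(f"predicted F1 at agreement 0.68: {reg.predict(np.array([[0.68]]))[0]:.3f}")
```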
### 2.2 LLM as an Oracle
We have witnessed impressive performance of recent LLMs like GPT-4 on a wide variety of tasks in a zero-shot manner, leading to increased demand and interest in using them both as a label source for testing data and for their representation abilities. Utilizing a model for labeling can result in significant cost savings (Tornberg, 2023). We include labeling from few-shot prompted GPT-4 as an alternative approach to measure model performance.
## 3 Models and Data
### 3.1 Models and Tasks
We explore using three types of models, all trained for the same KPE task: XLM-Roberta, GPT-3, and GPT-4. The KPE task is representative of many typical industrial NLP tasks, because it is a fundamental and complex problem (Song et al.,
2023). The KPE task consists of taking an input text and producing a set of textual spans, if any, representing keyphrases as output, and is typically modeled as a sequence-to-sequence problem. Consistent with existing approaches (Jiang et al., 2022), we use mean absolute error (MAE) as the primary metric for measuring the fidelity of a proxy error method relative to the true error measured against human-labeled ground truth. In this case
\[\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}|\text{err}_{i}^{\text{proxy}}-\text{err} _{i}^{\text{true}}|, \tag{4}\]
where \(\text{err}_{i}^{\text{proxy}}\) is the proxy or approximated error of the model for the \(i\)-th experiment and \(\text{err}_{i}^{\text{true}}\) the corresponding true error based on ground truth data.
### 3.2 Datasets
We evaluated our approach on three internal datasets corresponding to three distinct domains, namely survey response data, Twitter data, and recorded customer service conversations. The survey-response data is a corpus of 98,844 pairs of survey questions with their appropriate textual responses across 10 languages, which we refer to by their standard language abbreviations (see Table A1 in the appendix for details). We reserve 79,634 pairs as training and validation data and the other 19,210 as testing data. The Twitter data corpus and the customer support corpus are collections of 500 tweets relating to customer support and 500 customer service conversation threads, respectively.
## 4 Experimental Results and Analysis
We evaluated the disagreement scoring approach for the KPE task on 10 different languages and three domains using the three models: XLM-R, a fine-tuned GPT-3, and a few-shot prompted GPT-4 model. In the following two sections, we look at evaluations when languages and domains are held out during fine-tuning. In Section 4.3, we look at the case when GPT-4 is used as an oracle for ground truth in a zero-shot manner, without any fine-tuning. Table 1 shows a summary of the results on the anonymized survey data.
### 4.1 Language change for LLM
**XLM-R**. We fine-tuned the XLM-R base models, with 125M parameters, on all 10 languages of the anonymized survey data (Section 3.2). For each language, we trained four models on that language using the same data but with different seeds, recording F1 scores on the respective language-specific test data. For each model, we compute the disagreement score with the other models, giving us six total disagreement scores per language, which are then averaged to arrive at the average disagreement score per language. Since we have 10 languages and 4 models, we have 40 (F1 score, disagreement score) pairs for making a prediction. Taking JA as an example, we use the other 9 languages (36 points) to fit the curve and derive its final prediction (F1 score) as \(y=0.809x+0.09631\), where \(x\) is the agreement score variable. The MAE for JA is then 3.7% (first row in Table 1, denoted as XLM-R-JA).
**Curie**. We use the same training data as XLM-R to fine-tune a GPT-3 model with 13B parameters, known as Curie, through the API provided by OpenAI.1 To understand Curie's performance on Asian vs. all languages, we consider two scenarios: one focusing only on European (EU) languages, and a second with all languages (EU + Asian languages).
Footnote 1: [https://platform.openai.com/docs/guides/fine-tuning](https://platform.openai.com/docs/guides/fine-tuning)
**GPT-4**. We explored using zero-shot and various sizes of few-shot training for GPT-4 and found that 100-shot training did the best. We randomly sample 100 data records from the anonymized survey data for each language for prompting, and use the same test data as used for XLM-R and Curie. The results in Table 1 are using 100-shot prompting and our experiments were limited to EN, ES, FR, and IT due to time constraints.
We make the following observations. First, all LLMs, whether fine-tuned or used as zero-shot, are bounded by 12.9% MAE on average, encouraging their use for labeling and evaluation needs. The average performance of XLM-R is 2.49% MAE using
\begin{table}
\begin{tabular}{l c c c} \hline
**Language** & **Avg F1** & **Avg Predicted F1** & **MAE** \\ \hline XLM-R-JA & 0.567 & 0.530 & 0.037 \\ XLM-R-FR & 0.765 & 0.781 & 0.016 \\ XLM-R-K & 0.714 & 0.721 & 0.007 \\ Curie-K-ALL & 0.160 & 0.448 & 0.288 \\ Curie-FR-ALL & 0.674 & 0.577 & 0.097 \\ Curie-KO-ALL & 0.395 & 0.305 & 0.080 \\ Curie-FR-EU & 0.674 & 0.639 & 0.035 \\ Curie-ES-EU & 0.441 & 0.443 & 0.002 \\ GPT-4-EN & 0.427 & 0.595 & 0.168 \\ GPT-4-ES & 0.319 & 0.301 & 0.018 \\ GPT-4-FR & 0.956 & 0.426 & 0.170 \\ GPT-4-IT & 0.356 & 0.373 & 0.017 \\ \hline \end{tabular}
\end{table}
Table 1: Prediction performance of language change for XLM-R, Curie and GPT-4. Avg F1: average groundtruth F1; Avg Predicted F1: average predicted F1 from fitted linear function.
all 10 languages (XLM-R-All), 2.39% using EU-only (XLM-R-EU), that of Curie is 12.9% MAE using all languages (Curie-All), 2.09% using EU-only (Curie-EU), while GPT-4 has 9.38% MAE using the 4 languages tested. Second, comparing performance on subsets of languages, we find that LLMs struggle on Asian languages, likely due to the differences in pre-training corpora and our test datasets. Finally, LLMs like GPT-4, when used in a zero-shot manner, lead to suboptimal performance as compared to ones that are fine-tuned.
### 4.2 Domain change for LLM
We used a test set based on Twitter data and anonymized conversation (conv) data to test the disagreement scoring approach across different domains. We had both datasets annotated by our internal professional annotators and compared the predicted F1 scores from the XLM-R, Curie and GPT-4 models with the actual F1 scores from the human annotations. Table 2 shows the results.
First, the prediction performance of XLM-R and Curie models on conv and Twitter data is better as compared to GPT-4 models, with an average of 4.9% MAE vs. GPT-4's average of 13.8% MAE. It is not surprising because XLM-R and Curie have more data points to fit the prediction function, making them more accurate. Note that we only used data points from European languages for Curie due to the distribution gap we observed in Asian languages in Section 4.1. Second, the average MAE of the conv data across all three models is 5.3%, which is lower than that for Twitter data having 10.3% MAE. We conjecture that this is likely due to the fact that Twitter data is much more noisy, indicating larger domain shift.
### 4.3 GPT-4 few-shot prompt silver label for XLM-R and Curie
To study how well GPT-4 can be used as a silver label generator for the KPE task, we fine-tuned an XLM-R model and a Curie model. We measured error using human labels, referred to as _gold labels_, and measured error using GPT-4-generated labels, or _silver labels_, as summarized in Table 3. Appendix E shows how we prompt the GPT-4 models.
Overall, we observe poor prediction capabilities using 100-shot GPT-4 as a label source. With XLM-R, we observe a MAE of 31.3%, 29.1%, 10.4%, and 19.3% for EN, ES, FR and IT respectively. For a practitioner, this MAE is too high to make a confident decision about whether a language requires more human training labels or whether a model is ready for launch. For Curie, we see a much lower MAE of 9.38% on average. While these error rates are more reasonable, we are concerned that this may be an artifact of both models having a low F1 score overall. We conclude that using GPT-4 does not work very well as a source of silver labels to assess model performance on unlabeled data for the XLM-R KPE model as compared to our proposed disagreement scores approach.
## 5 Conclusion
We conclude that disagreement scoring is a promising approach to predict the model performance of LLMs. LLMs like GPT-4 that use few-shot prompting as a source of silver labels have high MAE and may not be useful in practice. In this paper, we explored the effects across three LLM models, XLM-R, GPT-3, and GPT-4, over 10 languages and 3 domains. Overall, we recommend against measuring model performance on complex NLP tasks using LLMs as a few-shot oracle: in our experiments, GPT-4-derived labeling results in F1 prediction with an MAE of 15.7% on average (Table 3), with some MAEs as high as 31.3%. Instead, we recommend using disagreement scores and related techniques: in our experiments, the MAE across various languages and domains is 1.91% on average, with some as high as 9%.
## 6 Limitations
We observe that our proposed GPT-based approaches work better on European
\begin{table}
\begin{tabular}{l c c c} \hline
**Language** & **Avg F1** & **Avg Predicted F1** & **MAE** \\ \hline XLM-R-conv & 0.647 & 0.669 & 0.022 \\ XLM-R-Twitter & 0.370 & 0.452 & 0.082 \\ Curie-conv-EU & 0.286 & 0.255 & 0.031 \\ Curie-Twitter-EU & 0.210 & 0.271 & 0.061 \\ GPT-4-conv & 0.368 & 0.476 & 0.108 \\ GPT-4-Twitter & 0.292 & 0.459 & 0.167 \\ \hline \end{tabular}
\end{table}
Table 2: Prediction performance of domain change for XLM-R, Curie and GPT-4. Avg F1: average groundtruth F1; Avg Predicted F1: average predicted F1 from fitted linear function.
\begin{table}
\begin{tabular}{l c c c} \hline
**Language** & **F1 (silver label)** & **F1 (golden label)** & **MAE** \\ \hline XLM-R-EN & 0.392 & 0.705 & 0.313 \\ XLM-R-ES & 0.368 & 0.659 & 0.291 \\ XLM-R-FR & 0.661 & 0.765 & 0.104 \\ XLM-R-IT & 0.378 & 0.571 & 0.193 \\ Curie-EN & 0.410 & 0.480 & 0.070 \\ Curie-ES & 0.306 & 0.441 & 0.135 \\ Curie-FR & 0.590 & 0.674 & 0.084 \\ Curie-IT & 0.298 & 0.384 & 0.086 \\ \hline \end{tabular}
\end{table}
Table 3: Silver label for XLM-R and Curie.
languages than on Asian languages. We believe this could be improved upon by using different base LLMs that have been trained on more non-EU data and by studying in more detail the trade-off of using more or fewer regression points to predict an unknown F1. Our experiments are also limited to a single, albeit complex, NLP task, KPE. We also note that the theoretical error bound of this approach under domain shift is not guaranteed, as described in Kirsch and Gal (2022). In future work, we hope to expand our study of these methods to additional models and tasks to further increase confidence, understand where these methods may fail, and work towards methods with stronger theoretical bounds.
## 7 Ethics Statement
In this section, we address ethical considerations that may arise regarding the use of our internal and private dataset. The dataset was labeled by an internal labeling team that was competitively compensated for their time. The data was sampled across a large variety of brands within each industry in order to limit biases that may exist in specific domains. Lastly, the data was doubly anonymized to redact any brand-sensitive or personally identifiable information (PII): first by an internally developed text anonymization algorithm, and then by human annotators.
|
2309.12365 | An Efficient Intelligent Semi-Automated Warehouse Inventory Stocktaking
System | In the context of evolving supply chain management, the significance of
efficient inventory management has grown substantially for businesses. However,
conventional manual and experience-based approaches often struggle to meet the
complexities of modern market demands. This research introduces an intelligent
inventory management system to address challenges related to inaccurate data,
delayed monitoring, and overreliance on subjective experience in forecasting.
The proposed system integrates bar code and distributed flutter application
technologies for intelligent perception, alongside comprehensive big data
analytics to enable data-driven decision-making. Through meticulous analysis,
system design, critical technology exploration, and simulation validation, the
effectiveness of the proposed system is successfully demonstrated. The
intelligent system facilitates second-level monitoring, high-frequency checks,
and artificial intelligence-driven forecasting, consequently enhancing the
automation, precision, and intelligence of inventory management. This system
contributes to cost reduction and optimized inventory sizes through accurate
predictions and informed decisions, ultimately achieving a mutually beneficial
scenario. The outcomes of this research offer | Chunan Tong | 2023-09-13T02:53:43Z | http://arxiv.org/abs/2309.12365v1 | # An Efficient Intelligent Semi-Automated Warehouse Inventory Stocktaking System
###### Abstract
In the context of evolving supply chain management, the significance of efficient inventory management has grown substantially for businesses. However, conventional manual and experience-based approaches often struggle to meet the complexities of modern market demands. This research introduces an intelligent inventory management system to address challenges related to inaccurate data, delayed monitoring, and overreliance on subjective experience in forecasting. The proposed system integrates bar code and distributed flutter application technologies for intelligent perception, alongside comprehensive big data analytics to enable data-driven decision-making. Through meticulous analysis, system design, critical technology exploration, and simulation validation, the effectiveness of the proposed system is successfully demonstrated. The intelligent system facilitates second-level monitoring, high-frequency checks, and artificial intelligence-driven forecasting, consequently enhancing the automation, precision, and intelligence of inventory management. This system contributes to cost reduction and optimized inventory sizes through accurate predictions and informed decisions, ultimately achieving a mutually beneficial scenario. The outcomes of this research offer a valuable reference model for enterprises transitioning towards intelligent inventory management.
stocktaking inventory stock count RFID semi-automated warehouse
## 1 Introduction
In recent years, fully automated inventory technologies have experienced significant advancements, with the utilization of next-generation ultra-high-frequency RFID (Radio-Frequency Identification) employing radio waves for real-time identification and tracking of tagged objects. The application of RFID in warehouse management traces back to the 1990s, with initiatives by major players such as Walmart and the US Department of Defense. However, it wasn't until the 2000s that RFID technology gained widespread adoption due to the establishment of standards and cost reductions [13]. While new RFID systems have demonstrated impressive accuracy in real-time inventory monitoring and complex tasks like expiration date tracking, the full extent of their potential remains unpredictable. This raises questions about the limited adoption of these powerful systems in large manufacturing factories.
Despite the potential of fully automated inventory technologies, the warehouse management field seems inadequately prepared for a complete transformation. Various scholars have explored the possibilities of enhanced efficiency through new technologies like RFID [14]. Others have examined the application of RFID in mitigating disruptions caused by epidemics [21]. Furthermore, there are proposals for integrated automated inventory systems that combine drones and RFID [18]. While these perspectives are valuable, they do not fully account for potential drawbacks of fully automated inventory systems. Two main issues arise. First, new ultra-high-frequency RFID systems require additional infrastructure and cannot be deployed immediately, which raises the barrier to entry; moreover, excessive reliance on automation might lead to skill degradation among warehouse staff [2]. Second, current RFID technology applications in inventory checking suffer from problems such as inaccurate read rates and high tag damage rates, hindering the wider adoption of RFID-based inventory management. The integration of RFID technology may also bring challenges in terms of operational complexity, effectiveness, and maintenance costs, which are often overlooked when promoting the concept of "RFID inventory control" [6], as is the idea of achieving fully automated warehouse management with RFID, which may not align with practical realities [7].
In this paper, I provide a new theoretical perspective by developing a semi-automated inventory system using handheld scanning hardware and software. This system offers a highly efficient approach that addresses the shortcomings of traditional blind inventories. The validation process significantly reduces unnecessary repetitive labor and received high praise from warehouse users in our experiments. I argue that future RFID inventory research should not merely focus on full automation but should instead consider adjusting integration strategies according to practical constraints and addressing fundamental issues. This research explores how warehouse managers view the application of RFID in inventory checking through a comparison of experiences and data. I identify several limitations of RFID in practical warehouse inventory checks. For instance, complete reliance on RFID for inventory checks can result in inefficient disassembly and installation times for RFID equipment, as well as issues with accurately identifying inventory behind obstacles. I define warehouse interruptions as events that require adjustment and adaptation, which disrupt the flow of goods and impact efficiency. "Integration" refers to the widespread use of RFID systems for inventory operations. This framework demonstrates that the degree of RFID technology integration determines the types of disruptions that may arise. Semi-automated inventory checks help warehouse enterprises transition smoothly from traditional inventories to fully automated RFID inventories, reducing the cost and risk of the technological upgrade.
Relevant Research and Study: Although RFID has gained popularity in recent years, it has not yet been widely adopted in large manufacturing enterprises. Griffiths et al. [8] highlighted that RFID tags can be negatively affected by environmental factors, particularly in the presence of liquids and metals. They argue that barcoding remains unaffected by material or electromagnetic emissions and offers improved accuracy [12]. Numerous studies emphasize the benefits and cost savings that companies can achieve by implementing RFID, from waste reduction to more accurate inventory information and faster scanning of incoming products. However, it's important to note that while RFID presents solutions, it also introduces new challenges.
In the evolution of warehouse management, semi-automated inventory has become a crucial research focus. With the application of mobile terminals and emerging technologies, various semi-automated inventory solutions have emerged. However, many existing solutions require operators with extensive inventory experience, making them less adaptable to diverse environments and time-consuming. This research introduces a semi-automated inventory system based on mobile terminals, characterized by the following aspects: The system employs portable digital assistant terminals with built-in rapid barcode scanning capabilities, enabling the reading of inventory encoding information. By integrating data from the backend ERP system, the system determines whether inventory is correctly placed. Recognizing that full automation may not always be feasible, the system employs a semi-automated mode, where operators rapidly scan and verify inventory information, and software compares this data with backend information, yielding inventory check results. This approach leverages the advantages of human-machine collaboration, balancing accuracy and efficiency.
To evaluate the system, we conducted on-site testing. Eleven experienced operators used the system to perform inventory checks, covering over two million items in inventory. Collaborating with monitoring software allowed remote supervision of the entire inventory process. Results indicate that compared to RFID automated inventory checks, this system offers better usability, lower costs, and ensured accuracy.
This research demonstrates that semi-automated systems can effectively address inventory checking challenges without necessarily pursuing full automation. Future work will further optimize the system, incorporating additional barcode technologies to assist with inventory checks and enhancing monitoring capabilities. This study provides insights for other warehousing scenarios and will drive the generation of innovative inventory technology solutions.
## 2 System Overview
Figure 1 illustrates the overall architecture of the developed system, encompassing the Flutter software framework, handheld PDA terminals, the system's data analysis and monitoring component, and operator usage scenarios. We employed PDAs with a one-key scanning feature for QR and barcode scanning, and the distributed inventory software synchronized inventory data and operator actions with the master node. The handheld terminals operate on the Android operating system with 4GB of memory and an infrared laser scanning module. The backend Linux server features 16GB of memory, a 2-core CPU, and connects to a Hadoop HDFS database.
### Algorithm Comparison and Process of the Flutter-based Stocktaking App
The primary objective of the stocktaking app is to obtain physical QR code information from inventory through the PDA's one-key laser scanning feature. This information includes location, batch numbers, and unique inventory codes. Furthermore, by querying the SAP ERP system on the backend, the system can ascertain whether the current batch of inventory is accurately placed. Specifically, the inventory process involves operators scanning and verifying unique codes, with software subsequently comparing this data to backend information. This process enables the identification of correct batch quantities, shortages, and surplus inventory. Correct and shortage data can be temporarily disregarded, as the focus is on identifying surplus items that need to be returned to their designated SAP locations, while keeping track of their unique codes. Additionally, only when all batch numbers under each location are listed, and the quantities are verified and signed off by operators, can the submission process be completed.
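The comparison step described above can be illustrated with a short sketch. The data layout below (dictionaries mapping batch numbers to sets of handling-unit codes) and the function name are illustrative assumptions, not the app's actual SAP or Flutter data model.

```
# Illustrative reconciliation of scanned data against ERP records for one bin.
# The dictionaries map batch number -> set of handling-unit codes; these names
# and structures are assumed for illustration, not the actual SAP/app schema.

def reconcile_bin(erp_units, scanned_units):
    batches = set(erp_units) | set(scanned_units)
    result = {}
    for batch in batches:
        expected = erp_units.get(batch, set())
        found = scanned_units.get(batch, set())
        result[batch] = {
            "correct": expected & found,    # units present in both ERP and scan
            "shortage": expected - found,   # expected here but not scanned
            "surplus": found - expected,    # scanned here but assigned elsewhere
        }
    return result

# Example: HU3 is surplus and must be returned to its SAP-designated location.
erp = {"B001": {"HU1", "HU2"}}
scan = {"B001": {"HU1", "HU3"}}
print(reconcile_bin(erp, scan))
```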
### Permission Management and Operator Monitoring Mechanism
Our stocktaking app features permission management for the server database, along with permissions for the app's administrative personnel to interact with inventory scan data. These permissions encompass data import and clearing permissions for SAP data, as well as the import and clearance of historical inventory process data by warehouse operators. The monitoring mechanism ensures that only administrators can enter the PDA's admin interface. Furthermore, at the completion of each inventory task, the system archives all data for historical reference.
### Mobile App Interface, Interaction Design, and Programming Language
The interface of the PDA-installed stocktaking app adopts the Flutter framework and employs the Dart programming language. The interactive interface consists of three layers: 1. Location interface; 2. Batch determination interface; 3. Validation of surplus inventory (inventory not assigned to the location). The language used for database operations primarily supports Hive-like SQL.
Figure 1: The overall structure of the deployed stocktaking system, which consists of the identified objects, the Android handheld mobile terminal (PDA), and the server system.
### Data Storage and Security Mechanism
We selected Hadoop's HDFS as the storage database, featuring tables for SAP ERP reference data and inventory scanning data input by operators. Analytical data tables include the results that require adjustments. To ensure data security, all APP and database ports are monitored. Historical data is also stored in a distributed storage node to prevent data loss and ensure data security and stability.
### Stocktaking Monitoring and Analytical System
For warehouse monitoring and management, we employed Superset, a business intelligence dashboard installed on our Linux server. This platform offers warehouse supervisors a visual representation of stocktaking activities and data. Because operators' actions during inventory operations affect the final results, recording them helps deter idleness or theft and gives administrators insight into inventory distribution. The system presents charts of SAP data related to inventory locations, batches, and unique inventory codes. Additionally, it visualizes the progress of completed, ongoing, and pending inventory tasks and provides charts of the unique-code surplus and batch-shortage data produced by the inventory logic.
## 3 Analysis and Comparison with RFID Systems
### RFID
Radio Frequency Identification (RFID) is an automatic identification technology that performs non-contact, two-way data communication over radio frequencies and uses them to read and write the recording medium (an electronic tag or radio-frequency card), without recognizing optical patterns as in barcode scanning. Target identification and data exchange are thereby achieved.
Active RFID: Active RFID systems consist of tags containing identifying microchips and antennas for wireless communication, as well as readers that emit radio signals to activate tags and read their data. In these systems, information is transmitted between the tag and reader via radio waves [2]. The operating frequency of the RFID system determines the intensity of the radio waves used for data transfer and affects overall system performance. Common frequency bands include low frequency (LF), high frequency (HF), ultra-high frequency (UHF), and microwave (MW). Each frequency range has distinct characteristics and is suitable for different applications. For instance, LF systems have short read ranges but work well near metals and liquids, while UHF systems feature longer ranges but are more likely to experience interference [9]. Selecting appropriate frequencies is crucial for optimizing RFID functionality.
Passive RFID systems have become widely adopted for diverse identification and tracking applications. In contrast to active configurations, passive RFID systems rely on readers generating a strong electromagnetic field that induces current in the tag's antenna, providing power for the microchip to transmit its ID code [10]. Since passive tags do not contain batteries, they offer the advantages of a smaller form factor, lighter weight, and virtually unlimited operational lifetime. However, their read range is more limited, typically under 20 feet. Passive ultra-high frequency (UHF) RFID systems operating at 860-960 MHz offer increased read ranges of 20 to 30 feet [4]. While passive RFID was traditionally restricted to short ranges, modern developments have expanded possibilities for longer-range detection. This makes passive RFID suitable for supply chain management, retail inventory, asset tracking, and other uses requiring cost-effective and durable tags [11]. Careful consideration of operating frequencies, read ranges, and environmental factors can allow organizations to benefit from the affordability and versatility of passive RFID technology.
\begin{table}
\begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline \hline System & Tag/Media Types & Re-writability & Working Principle and Data Exchange & Read Range and Adaptability & Data Capacity \\ \hline RFID & Electronic tags/cards & Rewritable tags/cards & Wireless communication & Varies by frequency and range & High capacity \\ Semi-automated Stocktaking System & Printed 1D/2D barcode & Unchangeable code & Optical scanning to decode & Limited scanner range & Lower capacity \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Inventory Systems
### Proposed semi-automated stocktaking system
Barcodes have become the standard for product identification and inventory tracking [1]. Traditional linear barcodes encode data in the widths and spacings of printed parallel lines, which can be optically scanned and decoded [12]. Compared to one-dimensional barcodes, two-dimensional (2D) barcodes utilize both horizontal and vertical dimensions to store denser product information in a square or rectangle format [13]. Our proposed semi-automated inventory system utilizes handheld scanners that can capture data from both linear and 2D matrix barcodes. By leveraging 2D barcodes, our system can collect comprehensive product details, enhancing the accuracy and efficiency of inventory auditing. The increased data density of 2D barcodes elevates our system beyond manual visual verification approaches that rely on human interpretation of linear barcodes. With the machine readability of higher capacity 2D barcodes, our system achieves rapid, accurate product identification and inventory tracking.
### Comparison of the Two Systems
Compared to RFID systems, the barcode-based system developed in this research presents greater advantages when managing enormous inventory quantities, since barcodes can be affixed to each product package. Deploying RFID tags on every inventory unit incurs substantial costs, so many firms install RFID on each pallet or shelf instead, while attaching and detaching RFID tags on individual items is time-consuming. Regarding stocktaking approaches, this semi-automated system employs full inventory counting, blind counts, and periodic counting [14]. In contrast, RFID has stringent environmental requirements, and metallic shelves can impede identification [15]. The merit of RFID lies in its capacity for real-time inventory auditing, albeit necessitating stable network infrastructure, rendering deployment challenging. Unlike fully automated systems, semi-automated stocktaking systems excel by optimally balancing automation with human involvement. By retaining manual operations, they prevent skill loss [16] and promote user-friendliness [17]. Overall, semi-automated systems maximize human-machine collaboration to deliver efficient, reliable, low-cost stocktaking. They encompass both the efficiency of automation and the flexibility and controllability of manual work (Liu et al., 2021). Through prudently blending automated and human elements, semi-automated stocktaking systems become the foremost option for warehouse digital transformation [18]. Relevant studies demonstrate that semi-automated systems can achieve lower costs and greater flexibility and usability than fully automated systems, making them a cost-effective choice for the transition to automation [19].
## 4 Experimental Research and Study
We conducted testing on the semi-automated inventory process, involving 10 warehouse operators equipped with PDAs and the Flutter stocktaking app, to assess the efficiency and accuracy enhancements brought about by the proposed methods. Additionally, we sought to compare the performance of our solution with previous RFID inventory methods.
### Experimental Description
For this research experiment, participants each utilized PDAs equipped with the Flutter-based stocktaking app. Operators were tasked with locating various inventory locations and performing inventory checks. All historical data, including scanned locations, batches, batch quantities, and surplus unique inventory codes, were recorded. The inventory encompassed over two million items, and we closely observed the occurrence times and overall efficiency of inventory operations through the Stocktaking Monitoring and Analytical system. Products with the same batch number might be placed in different bins. The QR code for each handling unit includes the bin code, batch code, and handling unit code.
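As a concrete illustration of the QR payload just described, the following sketch parses a handling-unit code. The '|' delimiter and the field order are assumptions made purely for illustration; the paper only states which fields the code contains.

```
# Hypothetical parsing of a handling-unit QR payload.  The delimiter and field
# order below are assumed for illustration only.
from dataclasses import dataclass

@dataclass
class HandlingUnit:
    bin_code: str
    batch_code: str
    hu_code: str

def parse_qr(payload: str) -> HandlingUnit:
    bin_code, batch_code, hu_code = payload.split("|")
    return HandlingUnit(bin_code, batch_code, hu_code)

print(parse_qr("A-03-12|B001|HU000123"))
```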
### Participants
Eleven participants took part in the experiment. These individuals had over 10 years of experience as warehouse operators, with familiarity in RFID inventory procedures and manual recordkeeping. The participants had an average
age of 24.4 years, ranging from 21 to 31. The sample included both experienced users and novices who had received brief training. One participant used the Stocktaking Monitoring and Analytical system, while the remaining 10 operators used individual PDAs for the experiment. Although 10 participants had experience with scanning goods using similar PDAs, they had not used the specific app tested in the experiment.
### Results and Analysis
In a comprehensive research endeavor, our primary objective was to assess the cost implications of implementing the new app development, as well as to evaluate its impact on system reliability, user satisfaction, usability, efficiency, and accuracy enhancements. Our investigation revealed variations in operational efficiency and accuracy across different levels of operators, prompting an exploration of potential factors influencing these disparities.
As shown in Figure 2, we considered the entire warehouse, which encompasses a total of 500 bin locations housing 1000 batches of products. These batches collectively account for 37,000 handling units, whose inner packaging contains a total of 2 million individual products.
However, to specifically validate the efficiency enhancements brought about by the new system, we opted to randomly select 100 representative bins as experimental groups, each featuring distinct characteristics. Furthermore, for comparative purposes, we established a control group consisting of an additional 100 bins and 10 workers. This control group maintained their previous inventory methods to enable a comprehensive assessment of the new app development's impact.
According to Figure 3, the results of the experiment revealed that the average time required to complete the counting for each bin was 160.76 seconds. The median completion time was 117 seconds, while the standard deviation of completion times was 127.52 seconds. The large standard deviation indicates a high dispersion in the completion times, suggesting noticeable differences in efficiency between the operators. In the controlled trials, the mean time, median time, and standard deviation were, respectively, 575.97 seconds, 382 seconds, and 447.53 seconds, and the physical fatigue of the workers increased over time. Figure 4 shows that, while the workers are still fresh and fatigue has not yet set in, the efficiency of semi-automated inventory management surpasses manual inventory procedures by a significant margin. Moreover, the slope of the former is notably steeper, roughly sixfold that of the latter.
In assessing the precision of inventory placement, which signifies the degree of conformity between the warehouse management system and the physical inventory, we quantified and monitored instances of handling units that exhibited disparities when juxtaposed against the inventory data stored in the SAP ERP system. Through a comprehensive inventory audit of our entire stock, it has been determined that our actual inventory aligns with the records in our system with an accuracy rate exceeding 99%. Subsequent surveys of selected personnel revealed that the minor discrepancies in stock placement arose from instances where items were inadvertently stored in incorrect locations during the shipping process. Importantly, through the Stocktaking Monitoring and Analytical System we confirmed that the misplaced inventory was returned to its designated locations in SAP ERP.
### Optimization of Inventory Checking Through Divide-and-Conquer Algorithms
Based on our observation and analysis of Figure Y, it is evident that each bin has a distinct distribution of contents: some hold a single batch with a huge number of handling units, others hold many batches, and the rest hold only a few. Therefore, based on the existing SAP ERP inventory data, we used a divide-and-conquer strategy to assign employees to different inventory locations for better performance. In warehouse inventory checking, adopting targeted algorithms based on the differences in batch quantity and variety within each bin can improve checking efficiency.
Specifically, when the number of batches in a bin is large, using a dynamic programming algorithm to optimize the batch-centered checking order fully exploits the associations between batches.
\[F(i)=\min_{k<i}F(k)+c_{i},\qquad F(n)=\min_{i}F(i)\]
For storage bins containing a wide variety of batch types, rule-based algorithms that sort batches by product category or shelving time can help minimize inefficient switching between categories during inventory checking. As demonstrated in Algorithm 1, batches can be sorted into A, B, C types, thereby avoiding unnecessary transitions across batch types and improving the overall inventory checking efficiency.
```
Input: Bins \(B=\{b_{1},b_{2},\ldots,b_{n}\}\)
Initialize: \(S\leftarrow\emptyset\) (set to store selected bins)
while \(B\) is not empty do
    \(best\_bin\leftarrow\arg\min_{b\in B}\text{score}(b)\) {Select bin with lowest score}
    \(S\gets S\cup\{best\_bin\}\) {Add best bin to selected set}
    \(B\gets B\setminus\{best\_bin\}\) {Remove best bin from remaining bins}
end while
Output: Sorted bins \(S\)
```
**Algorithm 1** Greedy Algorithm for Prioritizing Bins
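A Python rendering of Algorithm 1 is given below. The scoring function is not specified in the paper; the score used here (weighting batch count and handling-unit count per bin) is an assumption for illustration.

```
# Python sketch of Algorithm 1 (greedy bin prioritization).  The score()
# function is not defined in the paper; the weighting below is assumed.

def prioritize_bins(bins, score):
    remaining = list(bins)
    ordered = []
    while remaining:                      # greedy loop of Algorithm 1
        best = min(remaining, key=score)  # bin with the lowest score
        ordered.append(best)
        remaining.remove(best)
    return ordered

# Toy data: bin -> (number of batches, number of handling units).
bins = {"A-01": (1, 500), "A-02": (12, 40), "A-03": (3, 3)}
order = prioritize_bins(bins, score=lambda b: 10 * bins[b][0] + bins[b][1])
print(order)  # bins with few batches and few units are checked first
```

Selecting the minimum repeatedly is equivalent to sorting the bins by score; the explicit loop is kept only to mirror the structure of Algorithm 1.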
For bins with few batches and little variety, simply checking in storage-location order is sufficient, without complex optimization algorithms.
In summary, by applying appropriate algorithm combinations according to the specific conditions of different bins, optimal checking routes can be achieved in complex environments, significantly improving the efficiency of inventory checking. Our empirical results also demonstrate that this scenario-adaptive choice of checking algorithms is highly effective.
## 5 Conclusions and Insights for Future Applications
Inventory management is an indispensable component of warehouse administration for businesses, warranting utmost attention and efficient management. Particularly in today's corporate landscape, which underscores the importance of efficient logistics, inventory management assumes a paramount role. We proposed a novel human-machine interaction system for inventory checks based on PDA devices and a Flutter application. This system allows operators to conduct inventory checks at different levels, with a focus on providing a semi-automated approach that accommodates user-specific actions. The experimental outcomes reveal room for improvement in both the system and operator performance, which can further enhance inventory processes. New technologies such as RFID still need to overcome technological barriers and constraints through system integration and new design thinking; in the meantime, the semi-automated stocktaking system can help small and medium-sized retailers and wholesalers identify massive inventories when they cannot afford expensive infrastructure such as a fleet of active RFID readers or a fully automated warehouse.
Future work will include the incorporation of 1D/2D infrared barcodes or QR codes for product information feedback, shifting from sensor-based identification to logic-based data collection. This approach guides operators to make judgments quickly by collecting data through simple logic before proceeding. Furthermore, we intend to enhance the authenticity of data during blind inventory processes, recording each operator action and time to allow real-time supervision by warehouse managers and prevent loss and theft of goods. While the current experiment involved remote testing on inventories exceeding two million items, the next phase will involve comprehensive integration testing in a real warehouse setting with warehouse staff.
In summary, this research introduces a semi-automated inventory system that effectively addresses inventory challenges, emphasizing that full automation is not always necessary. The study aims to contribute to inventory processes and promote further innovations in inventory technology solutions. It contributes to ensuring the integrity and precision of a company's assets and inventory, allowing for timely identification of issues, root cause analysis, and conflict resolution. Consequently, it provides robust assurance for the accurate computation of accounting information. Regular inventory checks facilitate the accurate assessment of the stock levels of various commodities, enabling precise calculation of a company's actual gains and losses based on the total monetary value of warehouse materials. Furthermore, for obsolete materials within the warehouse, swift disposal can be executed, preventing unnecessary occupation of storage space, and reducing warehousing costs. In the context of small and medium-sized enterprises (SMEs) that may lack the financial resources to invest in RFID-based inventory management facilities, the semi-automated inventory management system recommended in this research can be leveraged. This system can be implemented with minimal personnel, at a low cost, and utilizing readily available basic equipment. The result is a highly efficient, accurate, and visually intuitive inventory management system.
|
2301.00143 | An implementation of the density functional perturbation theory in the
PAW framework | Quantifying materials' dynamical responses to external electromagnetic fields
is central to understanding their physical properties. Here we present an
implementation of the density functional perturbation theory for the
computation of linear susceptibilities using the projector augmented-wave
method. The Sternheimer equations are solved self-consistently through a nested
iterative procedure to compute the first-order wavefunctions, from which the
linear susceptibilities are obtained. As a demonstration, we compute the spin
wave spectral functions of two magnetic metals. The computed magnon spectra for
half-metallic CrO$_2$ and a Heusler intermetallic Cu$_2$MnAl show gapless
Goldstone modes when spin rotation symmetry is preserved and display reasonable
agreement with available experimental data. The Landau damping is computed to
be small in CrO$_2$, but significant in Cu$_2$MnAl producing an asymmetric
Lorentzian spectral lineshape. The access to linear susceptibilities as well as
first-order wavefunctions offers a range of novel possibilities in quantitative
understanding of materials' electronic properties from \textit{ab initio}
methods. | Xiaoqiang Liu, Yihao Lin, Ji Feng | 2022-12-31T07:29:51Z | http://arxiv.org/abs/2301.00143v1 | # An implementation of the density functional perturbation theory in the PAW framework
###### Abstract
Quantifying materials' dynamical responses to external electromagnetic fields is central to understanding their physical properties. Here we present an implementation of the density functional perturbation theory for the computation of linear susceptibilities using the projector augmented-wave method. The Sternheimer equations are solved self-consistently through a nested iterative procedure to compute the first-order wavefunctions, from which the linear susceptibilities are obtained. As a demonstration, we compute the spin wave spectral functions of two magnetic metals. The computed magnon spectra for half-metallic CrO\({}_{2}\) and a Heusler intermetallic Cu\({}_{2}\)MnAl show gapless Goldstone modes when spin rotation symmetry is preserved and display reasonable agreement with available experimental data. The Landau damping is computed to be small in CrO\({}_{2}\), but significant in Cu\({}_{2}\)MnAl producing an asymmetric Lorentzian spectral lineshape. The access to linear susceptibilities as well as first-order wavefunctions offers a range of novel possibilities in quantitative understanding of materials' electronic properties from _ab initio_ methods.
## I Introduction
A microscopic understanding of electrical and magnetic characteristics of materials plays a key role in condensed matter physics, furnishing a unified insight into a wide range of phenomena. Indeed, the generalized density response functions (a.k.a. susceptibilities) of a many-electron system to external electromagnetic fields, [1] in a broad sense encompasses the information of the usual longitudinal dielectric function and magnetic permeability, as well as the cross terms for electromagnetic coupling. The properties of collective excitations, e.g. plasmon and magnon, can also be procured from the susceptibilities. Therefore, computing the susceptibilities, which involve both charge and spin degrees of freedom of electrons, is essential to a full characterization of electronic properties. Though relatively straightforward for a non-interacting system, computing the susceptibilities for an interacting many-electron system is a nontrivial task due to interaction effects.
The Kohn-Sham density-functional theory (DFT) [2] is by far the most widely employed ground state electronic structure method for materials and molecules. By mapping the ground state energy to a non-interacting system described by Kohn-Sham equations, the Kohn-Sham ansatz enables the variational formulation of the static responses of a many-electron system. The time-dependent (td) DFT [3] is subsequently developed, in which the electrodynamics is described by td Kohn-Sham equations. In these theories, the Hartree and exchange-correlation potentials are functionals of density, and treated as self-consistent fields. The self-consistent first-order perturbation in td DFT leads to the density functional perturbation theory (DFPT), [4] which is then the machinery for linear response calculations leading directly to the full response functions.
DFPT-based methods have been successfully applied to calculate the dielectric function [5; 6] and phonon dispersion, [7; 8] with the results in quantitative agreement with experimental observations. In the computations of dielectric function, the first-order wavefunctions with respect to \(\mathbf{k}\) are solved, while deformation potential from frozen atomic displacements is used as the external field to compute phonon dispersions. Dynamical responses to external electromagnetic fields from DFPT, which account for the screening of both charge and spin, have attracted considerable interest recently. [9; 10; 11; 12; 13; 14; 15; 16] In addition to quantifying electrical and magnetic properties of materials, the linear susceptibilities computed from DFPT also find applications in the \(GW\) approximations. [17; 18; 19]
Approaches to the DFPT can be broadly grouped into two categories: a Dyson-like equation is solved in the first, [10; 11; 16] whereas the Sternheimer equations are solved in the second. [9; 12; 13; 14; 15] The Dyson-like equation approach starts with response functions computed for the Kohn-Sham ground state. Though formally transparent and amenable to various iterative techniques, the Dyson-like equation approach suffers from two shortcomings. It requires a large number of unoccupied states and huge planewave bases for adequate convergence. [16] More serious is the subtle basis set incompatibility between the Kohn-Sham states and the DFPT process, which gives rise to an artifactual spin excitation gap in systems with spin rotation symmetry. The latter problem can only be partially amended with delicate engineering of the interaction kernel. [11; 20] In the second category, the first-order wavefunctions are procured (often iteratively) by solving the Sternheimer equations, from which the density is updated with charge mixing iterations and the response functions are computed upon convergence. In this case, one is forced to deal with wavefunctions and various pseudopotentials with all the technicalities, [21; 22; 23; 24]
in addition to the nested iterative procedure. Since the full Kohn-Sham response functions are never required, this method is free of the burden of summation over a huge number of empty states. In addition, the first-order wavefunctions and densities computed are actually bonus, which can be useful for computing a variety of properties.
A particularly popular planewave-based approach to DFT is based on the projector augmented-wave (PAW) method. [23; 24] Combining the formal simplicity of pseudopotentials and the versatility of the linearized augmented-planewave method, the PAW method offers both efficiency and accuracy to Kohn-Sham DFT calculations on extended solids, and a wide range of capabilities in various implementations. [25; 26; 27] Despite its popularity, DFPT in the PAW framework has remained to be developed, which is accomplished in this work by solving the td Sternheimer equations to compute the linear susceptibilities of crystalline materials accounting for both charge and spin degrees of freedom. The paper is organized as follows. In Sec. II.1 the general theory of DFPT is introduced, from the viewpoint of the dressed spin in td external electromagnetic fields. The screening built on the notion of dressed spin leads to the Sternheimer equations in the frequency and momentum domain and an explicit formula for the linear susceptibilities. In Sec. II.2 the PAW method is reviewed, based on which the formulation of DFPT in the PAW framework is described, along with a few implementation details. As a first calibration of our implementation, the spin wave spectral functions are extracted from the computed linear susceptibilities. Two examples are presented: half-metallic CrO\({}_{2}\) (Sec. III.1) shows a clean spin wave spectrum and minimal Landau damping, and a full Heusler intermetallic Cu\({}_{2}\)MnAl (Sec. III.2) shows significant Landau damping in the spin excitations that can be quantified with a simple asymmetric Lorentzian lineshape. Lastly, a summary is provided with an eye on room for development from the algorithmic and physical points of view.
## II Theory and implementation
### Density functional perturbation theory
The td DFT offers an efficient description of the dynamics of an interacting many-electron system in the presence of external fields. [3] As a self-consistent perturbation theory of td DFT, DFPT is introduced in this section, wherein the Sternheimer equations are specialized to crystalline systems under a monochromatic, periodic external electromagnetic field.
In td DFT, to account for both charge and spin degrees of freedom, the generalized density is given by
\[\begin{split}\rho(\mathbf{r},t)&=\sum_{n}\theta_{n}\, \mathrm{tr}\{\psi_{n}^{\dagger}(\mathbf{r},t)\sigma\psi_{n}(\mathbf{r},t)\}\\ &=(\rho_{0},\mathbf{m})=(\rho_{0},\rho_{1},\rho_{2},\rho_{3})\end{split} \tag{1}\]
where \(\theta_{n}\) is the occupancy of the spinor single-particle state \(\psi_{n}\), and the four-vector spin \(\sigma=(\sigma_{0},\sigma_{1},\sigma_{2},\sigma_{3})\) (\(\sigma_{0}\) is the identity matrix and \(\sigma_{\alpha}\) with \(\alpha=1,2,3\) are the Pauli matrices). The atomic units [28] are adopted in this paper, so \(\rho_{0}\) is the total charge density, and \(\mathbf{m}\) is magnetization density. The dynamics of \(\psi_{n}\) is prescribed by the td Kohn-Sham equations [3]
\[\mathrm{i}\partial_{t}|\psi_{n}(t)\rangle=[H+\delta H(t)]|\psi_{n}(t)\rangle. \tag{2}\]
The ground state Kohn-Sham Hamiltonian [2] in Eq. (2) is \(H=-\frac{1}{2}\nabla^{2}+v[\rho^{(0)}](\mathbf{r})\) where the self-consistent potential is a functional of the ground state density \(\rho^{(0)}\), composed of ionic, Hartree and exchange-correlation (xc) potentials, namely, \(v^{\mathrm{i}}\), \(v^{\mathrm{H}}\) and \(v^{\mathrm{xc}}\). Though the xc potential \(v^{\mathrm{xc}}\) as a functional of density is in principle nonlocal in space and time, [29; 30; 31; 32; 33] the commonly adopted local and adiabatic approximation (ALDA) is assumed in this work. [34; 35; 36]
The first-order Hamiltonian \(\delta H\) comprises two contributions. The first arises from the coupling of the four-vector spin \(\sigma\) with external fields
\[v^{\mathrm{ext}}(\mathbf{r},t)=-B_{\alpha}(\mathbf{r},t)\sigma_{\alpha}, \tag{3}\]
where the four-vector electromagnetic field \(B(\mathbf{r},t)=(-\phi,\frac{1}{2}\mathbf{B})\). [37] The indices \(\alpha,\beta=0,1,2,3\) are implicitly summed over when repeated, but we will keep other summations explicit. The self-consistent inclusion of the density dependence in the Hartree and xc potentials means that \(\delta H\) also includes a second contribution from induced density \(\delta\rho(\mathbf{r},t)\) that screens \(v^{\mathrm{ext}}\). In the adiabatic linear response theory, this can be formulated in terms of a _dressed spin_\(\tau_{\alpha}\)
\[\tau_{\alpha}(\mathbf{r},\mathbf{r}^{\prime},t-t^{\prime})=-\frac{\delta H (\mathbf{r},t)}{\delta B_{\alpha}(\mathbf{r}^{\prime},t^{\prime})}=\sigma_{\alpha} \delta(\mathbf{r}-\mathbf{r}^{\prime})\delta(t-t^{\prime})\\ -\sigma_{\gamma}\int f_{\gamma\beta}(\mathbf{r},\mathbf{r}^{\prime\prime })\chi_{\beta\alpha}(\mathbf{r}^{\prime\prime},\mathbf{r}^{\prime},t-t^{\prime}) \mathrm{d}\mathbf{r}^{\prime\prime}. \tag{4}\]
Here \(\chi_{\alpha\beta}(\mathbf{r},\mathbf{r}^{\prime},t)\) is the linear susceptibility that we pursue in this work, defined via
\[\delta\rho_{\alpha}(\mathbf{r},t)=\int\!\chi_{\alpha\beta}(\mathbf{r},\mathbf{r}^{\prime},t-t^{\prime})B_{\beta}(\mathbf{r}^{\prime},t^{\prime})\mathrm{d}\mathbf{r}^{\prime} \mathrm{d}t^{\prime}. \tag{5}\]
The interaction kernel has two components, \(f_{\alpha\beta}=f_{\alpha\beta}^{\mathrm{H}}+f_{\alpha\beta}^{\mathrm{xc}}\), namely, the Hartree and xc kernels in the ALDA
\[f_{\alpha\beta}(\mathbf{r},\mathbf{r}^{\prime})=\frac{\delta_{\alpha 0}\delta_{\beta 0}}{|\mathbf{r} - \mathbf{r}^{\prime}|}+\frac{1}{2}\delta(\mathbf{r}-\mathbf{r}^{\prime})\,\mathrm{tr}\left[ \sigma_{\alpha}\frac{\partial v^{\mathrm{xc}}}{\partial\rho_{\beta}}\right]. \tag{6}\]
In terms of the dressed spin, the first-order Hamiltonian in unscreened external fields is written
\[\delta H(\mathbf{r},t)=-\int\!\tau_{\alpha}(\mathbf{r},\mathbf{r}^{\prime},t-t^{\prime})B_ {\alpha}(\mathbf{r}^{\prime},t^{\prime})\mathrm{d}\mathbf{r}^{\prime}\mathrm{d}t^{ \prime}. \tag{7}\]
Now with the first-order Hamiltonian, the first-order wavefunctions can be obtained by solving the Sternheimer equations [38; 39; 38; 40; 8]
\[(\mathrm{i}\partial_{t}-H)|\psi_{n}^{(1)}(t)\rangle=\delta H(t)|\psi_{n}^{(0)}( t)\rangle, \tag{8}\]
in which \(\psi_{n}^{(\ell)}(t)\) is the \(\ell\)th-order wavefunction, with \(|\psi_{n}^{(0)}(t)\rangle=e^{-\mathrm{i}\varepsilon_{n}t}|\psi_{n}\rangle\). With the first-order wavefunctions, we will be able to compute the induced density via the variation of Eq. (1), from which the first-order Hamiltonian will be updated.
With the above introduction, a DFPT calculation can then be performed in a nested iterative process depicted in Fig. 1. In the initializing step, \(\delta H\) is constructed from the external electromagnetic fields, whence the Sternheimer equations are solved in the inner iteration. This produces a set of first-order wavefunctions, from which the induced density is calculated. A charge mixing strategy [41; 42] is employed to revise the induced density, with the linear susceptibility and dressed spin computed subsequently. Then a new \(\delta H\) is constructed from the updated spin to enter a second round of the outer iteration. The above process is repeated till convergence.
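To make the self-consistency concrete, the following toy calculation iterates a scalar analogue of the outer loop in Fig. 1: the bare response \(\chi_0\) and the kernel \(f\) are single numbers here, and the Sternheimer solve is replaced by the corresponding scalar update. The numerical values are arbitrary assumptions, and the converged result reproduces the expected Dyson form \(\chi=\chi_0/(1-f\chi_0)\); this is an illustration of the nested structure only, not the actual implementation.

```
# Toy scalar analogue of the nested DFPT loop (illustration only, not the
# actual implementation).  chi0 plays the role of the bare Kohn-Sham response,
# f the Hartree+xc kernel, and linear mixing stands in for charge mixing.
chi0 = -0.8      # assumed bare susceptibility
f = 0.5          # assumed interaction kernel
B = 1.0e-3       # external field amplitude
alpha = 0.4      # linear mixing parameter

delta_rho = chi0 * B                                 # initial induced density (bare response)
for it in range(200):
    delta_rho_new = chi0 * (B + f * delta_rho)       # respond to external + induced potential
    if abs(delta_rho_new - delta_rho) < 1e-12:
        break
    delta_rho = (1 - alpha) * delta_rho + alpha * delta_rho_new   # charge mixing step

chi = delta_rho / B
print(chi, chi0 / (1 - f * chi0))                    # converged chi matches the Dyson-like form
```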
Now we will explain how the Sternheimer equations are solved in a crystalline solid. For electrons in a crystal, the initial Kohn-Sham states are Bloch functions such that \(|\psi_{n}\rangle\mapsto|\psi_{n\mathbf{k}}\rangle=e^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}|u _{n\mathbf{k}}\rangle\), where \(n\) is the band index and \(\mathbf{k}\) is the crystal momentum, and \(u_{n\mathbf{k}}\) is the cell-periodic part of the Bloch function. The system is subject to spatially periodic and monochromatic external electromagnetic fields
\[B_{\alpha}(\mathbf{r},t)=\mathcal{B}_{\alpha}e^{\mathrm{i}(\mathbf{p}\cdot\mathbf{r}- \omega t)}+\text{c.c.}, \tag{9}\]
where \(\mathbf{p}=\mathbf{q}+\mathbf{g}\), with \(\mathbf{g}\) a reciprocal lattice vector chosen such that \(\mathbf{q}\) lies in the first Brillouin zone. In this case, the following expansion is useful
\[\zeta(\mathbf{r},t)=\sum_{l}e^{\mathrm{i}(\mathbf{l}\cdot\mathbf{r}-\nu t)}\zeta(\mathbf{r},l) \tag{10}\]
for \(\zeta=\delta H,\delta\rho_{\alpha}\), in which \(\zeta(\mathbf{r},l)\) is a cell-periodic function with
\[l\equiv(\nu,\mathbf{l})=\pm(\omega,\mathbf{q}).\]
Then the first-order Hamiltonian in the \(l\) channel is
\[\delta H(\mathbf{r},l)=-e^{-\mathrm{i}\mathbf{l}\cdot\mathbf{r}}\tau_{\alpha}(\mathbf{r}, \mathrm{sign}(\nu\omega)\mathbf{p},\nu)\mathcal{B}_{\alpha}(l), \tag{11}\]
where \(\mathcal{B}_{\alpha}(\omega,\mathbf{q})=\mathcal{B}_{\alpha}\), \(\mathcal{B}_{\alpha}(-\omega,-\mathbf{q})=(\mathcal{B}_{\alpha})^{*}\), and
\[\tau_{\alpha}(\mathbf{r},\mathbf{p}^{\prime},\nu)=\int\tau_{\alpha}(\mathbf{r},\mathbf{r}^{ \prime},t)e^{\mathrm{i}(\mathbf{p}^{\prime}\cdot\mathbf{r}^{\prime}+\nu t)}\mathrm{d }\mathbf{r}^{\prime}\mathrm{d}t. \tag{12}\]
The first-order wavefunction is expanded as
\[|\psi_{n\mathbf{k}}^{(1)}(t)\rangle=\int\frac{\mathrm{d}\nu}{2\pi}e^{-\mathrm{i}( \nu+\varepsilon_{n\mathbf{k}})t}|\psi_{n\mathbf{k}}^{(1)}(\nu)\rangle. \tag{13}\]
For the fields in Eq. (9), \(|\psi_{n\mathbf{k}}^{(1)}(\nu)\rangle\) is nonzero only when \(\nu=\pm\omega\) and can be written as
\[|\psi_{n\mathbf{k}}^{(1)}(\nu)\rangle=e^{\mathrm{i}(\mathbf{k}+\mathbf{l})\cdot\mathbf{r}}|u_{ n\mathbf{k}}^{(1)}(l)\rangle. \tag{14}\]
Then the Sternheimer equations become
\[(\nu+\mathrm{i}\eta+\varepsilon_{n\mathbf{k}}-H_{\mathbf{k}+l})\,|u_{n\mathbf{k}}^{(1)}(l )\rangle=\delta H(\mathbf{r},l)|u_{n\mathbf{k}}\rangle, \tag{15}\]
in which \(H_{\mathbf{k}}=e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}}He^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\) is the Bloch Hamiltonian. A positive infinitesimal \(\eta\) is introduced on the left-hand side as a convergence factor to embody the causality structure of the linear susceptibility. In actual calculations, \(\eta\) takes finite values to ensure convergence with finite \(\mathbf{k}\)-mesh especially for metals and bestows finite broadening on spectral peaks.
From the first-order wavefunctions, we compute the first-order induced density as
\[\delta\rho_{\alpha}(\mathbf{r},l)=\sum_{n\mathbf{k}}\theta_{n\mathbf{k}}\operatorname{tr} \left\{u_{n\mathbf{k}}^{\dagger}(\mathbf{r})\sigma_{\alpha}u_{n\mathbf{k}}^{(1)}(\mathbf{r},l )+{u_{n\mathbf{k}}^{(1)}}^{\dagger}(\mathbf{r},-l)\sigma_{\alpha}u_{n\mathbf{k}}(\mathbf{r}) \right\}. \tag{16}\]
The linear susceptibility can be extracted from the Fourier components of \(\delta\rho_{\alpha}(\mathbf{r},l)\)
\[\delta\rho_{\alpha}(\mathbf{g}^{\prime},l)=\chi_{\alpha\beta}(\mathbf{g}^{\prime}, \mathrm{sign}(\nu\omega)\mathbf{g},l)\mathcal{B}_{\beta}(l),\]
whence
\[\chi_{\alpha\beta}(\mathbf{g}^{\prime},\mathbf{g}^{\prime\prime},l)= \int e^{-\mathrm{i}(\mathbf{g}^{\prime}+\mathbf{l})\cdot\mathbf{r}^{\prime}} \chi_{\alpha\beta}(\mathbf{r}^{\prime},\mathbf{r}^{\prime\prime},\nu)e^{\mathrm{i}(\mathbf{g }^{\prime\prime}+\mathbf{l})\cdot\mathbf{r}^{\prime\prime}}\mathrm{d}\mathbf{r}^{\prime} \mathrm{d}\mathbf{r}^{\prime\prime}. \tag{17}\]
Figure 1: Flow chart for a nested-loop DFPT calculation in the Sternheimer equation approach. The outer loop depicted here is the charge mixing. The inner loop is incurred in solving the Sternheimer equations in each step of the outer iteration.
It is worth mentioning that the Sternheimer equations in (15) admit the following formal solutions in the \(l\) channel
\[|u^{(1)}_{n\mathbf{k}}(l)\rangle=-\mathcal{B}_{\alpha}(l)\sum_{n^{ \prime}}\frac{|u_{n^{\prime}\mathbf{k}+\mathbf{l}}\rangle\tau_{\alpha}(\mathbf{k},l)_{n^{ \prime}n}}{\nu+\varepsilon_{n\mathbf{k}}-\varepsilon_{n^{\prime}\mathbf{k}+\mathbf{l}}+ \mathrm{i}\eta}, \tag{18}\]
in which the dressed spin matrix element is
\[\tau_{\alpha}(\mathbf{k},l)_{n^{\prime}n}=\int u^{\dagger}_{n^{\prime }\mathbf{k}+\mathbf{l}}(\mathbf{r})e^{-\mathrm{i}\mathbf{l}\cdot\mathbf{r}}\tau_{\alpha}(\mathbf{r}, \mathrm{sign}(\nu\omega)\mathbf{p},\nu)u_{n\mathbf{k}}(\mathbf{r})\mathrm{d}\mathbf{r}. \tag{19}\]
Because it requires the summation over a large number of empty states, this formal solution is not used in practice. [43] The screened susceptibility then has the same expression as the (bare) Kohn-Sham susceptibility, except that the (unscreened) external fields are now coupled to the dressed spin; that is,
\[\chi_{\alpha\beta}(\mathbf{g}^{\prime},\mathrm{sign}(\nu\omega)\mathbf{g},l)=-\sum_{ nn^{\prime}\mathbf{k}}(\theta_{n\mathbf{k}}-\theta_{n^{\prime}\mathbf{k}+\mathbf{l}})\frac{ \langle u_{n\mathbf{k}}|e^{-\mathrm{i}\mathbf{g}^{\prime}\cdot\mathbf{r}}\sigma_{\alpha}| u_{n^{\prime}\mathbf{k}+\mathbf{q}}\rangle\tau_{\beta}(\mathbf{k},l)_{n^{\prime}n}}{\nu+ \varepsilon_{n\mathbf{k}}-\varepsilon_{n^{\prime}\mathbf{k}+\mathbf{l}}+\mathrm{i}\eta}. \tag{20}\]
In arriving at the last expression, we have used the fact that \(\tau_{\alpha}(\mathbf{k},l)_{n^{\prime}n}=\tau_{\alpha}(\mathbf{k}+\mathbf{l},-l)_{nn^{ \prime}}^{*}\).
### DFPT with PAW method
The previous Subsection presents a sketch of the DFPT for periodic systems without recourse to computational details. In practical calculations, however, various technologies have been developed such that one can perform DFT (and therefore DFPT) calculations on valence electrons only for crystalline materials using planewave bases with the aid of pseudopotentials, to yield satisfactory accuracy with high efficiency. It is well known that the norm-conserving pseudopotential [21] requires large planewave bases particularly for localized orbitals in transition elements, while the application of the ultrasoft pseudopotential [22] is partly limited by its rather laborious construction. In contrast, the PAW method, combining the pseudopotential and linearized augmented-plane-wave methods, is free from the above difficulties and has been used widely. Thus, performing DFPT within the PAW framework [23; 24] for inhomogeneous and td electromagnetic fields is evidently useful though notably nontrivial. DFPT with the PAW method has been implemented within the Vienna ab initio simulation package (VASP) [25] for atomic displacements in the static and long-wavelength limit to calculate zone-center phonon energies. The extension to inhomogeneous and td electromagnetic fields is accomplished in this work based on VASP 5.4.4, and a few implementation details warrant further clarification. Here, we will briefly review the PAW method and describe how it is used in our DFPT calculations. Although the formalism for the ground state quantities is identical to that in the literature and notationally notorious, we feel compelled to provide some of these details, particularly in view of the time- and position-dependent external fields involved in our implementation.
The PAW method is based on a linear transformation between the all-electron (AE) Hilbert space orthogonal to core states and pseudo (PS) Hilbert space. The AE and PS wavefunctions are related by
\[|\psi_{n\mathbf{k}}\rangle=T|\tilde{\psi}_{n\mathbf{k}}\rangle, \tag{21}\]
with the linear operator defined as
\[T=1+\sum_{i}(|\phi_{i}\rangle-|\tilde{\phi}_{i}\rangle)\langle\tilde{p}_{i}|. \tag{22}\]
The index \(i\) is a shorthand encapsulating the atomic site located at \(\mathbf{R}_{i}\) as well as the quantum numbers (\(nlm\)) of the local orbitals and spin. \(\phi\), \(\tilde{\phi}\) and \(\tilde{p}\) are AE partial waves, PS partial waves and projector functions, respectively, and should all be understood as spinors. In order to perform the DFPT calculations on a crystalline solid in the electromagnetic fields in Eq. (9), the key is to find the cell-periodic part of each PAW-pseudized quantity, especially the nonlocal ones.
Upon application of the time-independent transformation \(T\) to the first-order wavefunctions the pseudized Sternheimer equations read
\[(\mathrm{i}\partial_{t}S-\tilde{H})|\tilde{\psi}^{(1)}_{n\mathbf{k}}(t)\rangle= \delta\tilde{H}(t)|\tilde{\psi}^{(0)}_{n\mathbf{k}}(t)\rangle, \tag{23}\]
with \(S=T^{\dagger}T\), \(\tilde{H}=T^{\dagger}HT\) and \(\delta\tilde{H}(t)=T^{\dagger}\delta H(t)T\).
According to the implementation of the PAW method in
VASP, [24] we have
\[\begin{split} S&=1+\sum_{ij}\left|\tilde{p}_{i}\right>q_ {ij}\left<\tilde{p}_{j}\right|,\\ \tilde{H}&=-\frac{1}{2}\Delta+\tilde{v}^{\text{eff}}+ \sum_{ij}\left|\tilde{p}_{i}\right>D_{ij}\left<\tilde{p}_{j}\right|,\end{split} \tag{24}\]
in which the nonlocal potential \(D_{ij}=\hat{D}_{ij}+\tilde{D}_{ij}\) with \(\tilde{D}_{ij}=D_{ij}^{1}-\tilde{D}_{ij}^{1}\). The quantities \(q_{ij}\), \(D_{ij}^{1}\) and \(\tilde{D}_{ij}^{1}\) are defined in reference. [24] Apparently, the local potential \(\tilde{v}^{\text{eff}}(\mathbf{r})\) is a functional of pseudo density \(\tilde{n}(\mathbf{r})\) and compensation density \(\hat{n}(\mathbf{r})\), while \(\tilde{D}_{ij}\) is a function of density matrix \(\varrho\), i.e.
\[\begin{split}\tilde{v}^{\text{eff}}=&\tilde{v}^{ \text{eff}}[\tilde{n}+\hat{n}],\\ \hat{D}_{ij}=&\int\frac{1}{2}\operatorname{tr}\{ \sigma_{\alpha}\tilde{v}^{\text{eff}}(\mathbf{r})\}Q_{ij}^{\alpha}(\mathbf{r}) \mathrm{d}\mathbf{r},\\ \tilde{D}_{ij}=&\tilde{D}_{ij}(\varrho).\end{split} \tag{25}\]
We have hidden the functional dependence on the pseudized core densities in \(\tilde{v}^{\text{eff}}(\mathbf{r})\), which are kept frozen during the DFPT calculations. \(\tilde{n}(\mathbf{r})\), \(\hat{n}(\mathbf{r})\) and \(\varrho_{ij}\) are given by, respectively,
\[\begin{split}\tilde{n}(\mathbf{r})=&\sum_{n\mathbf{k}} \theta_{n\mathbf{k}}\operatorname{tr}\{\tilde{\psi}_{n\mathbf{k}}^{\dagger}(\mathbf{r}) \sigma\tilde{\psi}_{n\mathbf{k}}(\mathbf{r})\},\\ \hat{n}(\mathbf{r})=&\sum_{i,j}\varrho_{ij}Q_{ij}(\mathbf{r }),\\ \varrho_{ij}=&\sum_{n\mathbf{k}}\theta_{n\mathbf{k}}\langle \tilde{\psi}_{n\mathbf{k}}|\tilde{p}_{i}\rangle\langle\tilde{p}_{j}|\tilde{\psi}_{ n\mathbf{k}}\rangle.\end{split} \tag{26}\]
Here \(Q_{ij}(\mathbf{r})=\sum_{L}\operatorname{tr}\{\sigma\hat{Q}_{ij}^{L}(\mathbf{r})\}\) with \(\hat{Q}_{ij}^{L}(\mathbf{r})\) defined to construct the compensation density \(\hat{n}(\mathbf{r})\) in the reference. [24] It should be noticed that for \(\varrho_{ij}\), only elements with \(\mathbf{R}_{i}=\mathbf{R}_{j}\) are useful in our calculations, while \(D_{ij}\) are nonzero only when \(\mathbf{R}_{i}=\mathbf{R}_{j}\).
Now we derive the expression for the first-order Hamiltonian \(\delta\hat{H}(t)\). For the fields in Eq. (9), the first-order local densities and potentials follow the same expansion as in Eq. (10), while the first-order density matrix \(\delta\varrho_{ij}\) and nonlocal potential \(\delta D_{ij}\) can be expanded as
\[\delta\zeta_{ij}(t)=\sum_{l}e^{\mathrm{i}(\mathbf{l}\cdot\mathbf{R}_{i}-\nu t)}\delta \zeta_{ij}(l). \tag{27}\]
Here, the factor \(e^{\mathrm{i}\mathbf{l}\cdot\mathbf{R}_{i}}\) in the \(l\) channel is introduced such that \(\delta\zeta_{ij}(l)\) is cell-periodic, i.e., \(\delta\zeta_{ij}(l)=\delta\zeta_{i^{\prime}j^{\prime}}(l)\) if the positions of atomic site for \(i,j\) and \(i^{\prime},j^{\prime}\) differ by a lattice vector.
The contribution of external electromagnetic fields in \(\delta\tilde{H}(t)\) can be calculated directly
\[\begin{split}\delta\tilde{H}^{\text{ext}}(\mathbf{r},t)=& \sum_{l}e^{-\mathrm{i}\nu t}\left[e^{\mathrm{i}\mathbf{l}\cdot\mathbf{r}}v^{\text{ext} }(\mathbf{r},l)\,+\sum_{ij}e^{\mathrm{i}\mathbf{l}\cdot\mathbf{R}_{i}}\left|\tilde{p}_{i} \right>D_{ij}^{\text{ext}}(l)\left<\tilde{p}_{j}\right|\right],\end{split} \tag{28}\]
with
\[\begin{split} v^{\text{ext}}(\mathbf{r},l)=&-\mathcal{B} _{\alpha}(l)\sigma_{\alpha}e^{\mathrm{sign}(\nu\omega)\mathrm{i}\mathbf{g}\cdot \mathbf{r}},\\ D_{ij}^{\text{ext}}(l)=&\langle\phi_{i}|e^{\mathrm{i }\mathbf{l}\cdot(\mathbf{r}-\mathbf{R}_{i})}v^{\text{ext}}(\mathbf{r},l)|\phi_{j}\rangle- \langle\tilde{\phi}_{i}|e^{\mathrm{i}\mathbf{l}\cdot(\mathbf{r}-\mathbf{R}_{i})}v^{\text{ ext}}(\mathbf{r},l)|\tilde{\phi}_{j}\rangle.\end{split} \tag{29}\]
The contribution to \(\delta\tilde{H}(t)\) from the induced densities has a similar expression as in Eq. (28) and can be calculated via an explicit finite difference, as \(\tilde{H}\) is a functional of \(\tilde{n}\), \(\hat{n}\) and \(\rho_{ij}\). The first-order densities are found to be
\[\begin{split}\delta\tilde{n}(\mathbf{r},l)=&\sum_{n\mathbf{k} }\theta_{n\mathbf{k}}\operatorname{tr}\{\tilde{u}_{n\mathbf{k}}^{\dagger}(\mathbf{r}) \sigma\tilde{u}_{n\mathbf{k}}^{(1)}(\mathbf{r},l)+\tilde{u}_{n\mathbf{k}}^{(1),\dagger}(\bm {r},-l)\sigma\tilde{u}_{n\mathbf{k}}(\mathbf{r})\},\\ \delta\hat{n}(\mathbf{r},l)=&\sum_{i,j}e^{\mathrm{i}\bm {l}(\mathbf{R}_{i}-\mathbf{r})}\delta\varrho_{ij}(l)Q_{ij}(\mathbf{r}),\\ \delta\varrho_{ij}(l)=&\sum_{n\mathbf{k}}\theta_{n\mathbf{k} }[\langle\tilde{u}_{n\mathbf{k}}|\tilde{p}_{i\mathbf{k}}\rangle\langle\tilde{p}_{j \mathbf{k}+\mathbf{l}}|\tilde{u}_{n\mathbf{k}}^{(1)}(l)\rangle+\langle\tilde{u}_{n\mathbf{k} }^{(1)}(-l)|\tilde{p}_{i\mathbf{k}-\mathbf{l}}\rangle\langle\tilde{p}_{j\mathbf{k}}| \tilde{u}_{n\mathbf{k}}\rangle],\end{split} \tag{30}\]
where we define \(|\tilde{p}_{i\mathbf{k}}\rangle=e^{-\mathrm{i}\mathbf{k}\cdot(\mathbf{r}-\mathbf{R}_{i})}| \tilde{p}_{i}\rangle\). To linear order in external fields, the first order effective local potential is given by
\[\delta\tilde{v}^{\text{eff}}(l)\approx\tilde{v}^{\text{eff}}[\tilde{n}+\hat{n}+ \delta\tilde{n}(l)+\delta\hat{n}(l)]-\tilde{v}^{\text{eff}}[\tilde{n}+\hat{n}]. \tag{31}\]
Similarly, the first-order nonlocal potentials can be approximated as
\[\begin{split}\delta\hat{D}_{ij}(l)\approx&\int e^{ \mathrm{i}\mathbf{l}\cdot(\mathbf{r}-\mathbf{R}_{i})}\frac{1}{2}\operatorname{tr}\{\sigma_{ \alpha}\delta\tilde{v}^{\text{eff}}(\mathbf{r},l)\}Q_{ij}^{\alpha}(\mathbf{r}) \mathrm{d}\mathbf{r},\\ \delta\tilde{D}_{ij}(l)\approx&\tilde{D}_{ij}( \varrho+\delta\varrho(l))-\tilde{D}_{ij}(\varrho).\end{split} \tag{32}\]
Though introduced as forward differences in Eqs. (31) and (32), these quantities are evaluated using 4th-order centered finite differences, with a step length of one thousandth of the density variables.
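For reference, the standard fourth-order centered stencil for the first derivative of a generic function \(v\) of a scalar variable \(x\) with step \(h\) is the textbook formula
\[v^{\prime}(x_{0})\approx\frac{-v(x_{0}+2h)+8\,v(x_{0}+h)-8\,v(x_{0}-h)+v(x_{0}-2h)}{12\,h},\]
quoted here only to fix the form of the stencil; the step \(h\) corresponds to the thousandth-of-the-density perturbation mentioned above.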
With the above results, the final Sternheimer equations become
\[\left(\nu S_{\mathbf{k}+\mathbf{l}}+\varepsilon_{n\mathbf{k}}S_{\mathbf{k}+\mathbf{l}}-\tilde{H}_{ \mathbf{k}+\mathbf{l}}\right)\left|\tilde{u}_{n\mathbf{k}}^{(1)}(l)\right\rangle=\delta \tilde{H}_{\mathbf{k}}(l)|\tilde{u}_{n\mathbf{k}}\rangle, \tag{33}\]
with
\[S_{\mathbf{k}+\mathbf{l}} =1+\sum_{ij}\left|\tilde{p}_{i\mathbf{k}+\mathbf{l}}\right\rangle q_{ij} \left\langle\tilde{p}_{j\mathbf{k}+\mathbf{l}}\right|, \tag{34}\] \[\tilde{H}_{\mathbf{k}+\mathbf{l}} =-\frac{1}{2}\Delta_{\mathbf{k}+\mathbf{l}}+\tilde{v}^{\rm eff}+\sum_{ij} \left|\tilde{p}_{i\mathbf{k}+\mathbf{l}}\right\rangle D_{ij}\left\langle\tilde{p}_{j \mathbf{k}+\mathbf{l}}\right|,\] \[\delta\tilde{H}_{\mathbf{k}}(l) =v^{\rm ext}(l)+\delta\tilde{v}^{\rm eff}(l)+\sum_{ij}\left| \tilde{p}_{i\mathbf{k}+\mathbf{l}}\right\rangle\left[D_{ij}^{\rm ext}(l)+\delta \hat{D}_{ij}(l)+\delta\tilde{D}_{ij}(l)\right]\left\langle\tilde{p}_{j\mathbf{k}} \right|.\]
Here \(\tilde{u}_{n\mathbf{k}}\) and \(\tilde{u}_{n\mathbf{k}}^{(1)}(l)\) are the cell-periodic parts of corresponding pseudo wavefunctions, respectively.
The pseudized Sternheimer equations in the \(\pm(\omega,\mathbf{q})\) channels are solved separately in each iteration using a variant of the residual minimization method with direct inversion in the iterative subspace (RMM-DIIS), [44; 45] which is already implemented in VASP 5.4.4. Löwdin perturbation theory is also applied to correct the first-order wavefunctions in the subspace of occupied states and low-lying excitations to speed up convergence
\[|\tilde{u}_{n\mathbf{k}}^{(1)}(l)\rangle\rightarrow|\tilde{u}_{n\mathbf{k}}^{(1)}(l) \rangle-\sum_{n^{\prime}}|\tilde{u}_{n^{\prime}\mathbf{k}+\mathbf{l}}\rangle\langle \tilde{u}_{n^{\prime}\mathbf{k}+\mathbf{l}}|S_{\mathbf{k}+\mathbf{l}}|\tilde{u}_{n\mathbf{k}}^{(1 )}(l)\rangle+\sum_{n^{\prime}}\frac{|\tilde{u}_{n^{\prime}\mathbf{k}+\mathbf{l}} \rangle\langle\tilde{u}_{n^{\prime}\mathbf{k}+\mathbf{l}}|\delta\tilde{H}|\tilde{u}_{ n\mathbf{k}}\rangle}{\nu+\varepsilon_{n\mathbf{k}}-\varepsilon_{n^{\prime}\mathbf{k}+\mathbf{l}}}. \tag{35}\]
In the last equation above, the summations on \(n^{\prime}\) run over the occupied bands plus a few empty bands.
In principle, solving Eq. (33) requires a \(\mathbf{k}\)-grid supplemented by two additional grids shifted by \(\pm\mathbf{q}\) when \(\mathbf{q}\) itself is not on the \(\mathbf{k}\)-grid. Doing so, however, not only increases the computational burden but also, more seriously, obliterates the exact cancellation of the contribution of the occupied manifold to the density change, due to the \(\mathbf{k}\)-grid discretization error. The latter can be easily avoided by employing a pair of grids with a \(\mathbf{q}\) shift, which also partly reduces the computational cost. Eq. (33) is then solved on the \(\mathbf{k}\)-grid in the \(+\mathbf{q}\) channel, and on the \(\mathbf{k}+\mathbf{q}\) grid in the \(-\mathbf{q}\) channel. It is observed that in this dual-grid setup, the above cancellation is well preserved.
The xc potentials in ALDA are functionals of real-valued densities. Thus, calculating \(\delta\tilde{v}^{\rm eff}(l)\) as in Eq. (31) requires caution, since \(\delta\tilde{n}(l)\) and \(\delta\hat{n}(l)\) are usually complex. In practice, the real and imaginary parts of \(\delta\tilde{v}^{\rm eff}(l)\) are calculated separately,
\[\mathfrak{F}\delta\tilde{v}^{\rm eff}(l)\approx\tilde{v}^{\rm eff}[\tilde{n}+ \hat{n}+\mathfrak{F}\delta\tilde{n}(l)+\mathfrak{F}\delta\hat{n}(l)]-\tilde{v} ^{\rm eff}[\tilde{n}+\hat{n}] \tag{36}\]
where \(\mathfrak{F}={\rm Re},{\rm Im}\) takes the real or imaginary part, respectively. In the case of nonlocal potential, \(\delta\tilde{D}_{ij}(l)\) and \(\delta\varrho_{ij}(l)\) are first decomposed into two independent Hermitian matrices (i.e. Hermitian part and anti-Hermitian part multiplied by \(-{\rm i}\)), and then finite differenced separately in an analogous fashion.
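As an illustration of this decomposition (a minimal sketch, not the production code), a complex matrix such as \(\delta\varrho_{ij}(l)\) can be split into two Hermitian matrices before each part is finite-differenced:

```python
import numpy as np

def hermitian_split(rho):
    """Decompose a complex matrix as rho = H + i*A with H and A both Hermitian,
    so that each part can be finite-differenced with real-valued functionals."""
    H = 0.5 * (rho + rho.conj().T)
    A = -0.5j * (rho - rho.conj().T)
    return H, A
```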
Symmetry reduction is also performed in our implementation, where the summation over \(\mathbf{k}\) points in Eq. (30) is restricted to the symmetry-irreducible part of the Brillouin zone. The symmetry group here is the subgroup of the magnetic group of the studied crystal under which the external electromagnetic fields in Eq. (9) are invariant.
## III Application to spin-wave spectrum calculation
Our implementation enables computing the linear susceptibilities \(\chi_{\alpha\beta}\) with the self-consistent inclusion of the interaction kernel. Directly inverting \(\chi_{\alpha\beta}\) yields the dielectric tensor, which is composed of the usual charge sector \(\epsilon_{00}\), spin sector \(\epsilon_{\alpha\beta}\), and the spin-charge sector \(\epsilon_{0\beta}\), each embodying unique physics. Computing \(\chi_{\alpha\beta}\) then can have diverse applications in evaluating materials properties pertaining to both charge and spin fluctuations, or in subsequent many-body calculations beyond the Kohn-Sham mean fields. One immediate application that has received considerable attention is the calculation of spin-wave excitation. [9; 10; 11; 12; 13; 14; 15; 16] According to the fluctuation-dissipation theorem, the spin-spin correlation function, directly accessible by various spin-sensitive inelastic scattering probes, [46; 47; 48] is related to the imaginary part of the linear susceptibilities,
\[S_{+-}(\mathbf{p},\omega)=\frac{{\rm Im}\,\chi_{+-}(\mathbf{g},\mathbf{g},\omega,\mathbf{q})}{1- e^{-\hbar\omega/k_{\rm B}T}}. \tag{37}\]
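As a small illustration of Eq. (37), the following sketch (assumed numpy arrays of Im\(\chi_{+-}\) on a frequency grid in meV; not part of the actual implementation) applies the detailed-balance factor:

```python
import numpy as np

def spin_structure_factor(im_chi, omega_meV, T_K):
    """Eq. (37): spin-spin correlation from Im chi_+- via the detailed-balance
    (Bose) factor; frequencies in meV, temperature in K."""
    kB_meV_per_K = 0.08617333  # Boltzmann constant in meV/K
    return im_chi / (1.0 - np.exp(-omega_meV / (kB_meV_per_K * T_K)))
```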
Although for magnetic systems dominated by local moments the magnons can be described effectively by localized spin models, this approach becomes debatable when delocalization sets in, and is ultimately of questionable validity for itinerant magnetism. In these latter cases, which include a wide range of magnetic materials, the DFPT route becomes invaluable for computing the spin wave spectra _ab initio_. To our knowledge, there have been just a handful of works devoted to implementing the DFPT scheme for this purpose by solving the Sternheimer equations, as summarized in Table 1. In these efforts, implementations are limited to full-potential methods, [9; 15] or to norm-conserving and ultrasoft pseudopotentials. [12; 13; 14]
In this section, we present an initial application of our implementation of DFPT in the PAW framework to the calculation of spin-wave spectra for two magnetic materials. For ferromagnets with the spin polarized along the \(z\) direction, a transverse magnetic field can be applied by choosing \(\mathcal{B}=(0,1,-\mathrm{i},0)\) in Eq. (9), from which \(\chi_{+-}\) is calculated directly. In these cases, the only remaining symmetry operation keeping the crystal and the transverse magnetic field unchanged is the identity transformation, so there is no room for symmetry reduction. For notational convenience, we define \(\chi_{+-}(\mathbf{p},\omega)\equiv\chi_{+-}(\mathbf{g},\mathbf{g},\omega,\mathbf{q})\) with \(\mathbf{p}=\mathbf{q}+\mathbf{g}\).
### Half-metallic chromium dioxide
As shown in the inset in Fig. 2, chromium dioxide, CrO\({}_{2}\), is a ferromagnetic half-metallic oxide with a rutile crystal structure, where each Cr atom is situated at the center of an octahedral cage formed by oxygen atoms. [49; 50] Widely used as a magnetic recording material, CrO\({}_{2}\) also has various potential applications in spintronics and magnetoelectronics [51; 52] due to its half-metallic properties.
The experimental lattice parameters \(a=b=4.4218\) Å and \(c=2.9182\) Å [50] are used in our calculations. The planewave energy cutoff is set to be 500 eV and a \(13\times 13\times 20\)\(\Gamma\)-centered mesh of \(\mathbf{k}\)-points is used. The spin-resolved density of states of CrO\({}_{2}\) is computed from the collinear spin-polarized calculation and shown in Fig. 2, where the half-metallic band structure is clearly seen. The magnetic moment of Cr is found to be 2 \(\mu_{\mathrm{B}}\). We then turn to the noncollinear calculations, and compute the transverse spin susceptibility \(\chi_{+-}(\mathbf{p},\omega)\) along the [100] and [001] directions for \(\omega\leq 400\) meV on 10 meV intervals. The broadening parameter \(\eta\) introduced in Eq. (15) is set to be 50 meV in the calculations in this subsection.
Fig. 3(a) shows the computed \(\mathrm{Im}\chi_{+-}(\mathbf{p},\omega)\), without spin-orbit interaction, along two \(\mathbf{p}\) paths. In general, \(\chi_{+-}(\mathbf{p},\omega)\) is not periodic in \(\mathbf{p}\): the branch in the first Brillouin zone is composed of acoustic magnon modes, while the branch in the second Brillouin zone consists of optical modes. The profile of the magnon peak at a given \(\mathbf{p}\) is a nearly perfect Lorentzian over the entire energy range. The extracted half width at half maximum \(\eta_{\mathbf{p}}\) is almost constant and equal to the artificial broadening parameter \(\eta\), indicating that the Landau damping in CrO\({}_{2}\) is negligible. This is expected given that half-metallic CrO\({}_{2}\) has a large spin-flip gap of around 310 meV, as shown in Fig. 2.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Authors (Year) & Potential & Basis set & Software \\ \hline Savrasov (1998) [9] & Full potential & LMTO & LMTO Magnons \\ Cao et al. (2018) [12] & NCPP, USPP & Planewave & QE \\ Gorni et al. (2018) [13] & NCPP & Planewave & QE \\ Tancogne-Dejean et al. (2020) [14] & NCPP & Real space & Octopus \\ Singh et al. (2020) [15] & Full potential & LAPW & Elk \\ \hline \hline \end{tabular}
\end{table}
Table 1: Reported implementations of DFPT for spin wave spectra calculations by solving the Sternheimer equations. LMTO = linear muffin-tin orbital; NCPP = norm-conserving pseudopotential; USPP = ultrasoft pseudopotential; QE = Quantum ESPRESSO; LAPW = linearized augmented planewave.
Figure 2: Spin-resolved densities of states of CrO\({}_{2}\). The spin-flip gap is found to be about 310 meV. Inset: the crystal structure of CrO\({}_{2}\), highlighting the CrO\({}_{6}\) octahedra.
The locations of the maxima of the magnon peaks, \(\omega_{\mathbf{p}}\), are then recorded and folded to the first Brillouin zone (\(\omega_{\mathbf{q}}\)). As shown in Fig. 3(b), we find one acoustic magnon branch and one optical branch, consistent with the fact that the unit cell of CrO\({}_{2}\) contains two magnetic Cr atoms. There is no magnon gap at the Brillouin zone boundaries, which is a result of the \(n\) glide symmetry. The energy of long-wavelength acoustic magnons is quadratic in \(q\) with a gapless Goldstone mode, \(\omega_{\mathbf{q}}=D_{\parallel}(q_{x}^{2}+q_{y}^{2})+D_{z}q_{z}^{2}\), as expected for a ferromagnet with spin rotation symmetry. The spin stiffness coefficients are found to be \(D_{\parallel}=82\) meV\(\cdot\)Å\({}^{2}\) along the [100] and \(D_{z}=92\) meV\(\cdot\)Å\({}^{2}\) along the [001] directions, respectively. From this, we estimate the average spin stiffness coefficient to be 85 meV\(\cdot\)Å\({}^{2}\), which is close to the experimentally measured result (\(\sim 112.5\) meV\(\cdot\)Å\({}^{2}\) [53]).
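As a sketch of how such a quadratic fit can be performed (the numerical values below are hypothetical, chosen only to mimic a stiffness of roughly 82 meV·Å² along [100]):

```python
import numpy as np

# illustrative small-q fit of the acoustic branch: omega_q ≈ D * q^2
q = np.array([0.05, 0.10, 0.15, 0.20])        # |q| in 1/Angstrom (hypothetical)
omega = np.array([0.21, 0.82, 1.85, 3.28])    # magnon energies in meV (hypothetical)

D = np.polyfit(q**2, omega, 1)[0]             # slope of omega vs q^2, in meV*Angstrom^2
print(f"spin stiffness D ≈ {D:.1f} meV·Å²")
```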
For comparison, additional electron correlation is included statically within the LSDA+U formalism [54] in both the ground state calculation and the subsequent DFPT calculations, with \(U^{\rm eff}=2.1\) eV. [55] As shown in the inset of Fig. 3(b), the gapless Goldstone mode is obtained again, accompanied by an average spin stiffness coefficient \(D=391\) meV\(\cdot\)Å\({}^{2}\), almost five times as large as the one without the Hubbard \(U\) correction. The magnon energies in CrO\({}_{2}\) thus appear to be strongly overestimated in the LSDA+U calculations.
As a further test, we examine the Goldstone gap that results from breaking the spin rotation symmetry by introducing the spin-orbit interaction. The atomic spin-orbit interaction for Cr is fairly weak (on the order of tens of meV). Since the gap in the Goldstone magnon is second order in the spin-orbit coupling, it is small for CrO\({}_{2}\). In order to visualize the effect of spin-orbit coupling, we introduce a parameter \(\lambda\) to artificially tune its strength (or the speed of light), as in \(H=H^{0}+\lambda H^{\rm soc}\), where \(\lambda=1\) corresponds to the actual strength of the spin-orbit coupling in CrO\({}_{2}\). The calculated \(\text{Im}\chi_{+-}(p=0,\omega)\) as a function of \(\omega\) for different \(\lambda\) values is shown in Fig. 4(a). There is a clear blue shift of the Goldstone mode with increasing \(\lambda\), indicating the emergence of a Goldstone gap. The Goldstone gap indeed shows a quadratic dependence on \(\lambda\), as demonstrated by the gap-vs-\(\lambda^{2}\) plot in Fig. 4(b). The extrapolated Goldstone gap in CrO\({}_{2}\) is about 0.1 meV.
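The extrapolation to \(\lambda=1\) can be sketched as a linear fit of the gap against \(\lambda^{2}\) (the gap values below are hypothetical, chosen only to reproduce a gap of about 0.1 meV at \(\lambda=1\)):

```python
import numpy as np

lam = np.array([2.0, 3.0, 4.0, 5.0])       # artificial SOC scaling factors (hypothetical)
gap = np.array([0.40, 0.90, 1.60, 2.50])   # Goldstone gaps in meV (hypothetical)

coef = np.polyfit(lam**2, gap, 1)          # linear fit of the gap against lambda^2
gap_at_physical_soc = np.polyval(coef, 1.0)
print(f"extrapolated Goldstone gap at lambda=1: {gap_at_physical_soc:.2f} meV")
```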
### Heusler intermetallic Cu\({}_{2}\)MnAl
The ternary intermetallic Cu\({}_{2}\)MnAl is a Mn-based full-Heusler alloy with the \(L2_{1}\) structure type (see inset in Fig. 5). The experimental lattice parameter for the conventional cubic cell (space group \(Fm\bar{3}m\)) is \(a=5.968\) Å. [56] Cu\({}_{2}\)MnAl is ferromagnetic below the relatively high Curie temperature (603 K). [56] Apart from being regarded as a prototype for understanding the electronic correlations in Heusler intermetallics, [57] Cu\({}_{2}\)MnAl is also being used as a neutron polarizer and monochromator material. [58; 59]
Figure 3: (a) \(\text{Im}\chi_{+-}(\mathbf{p},\omega)\) of CrO\({}_{2}\) along [100] and [001] directions. The black vertical line indicates the center of the first Brillouin zone. (b) The folded magnon energy dispersion \(\omega_{q}\) of CrO\({}_{2}\) in the first Brillouin zone, extracted from \(\text{Im}\,\chi_{+-}\) shown in (a). The inset shows the quadratic fits to \(\omega_{\mathbf{q}}\) at small \(q\) along [100] and [001] directions, respectively, for calculations without Hubbard \(U\) correction and with \(U^{\rm eff}=2.1\) eV. The squares are the calculated data.
Figure 4: (a) \(\text{Im}\chi_{+-}(p=0,\omega)\) as a function of \(\omega\) for different \(\lambda\) values. \(\lambda=1\) corresponds to the actual strength of spin-orbit coupling in CrO\({}_{2}\). The squares are the calculated data and the solid lines are the fits with Lorentzian line shape. (b) The Goldstone gap as a function of \(\lambda^{2}\) in CrO\({}_{2}\). The solid line is the linear fit.
A planewave energy cutoff of 350 eV and a \(15\times 15\times 15\)\(\Gamma\)-centered \(\mathbf{k}\)-grid are used in our calculations. Spin-orbit coupling is not included. The spin-resolved densities of states of Cu\({}_{2}\)MnAl computed from the collinear spin-polarized calculation confirm the ferromagnetism of Cu\({}_{2}\)MnAl, as shown in Fig. 5. The magnetic moment is carried primarily by Mn atoms and computed to be 3.4 \(\mu_{\rm B}\)/Mn. The transverse spin susceptibility \(\chi_{+-}(\mathbf{p},\omega)\) along the [100], [110] and [111] crystallographic directions is then computed in noncollinear calculations for \(\omega\leq 300\) meV on 5 meV intervals, with a broadening parameter \(\eta\) of 50 meV.
Fig. 6(a) shows the computed magnon spectral function Im\(\chi_{+-}(\mathbf{p},\omega)\) along the three principal directions. The acoustic magnon branch is seen clearly only at low energies near the Brillouin zone center. The spectral peaks of these low-energy modes can be adequately fitted with the Lorentzian lineshape as in the CrO\({}_{2}\) case. The dispersion of the long-wavelength modes is quadratic and isotropic, as demonstrated in the inset of Fig. 7(a). A spin stiffness coefficient \(D=268\) meV\(\cdot\)Å\({}^{2}\) is obtained from the quadratic fit, about 1.5 times the experimental value of 175 meV\(\cdot\)Å\({}^{2}\). [60]
Notably, at higher energies and near the Brillouin zone boundaries, the magnon peaks become fuzzier and broader, attesting to substantial Landau damping in this material. In stark contrast to the CrO\({}_{2}\) case with almost no Landau damping, the coupling to the Stoner continuum in Cu\({}_{2}\)MnAl gives the magnon peak at a given \(\mathbf{p}\) an asymmetric profile that defies a Lorentzian fit, as shown in Fig. 6(b). The stronger Landau damping in this system is consistent with the absence of the spin-flip gap, as shown in Fig. 5. Viewing the coupling to the Stoner continuum as a Fano-type resonance, we superimpose a linear function on the Lorentzian to describe the asymmetric line shape, as
\[A(\mathbf{p},\omega)=\frac{a_{\mathbf{p}}\eta_{\mathbf{p}}}{(\omega-\omega_{\mathbf{p}})^{2}+ \eta_{\mathbf{p}}^{2}}+\xi_{\mathbf{p}}(\omega-\omega_{\mathbf{p}}) \tag{38}\]
with \(\omega_{\mathbf{p}},a_{\mathbf{p}},\eta_{\mathbf{p}}\) and \(\xi_{\mathbf{p}}\) as fitting parameters.
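A minimal sketch of such a fit with scipy follows; the initial-guess heuristics are illustrative assumptions, not the actual fitting script used for Fig. 6.

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_lorentzian(w, w_p, a_p, eta_p, xi_p):
    """Eq. (38): Lorentzian plus a linear term capturing the Fano-like asymmetry."""
    return a_p * eta_p / ((w - w_p) ** 2 + eta_p ** 2) + xi_p * (w - w_p)

def fit_magnon_peak(w, spectrum):
    """Fit one constant-p cut of Im chi_+-(p, w); initial guesses are heuristic."""
    p0 = [w[np.argmax(spectrum)], spectrum.max() * 50.0, 50.0, 0.0]
    popt, _ = curve_fit(asym_lorentzian, w, spectrum, p0=p0)
    return dict(zip(["omega_p", "a_p", "eta_p", "xi_p"], popt))
```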
As it turns out, this simple modification leads to satisfactory fitting for the entire spectrum, as evidenced in Fig. 6(c). The extracted magnon dispersion is shown in Fig. 7(a); it coincides with the calculated results of Buczek et al. [10; 61] and agrees well with the experimental observations along the [100] direction. [60] Along the [110] and [111] directions, where the Landau damping seems more pronounced, our computed dispersion shows a significant discrepancy from the experimental one. For low-energy modes, \(\eta_{\mathbf{p}}\) is dominated by the artificial broadening parameter \(\eta\) and the asymmetry is small, as shown in Fig. 7(b). With increasing energy, however, the broadening quickly exceeds \(\eta\) and the asymmetry becomes pronounced, especially near the Brillouin zone boundaries, both providing a quantitative characterization of the Landau damping.
## IV Summary and Outlook
In conclusion, we report an implementation of the DFPT method in the PAW framework, which is capable of computing the full linear susceptibilities of real materials. A nested iterative procedure is employed to self-consistently solve the Sternheimer equations, to procure linear susceptibilities along with the first-order wavefunctions and densities in monochromatic and periodic external electromagnetic fields. The time cost of each DFPT calculation (given an external field direction, momentum and frequency) is comparable to that of a corresponding Kohn-Sham DFT calculation.
Figure 5: Spin-resolved densities of states of Cu\({}_{2}\)MnAl. Inset: the crystal structure of full Heusler Cu\({}_{2}\)MnAl, showing a conventional cubic unit cell for the \(L2_{1}\) structure.
Figure 6: (a) Im\(\chi_{+-}(\mathbf{p},\omega)\) of Cu\({}_{2}\)MnAl along [100], [110] and [111] directions. Reciprocal lattice vectors of conventional cubic cell are adopted here. (b,c) Im\(\chi_{+-}(\mathbf{p},\omega)\) as a function of \(\omega\) for \(\mathbf{p}=(1,0,0)\), \((1,1,0)\) and \((1,1,1)\). The squares are the calculated data. The solid lines are the fits with (b) symmetric or (c) asymmetric Lorentzian function.
As a demonstration, we compute the spin wave spectra for CrO\({}_{2}\) and Cu\({}_{2}\)MnAl. Gapless magnon dispersions are obtained for both materials from the calculations without spin-orbit coupling. The spin stiffness coefficient extracted from the quadratic fit is in agreement with the experimental value for CrO\({}_{2}\) but about 1.5 times larger for Cu\({}_{2}\)MnAl. The Landau damping in CrO\({}_{2}\) is insignificant due to its half-metallic nature, while that in Cu\({}_{2}\)MnAl is substantial at high energies and can be quantified with a simple asymmetric Lorentzian fit. The LSDA+U method as well as the effect of spin-orbit coupling are examined for CrO\({}_{2}\): the former strongly overestimates the magnon energies, while the latter gives rise to a Goldstone gap quadratic in the spin-orbit coupling strength \(\lambda\).
There is clearly room for future developments to make the current implementation more efficient and versatile. From an algorithmic viewpoint, the occupied subspace is not projected out of the Sternheimer equations in the current implementation. As the occupied-state component of the first-order wavefunctions does not contribute to the first-order densities, projecting out the occupied subspace [8] can potentially improve the efficiency and stability of the nested iteration. As an additional benefit, the projection also renders the principal integrals explicit and amenable to analytic techniques, which can further reduce the number of \(\mathbf{k}\)-points required and improve efficiency. Alternative iterative techniques should be tested in general, for both the inner and outer loops, especially in conjunction with the projection.
From a physics viewpoint, a few tasks are on the immediate agenda and new possibilities are clearly on the horizon, beyond the initial demonstrations presented herein. For the spin-wave spectral functions, it will be valuable to compare the computed spectra with experimental results for more materials. A particularly interesting comparison can be made between the dispersion relations obtained _ab initio_ from our DFPT implementation and those from Heisenberg models parametrized from constrained DFT energies on the basis of the magnetic force theorem. [62; 63; 64] Such comparisons should be examined in detail for materials in the localized and the itinerant limits, as well as for the continuum falling in between. Further systematic studies of the gradient correction (as in generalized gradient approximations) and of the Hubbard correction in the LSDA+U method can reveal the effect of correlation on the spin-wave spectra. As the first-order wavefunctions are also produced in our code, it is tempting to evaluate other physical properties related to density and current responses, such as the magnetoelectric coupling and related transport coefficients. A particular connection may be made by observing that
\[W=f+f\chi f \tag{39}\]
is the screened kernel, which now includes the charge, spin and cross screening effects. This will enable analyzing the many-electron effects in magnetic materials with strong spin-orbit coupling, and potentially evaluating novel bound states from the screened charge/spin interactions.
###### Acknowledgements.
We acknowledge the financial support from the National Natural Science Foundation of China (Grant No. 11725415), the National Key R&D Program of China (Grant Nos. 2018YFA0305601 and 2021YFA1400100), and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302600).
|
2309.04507 | Generating drawdown-realistic financial price paths using path
signatures | A novel generative machine learning approach for the simulation of sequences
of financial price data with drawdowns quantifiably close to empirical data is
introduced. Applications such as pricing drawdown insurance options or
developing portfolio drawdown control strategies call for a host of
drawdown-realistic paths. Historical scenarios may be insufficient to
effectively train and backtest the strategy, while standard parametric Monte
Carlo does not adequately preserve drawdowns. We advocate a non-parametric
Monte Carlo approach combining a variational autoencoder generative model with
a drawdown reconstruction loss function. To overcome issues of numerical
complexity and non-differentiability, we approximate drawdown as a linear
function of the moments of the path, known in the literature as path
signatures. We prove the required regularity of drawdown function and
consistency of the approximation. Furthermore, we obtain close numerical
approximations using linear regression for fractional Brownian and empirical
data. We argue that linear combinations of the moments of a path yield a
mathematically non-trivial smoothing of the drawdown function, which gives one
leeway to simulate drawdown-realistic price paths by including drawdown
evaluation metrics in the learning objective. We conclude with numerical
experiments on mixed equity, bond, real estate and commodity portfolios and
obtain a host of drawdown-realistic paths. | Emiel Lemahieu, Kris Boudt, Maarten Wyns | 2023-09-08T10:06:40Z | http://arxiv.org/abs/2309.04507v1 | # Generating drawdown-realistic financial price paths using path signatures
###### Abstract
A novel generative machine learning approach for the simulation of sequences of financial price data with drawdowns quantifiably close to empirical data is introduced. Applications such as pricing drawdown insurance options or developing portfolio drawdown control strategies call for a host of drawdown-realistic paths. Historical scenarios may be insufficient to effectively train and backtest the strategy, while standard parametric Monte Carlo does not adequately preserve drawdowns. We advocate a non-parametric Monte Carlo approach combining a variational autoencoder generative model with a drawdown reconstruction loss function. To overcome issues of numerical complexity and non-differentiability, we approximate drawdown as a linear function of the moments of the path, known in the literature as path signatures. We prove the required regularity of drawdown function and consistency of the approximation. Furthermore, we obtain close numerical approximations using linear regression for fractional Brownian and empirical data. We argue that linear combinations of the moments of a path yield a mathematically non-trivial smoothing of the drawdown function, which gives one leeway to simulate drawdown-realistic price paths by including drawdown evaluation metrics in the learning objective. We conclude with numerical experiments on mixed equity, bond, real estate and commodity portfolios and obtain a host of drawdown-realistic paths.
keywords: Drawdown, Simulation, Non-Parametric Monte Carlo, Path Functions, Signatures. MSC: 91B28, 91B84
## 1 Introduction
Market generators are generative machine learning (ML) models specialized in modeling financial markets such as stock price or return time series, order books, implied volatility surfaces, and more. They have gained popularity in recent years (Wiese et al. [1], Koshiyama et al. [2], Cont et al. [3], Bergeron et al. [4]). Arguably the main advantage of generative ML over existing simpler solutions such as standard Monte Carlo engines is being able to simulate realistic financial time series without having to specify a data generating process (DGP) a priori. Scenario realism relates to low distributional divergences between the actual and generated return distributions (such as differences in normalized moments, maximum mean discrepancy, entropy or Kullback-Leibler divergence, optimal transport measures like the Wasserstein distance, etc.), and distances between their autocorrelation functions and tail indices. Some striking results in the latter regard have been found in Wiese et al. [1], Cont et al. [3], Buehler et al. [5] and Vuletic et al. [6].
A drawdown is a price fall relative to its historical maximum. In this article, we argue that one can use a market generator as a non-parametric Monte Carlo engine in order to solve the difficult problem of generating realistic drawdown scenarios. With parametric DGP approaches, no explicit analytical link between the underlying DGP parameters and their corresponding drawdown distribution is known, unless under very restrictive assumptions. Parametric approaches to drawdown have commonly relied on Brownian assumptions as to use Levy's theorem to obtain this analytical drawdown distribution. Especially in applications where this measure is crucial one would need theoretical guarantees that upon convergence of the parameters of the generative model drawdowns are explicitly preserved. Examples include pricing drawdown options for max loss insurance (Carr et al. [7], Atteson and Carr [8]) and controlling for drawdown in a portfolio optimization context (Chekhlov et al. [9], [10]).
The main contribution of this paper is overcoming the issues of non-differentiability and numerical complexity inherent to the drawdown function by approximating it with a linear regression on the path's signature terms. Signatures serve as the moment generating function on the path space and are, because of their universality (Chevyrev and Oberhauser [11]), the natural candidate for universal approximation of drawdown with a linear function. We prove the required regularity of the drawdown function and the consistency of the approximation. Furthermore, we discuss the adequacy of the approximation as a function of the approximation order and for different levels of roughness and sample sizes. We argue that by adding weighted moments of the path to the reconstruction loss of the market generator, one can obtain realistic drawdown scenarios, correcting the understatement of adverse drawdown scenarios found with traditional generative model architectures or common DGP assumptions.
The article is structured as follows. Section 2 covers the background on market generators and positions our problem setting within this literature. Section 3 discusses drawdowns, notations and its Lipschitz continuity following from its reformulation as a non-linear dynamic system. Section 4 summarizes some key elements from rough path theory and the approximation of functions on paths. We propose a linear approximation of drawdowns in the signature space. Section 5 introduces our market generator model and its implementation. Section 6 covers our numerical study with experiments on fractional Brownian and empirical data. Section 7 concludes.
## 2 Market Generators
This section briefly covers the background on market generators, related papers and their commonalities that lead to the questions posed in this paper.
The potential benefits of generative machine learning for modeling financial time series have been early recognized by Kondratyev and Schwarz [12], Henry-Labordere [13], and Wiese et al. [1], who first applied restricted Boltzmann machines (Smolensky [14]) and generative adversarial networks (Goodfellow et al. [15]) respectively to financial sequences data. Since those papers, a host of use cases have been proposed that include time series forecasting (Wiese et al. [1]), trading strategy backtesting (Koshiyama et al. [2]), hands-off volatility surface modeling (Cont and Vuletic [16]), option pricing and so-called _deep hedging_(Buehler et al. [5], [17]), and more.
Closest related to this work is the paper of Buehler et al. [5] who also use the combination of variational autoencoder and signatures, with the general aim of reproducing some of the stylized facts of financial time series (Cont [18]). However, they use signatures in the input space and then output signature values, which implies that deploying their model requires a scalable inversion of signatures back to paths, which is far from trivial. Similar to Ni et al. [19] who use signatures in the discriminator of a GAN architecture, the generator in this paper does not output signatures but only uses them in the loss evaluation step. Moreover, much work on market generators has revolved around adjusting the loss function such that desired features of the timeseries are reproduced. This is very close to our proposed approach. Cont et al. [3] evaluate the tail risk (value-at-risk and expected shortfall) of the simulations to evaluate their adequacy. Recently, Vuletic et al. [6] included profit-and-loss (P&L) similarity and Sharpe ratios in the loss function to increase the financial realism of the generated scenarios. This paper builds on top of that. Drawdown is a similar metric that in most financial contexts would be a useful feature to reproduce, because it captures both P&L and autocorrelation, but in fact is in some financial contexts the most important metric, such as portfolio drawdown control, optimization, or drawdown insurance. For instance, some investment strategies or fund managers get automatically closed down if they breach certain drawdown limits and it is thus crucial that a simulation of their strategies does not understate the probability of larger drawdowns. However, as a path functional, rather than a function on a P&L distribution like a tail loss or Sharpe ratio, drawdown is much more difficult to evaluate. That is why we introduce the approximation trick in the next sections.
## 3 Drawdowns
This section introduces the concept of a drawdown and its notation used throughout the paper. We briefly touch upon how it is usually approached in the literature, the issues for data-driven simulation of drawdowns and the expression of drawdown as a non-linear dynamic system. The latter insight, combined with the concept of a path signature, will allow us to introduce the approximation in the next sections.
### Introduction and notation
The drawdown is the difference at a point in time \(t\) between a price \(S_{t}\) and its historical maximum up to that point. For a price path (in levels) \(S:[0,T]\rightarrow\mathbb{R}\) define drawdown function \(\Xi\) as:
\[\xi_{t}=\Xi(S)_{t}=\max(\max_{k\leq t}(S_{k})-S_{t},0) \tag{1}\]
We are interested in the distribution \(\mathcal{P}(\xi)\) as a function of the DGP parameters of \(S\), denoted by \(\theta\). It is clear that the drawdown is a non-trivial function of the underlying DGP and captures at least three dimensions. Firstly, the drift or deterministic tendency of achieving a new maximum. Secondly, the dispersion or volatility of \(S\), which determines the probability of a loss vis-a-vis this monotonic drift. Thirdly, and not grasped by standard Markovian assumptions, the autocorrelation of losses or the probability that the nonzero drawdown will persist, i.e. the duration of the drawdown. The analytical link between these components or a generalized DGP and the drawdown distribution is unknown and depends on the specific process. In the drawdown literature that uses parametric simulation (e.g. [7][8][20][21]), one often assumes (geometric) Brownian motion and leverages Levy's theorem (Levy [22]), which determines the joint law of the running maximum of a Brownian and the deviation to this maximum, to find the analytical distribution of drawdown as a function of the drift and volatility of the DGP. This yields closed-form expressions of the distribution \(\mathcal{P}(\xi)\) (e.g. see Douady et al. [21] and more concisely Rej et al. [20]). However, martingale processes like Brownian motion do not exhibit return autocorrelation and one expects a sequence of consecutive losses to result in larger drawdown scenarios. Goldberg and Mahmoud [23] assume an autoregressive process of order 1 (AR(1)) and show that increasing autocorrelation leads to more extreme drawdowns. Van Hemert et al. [24] introduce _drawdown greeks_ and discuss the sensitivity of the maximum \(\xi\) value to the autocorrelation parameter under normal return assumptions and find strong dependence on the assumed level of AR(1) autocorrelation. Hence, standard martingale assumptions only lead to optimistic lower bounds on those worst case drawdown scenarios, as was also emphasized in Rej et al. [20]. Obtaining a simulated \(\mathcal{P}(\xi)\) faithful to the empirical drawdown distribution through selecting DGP parameters is thus hard to obtain, as it is very sensitive to serial dependence and the analytical link between \(\theta\) and \(\mathcal{P}(\xi)\) does not exist except under very restrictive assumptions (cf. [24]). In drawdown papers that use non-parametric simulation (e.g. [9][10][25][24]), the scenarios are limited to the historical sample (e.g. historical blocks in a block bootstrap procedure), which drastically limits available samples. This jeopardizes the convergence of data-hungry models in the non-overlapping block case, and creates multicollinear (or identical) conditions for the overlapping (or oversampled) blocks case. For instance, Chekhlov et al. [10] discuss how multiple scenarios increase the effectiveness of drawdown optimizers over a single historical scenario, but also indicate how oversampling a limited dataset has diminishing returns in terms of out-of-sample risk-adjusted returns of the drawdown optimization strategy. Ideally, one has a vast number of rich drawdown scenarios, but this is limited with historical simulation and an unsolved problem for parametric Monte Carlo. 
One way to cope with these issues is to simply not embed the parameters that capture drawdown in the DGP, derive its distribution from said DGP assumptions or rely on few, overlapping or identical paths, but to construct a non-parametric simulation approach that relies on learning with flexible mappings rather than calibrating known parameters. We show that one can rather rely on a market generator to abstract both the DGP and the link between DGP and drawdown distribution, and learn to reproduce this distribution in a Monte Carlo. Learning means updating the parameters of an implicit DGP (e.g. the parameters of a neural network), denoted by \(\theta\), over batches of training data of \(S\) to increasingly improve its ability to reproduce the drawdowns in the synthetic samples, denoted by a parametrized path \(S_{\theta}\). This Monte Carlo could be used to devise strategies that control drawdown or as a simulation engine for pricing drawdown insurance in a fully non-parametric way.
### Issues with the drawdown measure for data-driven simulation
The problem with the drawdown measure, compared to simpler measures on the moments of the P&L, is that at first glance it looks unsuited for learning. The reasons are twofold.
* **Differentiability**: \(\Xi\) is non-differentiable w.r.t. a parameterized path \(S_{\theta}\). At first glance it looks impossible to change the DGP parameters \(\theta\) by including 'feedback' on \(\frac{\delta(\Xi(S_{\theta}))}{\delta(\theta)}\), without making simple assumptions on the specification of \(\theta\).
* **Complexity**: evaluating the maximum of a vector of \(n\) numbers takes \(n\) operations or \(O(n)\). Naively evaluating the running maximum takes \(n(n+1)/2\) or \(O(n^{2})\) operations. If one would use a smoothed approximation of the maximum (such as smooth _normalized exponentials_[26] or _softmax transformation_) to resolve non-differentiability, this would be computationally prohibitive (especially for long \(n\) paths). Moreover, accuracy of such naive exponential smoothing would rely on the scale of \(S\) and variation around the local maxima.
The main contribution of this paper is that these issues of non-differentiability and numerical complexity
can be jointly overcome by approximating the drawdown of a path by using linear regression on its signature terms.
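For reference, the exact drawdown path itself is cheap to compute once (a minimal numpy sketch, not part of the paper's code); the difficulty discussed above is not its evaluation on a fixed path but its differentiation with respect to the generator parameters \(\theta\):

```python
import numpy as np

def drawdown_path(S):
    """Exact drawdown xi_t = max(running max - S_t, 0) for a discretely observed
    price path S, computed in O(n) with a cumulative maximum rather than the
    naive O(n^2) evaluation of the running maximum."""
    running_max = np.maximum.accumulate(S)
    return np.maximum(running_max - S, 0.0)
```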
### Drawdown as a non-linear dynamic system and Lipschitz continuity
Assume the price path \(S:[0,T]\rightarrow\mathbb{R}\) is a piecewise linear continuous path of bounded variation (e.g. finite variance), such as interpolated daily stock prices, index levels or fund net asset values (NAV)4. Denote by \(\mathcal{V}([0,T],\mathbb{R})\) the (compact) space of all such paths. Firstly, we need to remark that while from (1) we know the max operator makes \(\Xi\) not continuously differentiable, the variation in \(\Xi\) is bounded by the variation in the paths. When taking two bounded paths the distance between their maximum values is bounded by a norm on the distance between their path values:
Footnote 4: For the definition of boundedness in the \(p\)-_variation_ sense see [27].
\[|\max_{0<i<T}(S^{1}_{t})-\max_{0<j<T}(S^{2}_{j})|\leq\max_{i,j\in[0,T]}|S^{1}_{ i}-S^{2}_{j}|=\|S^{1}-S^{2}\|_{\infty} \tag{2}\]
where \(|\cdot|\) denotes the standard absolute value sign (as an \(S_{t}\) is a scalar), while \(\|.\|_{\infty}\) denotes the infinity norm or maximum distance between two paths (as an \(S\) is a full path). In words, the distance between two maxima is capped by the maximum distance between any two values on path \(S^{1}\) and \(S^{2}\). This means that if two paths become arbitrarily close in terms of this distance, their respective maxima will become arbitrarily close. More specifically, the maximum is Lipschitz-\(C\) continous, with distance inf-norm and \(C=1\). Similar arguments can be made for \(\Xi\), i.e. application of the maximum operator and a linear combination of two piecewise linear paths is Lipschitz continuous. In particular, Proposition 3.1 shows that the impact on the drawdown function of a change in the underlying paths is bounded in terms of a defined distance metric (inf-norm).
**Proposition 3.1** (_Lip_-regularity of \(\Xi\)).: _Consider by \(\mathcal{V}([0,T],\mathbb{R})\) the space of continuous paths of bounded variation \([0,T]\rightarrow\mathbb{R}\), two paths \(S^{1},S^{2}\in\mathcal{V}\) and drawdown function \(\Xi(S)_{t}:\mathcal{V}\rightarrow\mathbb{R}=\max_{k\leq t}(S_{k})-S_{t}\). We have the following Lipschitz regularity for inf-norm distance \(\|.\|\) and a regularity constant \(C\):_
\[\|\Xi(S^{1})_{t}-\Xi(S^{2})_{t}\|\leq C\|S^{1}-S^{2}\| \tag{3}\]
Proof.: \[\|\Xi(S^{1})_{t}-\Xi(S^{2})_{t}\|=\|\max_{i\leq t}(S^{1}_{i})-S^{1 }_{t}-\max_{j\leq t}(S^{2}_{j})+S^{2}_{t}\|\] \[\qquad\leq\max_{i,j\in[0,t]}\|S^{1}_{i}-S^{2}_{j}\|+\|S^{2}_{t}-S^ {1}_{t}\|\leq C\|S^{1}-S^{2}\|\] (4)
Through triangle inequality, the Lipschitz condition holds for \(C\) minimal 2 and distance metric \(\|.\|_{\infty}\), which concludes the proof.
The difference in drawdown between two paths is thus bounded by the distance between the paths. This means that if two paths become arbitrarily close according to said distance metric, their drawdowns will become arbitrarily close. In this article, we explore what this regularity implies for local approximation.
Observe below the differentials of \(\xi\) and \(S\) at a specific point in time \(t\), i.e. we treat drawdown as a non-linear dynamic system which is unnatural as non-differentiability implies \(\frac{d\xi_{t}}{dS_{t}}\) is not a continuous function (e.g. what one would depend on for Taylor series-like local approximations). Consider the following dynamic system (also see [21]):
\[d\xi_{t}=f(\xi_{t},dS_{t})=\left\{\begin{aligned} &-dS_{t},\xi_{t}>0\\ &\max(0,-dS_{t}),\xi_{t}=0\end{aligned}\right. \tag{5}\]
Depending on the current level of drawdown the effect of a price change is either linear or none, hence the derivatives are path dependent. This conditionality illustrates the non-differentiability of \(\Xi\) at a time \(t\) and the fact that the differentials \(|d\xi_{t}|\leq|dS_{t}|\) are bounded. This observation also provides intuition as to why local approximation is feasible. It does make sense to do an interpolation between linear and zero effects, and because of the boundedness one would not be arbitrarily off. For instance, one could assume linear dynamics but with a stochastic component derived from the average time in drawdown, i.e. the probability of a linear effect. In practice, however, the solution of this stochastic equation would still depend on our estimate of the average time in drawdown and in that sense not resolve the inherent path dependence. Hence, ideally we would express \(\xi\) as some function of the vector-valued \(S\) (or intervals of \(S\)), rather than a scalar \(S_{t}\) (or increments \(dS_{t}\)) and treat the path dependence in a more natural way. In the next section, this is exactly what rough path theory allows us to do: solving the types of equations Eq. (5) belongs to, offering strictly non-commutative (thus order or path dependent) solutions to these complex non-linear dynamic systems, allowing for a unique solution for the outputs \(\xi_{t}\) given inputs \(S\), even if their effects \(f\) are not continuously differentiable.
## 4 Rough path theory and the approximation of functions on paths
This section briefly recapitulates some of the central ideas of rough path theory, path signatures and the approximation of functions on paths. Next, we discuss the signature approximation of drawdown where we consider the drawdown of a path as an approximate linear weighing of the moments of the path. This will offer the foundation for generating weighted signatures in the market generator of Section 5.
### Path signatures
Rough path theory was developed by Terry J. Lyons (Lyons et al. (2018); Lyons et al. (2019); Lyons et al. (2020)) and concerns solving rough differential equations. Consider the controlled differential equation (Lyons et al. (2018), Eq. (2)):
\[dY_{t}=g(Y_{t})dX_{t} \tag{6}\]
where \(X\) is a path of bounded variation, called the driving signal of the dynamic system. \(g\) is a mapping called the physics that models the effect of \(dX_{t}\) on the response \(dY_{t}\). A controlled differential equation distinguishes itself from an ordinary differential equation in that the system is controlled or driven by a path (\(dX\)) rather than time (\(dt\)) or a random variable (stochastic differential equation, \(de\)). Rough path theory considers solution maps for driving signals that are much rougher (highly oscillatory and potentially non-differentiable) than e.g. a linear path of time or a traditional Brownian driving path. It is more robust to consider Eq. (6) over Eq. (5), and replace \(dS_{t}\) as an input with its integral. We can rewrite Eq. (5) in this form by setting \(Y=\Xi\), \(X_{t}=(t,\int_{t=0}^{t}dS_{t}ds)\) and \(g(y,(t,x))=f\) (e.g. see Liao et al. (2018), Eq. (2)). This allows the effect to be of a broader type and need not even be differentiable for equation (6) to be well defined. The Picard-Lindelof theorem (Lyons et al. (2018), Theorem 1.3) states that if \(X\) has bounded variation and \(g\) is Lipschitz5, then for every initial value \(y_{0}\) in the image of \(Y\), the differential equation (6) admits a unique solution. Importantly, if the effect of \(X\) on \(Y\) is not Lipschitz, we lose the uniqueness of the solution. As shown in Appendix 7, through Picard iteration and under an additional regularity assumption6 on \(g\), one naturally arrives at \(M\)-step Taylor approximation \(\hat{Y}(M)_{t}\) on the path space for the \(Y_{t}\) in Eq (6):
Footnote 6: For simplicity, in analogy to Lyons Lyons (2018), we assumed here that the iterated \(g^{\text{om}}\) takes their values in the space of symmetric multilinear forms, but this generalizes to any Lipschitz continous \(g\) as per Remark 1.22 in Lyons et al. (2018). This simply means that if \(g\) is differentiable, the \(g^{\text{om}}\) are indeed the classical \(m\)-th order derivatives of \(g\), in general they are only polynomial approximations of \(g\) at increasing orders.
\[\hat{Y}(M)_{t}=y_{0}+\sum_{m=1}^{M}g^{\text{om}}(y_{0})\underbrace{\int\limits_{u_{1}<...<u_{m},\,u_{1},...,u_{m}\in[0,t]}dX_{u_{1}}\otimes...\otimes dX_{u_{m}}}_{\Phi^{m}(X)} \tag{7}\]

The collection of iterated integrals \(\Phi(X)=(1,\Phi^{1}(X),\Phi^{2}(X),...)\) is called the signature of the path \(X\), and the truncation error of the expansion decays factorially in \(M\),

\[|Y_{t}-\hat{Y}(M)_{t}|\leq C_{\gamma}\frac{\|X\|^{M+1}}{M!}, \tag{8}\]

with \(C_{\gamma}\) a constant depending on the roughness \(\gamma\) of the driving path. The signature terms thus play the role of the (ordered) monomials in a classical Taylor expansion, offering a strictly non-commutative exponential function.
Chevyrev and Oberhauser [11] introduce signatures as the moment generating function on the path space, as they play a similar role to normalized moments for path-valued data instead of scalar-valued data. This is the _uniqueness_ property: _paths with identical signatures are identical in the distributional sense as are scalars with the same sequence of normalized moments._
Proof of signature universal non-linearity is due to Lemercier et al. [32] (also see Lyons and McLeod [29] for a more recent redefinition of the result in Theorem 3.4) who prove that for a Lipschitz9 continuous \(h\) and a path of bounded variation \(X\), there exists a vector of linear weights \(L\) such that for any small \(\epsilon\):
Footnote 9: Again this means \(h\) does not need to be \(k\)-times differentiable, but the \(k\)-th order differentials need to be bounded.
\[|h(X)-\langle L,\Phi(X)\rangle|\leq\epsilon \tag{9}\]
where \(\langle x_{1},x_{2}\rangle\) denotes the standard inner product between vectors \(x_{1}\) and \(x_{2}\), and \(\Phi(X)\) the infinite \(M\) signature or infinite collection of iterated integrals of a path. By the Stone-Weierstrass theorem (a crucial theorem in proving the universal approximation capabilities of neural networks, Cotter [33] and Hornik [34]) it is proven that signatures are a universal basis in the sense that they allow us to express non-linear path functions \(h\) as a linear function of signatures \(\Phi(X)\) provided that they have the required regularity. As for classical Taylor series, although a function might not be \(k\)-times differentiable, a high order smooth polynomial approximation can be a quantifiably close (bounded error) approximation for bounded Lipschitz functions, provided \(C\) is small. In this paper, we apply this common insight to path functions, and drawdowns in particular.
### Signature approximation of drawdown
We noted in Section 3 that drawdown is a non-linear, non-differentiable function of its underlying path \(S_{\theta}\), which is smooth in the bounded differentials \(d\xi/dS\) sense. Its outcome can be seen as an interpolation between two types of path dependent effects. In this section, we leverage the boundedness of Section 3 together with the universality of signatures of Section 4 to introduce a smooth local approximation by linear approximation of drawdown on path signatures.
We propose an approximation \(\hat{\xi}(M)_{t}\) of \(\xi_{t}\) of the form:
\[\hat{\xi}(M)_{t}=\xi_{0}+\sum_{m=1}^{M}L_{m}\underbrace{\int\limits_{u_{1}<...<u_{m},\,u_{1},...,u_{m}\in[0,t]}dS_{u_{1}}\otimes...\otimes dS_{u_{m}}}_{\Phi^{m}(S)} \tag{10}\]
where \(L_{m}\) is a vector of linear coefficients linking the drawdown at \(t\) with the signature terms of order \(m\) of the path \(S\) up to \(t\). As per above, \(L\) could be considered the iterated effects of intervals up to \(t\) on the resulting drawdown \(\xi_{t}\), where the coefficients are _not_ the \(m\)-th order derivatives as they are not defined, but polynomial approximations that are essentially numerical interpolations of the nested effects. Signatures thus offer a strictly non-commutative alternative to the polynomials Taylor series would suggest.
Importantly, for this integration to make sense the path \(S\) has to be continuous, hence in practice augmented from discrete observations into the continuous domain by _time-augmenting_ the path. This means adding time as an axis and assuming piecewise linear paths10. Other options such as _lead-lag_ and _rectilinear_ augmentation exist (see Lyons and McLeod [29]), but are not favored for this application and may add dimensionality to the paths which increases the number of signature terms per level \(M\) and related compute time.
Footnote 10: Note that in the computation of signatures of piecewise linear paths, one can use Chen’s identity [27] to compute the signature as the iterated tensor product of the increments of the path along the time axis, which allows for efficient computations for practical discrete (but assumed piecewise linear) data (cf. practical computation of signatures, Section 5 in [29]).
As the ordered iterated integrals represent the drift, Lévy area, and higher order moments of the path distribution (see Chevyrev and Oberhauser [11]), Eq. (10) thus argues that _drawdown can be approximated as a linear function of the moments of the path_11. Leveraging the factorial decay of the approximation error for Lipschitz functions, we argue with Proposition 3.1 and Eq. (5) that with the full signature \(M\to\infty\) one gets an arbitrarily close approximation of \(\xi\), where the decay rate depends on the roughness, denoted by \(\gamma\), of the underlying price process \(S\) (see proofs in Boedihardjo et al. [31]).
Footnote 11: Already note here the parallel with the link between quantiles and traditional moments, which we will restate in Section 5.1.
In the more compact inner product notation, we propose to apply Eq. (9) to drawdown:
\[|\Xi(S)-\langle L,\Phi(S)\rangle|\leq\mu \tag{11}\]
where the arbitrary precision \(\mu\) holds only in theory, because in practice we rely on the truncated signatures of level
\(M\) as the full signature is an infinite collection, and thus there will be an error \(\kappa\) that due to the ordered nature of the coefficients decays factorially in \(M\) (e.g. Eq. 8 for the intuition and Boedihardjo et al. [31] for the proofs):
\[\Xi(S)=\langle\hat{L},\Phi^{M}(S)\rangle+\kappa_{M} \tag{12}\]
\[\hat{\Xi}_{M}(S)=\langle\hat{L},\Phi^{M}(S)\rangle \tag{13}\]
where \(\hat{L}\) are the estimated coefficients for a chosen signature truncation level \(M\), contrasting the theoretical infinite collection of weights \(L\). Similarly, \(\hat{\Xi}_{M}(S)\) is the approximated drawdown for this truncation level \(M\), while \(\Xi(S)\) is the exact value. Note that one could also do an equivalent truncation of the number of linear coefficients \(len(\hat{L})\) rather than the signature order. However, from Eq. (10) we know that it is more natural to choose a set of linear coefficients that corresponds to a number of signature terms following the choice of M.
Proposition 4.1 looks into the consistency behaviour of \(\kappa\) with respect to a sample size of \(K\) sample paths drawn from \(\mathcal{V}\) and next we highlight small sample properties that become apparent from the proof.
**Proposition 4.1** (**Consistency of linear \(\Xi(\mathbf{S})\) approximation on signatures \(\mathbf{\Phi^{M}(S)}\))**.: _Consider by \(\mathcal{V}([0,T],\mathbb{R})\) the space of continuous paths of bounded variation \([0,T]\rightarrow\mathbb{R}\), \(\mathcal{K}\subset\mathcal{V}\) is a compact subset comprising sample paths \(S_{k}\), \(k\in[0,...,K]\), and \(\Xi:\mathcal{V}\rightarrow\mathbb{R}\) is the Lip continuous drawdown function. The approximation error \(\kappa\) of \(\Xi(S)\) for any \(S\) in \(\mathcal{V}\) by \(\hat{\Xi}(S)\) is bounded through the regularity of \(\Xi\) (Proposition 3.1) and the distance in the signature space between \(S\) and any \(S_{k}\), such that for \(K\rightarrow\infty\), \(\kappa\to 0\), or_
\[|\Xi(S)-\hat{\Xi}(S)|\to 0,\text{ for }K\rightarrow\infty \tag{14}\]
Proof.: Universal nonlinearity of signatures (Eq. 9) is due to Theorem 2.1 in Lemercier et al. [32] and states that for defined \(\mathcal{V}\) and \(\mathcal{K}\) there exists a truncation level \(M\in\mathbb{N}\) and coefficients \(\hat{L}\) such that for every \(S_{k}\in\mathcal{K}\) we have that for any \(\iota\)
\[|\Xi(S_{k})-\langle\hat{L},\Phi^{M}(S_{k})\rangle|\leq\iota \tag{15}\]
We decompose \(\kappa=|\Xi(S)-\hat{\Xi}(S)|\) with a triangle inequality as suggested by Eq. (3.6) in Lyons and McLeod [29]:
\[|\Xi(S)-\hat{\Xi}(S)|\leq|\Xi(S)-\Xi(S_{k})|+ \tag{16}\] \[|\Xi(S_{k})-\hat{\Xi}(S_{k})|+|\hat{\Xi}(S_{k})-\hat{\Xi}(S)|\] (17) \[=\mathbf{A}_{K}+\mathbf{B}_{M(K)}+\mathbf{C}_{K} \tag{18}\]
This inequality bounds the error by the regularity of \(\Xi\) in \(\mathbf{A}\), the \(\iota\) of Eq. (15) in \(\mathbf{B}\) and a signature distance in \(\mathbf{C}\).
By Proposition 3.1, we find that \(\mathbf{A}\leq\mathcal{C}\|S-S_{k}\|_{\infty}\). For any \(k\in[1,...,K]\) with \(K\rightarrow\infty\) and the compactness of \(\mathcal{V}\) there exists a \(k\) such that \(\|S-S_{k}\|_{\infty}\to 0\) such that \(\mathbf{A}\) can be reduced to zero. By Eq. (15), \(\mathbf{B}\leq\iota\), such that this term is governed by the rate decay of Eq. (8). Eq. (15) guarantees we can pick an \(M\) high enough such that through Eq. (8), \(\iota\leq C_{\gamma}\frac{\|S\|^{M+1}}{M!}\), such that this term can be shrunk arbitrarily small as \(M\) can be set arbitrarily large for \(K\rightarrow\infty\).
As per Eq. (3.7) in Lyons and McLeod [29] the difference between the approximated drawdown of two paths can be bounded by a linear combination of the difference in signatures:
\[\mathbf{C}\leq|\langle L,|\Phi^{M}(S_{k})-\Phi^{M}(S)|\rangle| \tag{19}\]
for an appropriate choice of rough path distance metric [29]12. For any \(k\in[1,...,K]\) with \(K\rightarrow\infty\) and the compactness of \(\mathcal{V}\) there again exists a \(k\) such that \(|\Phi^{M}(S_{k})-\Phi^{M}(S)|\to 0\) such that \(\mathbf{C}\) could be reduced to zero, which concludes the proof.
Footnote 12: More specifically, an \(L^{p}\)-norm in the path space that bounds the \(p\)-variation of these differences (Lyons et al. [28]).
In sum, \(\kappa\) converges to zero for \(K\rightarrow\infty\) provided that Proposition (3.1) holds for \(\mathbf{A}\), while compactness of \(\mathcal{V}\) (\(\mathbf{B}\) and \(\mathbf{C}\)) is a reasonable assumption in practice.
For finite \(K\), we can still use the decomposition in the proof to do error analysis. The drawdown approximation will generalize well as far as (A) the maximum distance between available samples \(S_{k}\) and possible new samples in \(\mathcal{V}\) is small, (B) the signature truncation level \(M\) is chosen appropriately high for the given roughness of the process to let \(\iota\) be small enough, (C) as per (A) the distance between the signatures of observed and unobserved paths is small such that term \(\mathbf{C}\) is small.
Finally, a remaining modelling choice is the specific choice of \(L\). As the approximation is essentially linear, Eq. (12) can be estimated using linear regression (OLS) and higher order polynomials are not required (as one would need with, e.g., _logsignature_ regression [29]). However, since in practice the sample size \(K\) is limited and the number of signature terms scales exponentially with \(M\), one has to be mindful of overfitting on a limited sample size \(K\), e.g. \(len(\Phi^{M})>K\). Therefore, we study the impact of regularized linear regression of \(\Xi(S)\) on \(\Phi^{M}(S)\) with a penalty for the number (absolute shrinkage or selection) and size (proportional shrinkage) of estimated coefficients, i.e. the elastic net (ElNet) regression. We conclude by specifying:
\[\hat{L}=\arg\min_{L}(\|\Xi(S)-\langle L,\Phi^{M}(S)\rangle\|_{2}+\lambda_{1}\|L\|_{1}+\lambda_{2}\|L\|_{2}) \tag{20}\]
\[\widehat{\Xi}_{M}(S)=\langle\hat{L},\Phi^{M}(S)\rangle \tag{21}\]
where \(\lambda_{1}=\lambda_{2}=0\) corresponds to OLS, \(\lambda_{1}=0\) corresponds to Ridge and \(\lambda_{2}=0\) to LASSO regression [35]. In the applications below, we set \(\lambda_{1}\) and \(\lambda_{2}\) using 10-fold cross-validation (CV), such that we can further refer to Eq. (20) as _ElNetCV_.
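The fit of Eqs. (20)-(21) can be sketched as follows, assuming the time-averaged distance to the running maximum as a stand-in for the drawdown functional \(\Xi\) of Section 3, and using the iisignature package for the signature terms (the experiments in this paper use signatory); all names and parameter choices below are illustrative.

```python
import numpy as np
import iisignature
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler

def drawdown(path):
    """Time-averaged drawdown of a 1-d price path (assumed definition of Xi)."""
    running_max = np.maximum.accumulate(path)
    return np.mean(running_max - path)

def signature_features(path, M):
    """Truncated level-M signature of the time-augmented (piecewise linear) path."""
    t = np.linspace(0.0, 1.0, len(path))
    augmented = np.column_stack([t, path])        # time augmentation -> 2-d path
    return iisignature.sig(augmented, M)          # flat vector of signature terms

def fit_drawdown_regression(paths, M=5, cv=10):
    X = np.array([signature_features(p, M) for p in paths])
    y = np.array([drawdown(p) for p in paths])
    scaler = StandardScaler().fit(X)              # standard scaling of the feature set
    model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=cv).fit(scaler.transform(X), y)  # Eq. (20)
    return model, scaler, model.score(scaler.transform(X), y)

# Example on K random-walk blocks of length tau=20
rng = np.random.default_rng(0)
paths = [np.cumsum(rng.normal(scale=0.01, size=20)) for _ in range(1000)]
model, scaler, r2 = fit_drawdown_regression(paths)
print(f"in-sample R^2 of the linear signature approximation: {r2:.3f}")
```

A fitted `model` then provides \(\hat{\Xi}(S)=\langle\hat{L},\Phi^{M}(S)\rangle\) of Eq. (21) via `model.predict` on the scaled signature features of a new path.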
## 5 The model
### Introduction
In this section, we introduce and motivate our generative ML model. The aim of the drawdown market generator, from here on named the \(\xi\)-VAE, is that upon convergence of (train and validation) reconstruction loss terms we have guarantees that the synthetic samples have preserved the drawdown distribution of the original samples. This is generally not the case in market generator models or financial DGPs with standard martingale assumptions (as per Section 3).
Crafting a DGP to obtain a certain level of drift, volatility or higher order moments is mathematically more straightforward, as those are typically the equations that constitute the DGP. With measures on the P&L, such as value-at-risk (VaR) or expected shortfall (ES) (Cont et al. [3]), one can leverage the direct analytical link between quantiles and moments (e.g. Boudt et al. [36]). As per Section 3, one can express the drawdown distribution as a function of the moments of the static P&L described by the DGP as well, but only under very restrictive assumptions (e.g. Douady et al. [21] and Rej et al. [20]). Besides, one would expect drawdown to be a function of the moments of the path (vector-valued) \(S\), rather than the (scalar-valued) \(S_{t}\).
In this article, we proposed to _approximate drawdown as a linear combination of the moments of the path_, as Chevyrev and Oberhauser [11] define signatures as the moments of the path. As an analogue to quantiles and static moments, we can evaluate these path moments and weigh them according to their importance to simulate realistic drawdowns. Moreover, this approximation implies a smoothing of the drawdown function by the change of basis. The signature is a non-parametric sum of path values, and weighted sums are differentiable13. Indeed, because of linearity, the loadings of drawdown on signature terms can also be seen as the sensitivities of a path's drawdown to changing signatures.
Footnote 13: That is why Kidger and Lyons [37] focus on this differentiability property for efficient CPU and GPU implementations in their _signatory_ ([https://pypi.org/project/signatory/](https://pypi.org/project/signatory/)) package, which we use in our Python code.
Measuring the divergence between the moments of a path by means of a maximum mean discrepancy (MMD) was proposed by Buehler et al. [5]. We essentially add that it is useful to weigh these moments according to the \(L\) from Eq. (20) to minimize (and control for) the drawdown divergence between the input and the output samples during training (and validation) epochs. This has the advantages of (1) not requiring signatures in the input or output space, only as part of the objective, and (2) having an explicit drawdown term in the reconstruction loss that allows one to monitor its convergence during training and validation. The next subsection will make this motivation specific and introduce the algorithm.
### The algorithm
_Variational autoencoders_. The core of our algorithm is a variational autoencoder (VAE), which is a general generative machine learning architecture that links an encoder and decoder neural network in order to generate new samples from a noisy, in this case Gaussian, latent space. The idea of a Monte Carlo is to transform noise into samples that are indistinguishable from the actual samples by scaling them, adding drift, etc. In other words, the neural network that constitutes the decoder is our non-parametric DGP. It does contain the parameters of the neural network, but it is non-parametric in the sense that we do not have to specify the dynamics in a handcrafted formula before we can do Monte Carlo. We rather rely on the universal approximation theorems behind even shallow neural networks (Hornik [34] and Cotter [33]) to approximate a realistic DGP by iterating data through the network and updating the parameters \(\theta\) with feedback on the drawdown distribution of the batch, assuring that the approximated DGP converges in train and validation loss to the empirical DGP.
We will not discuss the VAE architecture in depth here, but include an architectural overview in Appendix 7 and refer the interested reader to Kingma and Welling [38] for details on the encoder and decoder networks \(g\), backpropagation, the latent Kullback-Leibler \(\mathcal{L}_{L}\) loss and the standard \(L2\) reconstruction loss \(\mathcal{L}_{R}\).
We should stress here that the main reason for picking a rather standard VAE (over restricted Boltzmann machines, generative adversarial networks, generative moment matching networks or normalizing flow-based VAEs) is their simplicity, speed, flexibility, scalability and stability during training. Boltzmann machines are very efficient to train, but the energy-based loss function and their binary values makes them very inflexible for adjusting objective functions. Through the discriminator mechanism GANs are most flexible and a very
popular choice in related literature, but notoriously expensive to train in terms of required data and speed, and associated instabilities such as _mode collapse_ and _vanishing gradients_ leading to subpar results (Eckerli and Osterrieder [39]).
```
Input Historical price paths \(S:[0,T]\rightarrow\mathbb{R}\), hyperparameters (listed in Appendix 7), signature truncation level \(M\) and feature weight \(\alpha\). Output Trained VAE market generator \(g_{\theta}\)
1: procedure Train
2: Divide the historical sample into blocks (index \(b\), \(b\in[1,T-\tau]\)) of length \(\tau\), calculate the signatures of these paths truncated at level \(M\), \(\Phi^{M}(S_{b})\), and calculate the drawdowns of these paths, \(\Xi(S_{b})=\int_{0}^{\tau}\left(\max_{s\in[0,t]}S_{b,s}-S_{b,t}\right)dt\)
3: \(\hat{L}=ElNetCV(\Xi(S_{b}),\Phi^{M}(S_{b}))\)
4: Initialize the parameters \(\theta\) of the VAE.
5: for \(i\in[1,\ldots,N_{t}]\) do
6: Sample a batch (index \(\mathcal{B}\)) of blocks and pass it through the encoder \(g_{\theta}\) and decoder network \(g_{\theta}^{-1}\).
7: Calculate the drawdown \(\Xi(S^{\prime})\) of the output sample \(S^{\prime}\) using the differentiable signature approximation: \(\langle\hat{L},\Phi^{M}(S^{\prime})\rangle\)
8: Define the reconstruction loss term as the weighted average of the \(L2\) error and the drawdown loss: \(\mathcal{L}_{R}=\mathbb{E}_{\mathcal{B}}\|S-S^{\prime}\|^{2}+\alpha\,\mathbb{E}_{\mathcal{B}}\|\langle\hat{L},\Phi^{M}(S)\rangle-\langle\hat{L},\Phi^{M}(S^{\prime})\rangle\|^{2}\)
9: \(\mathcal{L}=\mathcal{L}_{L}+\mathcal{L}_{R}\)
10: \(\theta_{i}=\theta_{i-1}-l\,\nabla_{\theta}\mathcal{L}\)
```
**Algorithm 1** Training the \(\xi\)-VAE
_Training and sampling from \(\xi\)-VAE_. The proposed algorithms are provided in Algorithms 1 and 2. In short, we propose to include the divergence between the observed drawdown distribution of a batch \(\mathcal{B}\), which is a stochastically sampled set of blocks of \(\tau\) subsequent points of \(S\), and the synthetic drawdown distribution (the drawdowns of reconstructed samples \(S^{\prime}\)) in the reconstruction loss function. The market generator can thus be interpreted as a moment matching network, matching the moments of drawdown rather than of returns14.
Footnote 14: For instance, this distributional distance over a batch is identical to the distance between the actual and fitted distributions in Figure 10, which is the drawdown distribution we want to preserve in synthetic samples.
As input \(\xi\)-VAE takes historical price paths \(S\), a signature truncation level \(M\), objective scale \(\alpha\)15, and the VAE hyperparameters listed in Appendix 7. The output is a trained neural DGP (encoder and decoder network) that allows to transform random Gaussian variables into new paths indistinguishable16 from original data.
Footnote 15: Through Grid Search, \(1e^{-4}\) was chosen for \(\xi\)-VAE, while zero corresponds to a standard \(VAE\).
The key steps for training a \(\xi\)-VAE, distinctive from a standard VAE (cf. Appendix 7), are:
* **Signature drawdown approximation** (Step (3)): compute the signatures up to order \(M\) of each block, \(\Phi^{M}(S_{b})\), and regress them on the drawdown of the path, \(\Xi(S_{b})\), using Eq. (20). This results in one set of weights \(\hat{L}\) that are used over the \(N_{t}\) training steps. This approximation overcomes the numerical complexity issue inherent to naive iterative (closed-form \(\Xi\)) evaluation of drawdown.
* **Drawdown divergence evaluation** (Step (8)): drawdown divergence over a particular batch of data is defined as the distance between the original and replicated drawdown distribution: \(\mathcal{L}_{\xi}=\mathbb{E}_{\mathcal{B}}\|\langle\hat{L},\Phi^{M}(S)\rangle-\langle\hat{L},\Phi^{M}(S^{\prime})\rangle\|^{2}\). The approximation overcomes the issue that closed-form \(\Xi\) evaluations cannot yield informative gradients in Step (10), in which case \(\mathcal{L}_{\xi}\) would be ignored (flat) in training.
While standard (\(\alpha=0.0\)) \(\mathcal{L}_{R}\) terms converge sooner, the additional \(\mathcal{L}_{\xi}\) term shifts the distributions towards a \(\mathcal{P}(\xi)\) close to the empirical one. As per below, this can be seen by monitoring the \(\mathcal{L}_{\xi}\) term during training and validation epochs.
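A hedged PyTorch sketch of this reconstruction objective is given below; it uses the signatory package for differentiable signatures, as in the paper, but the tensor shapes, helper names and the time augmentation are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import signatory

def time_augment(s):
    # s: (batch, tau) price paths -> (batch, tau, 2) time-augmented streams
    t = torch.linspace(0.0, 1.0, s.shape[1], device=s.device).expand(s.shape[0], -1)
    return torch.stack([t, s], dim=-1)

def signature_drawdown(s, L_hat, M):
    # <L_hat, Phi^M(s)>: differentiable surrogate for the drawdown of each path
    sig = signatory.signature(time_augment(s), M)    # (batch, number of signature terms)
    return sig @ L_hat                               # (batch,)

def reconstruction_loss(s, s_prime, L_hat, M=5, alpha=1e-4):
    l2 = ((s - s_prime) ** 2).mean()                 # standard L2 reconstruction term
    dd = ((signature_drawdown(s, L_hat, M)
           - signature_drawdown(s_prime, L_hat, M)) ** 2).mean()
    return l2 + alpha * dd                           # L_R of Step (8)
```

Here `L_hat` is the coefficient vector \(\hat{L}\) from the elastic-net fit of Eq. (20), converted to a torch tensor of length equal to the number of truncated signature terms; gradients flow through \(S^{\prime}\) into the decoder parameters, which is exactly what a closed-form drawdown evaluation cannot provide.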
## 6 Numerical results
This section comprises the numerical results of the outlined methods. First, we discuss the accuracy of linear approximation of drawdown in the signature space on simulated and real world data, focusing on the error rate decay and consistency. Second, we discuss the accuracy of the drawdown market generator.
### Linear drawdown approximation with signatures
This part analyses the accuracy of the approximation. Section 6.1.1 describes the simulation set up and results for fractional Brownian simulated data. Section 6.1.2 discusses the accuracy of the approximation on empirical data.
#### 6.1.1 Bottom-up simulations
_Simulation set up_. We expect the estimated drawdown approximation error \(\hat{\kappa}\) to decay uniformly in-sample as a function of the truncation level \(M\), where the decay constant is a function of the roughness of the underlying process. Out-of-sample we rely on the error decomposition in Proposition 4.1 and expect the error to shrink if \(K\) grows very large. Since the roughness of empirical data is unknown and has to be estimated, it is useful to test the approximation in an experimental set up where \(\gamma\) can be specified. We thus first test the approximation (20) on simulated fractional Brownian motion (fBM) paths.
Consider first the simplest case of homoskedastic BM (\(dS_{t}=\mu dt+\sigma d\varepsilon\)). In this simple case, the price path \(S\) can thus be seen as the cumulative sum process of a random uncorrelated standard Gaussian \(\varepsilon\), scaled with \(\sigma\) and added a deterministic drift \(\mu\). We consider piecewise linear paths of length \(T=20\) days with values \(\mu=1\%/252\) and \(\sigma=20\%/252\). fBM implements BM where the uncorrelated Gaussian increments are replaced by fractional Gaussian increments that have a long-memory structure. The martingale property that the autocovariance between Gaussian increments has expectation zero, is replaced by a generalized autocovariance function for two increments \(dS^{H}\) at t and s (i.e. lag \(t-s\)):
\[E[dS^{H}_{t}dS^{H}_{s}]=\tfrac{1}{2}[|t|^{2H}+|s|^{2H}-|t-s|^{2H}] \tag{22}\]
where \(H\) is the so-called Hurst exponent. Note that \(H=0.5\) corresponds to Brownian increments, while \(H>0.5\) yields smooth, persistent, positively autocorrelated paths and \(H<0.5\) yields rough, antipersistent, negatively autocorrelated paths. Intuitively, the smaller \(H\), the more granular the path, and the worse the approximation will become for a given level of \(M\).
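A minimal numpy sketch of this simulation set-up is given below: it generates fractional Gaussian increments via a Cholesky factorisation of their autocovariance (adequate for short \(\tau=20\) blocks), applies the stated drift and volatility scaling, and computes the same assumed drawdown functional as in the earlier sketch; the function names and the seed are illustrative.

```python
import numpy as np

def fgn_cholesky(n, H, size, rng):
    """size paths of n fractional Gaussian increments with Hurst exponent H."""
    k = np.arange(n)
    # autocovariance of unit-variance fGn increments, implied by Eq. (22)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
                   - 2.0 * np.abs(k) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return rng.standard_normal((size, n)) @ chol.T

def fbm_price_paths(n=20, H=0.5, mu=0.01 / 252, sigma=0.20 / 252, size=1000, seed=0):
    rng = np.random.default_rng(seed)
    increments = mu + sigma * fgn_cholesky(n, H, size, rng)
    return np.cumsum(increments, axis=1)             # piecewise-linear price paths

paths = fbm_price_paths(H=0.45)                       # one (H, K) cell of the experiment grid
drawdowns = np.mean(np.maximum.accumulate(paths, axis=1) - paths, axis=1)
print(drawdowns.mean(), drawdowns.max())
```

Looping this over the grids of \(H\), \(M\) and \(K\) listed below, and feeding each set of paths to the regression sketch above, reproduces the structure of the experiment.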
There are hence three dimensions to this simulation study. We want to evaluate the estimated error \(\hat{\kappa}\) as a function of (1) roughness \(H\), (2) signature approximation order \(M\) and (3) simulation size \(K\). Therefore, we:
* Vary \(H\) between 0.4 and 0.7 with step size 0.05
* Vary \(M\) between 1 and 10 (naturally with unit steps)
* Repeat the experiment for \(K\) in [1000, 5000, 10000, 20000, 50000]
* Fit regression Eq. (20)17 on \(K\) samples (train) and simulate \(p_{\mathit{test}}K\) new samples to evaluate test accuracy (\(p_{\mathit{test}}=0.1\)).
Footnote 17: As per above we assume piecewise linear paths by time-augmenting [29] the paths and do standard scaling of the feature set such that the coefficients and regularization penalties make sense. The chosen \(\lambda_{1}\) and \(\lambda_{2}\) by \(CV\) is sample specific, but was stable around \(4e^{-5}\) regularization (i.e. the scale of both lambdas w.r.t. total objective value) and a \(\lambda_{1}/\lambda_{2}\) ratio of 0.5.
_Results and discussion_. For readability, the detailed numerical results figures are included in Appendix 7, where Figures 6 to 8 investigate the root-mean-squared (\(RMSE\)) approximation error of the approximation along these individual dimensions. Figure 1 is included in the main text, as it gives a good overview of the expected rate decay as a function of \(H\) and \(M\) and the consistency for large sample sizes \(K\).
We are initially interested in the consistency of the regressions and evaluate the in- and out-of-sample discrepancy as a function of sample size. Figure.6 shows the difference between in- and out-of-sample accuracy as a function of sample size \(log(K)\). We averaged the difference for all simulation sizes over the values of \(M\) to get a line per value of \(H\) (blue line), the opposite (red line) or the overall average per \(K\) (black line).
The conclusions are twofold: (1) when \(K\) grows large the discrepancy disappears, which relates to Proposition 4.1 in the sense that for \(K\rightarrow\infty\) the two error components (terms **A** and **C**) that explain differences in in- and out-of-sample performance shrink to zero, and (2) the in-sample(IS)/out-of-sample(OOS) accuracy divergence depends on confounders \(M\) and \(H\) implying an improperly chosen \(M\) for smaller and/or rougher samples might lead to bad generalization (term **B** in error decomposition).
Next, we look in Figure.7 at the train and test \(RMSE\) as a function of the signature approximation order \(M\). When taking the average performance over \(K\) and \(H\), we find uniform decay in \(RMSE\). When deconfounding for sample size \(K\), which you can find on the right hand side of Figure.7, we find that only for the smallest sample size the improvement in test accuracy stalls before \(M\) reaches its maximum value. This implies that for large samples sizes one would prefer to choose the highest signature order that is possible given computational constraints, i.e. the computing time scales quadratically with the signature order \(M\) while the improvement in accuracy has diminishing returns (also see Figure 2). In sum, this indicates worse generalization abilities for smaller samples driven by the distances discussed in Proposition 4.1, rather than the \(M/K\) ratio as it is small for all experiments.
Next, we check the relationship between the accuracy of the approximation and the roughness \(H\) of the assumed process. In Figure 8, which shows the average \(RMSE\) per level of \(H\), we find that the accuracies improve uniformly with \(H\).
Finally, the factorial error decay in \(M\) and the dependency of its rate on \(H\) is best illustrated in Figure 1. The vertical lines separate the different levels of \(H\), denoted on the x-axis, while in between \(M\) increases from 1 to 10, while \(K\) is fixed at the maximum 50000. It is clear that for rougher processes the error is higher and decreases slower than for smoother processes, which we derived from Eq. (8). For all levels of \(H\), high orders of \(M\) yield negligible approximation errors for this high \(K\). Moreover, the in-sample decay generalizes well to out-of-sample error behaviour.
In summary, the simulation study confirms our initial expectations of rate decay and IS-OOS consistency, but raises some practical warnings as well:
* For a large number \(K\) of sample paths, say \(log(K)>3\), one finds uniform decay in \(\kappa\) with both \(H\) and \(M\), in both test and training fits. In terms of Proposition 4.1, the distances between any \(S\) and the \(S_{k}\) do become smaller with \(K\) and the estimated \(\hat{L}\) generalize better.
* For small samples (\(log(K)<3\)), the unbiased approximation may generalize badly, which will result in worse accuracies for \(ElNetCV\). This relates to Proposition 4.1 in the sense that the distance between any \(S\) and \(S_{k}\) (in inf-norm and signature terms) can be large. In any case, \(M\) needs to be chosen high enough for rough paths, but with small \(K\) the system might become ill-conditioned or even degenerate. High regularization will then result from CV, which comes at the cost of higher bias and lower (even IS) accuracy18. This issue is very dependent on the sample and its roughness, but we generally discourage the approximation for sample sizes \(log(K)<3\).
Footnote 18: The accuracy that is good enough for the application at hand of course depends on the application (e.g. for a market generator with 10% drawdowns, improvements of some bps by the higher order \(M\) might not be worth the extra computational time). Moreover, below we will deal with data sets later that are considered large (in the order of 8000 samples) from these standards.
#### 6.1.2 Empirical data
_Data description and set up_. Consider a universe of \(U\)=4 investible instruments: equity (S&P500), fixed income (US Treasuries), commodities (GSCI index) and real estate (FTSE NAREIT index). We collect price data (adjusted close prices) between Jan 1989 and May 2022, which gives us T=8449 daily observations. Clearly, these 4 different asset classes have different return, volatility and drawdown characteristics, which argues for combination and diversification. This can clearly be seen from Figure.9.
As an investor, we hold a portfolio \(\mathbf{w}\), \(w_{i},i\in 1,...,U\), which allocates a weight \(w_{i}\) to each investible asset. In these experiments, we attach \(P\in\mathbb{N}\) sets of weights to these assets and pick a \(\tau<T\) such that we have \(T-\tau\) overlapping sample paths or a simple buy-and-hold strategy over \(\tau\) days for every \(p\in[0,...,P]\). Here we pick \(\tau=20\), so we model monthly sample paths of these mixed asset class portfolios.
The drawdown distribution of the sample paths of such a hypothetical portfolio is shown in Figure 10. If the investor finds scenarios that breach certain levels of drawdowns over the course of a particular month, she needs to reallocate and increase her weights in lower-drawdown instruments: for instance, through drawdown optimization [10], by rescaling all portfolio weights with a risk-free cash-like fraction \(w_{i}(1-w_{cash})\) that exhibits no drawdowns (CPPI [40] or TIPP [41] strategy), or by buying drawdown insurance instruments such as a barrier option [7][8]. However, traditional (G)BM-implied distributions understate these probabilities. In Figure 10, we compare the empirical distribution with the distribution implied by standard simulated Brownian motion with the same volatility \(\sigma\) parameter as the portfolios we have constructed. Note that one could also use the closed forms from Douady et al. [21], Rej et al. [20] and the like here. Importantly, we notice that the blue density is merely an optimistic lower bound on the true distribution of portfolio drawdown, and misses out on the tails. This can also be seen in Figure 11. The direct analytical link between generalized stochastic processes and this distribution is far from trivial, as one cannot rely on Levy's theorem anymore (in contrast to the theoretical (G)BM case). In the next section, our methodology thus advocates a non-parametric approach to simulate paths that closely resemble the correct drawdown distribution. We will adapt parameterized paths \(S_{\theta}\) such that their drawdown distribution converges towards this empirical drawdown distribution.

Figure 1: Factorial error (RMSE) decay as a function of H
In this simulation we will:
* Generate a set of portfolio weights \(\mathbf{w}_{p}\) (index \(p\) for portfolio) and construct \(T-\tau\) paths, based on taking blocks \(b\) from the product of \(w_{p}\) and \(S\)'s cumulative returns, the univariate portfolio paths \(S_{p}\).
* Calculate the drawdown \(\Xi(S_{p}^{b})\) and signatures \(\Phi^{M}(S_{p}^{b})\) for each block and regress the two (for increasing \(M\) in [1,..., 10]), with defining the \((1-p_{test})(T-\tau)\) first blocks as train sample and remaining blocks as test sample. Note that there is a strict train-test separation in time.
* Repeat \(P=100\) times and report average performance (a sketch of the block construction in this list is given below).
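The portfolio block construction can be sketched as follows; the synthetic returns matrix stands in for the four asset classes of the data set and the weight vector for one of the \(P\) portfolios, so all inputs here are illustrative.

```python
import numpy as np

def portfolio_blocks(asset_returns, weights, tau=20):
    # asset_returns: (T, U) daily returns; weights: (U,) portfolio weights w_p
    portfolio_returns = asset_returns @ weights
    S_p = np.cumsum(portfolio_returns)                   # univariate portfolio path
    blocks = np.stack([S_p[b:b + tau] for b in range(len(S_p) - tau)])
    drawdowns = np.mean(np.maximum.accumulate(blocks, axis=1) - blocks, axis=1)
    return blocks, drawdowns

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(8449, 4))          # stand-in for the 4 asset classes
w_p = np.array([0.25, 0.25, 0.25, 0.25])                 # an equal-weighted portfolio
blocks, dd = portfolio_blocks(returns, w_p)
print(blocks.shape, dd.mean())
```

Each block and its drawdown would then be fed to the signature regression of Eq. (20), with the first \((1-p_{test})(T-\tau)\) blocks as the training sample.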
_Results and discussion_. Our conclusions are analogous to the bottom-up simulation experiments. The accuracies are shown in Table 1. We find consistent train and test \(RMSE\) performance, which is expected given the large \(K\). The average discrepancy in train-test RMSE is \(0.00016041\), which is in line with Figure 6. In line with Figure 7, for \(M>5\) we get accurate approximations of \(\xi\), after which the fit improves at a much slower rate.
The average and standard deviation of the computation time of one signature of a certain high order \(M\) taken over 1000 iterations are displayed in Figure 2. One finds diminishing improvements in \(RMSE\) for exponential increase in compute time, which might be prohibitive for very large \(K\). This will depend on the used hardware (the _signatory_ package has built-in parallelization in its C++ backend) and own parallelization choices. For this sample the improvements in accuracy become negligible after \(M>5\), so it is not worth it to include higher orders of \(M\).
This can also be seen from Figures 12 and 13, which show the train and test fit respectively for one set of portfolio weights (equal weighted). It becomes apparent that signatures linearize the relationship between portfolio paths and their drawdowns well, and that this applies to both smaller and higher (tail) drawdowns. From these values we can deduce that improving accuracy in the order of a few basis points (bps) does not justify the exponential increase in compute. Moreover, from a market generator application perspective (next section) the replication of a drawdown up to a few bps in the objective function is likely to be spurious precision compared to the total objective value.
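The timing experiment behind Figure 2 can be sketched as follows; timings are hardware-dependent, and the use of iisignature here is an assumption of convenience (the signatory package used in the paper has a parallelized C++ backend).

```python
import time
import numpy as np
import iisignature

rng = np.random.default_rng(0)
path = np.column_stack([np.linspace(0.0, 1.0, 20),
                        np.cumsum(rng.normal(size=20))])   # one time-augmented block
for M in range(1, 11):
    t0 = time.perf_counter()
    for _ in range(1000):
        iisignature.sig(path, M)
    print(M, (time.perf_counter() - t0) / 1000)             # average seconds per signature
```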
### Drawdown market generator: results and discussion
The main findings are summarized in Figures 3 and 4. More details can again be found in the appendix on numerical results (Appendix 7). Figure 3 shows the generated paths of a standard VAE model versus the drawdown market generator, denoted by \(\xi\)-VAE.
\begin{table}
\begin{tabular}{l l l}
**M** & **RMSE Train (s.d.)** & **RMSE Test (s.d.)** \\ \hline
1 & 0.000193 (0.00259) & 0.01134 (0.00361) \\
2 & 0.000022 (0.001254) & 0.01238 (0.00279) \\
3 & 0.00004 (0.001817) & 0.003781 (0.002153) \\
4 & 0.00005 (0.001511) & 0.004634 (0.001485) \\
5 & 0.000858 (0.001501) & 0.00514 (0.001402) \\
6 & 0.000523 (0.001837) & 0.005497 (0.000353) \\
7 & 0.000611 (0.001299) & 0.00611 (0.000369) \\
8 & 0.000526 (0.001282) & 0.006699 (0.00069) \\
9 & 0.000591 (0.013727) & 0.00549 (0.007615) \\
10 & 0.000451 (0.001366) & 0.005744 (0.00720) \\ \hline
\end{tabular}
\end{table}
Table 1: ElasticNet CV(10) fit for empirical data (4 asset classes, \(P=100\) portfolios)
Figure 2: Average and standard deviation of compute times (in seconds) of a signature of order \(M\) taken over 1000 iterations
It is clear that the original VAE generates reasonably realistic scenarios, but it is reminiscent of Figure 10 in the sense that the paths are non-Brownian but still too centered around the Brownian-like distribution (the densely colored areas). This is also clear from Figure 14. To summarize our architecture from this perspective, the standard VAE reconstruction term focuses on reproducing the top distribution (\(L^{2}\) loss over a particular batch) such that it misses out on the tails. The \(\xi\)-VAE fits both the top and the bottom distribution such that it includes more tail (drawdown) scenarios. In sum, we find that the standard VAE shows a lack of more extreme adverse scenarios.
This also becomes apparent in Figure 15, which plots the synthetic and actual drawdowns as a scatter. The standard VAE does a good job in capturing most of the moderate drawdowns, but is scattered towards the higher drawdowns and produces none in the tail, while this is resolved with the \(\xi\)-VAE. Moreover, from the cumulative drawdown distribution the Kolmogorov-Smirnov test that the synthetic drawdowns come from the same distribution as the original one cannot be rejected for both the standard and \(\xi\)-VAE, but the standard KS statistic is known not to be sensitive to diverging tail distributions. A closely related graph is Figure 4, which plots the same scatter but using ordered observations (i.e. quantiles) and a first bisector line, which would fit the data if the synthetic drawdown distribution had exactly the same quantiles as the original distribution. Again we find that the VAE misses out on capturing tail drawdown scenarios, while the \(\xi\)-VAE does a much better job. This is not achieved by just memorizing the training samples and reproducing them exactly, as the bottleneck VAE architecture forces a lower-dimensional representation of the training data (i.e. the non-parametric DGP). The autoencoder serves as a dimension reduction mechanism, rather than trivially mirroring the training data. That is why one sees simulated paths that differ from the input samples rather than identical copies, and why, by using the trained decoder as a non-parametric DGP, one can create an infinite amount of genuinely new data with the same distribution as the training data. Moreover, both train and validation losses flatten out at convergence (see Appendix 7 for details on the training and test convergence), while purely mirroring data would imply minimal training losses at the cost of high validation losses.

Figure 3: Generated paths versus original sample paths

Figure 4: Drawdown QQ plots
## 7 Conclusion
Learning functions on paths is a highly non-trivial task. Rough path theory offers tools to approximate functions on paths that have sufficient regularity by considering a controlled differential system and iterating the effect of intervals of the (rough) path on the (smooth) outcome function. One key path dependent risk functional in finance is a portfolio value path's drawdown. This paper takes the perspective of portfolio drawdown as a non-linear dynamic system, a controlled differential equation, rather than directly analyzing the solution, i.e. the exact expression of drawdown containing the running maximum operator. It relied on important insights from the theory of rough paths to show that, from this perspective, continuous differentiability is not required: the more general regularity condition that changes in drawdown are bounded by changes in the input path suffices to use a non-commutative exponential that interpolates between the path dependent effects of a driving path on its resulting drawdown by numerically approximating the average nested effect. This allows one to locally approximate the drawdown function without having to evaluate its exact expression, while sidestepping the inherent path dependence of its time derivatives. The linear dependence of a path's drawdown on its differentiable signature representation then allows one to embed drawdown evaluations into systems of differentiable equations such as generative ML models. We prove the required regularity of drawdown: the boundedness of its time derivatives allows us to write it in a more general controlled differential equation notation, and the Lipschitz regularity of the drawdown function ensures bounded errors in the convergence proofs. We then illustrated the consistency of the approximation: on simulated fractional Brownian motion and on real-world data, regression results exhibit a good fit for penalized linear regression (elastic net regression) when one has a reasonable sample size. Finally, our proposed application of the approximation is a so-called market generator model that evaluates the synthetic time series samples in terms of their drawdown. We argue that by including a drawdown learning objective, upon convergence of this reconstruction term in train and validation steps, one obtains more realistic scenarios than with the standard VAE model: scenarios with quantifiably (i.e. through the measured loss convergence) close drawdowns to the empirical ones, hence effectively reproducing the drawdown distribution without trivially mapping input paths to output paths.
Future work will focus on extending this application and further applying it to portfolio drawdown optimization, where one can for example test a data-hungry drawdown control strategy over a host of synthetic scenarios rather than the single historical one. In this context, learning drawdown scenarios can be seen as a denoising mechanism to first remove noise (encoding), then adding new noise (decoding new samples), then constructing an ensemble average strategy over a host of noisy scenarios that cancels out by construction instead of by assumption for historical scenarios (e.g. bootstrap methods). This non-parametric Monte Carlo idea could hence robustify one's methodology further as a mathematically principled data augmentation technique. Moreover, the non-parametric nature of our Monte Carlo engine opens possibilities to full non-parametric pricing of path dependent (e.g. barrier) max drawdown insurance contingent claims.
Appendix: Controlled differential equations and path signatures
_Controlled differential equations_
We are generally interested in a CDE of the form:
\[dY_{t}=g(Y_{t})dX_{t} \tag{1}\]
where \(X\) is a continuous path on \([0,T]\rightarrow\mathbb{R}\), called the driving signal of the dynamic system. \(g\) is a \(\mathbb{R}\rightarrow\mathbb{R}\) mapping called the physics that models the effect of \(dX_{t}\) on the response \(dY_{t}\). A controlled differential equation (CDE) distinguishes itself from an ordinary differential equation in the sense that the system is controlled or driven by a path (\(dX\)) rather than time (\(dt\)) or a random variable (stochastic SDEs, \(d\varepsilon\)).
_Signatures_
A series of coefficients of the path that naturally arrives from this type of equation is the series of iterated integrals of the path, or the path signature \(\Phi\). The signature of a path \(X:[0,T]\rightarrow\mathbb{R}\) can be defined as the sequence of ordered coefficients:
\[\Phi(X)=(1,\Phi_{1},...,\Phi_{n},...) \tag{2}\]
where for every integer n (order of the signature):
\[\Phi_{n}(X)=\underset{u_{1}<...<u_{n},u_{1},...,u_{n}\in[0,T]}{\int}dX_{u_{1} }\otimes...\otimes dX_{u_{n}} \tag{3}\]
where we define the \(n\)-fold iterated integral as all the integrals over the \(n\) ordered intervals \(u_{i}\) in [0,T]. The signature is the infinite collection for \(n\rightarrow\infty\), although typically lower level M truncations are used.
\[\Phi^{M}(X)=(1,\Phi_{1},...,\Phi_{M}) \tag{4}\]
_Picard Iterations_
The idea behind a Picard iteration is to define for:
\[dY_{t}=g(Y_{t})dX_{t} \tag{5}\]
a sequence of mapping functions \(Y(n):[0,T]\rightarrow\mathbb{R}\) recursively such that for every \(t\in[0,T]\):
\[Y(0)_{t}\equiv y_{0} \tag{6}\]
\[Y(1)_{t}=y_{0}+\int_{0}^{t}g(y_{0})dX_{s} \tag{7}\]
\[Y(n+1)_{t}=y_{0}+\int_{0}^{t}g(Y(n)_{s})dX_{s} \tag{8}\]
Now by simple recursion one finds that (for a linear \(g\)):
\[Y(n)_{t}=y_{0}+\sum_{k=1}^{n}g^{\otimes k}(y_{0})\underset{u_{1}<...<u_{k},u_{1},...,u_{k}\in[0,t]}{\int}dX_{u_{1}}\otimes...\otimes dX_{u_{k}} \tag{9}\]
Such that a solution for \(Y_{t}\) would be:
\[Y_{t}=y_{0}+\sum_{k=1}^{\infty}g^{\otimes k}(y_{0})\underset{u_{1}<...<u_{k},u_{1},...,u_{k}\in[0,t]}{\int}dX_{u_{1}}\otimes...\otimes dX_{u_{k}} \tag{10}\]
This result shows how the signature, as an iterative representation of a path over ordered intervals, naturally arises from solving CDEs using Picard iterations, and how it is a natural generalization of Taylor series on the path space when the physics is linear.
For a general (smooth) physics \(g\), if we define the iterated maps recursively as
\[g^{\circ 1}=g \tag{11}\]
\[g^{\circ n+1}=D(g^{\circ n})g \tag{12}\]
then it is natural to define the \(N\)-step Taylor expansion for \(Y_{t}\) by \(\hat{Y}(N)_{t}\) as:
\[\hat{Y}(N)_{t}=y_{0}+\sum_{n=1}^{N}g^{\circ n}(y_{0})\underset{u_{1}<...<u_{n},u_{1},...,u_{n}\in[0,T]}{\int}dX_{u_{1}}\otimes...\otimes dX_{u_{n}} \tag{13}\]
Clearly, \(\hat{Y}(N)_{t}\) is linear in the truncated signature of X up to order N19.
Footnote 19: Moreover, the error bounds of \(\hat{Y}(N)_{t}\) to approximate \(Y_{t}\) yield a factorial decay in terms of N, i.e. \(|Y_{t}-\hat{Y}(N)_{t}|\leq C\frac{\|X\|^{N+1}}{(N+1)!}\). This result can be extended to p-geometric rough paths where \(g\) is \(Lip(\kappa)\) with \(\kappa>p-1\) [31].
_Example_. The simplest example of:
\[dY_{t}=g(Y_{t})dX_{t} \tag{14}\]
is a linear physics for a linear path X:
\[dY_{t}=Y_{t}dX_{t} \tag{15}\]
where:
\[g=g^{\circ 1} \tag{16}\]
\[g^{\circ n+1}=D(g^{\circ n})g \tag{17}\]
\[X_{t}=X_{0}+\frac{X_{T}-X_{0}}{T}t \tag{18}\]
and assuming:
\[y_{0}=1 \tag{19}\]
\[X_{0}=0 \tag{20}\]
Indeed, this yields the exponential function \(Y_{t}=\exp(X_{t})\). For non-linear driving signals (where the order of the events matters), one generally gets a non-commutative version of the exponential function in Eq. (10)! For a path that is linear in time, the order of events does not matter and the \(n\)-fold iterated integral is simply the increment of the path raised to the power \(n\), divided by \(n!\) (i.e. the volume of an \(n\)-dimensional simplex).
This can be seen from:
\[Y_{t}=y_{0}+\sum_{n=1}^{\infty}g^{\circ n}(y_{0})\underset{u_{1}<...<u_{n},u_{1},...,u_{n}\in[0,t]}{\int}dX_{u_{1}}\otimes...\otimes dX_{u_{n}}=\sum_{n=0}^{\infty}\frac{X_{t}^{n}}{n!}=\exp(X_{t}) \tag{21}\]
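A short numerical illustration of this example (plain Python, purely for intuition): for linear physics \(g(Y)=Y\), a linear driving path with \(X_{0}=0\) and \(y_{0}=1\), the \(N\)-step expansion of Eq. (13) reduces to the truncated Taylor series of \(\exp(X_{T})\), and the truncation error decays factorially as in footnote 19.

```python
import math

X_T = 0.7                              # total increment of the driving path
Y_exact = math.exp(X_T)                # closed-form solution of dY = Y dX
for N in range(1, 8):
    # the n-fold iterated integral of a scalar linear path equals X_T**n / n!
    Y_hat = sum(X_T ** n / math.factorial(n) for n in range(N + 1))
    print(N, abs(Y_exact - Y_hat))     # error shrinks roughly like X_T**(N+1)/(N+1)!
```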
Appendix: VAE architecture

Each layer of the encoder and decoder networks applies an affine map followed by a leaky ReLU (LReLU) activation:

\[LReLU(x)=1_{x<0}\alpha x+1_{x\geq 0}x \tag{25}\]
where \(\alpha\) is a small constant called the slope of the ReLU. The \(J\) neurons are linearly combined into the next layer (in this case \(Z\)):
\[Z_{k}:=\sum_{j}^{J}\theta_{j,k}f_{\Theta_{j}} \tag{26}\]
for every \(k\) in \(D_{l}\). The decoder map can formally be written down like the encoder, but in reverse order.
### Loss function
The loss function of a VAE generally consists of two components, the latent loss (\(\mathcal{L}_{L}\)) and the reconstruction loss (\(\mathcal{L}_{R}\)):
\[\mathcal{L}(X,X^{\prime})=\beta\mathcal{L}_{L}+(1-\beta)\mathcal{L}_{R} \tag{27}\]
The latent loss is the Kullback-Leibler discrepancy between the latent distribution under its encoded parametrization, the posterior \(f_{\Theta}(X)=\mathbb{P}_{\Theta}(Z|X)\), and its theoretical distribution, e.g. multi-variate Gaussian \(\mathbb{P}(Z)\). Appendix B in [38] offers a simple expression for \(\mathcal{L}_{L}\). The reconstruction loss is the cost of reproducing \(\mathbb{P}_{\Theta}(X^{\prime})\) after the dimension reduction step, and originally computed by the root of the mean squared error (RMSE or \(L2\)-loss) between X and X'.
\[\begin{split}\mathcal{L}(X,X^{\prime})=-\beta\frac{1}{2}\sum_{k}^{D_{l}}&(1+\sigma_{k}-\mu_{k}^{2}-\exp(\sigma_{k}))\\ &+(1-\beta)\mathbb{E}(\|X-X^{\prime}\|^{2})\end{split} \tag{28}\]
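A hedged PyTorch sketch of this weighted objective, with \(\sigma\) denoting the encoder's log-variance as in the convention of Kingma and Welling [38]; the variable names are illustrative, not the authors' code.

```python
import torch

def vae_loss(x, x_prime, mu, log_var, beta=0.5):
    # latent Kullback-Leibler term of Eq. (28), summed over the D_l latent dimensions
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1).mean()
    recon = ((x - x_prime) ** 2).mean()        # L2 reconstruction term
    return beta * kl + (1 - beta) * recon
```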
### Training
In terms of training, the learning algorithm is analogous to most deep learning methods. Optimal loss values \(\mathcal{L}^{*}\) are determined by stochastically sampling batches of data and alternating forward and backward passes through the VAE. For each batch the data is first passed through the encoder network and decoder network (forward pass), after which \(\mathcal{L}\) is evaluated in terms of \(\Theta\). At each layer, the derivative of \(\mathcal{L}\) with respect to \(\Theta\) can easily be evaluated. Next (backward pass), the calculated loss backpropagates through the network, and \(\Theta\) is adjusted in the direction of the negative gradient \(\nabla_{\Theta}\mathcal{L}\) with the learning rate as step size. The exact optimizer algorithm we used for this is Adam (Adaptive moment estimation, [43]). Finally, we also use a concept called regularization, which penalizes neural models that become too complex or overparametrized. We used a tool called dropout, which during training randomly sets a proportion of the connections in \(\Theta\) to zero at each step, discouraging the network from relying too heavily on any individual connection.
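A minimal, self-contained sketch of one such training step (encode, reparameterize, decode, Adam update), reusing the `vae_loss` helper sketched above and with toy layer sizes taken from the hyperparameters below; it illustrates the mechanics only and is not the authors' implementation.

```python
import torch
import torch.nn as nn

tau, D_l = 20, 10
encoder = nn.Sequential(nn.Linear(tau, 50), nn.LeakyReLU(), nn.Dropout(0.01), nn.Linear(50, 2 * D_l))
decoder = nn.Sequential(nn.Linear(D_l, 50), nn.LeakyReLU(), nn.Dropout(0.01), nn.Linear(50, tau))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=0.001)

def training_step(batch, beta=0.5):
    mu, log_var = encoder(batch).chunk(2, dim=1)                 # forward pass (encode)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)     # reparameterization trick
    x_prime = decoder(z)                                         # forward pass (decode)
    loss = vae_loss(batch, x_prime, mu, log_var, beta)           # Eq. (28), sketched above
    optimizer.zero_grad()
    loss.backward()                                              # backward pass
    optimizer.step()                                             # Adam update of Theta
    return loss.item()

print(training_step(torch.randn(50, tau)))                        # one batch of size N_b=50
```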
### Hyperparameters
In summary, the hyperparameters of this architecture are: (1) the number of neurons in the encoder, (2) the number of neurons in the decoder, (3) the latent dimension \(D_{l}\), (4) the learning rate \(l\), (5) the optimizer algorithm, (6) the dropout rate, (7) the batch size \(N_{b}\), (8) batch length \(\tau\) and (9) number of training steps \(N_{t}\). We opted for the following set up, which was optimized using grid search: 50, 50, 10, 0.001, Adam, 0.01, 50, 20, 200 (with early stopping criteria21).
Footnote 21: Not all \(N_{t}=200\) steps are executed if the objective values (both total and individual terms) are not improved over e.g. the last \(I\) training iterations. For a standard VAE this is the total objective value, latent loss and the reconstruction loss, for the \(\xi\)-VAE drawdown convergence is now an additional criterion. After some fine-tuning \(I\) was set to 3.
### Generation
After training, in the sampling or generation step, we start from a random \(D_{l}\)-dimensional noise \(\epsilon\sim\mathbb{P}(Z)\) which is \(D_{l}\)-variate Gaussian. Now, we simply need one decode step to generate new samples of \(\mathbb{P}_{\Theta}(X^{\prime})\).
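In code, this generation step is a single pass of Gaussian noise through the trained decoder (continuing the sketch above; the number of samples is arbitrary):

```python
import torch

def generate_paths(decoder, n_samples, latent_dim):
    z = torch.randn(n_samples, latent_dim)      # epsilon ~ P(Z), D_l-variate Gaussian
    with torch.no_grad():
        return decoder(z)                       # one decode step -> new synthetic paths

synthetic = generate_paths(decoder, n_samples=1000, latent_dim=10)
```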
## Appendix 3 Numerical results
_Consistency and convergence of linear drawdown approximation in the signature space_
### Empirical data overview
Figure 8: _Train and test RMSE as a function of the roughness \(H\)_
Figure 9: _Data overview: return, volatility and drawdown heterogeneity among asset classes._
Figure 10: _Drawdown distribution of a real-world mixed asset class portfolio versus the fitted theoretical drawdown distribution if the underlying DGP were Brownian Motion (BM) with the same average volatility as the sample paths._
Figure 11: _Zoom on the tail of the empirical versus theoretical drawdown distribution._
### _Drawdown market generator_
Figure 12: Train fit empirical data for one set of portfolio weights (equal weighted): scatter (left) and QQ plot (right)
Figure 13: Test fit empirical data for one set of portfolio weights (equal weighted): scatter (left) and QQ plot (right)
Figure 14: Generated versus actual drawdowns and returns distribution
Figure 15: Generated versus actual drawdowns scatter
Figure 16: Drawdown market generator: total loss convergence (rescaled)
Figure 17: Drawdown market generator: latent (KL) loss convergence (rescaled)
Figure 19: Drawdown market generator: drawdown reconstruction loss convergence (rescaled)
Figure 18: Drawdown market generator: total reconstruction loss convergence (rescaled) |
2309.10572 | Investigating the impact of quasar-driven outflows on galaxies at
redshift 0.3-0.4 | We present a detailed study of the kinematics of 19 QSO2s in the range
0.3<z<0.41 and [OIII] luminosities $L_{[OIII]} > 10^{8.5}$L$_{\odot}$. We aim
at advancing our understanding of the AGN feedback phenomenon by correlating
outflow properties with the presence of young stellar populations (YSPs) with
ages <100 Myr, the optical morphology and the environment of the galaxies, and
the radio luminosity.
We characterize the ionized gas kinematics using the
[OIII]$\lambda$5007$\r{A}$ profiles, through three different outflow detection
methods: multi-component parametric and flux-weighted and peak-weighted
non-parametric.
We detect ionized outflows in 18 QSO2s using the parametric analysis, and in
all of them using the non-parametric methods. We find higher outflow masses
using the parametric analysis (log M$_{OF}$(M$_{\odot}$)=6.47$\pm$0.50), and
larger mass rates and kinetic powers with the flux-weighted non-parametric
method (\.M$_{OF}$=4.0$\pm$4.4 M$_{\odot}$ yr$^{-1}$ and
log(\.E$_{kin}$)=41.9$\pm$0.6 erg~s$^{-1}$). However, it is when we use the
parametric method and the maximum outflow velocities that we measure the
highest outflow mass rates and kinetic energies (23$\pm$35 M$_{\odot}$
yr$^{-1}$ and 42.9$\pm$0.6 erg s$^{-1}$). We do not find any significant
correlation between the outflow properties and the previously mentioned galaxy
properties.
4 out of 5 QSO2s without a YS<100 Myr show highly disturbed kinematics,
whereas only 5 out of the 14 QSO2s with YSPs show similarly asymmetric [OIII]
profiles. This might be indicative of negative feedback. The lack of
correlation between the outflow properties and the presence of mergers in
different interaction stages might be due to their different dynamical
timescales. Lastly, the small radio luminosity range covered by our sample may
be impeding the detection of any correlation between radio emission and outflow
properties. | K. Hervella Seoane, C. Ramos Almeida, J. A. Acosta Pulido, G. Speranza, C. N. Tadhunter, P. S. Bessiere | 2023-09-19T12:30:54Z | http://arxiv.org/abs/2309.10572v1 | # Investigating the impact of quasar-driven outflows on galaxies at z\(\sim\)0.3-0.4
###### Abstract
Context:
Aims:We present a detailed study of the kinematics of 19 type 2 quasars (QSO2s) with redshifts in the range 0.3\(<\)z\(<\)0.41 and [OIII] luminosities of \(L_{\rm[OII]}>10^{3.5}\)L\({}_{\odot}\). We aim at advancing our understanding of the AGN feedback phenomenon by correlating outflow properties with i) the presence of young stellar populations (YSPs) with ages \(<\)100 Myr, ii) the optical morphology and the environment of the galaxies, and iii) the radio luminosity.
Methods:We characterize the ionized gas kinematics using the [OIII]\(\lambda\)5007 A emission line profiles detected in intermediate spectral resolution (R\(\sim\)1500-2500) optical spectra of the QSO2s. To do so we employed three different outflow detection methods: multi-component parametric, flux-weighted non-parametric, and peak-weighted non-parametric.
Results:We detect ionized outflows in 18 of the 19 QSO2s using the parametric analysis, and in all of them using the non-parametric methods. We find higher outflow masses using the parametric analysis (average log M\({}_{\rm OF}\)(M\({}_{\odot}\))=6.47\(\pm\)0.50), and larger mass rates and kinetic powers with the flux-weighted non-parametric method (\(\dot{\rm M}_{\rm OF}\)=4.0\(\pm\)4.4 M\({}_{\odot}\) yr\({}^{-1}\) and log(\(\dot{\rm E}_{\rm kin}\))=41.9\(\pm\)0.6 erg s\({}^{-1}\)). However, it is when we use the parametric method and the maximum outflow velocities (v\({}_{\rm max}\)) that we measure the highest outflow mass rates and kinetic energies (\(\dot{\rm M}_{\rm OF}\)=23\(\pm\)35 M\({}_{\odot}\) yr\({}^{-1}\) and log(\(\dot{\rm E}_{\rm kin}\))=42.9\(\pm\)0.6 erg s\({}^{-1}\)).
Conclusions:Four of the five QSO2s without a YSP of age \(<\)100 Myr show highly disturbed kinematics, whereas only 5 out of the 14 QSO2s with YSPs show similarly asymmetric [OIII] profiles. Despite the small sample size, this might be indicative of negative feedback. The lack of correlation between the outflow properties and the presence of mergers in different interaction stages might be due to their different dynamical timescales (Myr in the case of the outflows versus Gyr in the case of the mergers). Lastly, the small radio luminosity range covered by our sample, log(L\({}_{\rm{5GHz}}\))=[22.1, 24.7] W Hz\({}^{-1}\), may be impeding the detection of any correlation between radio emission and outflow properties.
## 1 Introduction
All galaxies, or at least the most massive ones, are thought to experience short episodes of nuclear activity, of \(\leq\)100 Myr (Martini, 2004; Novak et al., 2011; Schawinski et al., 2015). These nuclear activity phases are considered key drivers of galaxy evolution because they can potentially regulate black hole and galaxy growth (Di Matteo et al., 2005; Harrison, 2017).
Cosmological simulations require active galactic nuclei (AGN) feedback for quenching star formation and preventing galaxies from becoming over-massive (Di Matteo et al., 2005; Dubois et al., 2016), thereby correctly reproducing the observed galaxy-halo mass relations (Silk & Rees, 1998; Croton et al., 2006; Moster et al., 2010). Furthermore, observational studies have found plenty of evidence of this feedback on different scales, from the central tens to hundreds of parsecs (Garcia-Burillo et al., 2021; Ramos Almeida et al., 2022) to hundreds of kpc (Rupke et al., 2019; Martin-Navarro et al., 2021).
Multi-phase outflows of molecular, neutral, and ionized gas (Rupke & Veilleux, 2013; Cicone et al., 2018; Herrera-Camus et al., 2020; Fluetsch et al., 2021) are one of the primary AGN feedback mechanisms that can quench star formation by heating up, disrupting, and ultimately removing the surrounding gas available to form stars. However, AGN-driven outflows have also been found to have the opposite effect, often referred to as "positive feedback", as they can promote star formation by pressurizing the gas and enhancing fragmentation (Klamer et al., 2004; Cresci et al., 2015; Cresci & Maiolino, 2018; Carniani et al., 2016; Bessiere & Ramos Almeida, 2022). Hence, we still need to advance in our understanding of their actual impact on star formation.
The drivers of these multi-phase outflows have also been a subject of study, with radio jets or AGN-winds as the main potential candidates (Mukherjee et al., 2018; Fischer et al., 2019). For example, jetted Seyfert galaxies (Whittle, 1992; Garcia-Burillo et al., 2014, 2019; Morganti et al., 2015) have been found to show larger outflow velocities than those without jets, suggesting an influence of these jets in launching and/or accelerating the outflows (Wylezalek & Morganti, 2018; Jarvis et al., 2019). Furthermore, even in the case of lower radio-power Seyferts and radio-quiet quasars, recent studies have found evidence that compact and modest radio jets might induce nuclear outflows (Aalto et al., 2016; Audibert et al., 2019; Fernandez-Ontiveros et al., 2020; Garcia-Bernete et al., 2021; Speranza et al., 2022; Audibert et al., 2023). The influence of jets on the ionized gas kinematics has been also studied using low angular resolution data, as e.g., from the Sloan Digital Sky Survey (SDSS): Mul
laney et al. (2013) and Zakamska & Greene (2014) found that the higher the radio luminosity, the more disrupted the [OIII] kinematics. However, it has also been claimed that this correlation disappears if the influence of the host galaxy gravitational potential is taken into account (see Ayubinia et al., 2023 and references therein).
Mergers and interactions have long been known as an AGN triggering mechanism (Canalizo & Stockton, 2001; Hopkins et al., 2008; Ramos Almeida et al., 2011, 2012; Bessiere et al., 2012; Satyapal et al., 2014; Goulding et al., 2018; Pierce et al., 2023). Given that mergers can simultaneously enhance nuclear star formation and nuclear activity (e.g., Satyapal et al., 2014), it might be possible that outflow incidence and properties might show a dependence with galaxy morphology and/or environment. For example, some of the most powerful outflows in the local universe are found in ultra-luminous infrared galaxies (ULIRGs; Cicone et al., 2014; Rose et al., 2018; Lamperti et al., 2022), which are almost uniquely associated with major mergers. Apart from these, the merger-induced gas flows may also provide a rich ISM with plenty of cool gas for the radiation-driven winds and any jets to interact with, thus increasing the coupling with the hosts and making the outflows easier to detect.
The properties of AGN-driven outflows have been shown to depend on AGN luminosity, being faster and more powerful as the AGN luminosity increases (Zakamska & Greene, 2014; Fiore et al., 2017). However, recent works showed that other factors, including nuclear gas concentration and the coupling between the winds/jets and the galaxy discs might also play a key role (Ramos Almeida et al., 2012; Audibert et al., 2023). Type 2 quasars (QSO2s) in the local universe are ideal laboratories to characterize outflows and study their impact on the host galaxies. This is because the outflows are easier to identify that the lower velocity outflows of Seyfert galaxies, and the high obscuration (either nuclear or galaxy-wide; Ramos Almeida & Ricci, 2017) blocks the emission from the broad-line region and the AGN continuum, avoiding dilution of the emission line and stellar absorption features.
The most widely studied gas outflow phase is the warm ionized (T\(\sim\)10\({}^{4}\) K), since it emits strong forbidden emission lines in the infrared and optical range, being the [OIII]\(\lambda\)5007 A one of the strongest. QSO2s are commonly found to present complex emission line profiles, showing large blue asymmetries and deviations from Gaussianity (Greene et al., 2011; Liu et al., 2013; Villar-Martin et al., 2011, 2014; Harrison et al., 2014; Ramos Almeida et al., 2017, 2019), indicating highly-disrupted kinematics. The overall majority of AGN outflow studies characterized the ionized gas kinematics following a parametric approach (Holt et al., 2011; Greene et al., 2011; Villar-Martin et al., 2011; Arribas et al., 2014), considering that each gaseous component that contributes to the kinematics can be described by a Gaussian distribution. They have the advantage of making it possible to characterize the properties of different gas components, yet, for objects with highly disrupted kinematics it becomes difficult to ascribe a physical meaning to each component (Bessiere & Ramos Almeida, 2022). Hence, other studies implemented a non-parametric analysis, based on measuring emission line velocities at fixed fractions of the peak intensity or the cumulative line flux function (Whittle, 1985; Harrison et al., 2014; Zakamska & Greene, 2014; Speranza et al., 2021; Bessiere & Ramos Almeida, 2022). Non-parametric methods are better for characterizing low signal-to-noise spectra with complex emission line profiles. Although they do not allow us to separate different gaseous contributions, they can easily identify high velocity gas in tail ends of emission line wings. Despite all the recent works making use of both parametric and non-parametric methods to characterize AGN-drive outflows in the literature, to the best of our knowledge, no comparison between their results has been performed for a common sample of objects.
Considering the previous, in this work we use three different outflow detection methods (parametric, flux-weighted non-parametric, and the peak-weighted non-parametric from Speranza et al. 2021) for characterizing the ionized gas kinematics of 19 QSO2s with redshifts in the range 0.3\(<\)z\(<\)0.41 and [OIII] luminosities of \(L_{\rm[OIII]}>10^{8.5}\)L\({}_{\odot}\). Furthermore, with the aim of advancing in our understanding of the AGN feedback phenomenon, we study potential correlations between the QSO2s outflow physical properties and different AGN and host galaxy properties, including i) the presence of young stellar populations (YSPs) with ages \(<\)100 Myr, ii) the optical morphology and environment, and iii) the AGN and radio luminosity.
In Section 2 we describe the QSO2 sample and the spectroscopic data used here, as well as the previously studied AGN and host galaxy properties. In Section 3 we explain the three different methods we used for analyzing the ionized gas kinematic and present the results. Section 4 includes the physical outflow properties derived through each of the three methods. In Section 5 we evaluate the possible correlations between the outflow properties and different AGN and host galaxy properties. In Section 6 we discuss the results, and finally in Section 7 we summarize the findings of our work. Throughout this paper we assumed a cosmology with H\({}_{0}\)=70 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}\)=0.3 and \(\Omega_{\Lambda}\)=0.7.
## 2 Sample and data
Our QSO2 sample is based on the one studied by Bessiere et al. (2012), which is a subset of the narrow-line AGN catalogue of Zakamska et al. (2003), selected from SDSS (York et al., 2000) using emission line ratio diagnostic diagrams. We specifically selected this QSO2 sample because several host galaxy properties, including the stellar populations, optical morphologies, and environments were already characterized (Bessiere et al., 2012, 2017; Ramos Almeida et al., 2013), allowing us to investigate potential correlations between them and the outflow properties that we report in this work.
Bessiere et al. (2012) selected the 20 QSO2s with right ascension (RA) in the range 23\(<\)RA\(<\)10 h, declination (\(\delta\)) of \(\delta\)\(<\)+20\({}^{\circ}\), redshifts in the range 0.3\(<\)z\(<\)0.41, and emission line luminosities of L\({}_{\rm[OIII]}\)\(>\)10\({}^{8.5}\) L\({}_{\odot}\). The luminosity cut is equivalent to an absolute magnitude of M\({}_{\rm B}\)\(<\) -23 mag, which is the classical definition of quasar. Subsequent updates to the [OIII] luminosities were reported for these QSO2s in Reyes et al. (2008), which are included in Table 1. These updated luminosities implied that six objects from the original selection fall marginally below the quasar L\({}_{\rm[OIII]}\) cut, but Bessiere et al. (2012) chose to retain the original sample of 20 objects. In a more recent work, Bessiere et al. (2017) reported deep optical spectroscopic observations for 19 of the 20 QSO2s. SDSS J015911+14392 (J0159+1439) was excluded from their analysis because they were unable to obtain spectroscopic data of sufficient quality for modelling the stellar populations. This sample of 19 QSO2s is then 95% complete and it constitutes our QSO2 sample. Table 1 includes the AGN bolometric luminosities of the QSO2s, which range from log L\({}_{\rm BOL}\)=44.9 to 46.7 erg s\({}^{-1}\), with a median value of 45.5 erg s\({}^{-1}\). These luminosities were calculated from the extinction-corrected [OIII] luminosities from Kong & Ho (2018) and using the correction factor of 454 from Lamastra et al. (2009).
### Spectroscopic data
The optical spectroscopic data used in this work, described in Bessiere et al. (2017), were mainly obtained using the Gemini Multi-Object Spectrograph (GMOS-S) mounted on the Gemini South telescope at Cerro Pachon, Chile. Long-slit spectra were taken for 16 objects during semesters 2010B and 2011B using a relatively wide slit of 1.5 arcsec, in both the B600 and the R400 gratings. The observations consisted of 4\(\times\)675 s exposures using the B600 grating and 3\(\times\)400 s exposures using the R400 grating. The average spectral resolutions measured from the sky lines were 7.2 and 11.4 A for the blue and red wavelength ranges, respectively. However, since all the observations but one (J2358-00) have a seeing full width at half maximum (FWHM) smaller than the slit width, the actual spectral resolutions range from 4.0-8.2 A for the different spectra.
Different spectroscopic observations were compiled for the other three QSO2s in the sample, either because no GMOS-S observations were obtained (J0217-00 and J0114+00) or because [OIII] was not covered by the GMOS-S spectrum (J0142+00, for which only the blue range was observed). J0217-00 was observed with the Gran Telescopio Canarias (GTC), at the Roque de los Muchachos Observatory, La Palma, Spain. The Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS) instrument was used in long-slit spectroscopy mode, using a slit of 1.23 arcsec width. The target was observed with the R2000B and R2500R grisms for integration times of 3\(\times\)1800 s and 3\(\times\)1200 s, which led to spectral resolutions of 4 and 5 A, respectively. Finally, the spectra of J0114+00 and J0142+00 were obtained from SDSS, which uses a standard 3 arcsec fiber covering an observed wavelength range of 3800-9200 A, with a spectral resolution (R) varying from R\(\sim\)1500 at 3800 A to R\(\sim\)2500 at 9000 A. This spectral resolution corresponds to a FWHM=2.78 A at 7000 A.
Here we used the reduced and flux-calibrated data from Bessiere et al. (2017), where further details can be found. Large extraction apertures of 1.5-2 arcsec (\(\approx\)8 kpc) were used there with the aim of performing stellar population modelling of the QSO2 host galaxies. Here, since our goal is studying the nuclear gas kinematics, we extracted the spectra using a diameter determined by the size of the seeing of each observation. The seeing was measured from the FWHM of the stars detected in the corresponding acquisition images of the QSO2s (0.53-1.13 arcsec \(\approx\) 2.7-6.1 kpc; see Table 1), with the exception of J2358-00, for which we used an aperture of 8.5 kpc given that the seeing of 1.58 arcsec is larger than the slit width. For extracting these nuclear 1D spectra from the reduced and calibrated 2D spectra, we used the task _apall_ within the IRAF package _twodspec_. First, we summed the flux contribution for the wavelengths centered around the [OIII]\(\lambda\)5007 emission line. Then, by plotting this flux against the spatial axis we located the maximum of the emission, and extracted a seeing-sized aperture centered at this position. In the case of the two QSO2s with SDSS data, the spectra correspond to physical sizes of 15.8 kpc, set by the size of the fiber. We chose the [OIII] emission line to perform the kinematic analysis of the ionized gas because it is intense and AGN-dominated in the case of QSO2s, and it is not blended with other emission lines. In the case of the targets for which the [OIII] line was detected in both the red and the blue GMOS-S spectra, we used the blue spectrum because of its higher spectral resolution.
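The extraction logic just described (locating the [OIII] emission peak along the slit and summing a seeing-sized aperture around it) can be illustrated with the minimal Python sketch below; this is not the actual IRAF _apall_ procedure, and the array names, wavelength window, and function signature are placeholders.

```python
import numpy as np

def extract_nuclear_spectrum(frame2d, wavelength, z, seeing_fwhm_pix):
    """Extract a seeing-sized nuclear 1D spectrum centred on the [OIII]5007 peak.

    frame2d         : reduced, calibrated 2D spectrum (spatial rows x spectral columns)
    wavelength      : observed wavelength axis in Angstrom (1D, one value per column)
    z               : redshift of the QSO2
    seeing_fwhm_pix : seeing FWHM in spatial pixels (sets the extraction diameter)
    """
    # Sum the flux in a window centred on redshifted [OIII]5007 (window width illustrative)
    line_obs = 5007.0 * (1.0 + z)
    in_window = np.abs(wavelength - line_obs) < 15.0
    spatial_profile = frame2d[:, in_window].sum(axis=1)

    # Locate the emission peak along the slit and extract a seeing-sized aperture around it
    centre = int(np.argmax(spatial_profile))
    half = max(1, int(round(seeing_fwhm_pix / 2.0)))
    rows = slice(max(0, centre - half), centre + half + 1)
    return frame2d[rows, :].sum(axis=0)
```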
### Galaxy properties
The selected sample has been previously studied by Bessiere et al. (2012, 2017) and Ramos Almeida et al. (2013). From these works we can get information about multiple properties of the QSO2s and their host galaxies, including the presence of YSPs, optical morphologies, environments, and radio emission (see Table 1). One of our goals is to search for correlations between these AGN/galaxy properties and those of the ionized outflows that we characterize here, for the first time in these QSO2s.
Bessiere et al. (2017) presented a detailed study of the stellar populations of the host galaxies of the 19 QSO2s in our sample plus another two additional QSO2s. That study is based on the modeling of the spectra described above and extracted in a large aperture of \(\sim\)8 kpc, centred on the peak continuum flux. For fitting the spectra, they used a combination of up to two stellar population models that are representative of viable star formation histories, and a power-law of varying spectral index, to account for the contribution from scattered AGN light. Based on this analysis, Bessiere et al. (2017) concluded that 71% of the 21 QSO2 host galaxies require the inclusion of a YSP with an age\(<\)100 Myr for correctly modelling the stellar features detected in the optical range. From the sample of 19 QSO2s studied here, 14 host galaxies (74%) need the inclusion of this YSP (see Table 1). In the following, we just focus on the detection/non-detection of these YSPs in the QSO2s because their ages are comparable with the current phase of quasar activity (Martini 2004; Hickox et al. 2014).
A full analysis of the optical morphologies of our QSO2 sample is presented in Bessiere et al. (2012). They visually inspected deep optical broad-band images, also observed with the GMOS-S instrument on the Gemini South telescope. They claimed that 15 of the 19 QSO2s (79% of the sample) show signs of galaxy interactions in the form of tails, shells, fans, irregular features, amorphous haloes, and/or double nuclei. Based on the presence or absence of these structures, Bessiere et al. (2012) classified the host galaxies in four groups that correspond to different stages of the interaction between two galaxies. These groups are the following.
(1) The QSO2 host is part of a galaxy pair in tidal interaction.
(2) The galaxy presents other signs of morphological disturbance such as fans, shells, and/or tails.
(3) The system has multiple nuclei separated by \(\leq\)10 kpc.
(4) Isolated galaxy with no signs of morphological disturbance.
Bessiere et al. (2012) also reported the 5 GHz radio luminosities of the QSO2s, which are listed in Table 1. They were calculated using the integrated flux at 1.4 GHz, obtained either from the FIRST or NVSS surveys, assuming a spectral index of \(\alpha\)=-0.75 when unknown. These 5 GHz luminosities are in the range log(L\({}_{\rm 5GHz}\))=22.1-24.7 W Hz\({}^{-1}\), with a median value of 22.35 W Hz\({}^{-1}\), excluding the six QSO2s with upper limits (\(<\)22.6 W Hz\({}^{-1}\)).
Lastly, Ramos Almeida et al. (2013) characterized and quantified the environment of the QSO2s by means of the spatial clustering amplitude, B\({}_{gq}\), using the deep GMOS-S optical imaging data from Bessiere et al. (2012). The spatial clustering amplitude measures the excess in the number of galaxies around the target as compared with the expected number of background galaxies, assuming that the galaxy clustering is spherically symmetric around each target. Both the neighbours and the background galaxies were counted as the total number of galaxies surrounding the QSO2 (or its corresponding position in the offset fields) within a projected distance of a 170 kpc radius and having magnitudes between m-1 and m+2, with m being the magnitude of a generic galaxy at the redshift of the target. More information about the procedure can be found in Ramos Almeida et al. (2013). In Table 1 we show the values of \(\mathrm{B}^{av}_{gq}\), calculated using all the offset fields (i.e., not including the QSO2) to determine the average number of background galaxies. Most of the QSO2s have low values of \(\mathrm{B}^{av}_{gq}\), corresponding to low-density environments. The average value reported by Ramos Almeida et al. (2013) for the QSO2s is 151\(\pm\)76, and for comparison, values of \(\mathrm{B}^{av}_{gq}\)\(\geq\)400 are typical of cluster environments. Indeed, Ramos Almeida et al. (2013) compared the values of \(\mathrm{B}^{av}_{gq}\) obtained for the QSO2s and for a control sample of quiescent early-type galaxies and did not find any significant difference between them.
## 3 Kinematic analysis
We analyzed the nuclear [OIII]\(\lambda\lambda\)4959,5007 A profiles in order to investigate the kinematic properties of our sample, especially the possible presence of outflows. As explained in Section 2.1, we extracted a nuclear spectrum of each source by choosing an aperture comparable to the seeing of each observation (see Table 1). These apertures correspond to physical sizes of \(\sim\)3-6 kpc, except for J2358-00, which is 8.5 kpc. For the two QSO2s for which only SDSS spectra are available, J0114+00 and J0142+00, the fiber size corresponds to \(\sim\)15.8 kpc.
For characterising the ionized gas kinematics, the main methods applied in the literature are based either on a parametric approach or a non-parametric one. At every spatial location different gaseous components with different kinematics can contribute to the flux and modify the shape of the line profiles: gas in the narrow-line region (NLR), outflowing gas, contributions from companion objects, etc. Parametric methods are based on the assumption that each of the different kinematic components can be described by a Gaussian distribution. They have the advantage of making it possible to characterize the properties of the different kinematic components found, including density and reddening (Holt et al., 2011; Villar-Martin et al., 2014; Arribas et al., 2014). The challenge is that in some objects with disrupted kinematics, it is difficult to ascribe a physical meaning to all the fitted kinematic components (Bessiere & Ramos Almeida, 2022). On the other hand, the non-parametric analysis consists of measuring emission line velocities at various fixed fractions of the peak intensity (Speranza et al., 2021) or of the line cumulative flux function (Whittle, 1985; Harrison et al., 2014). This method is most appropriate for characterizing gas properties in galaxies with multiple kinematic components, as it permits easy identification of high-velocity gas in the tail ends of emission line wings. However, non-parametric methods do not allow us to characterize the relative contribution of the different gaseous components at play.
Considering the above, here we adopt both parametric and non-parametric methods to characterize the ionized gas kinematics of the QSO2s. By doing this we can investigate the dependence of the outflow properties, when present, on the method employed to measure them. First, we modelled the [OIII]\(\lambda\lambda\)4959,5007 A emission line profiles using a parametric method (i.e., fitting Gaussians to the line profiles). Second, taking advantage of the parametric fit of the [OIII]\(\lambda\)5007 A line, we used both flux-weighted and peak-weighted non-parametric methods to characterize the line profile. In Section 4 we compare the results obtained with the different methods that we describe below.
### Parametric method
We fitted the [OIII]\(\lambda\lambda\)4959,5007 profiles with multiple Gaussian components, using a Python program that we developed, based on the _Specutils_ and _Astropy_ packages.
Initially, before fitting the emission lines, the underlying continuum is subtracted by linearly interpolating the spectrum between two line-free regions close to the [OIII] doublet, redwards and bluewards (see the left panel of Figure 1). To select the number of Gaussians fitted to each of the QSO2s, we added new components until the improvement of the reduced \(\chi^{2}\) (\(\Delta\chi^{2}\)) of the residuals is lower than 10%, following Bessiere & Ramos Almeida (2022). We also imposed that any component must be broader than the instrumental width (see Section 2.1), and that its corresponding parameters must be larger than their associated uncertainties. An example of a fit including two Gaussians is shown in the right panel of Figure 1.
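The component-addition criterion can be sketched as follows; `fit_n_gaussians` is a hypothetical helper standing in for our fitting routine, and the 10% threshold on the reduced \(\chi^{2}\) improvement is the one quoted above.

```python
import numpy as np

def choose_n_components(wave, flux, noise, fit_n_gaussians, max_n=4):
    """Add Gaussian components until the reduced chi^2 of the residuals improves
    by less than 10%. `fit_n_gaussians(wave, flux, n)` is a placeholder for the
    fitting routine; it must return (model_flux, n_free_parameters)."""
    best_n, best_chi2 = None, np.inf
    for n in range(1, max_n + 1):
        model, n_par = fit_n_gaussians(wave, flux, n)
        chi2_red = np.sum(((flux - model) / noise) ** 2) / (flux.size - n_par)
        # Stop if the relative improvement over the previous model is below 10%
        if best_n is not None and (best_chi2 - chi2_red) / best_chi2 < 0.10:
            break
        best_n, best_chi2 = n, chi2_red
    return best_n, best_chi2
```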
Each kinematic component fitted to the [OIII] doublet corresponds to a set of two Gaussians simultaneously fitted (i.e., sharing the same width and velocity shift relative to systemic), with fixed wavelength separation (taking into consideration the cosmological spread of the wavelength shift between the two lines) and an amplitude ratio of [OIII]\(\lambda\)4959/[OIII]\(\lambda\)5007 = 1/3 (Dimitrijevic et al., 2007). It is noteworthy that the GMOS-S spectra reduced and analyzed by Bessiere et al. (2017) have residual atmospheric absorption around the spectral region 6863-6913A. When this residual absorption lies on top or close to one of the emission lines, as it is the case of the QSO2s J0114+00, J0142+14, J0217-01, J0218-00, J0320+00, and J0923+01, we first fit just the unaffected [OIII] line, and then use the corresponding parameters to force the fit of the affected line. In the case of J0924+01, where the atmospheric absorption lies between the two emission lines, the affected wavelength range is masked before fitting the emission line profiles (see right panel of Figure 1).
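A minimal sketch of how such a tied doublet fit can be set up with _Astropy_ is given below; it shows a single kinematic component with illustrative initial guesses, and is not our exact implementation.

```python
from astropy.modeling import models, fitting

RATIO = 4958.911 / 5006.843  # [OIII]4959 / [OIII]5007 rest-wavelength ratio

def fit_oiii_doublet(wave, flux, z, width_guess=4.0):
    """Fit one kinematic component (two tied Gaussians) to a continuum-subtracted
    [OIII] doublet. wave in Angstrom; the initial guesses are illustrative."""
    g5007 = models.Gaussian1D(amplitude=flux.max(), mean=5006.843 * (1 + z),
                              stddev=width_guess)
    g4959 = models.Gaussian1D(amplitude=flux.max() / 3.0, mean=4958.911 * (1 + z),
                              stddev=width_guess * RATIO)
    doublet = g5007 + g4959

    # Same velocity shift and width for both lines, and a fixed 1/3 amplitude ratio
    doublet.mean_1.tied = lambda m: m.mean_0 * RATIO
    doublet.stddev_1.tied = lambda m: m.stddev_0 * RATIO
    doublet.amplitude_1.tied = lambda m: m.amplitude_0 / 3.0

    return fitting.LevMarLSQFitter()(doublet, wave, flux)
```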
All the fits and corresponding parameters derived from them (number of Gaussians, flux, FWHM, velocity shift from systemic, luminosity, and percentage luminosity of each component relative to the total luminosity of the line) are shown in Figure A.1 and Table A.1 in Appendix A. We note that the values of the FWHM have not been corrected for instrumental width, with the aim of keeping them comparable with W\({}_{80}\) (see Section 3.2). Nevertheless, the individual instrumental FWHMs are listed in Table A.1. The uncertainties of the parameters were computed through a Monte Carlo simulation, creating 1000 mock spectra by varying the flux at each wavelength, adding random values drawn from a normal distribution with an amplitude given by the noise of the continuum. The errors were then computed as the 1\(\sigma\) of each parameter distribution generated from the mock spectra.
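The Monte Carlo error estimation can be summarized with the following sketch, where `fit_params` is a placeholder for the fitting routine and the continuum rms is measured from the line-free regions.

```python
import numpy as np

def monte_carlo_errors(wave, flux, continuum_rms, fit_params, n_mock=1000, seed=0):
    """1-sigma parameter uncertainties from mock spectra, where Gaussian noise with
    the continuum rms is added to every pixel and the fit is repeated.
    `fit_params(wave, flux)` is a placeholder returning the best-fit parameter array."""
    rng = np.random.default_rng(seed)
    mocks = [fit_params(wave, flux + rng.normal(0.0, continuum_rms, flux.size))
             for _ in range(n_mock)]
    return np.std(np.array(mocks), axis=0)
```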
Following the methodology described above, we fitted n Gaussian components to the [OIII] profiles of the QSO2s and classified them as narrow, intermediate, and broad depending on the FWHM.
* Narrow (n): FWHM \(<\) 800 km s\({}^{-1}\).
* Intermediate (i): 800 km s\({}^{-1}\)\(\leq\) FWHM \(\leq\) 2000 km s\({}^{-1}\).
* Broad (b): FWHM \(>\)2000 km s\({}^{-1}\).
We used these values trying to ascribe a physical meaning to the different kinematic components. Here we assume that the narrow components are associated with gas in the NLR, whose emission lines typically present FWHMs\(\sim\)400-600 km s\({}^{-1}\)(Netzer, 1990; Groves, 2007). The intermediate components are broader than the typical NLR FWHMs, but narrower than the broad components from the broad-line region (BLR),
which have FWHM\(>\)2000 km s\({}^{-1}\). We note, however, that the broad components that we are measuring here for the [OIII] emission lines cannot be associated with the BLR because they are forbidden lines. Thus, the intermediate and broad components would be most likely associated with turbulent/outflowing ionized gas. With this in mind, the velocity shifts reported in Table 1 were computed relative to the narrow component, or to the centroid of the multiple narrow components when present. We consider these velocity shifts as the outflow velocities, relative to the average kinematics of the NLR. Since these are conservative estimates, we also calculate the outflow maximum velocities as v\({}_{\rm max}\)=v\({}_{\rm s}\)+2\(\sigma\) (Rupke & Veilleux, 2013), where \(\sigma\) is FWHM/2.355.
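For reference, the FWHM classification and the maximum-velocity definition translate into the simple helpers below; the sign handling is our assumption that the 2\(\sigma\) term is applied in the direction of the measured shift, consistent with the blueshifted v\({}_{\rm max}\) values quoted in the text.

```python
def classify_component(fwhm_kms):
    """Narrow / intermediate / broad classification from the FWHM cuts listed above."""
    if fwhm_kms < 800.0:
        return "n"
    return "i" if fwhm_kms <= 2000.0 else "b"

def v_max(v_shift_kms, fwhm_kms):
    """Maximum outflow velocity, v_max = v_s + 2*sigma with sigma = FWHM/2.355.
    The 2*sigma term is applied in the direction of the measured shift, so
    blueshifted components give negative v_max."""
    sigma = fwhm_kms / 2.355
    return v_shift_kms - 2.0 * sigma if v_shift_kms < 0.0 else v_shift_kms + 2.0 * sigma
```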
In addition to the (n), (i), and (b) components, the fits of the QSO2s J0217-01 and J0320+00 also include narrow and redshifted (r) components (see Table A.1) that, although they do not meet all the criteria described at the beginning of this section, were necessary to successfully reproduce the [OIII] profiles. However, these red components are not included in the outflow analysis.
We found intermediate and broad components in 18 of the 19 QSO2s, which implies an outflow incidence rate of \(\sim\)95%. The only QSO2 without intermediate or broad components is J0218-00. The outflow components that we measured for the other 18 QSO2s present an average FWHM of 1800 km s\({}^{-1}\), with a standard deviation of 1100 km s\({}^{-1}\). They are mainly blueshifted, with an average value of v\({}_{s}\) = -370 km s\({}^{-1}\) and a standard deviation of 400 km s\({}^{-1}\) (see Table 2), and a maximum velocity of v\({}_{\rm max}\) = -1800\(\pm\)1400 km s\({}^{-1}\).
We find that 14 of the QSO2s (74% of the sample) require the inclusion of more than two Gaussian components (2 QSO2s with four and 12 with three Gaussians) to correctly model the emission line profiles. The remaining 5 QSO2s (26% of the sample) are well fitted with just two Gaussian components. Such diversity of kinematic components detected in the [OIII] lines is common in QSO2s (Villar-Martin et al., 2011, 2016; Harrison et al., 2014; McElroy et al., 2015). However, the heterogeneous results from our parametric analysis make it difficult to characterize the outflow properties of the sample. For this reason, and taking advantage of the parametric analysis here described, we performed a non-parametric analysis of the emission line profiles. This analysis, described in Section 3.2, provides a more homogeneous set of results, therefore allowing us to easily evaluate possible correlations between the outflow and host galaxy properties (see Section 5).
### Non-parametric analysis
Here we describe two different non-parametric methods to characterize the [OIII]\(\lambda\)5007 A emission line: flux-weighted and peak-weighted. Both of them require a noiseless, isolated emission line profile, and hence we use the models resulting from the parametric method described in Section 3.1 (i.e., the sum of the Gaussian components used to fit the [OIII]\(\lambda\)5007 A emission line of each QSO2; see the right panel of Figure 1 for an example).
Figure 1: Parametric fit of the [OIII] doublet for the QSO2 J0924+01. The left panel shows the line-free regions, in blue, selected for fitting the underlying continuum, in orange. The right panel shows the two kinematic components fitted in the case of this QSO2, with the residuals included in the inset at the bottom. The green shaded region in the residuals corresponds to the region affected by atmospheric absorption.
Figure 2: Example of the flux-weighted non-parametric method employed in this work. The blue solid line is the parametric fit to the [OIII]\(\lambda\)5007 Å emission line shown in Figure 1 for J0924+01, with the continuum subtracted. The non-parametric velocity definitions included in this figure are the median velocity (v\({}_{\rm med}\)), the velocities at the 5, 10, 90, and 95% points of the normalized cumulative function of the emission line flux (v\({}_{\rm 05}\), v\({}_{\rm 10}\), v\({}_{\rm 90}\), and v\({}_{\rm 95}\)), and the width of the line containing 80% of the total flux (W\({}_{\rm 80}\)=v\({}_{\rm 90}\)-v\({}_{\rm 10}\)). The grey region corresponds to the area containing 80% of the total flux (i.e., between v\({}_{\rm 10}\) and v\({}_{\rm 90}\)), and the blue and red regions correspond to high-velocity gas, which we consider outflowing.
#### 3.2.1 Flux-weighted non-parametric analysis
We first use a flux-weighted non-parametric approach (Heckman et al., 1981; Harrison et al., 2014; Zakamska and Greene, 2014) to describe the kinematic properties of the modelled [OIII]\(\lambda\)5007 A emission line. The velocities \(v_{\rm 05}\), \(v_{\rm 10}\), \(v_{\rm 90}\), and \(v_{\rm 95}\) are defined as the velocities at the 5th, 10th, 90th, and 95th percentiles of the normalised cumulative function of the emission line flux. Other quantities based on this analysis that we use hereafter are the following.
* The peak velocity, \(v_{\rm p}\). It corresponds to the peak flux of the emission line, and it is representative of the narrow component(s) fitted in Section 3.1 (i.e., of NLR gas).
* The median velocity, \(v_{\rm med}\), corresponding to the 50th percentile of the velocity, also representative of NLR gas.
* The line width, \(\rm W_{80}\), defined as the width of the line containing 80% the total emission line flux, \(\rm W_{80}\)=\(v_{\rm 90}\)-\(v_{\rm 10}\).
* The velocity offset, \(\Delta v\)=\((v_{\rm 05}+v_{\rm 95})/2\). It quantifies the velocity offset of any blueshifted/redshifted component relative to systemic (i.e., the peak velocity, \(v_{p}\)).
* The asymmetry, a=\(\mid\)\(v_{\rm 90}-v_{\rm med}\mid-\mid\)\(v_{\rm 10}-v_{\rm med}\mid\). Negative (positive) asymmetry values indicate the presence of a blue (red) tail in the line profiles.
In the following, we consider that only the gas with velocities beyond \(v_{\rm 05}\) and \(v_{\rm 95}\) is outflowing, and define the corresponding outflow velocities (\(v_{\rm OF}\)) as the median velocity (\(v_{\rm 50}\)) of the blueshifted and redshifted wings (blue and red areas in Figure 2). In the example shown in Figure 2, the median velocity of the blue wing is -1926 \(\pm\) 37 km s\({}^{-1}\), and 789 \(\pm\) 21 km s\({}^{-1}\) for the red wing. For the whole sample, we measure \(v_{\rm OF}\)=-1120\(\pm\)510 km s\({}^{-1}\) for the blue wing, and 680\(\pm\)200 km s\({}^{-1}\) for the red. All the parameters and figures derived using this method can be seen in Appendix B. As in the case of the parametric method, the uncertainties are computed as the 1\(\sigma\) of each parameter distribution generated with a Monte Carlo simulation of 1000 mock spectra.
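A compact sketch of these flux-weighted measurements, computed on the noiseless model profile, is given below; the variable names are illustrative and the velocity grid is assumed to increase monotonically.

```python
import numpy as np

def flux_weighted_measurements(vel, flux):
    """Flux-weighted non-parametric quantities from a noiseless model of the
    [OIII]5007 profile. vel: velocity axis (km/s, increasing); flux: model profile."""
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]                               # normalised cumulative flux function

    def v_at(p):                                      # velocity at a given flux percentile
        return float(np.interp(p, cdf, vel))

    v05, v10, v50, v90, v95 = (v_at(p) for p in (0.05, 0.10, 0.50, 0.90, 0.95))
    out = {"v05": v05, "v10": v10, "v_med": v50, "v90": v90, "v95": v95,
           "W80": v90 - v10,                          # line width
           "dv": 0.5 * (v05 + v95),                   # velocity offset
           "a": abs(v90 - v50) - abs(v10 - v50)}      # asymmetry

    # Outflow velocities: median velocity of the gas beyond v05 (blue) and v95 (red)
    for key, mask in (("v_of_blue", vel <= v05), ("v_of_red", vel >= v95)):
        idx = np.flatnonzero(mask)
        c = np.cumsum(flux[idx])
        out[key] = float(np.interp(0.5 * c[-1], c, vel[idx])) if c.size and c[-1] > 0 else np.nan
    return out
```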
We find a wide range of emission line widths, with an average \(\rm W_{80}\) = 940 \(\pm\) 280 km s\({}^{-1}\). 12 of the 19 QSO2s present broad [OIII] line profiles, with \(\rm W_{80}\) \(>\)800 km s\({}^{-1}\). Regardless of this, here we consider that all the QSO2s have outflows of ionized gas, since by definition, every emission line profile will include velocities beyond \(v_{\rm 05}\) and \(v_{\rm 95}\). The average values of the asymmetry, \(v_{\rm 05}\), and \(v_{\rm 95}\) are -120\(\pm\)130 km s\({}^{-1}\), -820 \(\pm\) 320 km s\({}^{-1}\), and 530 \(\pm\) 140 km s\({}^{-1}\). All the QSO2s show negative asymmetry values except J0227+01. These negative values are related to the presence of blue tails in the profiles (see Figure 2 and Table 2 in Appendix B). Detecting blueshifted wings is more common than redshifted wings because usually we see the bulk of the outflowing gas that is coming toward us, whilst the receding side is partly hidden behind the host galaxy. In cases where the outflows subtend a small angle relative to the galaxy discs, it is possible to detect both the blueshifted and redshifted outflow components, or depending on the galaxy inclination, a dominant redshifted component. This might be the case for J0227+01. Lastly, it should be mentioned that the quasar J0218-00, which did not need the inclusion of an intermediate or broad component in the parametric model, presents low values of the asymmetry and the velocity offset.
#### 3.2.2 Peak-weighted non-parametric analysis
As we mentioned in Section 3.2.1, the flux-weighted non-parametric method likely overestimates the outflow mass because by definition, a certain proportion of the gas is outflowing, both blueshifted and redshifted. In addition, Speranza et al. (2021) argued that \(v_{\rm 05}\) might still be representative of rotating gas, which generally constitutes most of the emission line flux, and hence may not be fully representative of outflowing gas. Consequently, Speranza et al. (2021) proposed a different non-parametric analysis, based on the detection of asymmetries in the emission lines (see Figure 3 for an example), to measure the outflow properties.
For this method we also use the model of the [OIII]\(\lambda\)5007 A emission line profile from the parametric method. The core of the emission line is defined as the region between the peak of the line and one-third of the peak (i.e., the region that corresponds to \(\sim\)90% of the total flux of the line considering a Gaussian profile). By subtracting a mirror image of the less prominent wing from the most prominent one, the symmetric component of the emission line is removed, leaving just the asymmetric wing. This residual is what we associate with outflowing gas (orange area in Figure 3). For characterizing the outflow we use two parameters: the median velocity of the residual, \(v_{\rm OF}\) (the 50th percentile of its total flux), and the flux of the residual wing, \(\rm F_{w}\). As in Section 3.2.1, the uncertainties were estimated as 1\(\sigma\) of each parameter distribution from 1000 mock spectra. All the plots and parameters derived from the peak-weighted non-parametric analysis can be seen in Figure C.1 and Table C.1 in Appendix C.
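A minimal sketch of this mirroring procedure is given below; it assumes a model profile sampled on a uniform velocity grid and is an illustration of the method rather than the exact implementation of Speranza et al. (2021).

```python
import numpy as np

def peak_weighted_wing(vel, flux):
    """Peak-weighted non-parametric analysis on a noiseless model of [OIII]5007:
    mirror the weaker wing about the peak, subtract it from the stronger one outside
    the line core, and measure the residual wing (F_w, v_OF)."""
    ipk = int(np.argmax(flux))
    v_peak, f_peak = vel[ipk], flux[ipk]

    # Profile mirrored about the peak velocity, resampled on the original grid
    mirrored = np.interp(vel, (2.0 * v_peak - vel)[::-1], flux[::-1])

    # Positive residual outside the core (core = profile above 1/3 of the peak flux)
    residual = np.clip(flux - mirrored, 0.0, None)
    residual[flux > f_peak / 3.0] = 0.0

    idx = np.flatnonzero(residual)
    if idx.size == 0:
        return 0.0, np.nan
    flux_wing = float(np.sum(residual) * (vel[1] - vel[0]))  # assumes a uniform grid

    c = np.cumsum(residual[idx])
    v_of = float(np.interp(0.5 * c[-1], c, vel[idx]))        # median velocity of the residual
    return flux_wing, v_of
```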
Using this method we find that all the QSO2s in the sample show ionized outflows in the form of asymmetric wings. 18/19 QSO2s, including those with low values of \(\rm W_{80}\), like J0218-00, present blue wings. J0227+01 presents a red wing instead, in agreement with its positive asymmetry value. The average value of the outflow velocity computed using this method, considering the 18 QSO2s having blue wings, is \(v_{\rm OF}\)=-840\(\pm\)260 km s\({}^{-1}\), while for J0227+01 we obtain a value of 676\(\pm\)21 km s\({}^{-1}\) (see Tables 2 and 3).
## 4 Physical outflow properties
In Section 3 we described the three methods used here to characterize the ionized outflows of the QSO2s, and the corresponding results. The parameters derived from these fits are direct measurements (e.g., FWHM, \(\rm W_{80}\), a, \(\rm v_{\rm med}\)), but we can use them to derive physical properties, such as the outflow mass, outflow
Figure 3: Same as in Figure 2, but using the peak-weighted non-parametric method from Speranza et al. (2021). The black solid line in the blue side of the emission line corresponds to the mirror image of the red side. The grey area is the core of the line, defined as the region between the peak and 1/3 of the peak flux. The orange area is the result of subtracting the black from the blue line outside the core region, which is what we consider outflowing gas using this method. The black and grey vertical dashed lines are the peak and outflow velocities (\(v_{p}\) and \(\rm v_{OF}\)).
mass rate, and kinetic power. These are key quantities for investigating the outflow impact on the surrounding environment. First, we calculated the mass of the ionized outflows using Eq. 1 (Osterbrock and Ferland, 2006; Carniani et al., 2015; Fiore et al., 2017).
\[M_{\rm[OIII]}=4\times 10^{7}\,M_{\odot}\left(\frac{C}{10^{\rm[O/H]}}\right)\left(\frac{L_{\rm[OIII]}}{10^{44}\,{\rm erg\,s^{-1}}}\right)\left(\frac{10^{3}\,{\rm cm^{-3}}}{\langle n_{e}\rangle}\right) \tag{1}\]
where \(n_{e}\) is the electron density, C the condensation factor, L\({}_{\rm[OIII]}\) the luminosity of the outflowing gas, and [O/H] the oxygen abundance relative to solar, with 12+log(O/H)\({}_{\odot}\sim\) 8.86 (Centeno and Socas-Navarro, 2008). For the whole sample, we assumed that the gas clouds have the same electron density, leading to C = \(\langle n_{e}\rangle^{2}/\langle n_{e}^{2}\rangle\) = 1, and solar metallicity as in Bessiere et al. (2017), hence [O/H] = 0.
For ionized outflows, an important source of uncertainty is the electron density (Harrison et al., 2018; Holden and Tadhunter, 2023). The outflow mass is inversely proportional to the gas density, which can be estimated from the ratio of the [SII]\(\lambda\lambda\)6716,6731 doublet, from the [OIII]/H\(\beta\) and [NII]/H\(\alpha\) ratios, or using the fainter trans-auroral lines. The latter technique uses the flux ratios of the [SII]\(\lambda\lambda\)6716,6731 and [O II]\(\lambda\lambda\)3726,3729 doublets as well as of the trans-auroral [O II]\(\lambda\lambda\)7319,7331 and [SII]\(\lambda\lambda\)4068,4076 lines. The trans-auroral ratios have the advantage of being sensitive to higher density gas than the classical [SII] ratio (Holt et al., 2011; Rose et al., 2018; Ramos Almeida et al., 2019). Using [SII], Harrison et al. (2014) reported densities in the range 200-1000 cm\({}^{-3}\) for a sample of 16 QSO2s at z\(<\)0.2, and Singha et al. (2022) measured outflow densities of \(\sim\)1900 cm\({}^{-3}\) for type-1 AGN at 0.01\(<\)z\(<\)0.06. Using the trans-auroral lines, Rose et al. (2018) measured outflow densities in the range 2500 \(<n_{e}\) (cm\({}^{-3}\)) \(<\) 20000 for ULIRGs with 0.06 \(<\) z \(<\) 0.15. Finally, using optical line ratios, Baron and Netzer (2019) reported densities of \(\sim\)30000 cm\({}^{-3}\) for a sample of 234 nearby type-2 AGN. For this work, since our spectra do not cover either the [SII] doublet, the trans-auroral lines, or the [NII] and H\(\alpha\) emission lines, we adopted a gas density of n\({}_{e}\)=200 cm\({}^{-3}\) for all the QSO2s, as in Fiore et al. (2017) and Speranza et al. (2021). Therefore, the derived outflow masses will most likely be upper limits.
Since we are measuring only the [OIII] gas, we assume that the total ionized gas mass of the outflows is three times the [OIII] outflow mass (M\({}_{\rm{OF}}\) = 3\(\times\)M\({}_{\rm{[OIII]}}\); Fiore et al., 2017). The outflow mass rate represents the instantaneous rate of outflowing gas at the edge of the wind. Assuming a spherical sector (Fiore et al., 2017; Lutz et al., 2020), it can be defined as three times the total ionized outflow mass divided by the time required to push this mass through a spherical surface of radius R\({}_{\rm{OF}}\).
\[\dot{\rm{M}}_{\rm{OF}}=3\ {\rm{v}_{OF}}\ \frac{{\rm{M}}_{\rm{OF}}}{{\rm{R}}_{ \rm{OF}}} \tag{2}\]
Since there are no integral field observations available to constrain the outflow radius (R\({}_{\rm{OF}}\)), in principle it could be estimated using averaged spatial slices of the broad line wings detected in the long-slit spectra, as in Rose et al. (2018) and Ramos Almeida et al. (2019). However, this procedure is not straightforward in the case of the [OIII] lines, because the blueshifted wing of [OIII]\(\lambda\)5007 is blended with [OIII]\(\lambda\)4959, and the blueshifted wing of the latter is much fainter. Moreover, since for the GMOS-S spectra used here we measured a median seeing of 0.83 arcsec (\(\sim\)3.8 kpc at the average redshift of the sample, z\(\sim\)0.366), the ionized outflows will most likely be unresolved, according to other studies of QSO2s at similar redshifts (Villar-Martin et al., 2011, 2016; Karouzos et al., 2016; Ramos Almeida et al., 2017, 2019; Speranza et al., 2021). Early studies of quasar-driven outflows based on optical integral field observations of local QSO2s reported ionized outflows extending up to \(\sim\)10-15 kpc (Liu et al., 2013; Harrison et al., 2014; McElroy et al., 2015). However, later works claimed that these sizes were overestimated due to seeing smearing effects (Husemann et al., 2016; Karouzos et al., 2016; Harrison et al., 2018) or to selection biases (Villar-Martin et al., 2016). More recent studies report ionized outflows with sizes of \(\sim\)1-3.4 kpc for local QSO2s (Karouzos et al., 2016; Ramos Almeida et al., 2017, 2019; Speranza et al., 2022). Hence, here we assume an outflow radius of 1 kpc for all the QSO2s in our sample. If the radii are larger than this value, our mass outflow rates will be upper limits, although more compact outflows have been reported for nearby ULIRGs and AGN (Tadhunter et al., 2018, 2021; Singha et al., 2022; Winkel et al., 2023).
The outflow velocity (v\({}_{\rm{OF}}\)) that we used to compute the outflow mass rate depends on the method: in the case of the parametric analysis we considered both v\({}_{\rm{s}}\) and v\({}_{\rm{max}}\) (see Table 1 in Appendix A). v\({}_{\rm{s}}\) is the velocity of the intermediate and/or broad components relative to the narrow component(s), and v\({}_{\rm{max}}\)=v\({}_{\rm{s}}\)+2\(\sigma\) (see Section 3.1). We computed the outflow mass rate associated with each of the intermediate and/or broad components and then added them to obtain the total outflow rate (see Table 2).
For the flux-weighted non-parametric analysis, we used as outflow velocities, v\({}_{\rm{OF}}\), the v\({}_{\rm{50}}\) of the red and blue areas shown in Figure 2 and the corresponding figures in Appendix B. Then, we computed two separate outflow rates for each QSO2, red and blue, which we finally added to compute the total outflow mass rate (see Table 2 in Appendix B). Finally, for the peak-weighted non-parametric analysis we used the 50th velocity percentile of the residual wings (orange areas in Figure 3 and the corresponding figures in Appendix C) to calculate the outflow mass rates (see Table 1 in Appendix C).
Once we estimated the outflow mass rates using Eq. 2, we can calculate the kinetic power as:
\[\dot{\rm{E}}_{\rm{kin}}=\frac{1}{2}\ \dot{\rm{M}}_{\rm{OF}}\ {\rm{v}_{\rm{OF}}^{2}} \tag{3}\]
The outflow masses, outflow mass rates, and kinetic powers measured with the three different methods are shown in Tables A.2, B.2, and C.1, and in Figure 4. The uncertainties were computed using propagation of errors.
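For reference, Eqs. 1-3 under the assumptions adopted here (C = 1, solar metallicity, n\({}_{e}\)=200 cm\({}^{-3}\), R\({}_{\rm OF}\)=1 kpc) can be evaluated with the sketch below, which uses _astropy.units_ to handle the unit conversions; the input values quoted in the docstring are illustrative.

```python
import astropy.units as u

def outflow_properties(L_oiii_out, v_of, n_e=200 * u.cm**-3, R_of=1.0 * u.kpc):
    """Outflow mass, mass rate, and kinetic power following Eqs. 1-3, for C = 1 and
    solar metallicity as assumed in the text.
    L_oiii_out : [OIII] luminosity of the outflowing gas (e.g. 1e42 * u.erg / u.s)
    v_of       : outflow velocity (e.g. -800 * u.km / u.s)"""
    # Eq. 1: ionized gas mass traced by [OIII]
    m_oiii = 4e7 * u.Msun * (L_oiii_out / (1e44 * u.erg / u.s)) * ((1e3 * u.cm**-3) / n_e)

    # Total ionized outflow mass (3 x M_[OIII]) and Eq. 2: mass outflow rate
    m_of = 3.0 * m_oiii
    mdot_of = (3.0 * abs(v_of) * m_of / R_of).to(u.Msun / u.yr)

    # Eq. 3: kinetic power
    e_kin = (0.5 * mdot_of * v_of**2).to(u.erg / u.s)
    return m_of.to(u.Msun), mdot_of, e_kin
```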
The outflow masses computed through the parametric, flux-weighted, and peak-weighted non-parametric methods have average values of log(M\({}_{\rm{OF}}\))=6.47\(\pm\)0.50, 6.03\(\pm\)0.30, and 5.75\(\pm\)0.32 M\({}_{\odot}\), and the medians are 6.46, 5.93, and 5.75 M\({}_{\odot}\), respectively (see Table 2). From these results, and from the outflow mass histograms shown in Figure 4, we conclude that using the parametric method we derive the largest outflow masses. This is because with this method we consider the integrated flux of the intermediate and broad Gaussian components as outflowing gas, whilst in the case of the non-parametric methods, we just use the flux included in the tails of the emission lines.
The average outflow mass rates measured from the parametric, flux-weighted, and peak-weighted non-parametric methods are 2.8\(\pm\)2.6, 4.0\(\pm\)4.4, and 1.9\(\pm\)1.8 M\({}_{\odot}\) yr\({}^{-1}\), and the medians are 1.8, 2.5, and 1.3 M\({}_{\odot}\) yr\({}^{-1}\) (see Table 2). The flux-weighted non-parametric method results in higher outflow mass rates, as a consequence of always including blueshifted and redshifted gas (see Section 3.2.1) with large velocities. However, all the values measured from the three methods are between 0.2 and 9 M\({}_{\odot}\) yr\({}^{-1}\)
except for J0142+14, which has an outflow rate of 20 M\({}_{\odot}\) yr\({}^{-1}\) measured from the flux-weighted non-parametric method. It is when we consider the parametric v\({}_{\rm max}\) values that we get the largest outflow mass rates (average value of 23\(\pm\)35 M\({}_{\odot}\) yr\({}^{-1}\), with a median value of 11 M\({}_{\odot}\) yr\({}^{-1}\); see middle panel of Fig. 4 and Table 2).
Focusing on the kinetic powers, we measure average values of log(E\({}_{\rm kin}\)) = 40.5\(\pm\)1.2, 41.9\(\pm\)0.6, and 41.4\(\pm\)0.5 erg s\({}^{-1}\) using the three methods. The median values are 40.3, 41.9, and 41.4 erg s\({}^{-1}\) (see Table 2). In this case, the lowest values of the kinetic power are those measured with the parametric method. This is because of its dependence on v\({}_{\rm OF}^{2}\), which is higher in the case of the non-parametric methods (see Tables A.1, B.2, and C.1). By using the velocity shift of the broad and/or intermediate components (v\({}_{\rm s}\)) relative to the narrow component(s) as outflow velocity, we are deriving lower kinetic powers than when considering the median velocities (v\({}_{\rm 50}\)) of the high-velocity tails used in the non-parametric methods. If instead of v\({}_{s}\) we use v\({}_{\rm max}\), as commonly done in the literature (McElroy et al., 2015; Fiore et al., 2017), the kinetic powers are larger than the non-parametric ones (average of 42.9\(\pm\)0.6 erg s\({}^{-1}\) and median of 42.9 erg s\({}^{-1}\); see Table 2). Finally, the average values of the coupling efficiencies (E\({}_{\rm kin}\)/L\({}_{\rm Bol}\)) derived for the parametric, flux-weighted, and peak-weighted non-parametric methods are (0.014\(\pm\)0.032)%, (0.080\(\pm\)0.16)%, and (0.020\(\pm\)0.025)%, with medians of 0.001%, 0.02%, and 0.01%. These values are one order of magnitude lower than those reported by other studies of ionized outflows in AGN that have followed similar considerations (Fiore et al., 2017; Baron & Netzer, 2019; Davies et al., 2020; Speranza et al., 2021). However, in the case of the parametric method using v\({}_{\rm max}\) (as in Fiore et al., 2017), we obtain much larger values, with an average of (1.2\(\pm\)3.0)% and median of 0.22%. The large value of the average and its dispersion come from three QSO2s with coupling efficiencies larger than 1%: J0142+14, J0332-00, and J0948+00 (see Table A.2). In this case, the median value is more representative of the whole sample.
From this comparison we conclude that the three adopted methods provide different physical outflow properties, as a result of the particular considerations and procedure of each one. However, these differences are consistent within the uncertainties, considering that we are not accounting for those associated with the electron density and outflow radius. In Figure 5 we plotted the outflow mass rates and kinetic powers derived from each method as a function of the bolometric luminosity. The values from the compilation of Fiore et al. (2017) are also shown for comparison. They also considered a fixed density of n\({}_{e}\)=200 cm\({}^{-3}\), an outflow radius of R\({}_{\rm OF}\)=1 kpc, and the same outflow geometry that we are assuming here. We find that the flux-weighted non-parametric results lie within the lower values of the outflow rates and kinetic powers measured by Fiore et al. (2017) for AGN of similar luminosities, while the peak-weighted and the parametric results are smaller. However, the parametric results using v\({}_{\rm max}\) are the most similar to Fiore et al. (2017), as expected since they also used v\({}_{\rm max}\) to calculate the outflow physical properties.
## 5 Correlations between outflow properties and galaxy properties
As previously mentioned, apart from determining the outflow demographics and corresponding properties of our QSO2 sample, our goal is to evaluate possible correlations with different AGN and galaxy properties. These properties, available from Bessiere et al. (2012, 2017) and Ramos Almeida et al. (2013), include: the presence or not of stellar populations younger than 100 Myr (YSPs), the presence of mergers (including the interaction stage), the density of the large-scale environment (B\({}_{\rm gq}\)), and the radio luminosity (L\({}_{\rm 5GHz}\)).
To do so we evaluated the correlation matrix, shown in Figure 6, between the outflow properties derived from the flux-weighted non-parametric analysis (see Sections 3.2.1 and 4) and the AGN and host galaxy properties (see Section 2.2). From the direct outflow measurements we selected W\({}_{\rm 80}\), v\({}_{\rm 05}\), v\({}_{\rm 95}\), v\({}_{\rm med}\), and \(a\); and from the physical outflow properties, M\({}_{\rm OF}\), \(\dot{M}_{\rm OF}\), and \(\dot{E}_{\rm kin}\).
In order to quantify the degree of correlation among all the properties, we computed the Spearman's rank correlation coefficient (\(\rho\)) for each variable pair, which is a non-parametric measurement of the strength and direction of the statistical dependence that exists between two variables. Unlike the Pearson correlation coefficient (r), the Spearman's rank correlation coefficient does not assume that the data sets are normally distributed and it assesses how well the relation between both can be described
Figure 4: Histograms of the outflow masses, mass rates, and kinetic powers computed through the three different methods: parametric (considering either v\({}_{s}\) or v\({}_{\rm max}\) as the outflow velocities), flux-weighted, and peak-weighted non-parametric. For easier comparison of the values, in the outflow mass rate histograms we omitted the largest values of 57 and 151 M\({}_{\odot}\) yr\({}^{-1}\), obtained for J0923+01 and J0142+14 using v\({}_{\rm max}\) in the parametric method.
using a monotonic function. Hence, values of \(\rho\) equal to -1 or 1 imply an exact monotonic relationship, with positive values indicating that as the X variable increases, Y also increases, while negative values imply that as X increases, Y decreases. In general, absolute values of \(\rho\) in the range 0-0.39 are regarded as a weak or non-existent correlation, 0.40-0.59 as moderate, 0.6-0.79 as strong, and 0.8-1 as very strong correlation. The \(\rho\) values found for our sample are shown in the top right half of Figure 6, with darker colors indicating stronger correlations.
We also performed a p-value test to verify the strength of the correlations, which can be seen in the lower left half of the matrix shown in Figure 6. The p-value measures the probability of obtaining a correlation coefficient as extreme as, or more extreme than, the one observed if the two properties were actually uncorrelated, i.e., it quantifies the probability of observing such a correlation purely by chance. Seeking a 90% confidence level in our results, we set a significance level of \(\alpha=0.1\) (i.e., 100% minus the confidence level). This implies that if p-value\(\lesssim\)0.1, we can be confident at the 90% level that the correlation between two properties is genuine. Conversely, if p-value\(>\)0.1, the correlation could have arisen by chance, and hence we cannot conclude that it is significant.
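The statistics shown in Figure 6 can be reproduced with a simple loop over property pairs using _scipy_; the sketch below is illustrative, with the property table acting as a placeholder for the measured values.

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_matrix(table):
    """Spearman rank correlation coefficients and p-values for every pair of
    properties. `table` maps property names to equal-length 1D arrays (one value
    per QSO2); the names and values are placeholders."""
    names = list(table)
    n = len(names)
    rho = np.zeros((n, n))
    pval = np.zeros((n, n))
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            rho[i, j], pval[i, j] = spearmanr(table[a], table[b])
    return names, rho, pval
```

Pairs with \(|\rho|\geq\)0.4 and p-value\(<\)0.1 would then be flagged as candidate correlations under the thresholds adopted above.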
As a sanity check, we first confirmed that correlations between certain outflow properties exist. For example, \(W_{80}\) shows strong correlations with the outflow projected velocities, \(v_{05}\) and \(v_{95}\) (\(\rho\)=-0.91 and \(\rho\)=0.81, respectively). This is a consequence of broader emission line profiles having faster velocities associated with the wings. Similarly, \(v_{05}\) is also strongly correlated with the asymmetry, \(a\), with \(\rho\)=0.86. This is not the case for \(v_{95}\) (\(\rho\)=-0.23), since most of the asymmetries in our sample are associated with blueshifted gas. These outflow parameters also show strong correlations with some of the outflow physical properties, especially with the outflow mass rates and kinetic powers.
Regarding the correlations between the outflow and the AGN and host galaxy properties, we only find moderate correlations between the AGN bolometric luminosity and the outflow mass and outflow mass rate, both with \(\rho\)=0.52 (p-values of 0.021 and 0.023, respectively). This implies that more luminous AGN have more massive ionized outflows, and also higher outflow mass rates (Cicone et al., 2014; Hainline et al., 2014; Fiore et al., 2017; Revalski et al., 2018).
We did not find any significant correlations between the outflow properties and the host galaxy properties. Regarding the presence or not of YSPs, 4 out of the 5 quasars (80%; J0114+00, J0234-07, J0332-00, and J0948+00) that do not require the inclusion of a YSP with age \(<\) 100 Myr to reproduce their optical spectra show large values of the asymmetry: |a|\(>\)100. In contrast, only 5 of the 14 QSO2s with such YSPs (36%) have |a|\(>\)100. Since larger asymmetries are associated with more disrupted gas and faster outflows, this result, despite the small sample size, might be indicating that recent star formation is being suppressed more efficiently in the QSO2s with the most disrupted kinematics. Nevertheless, as we noted above, there are also QSO2s with disrupted kinematics showing YSPs.
Regarding the optical morphologies of the QSO2s, 3 out of the 4 galaxies that do not show any evidence for mergers/interactions (75%; J0332-00, J0234-07, J0114+00, and J0948+00) show |a|\(>\)100. Considering the merging QSO2s (15/19), we do not find any trend between the outflow properties and the stage of the interaction either (i.e., pre-coalescence or post-coalescence; Ramos Almeida et al., 2011; Bessiere et al., 2012). This would imply that the presence of outflowing gas and/or disrupted kinematics on the spatial scales that our spectra are probing is related to the AGN and not to the mergers. Considering this, it is not surprising that we do not find any correlation between the outflow properties and the large scale environment either. In particular, we looked at the spatial clustering amplitudes (\(\mathrm{B}^{av}_{gq}\)) from Ramos Almeida et al. (2013). The QSO2s with most disrupted kinematics do not show any preference for denser/sparser environments.
Finally, we also looked at possible correlations between the outflow properties and the radio luminosity of the QSO2s. Indeed, Mullaney et al. (2013) and Zakamska & Greene (2014) found a connection between the width of the [OIII] profiles and L\({}_{1.4\rm GHz}\). The radio luminosity is a property that can be associated with star formation and/or nuclear activity (Jarvis et al., 2021; Bessiere et al. in prep.). For our QSO2s, we do not find any correlation between L\({}_{5\rm GHz}\) and any of the outflow properties here
Figure 5: Outflow mass rate and kinetic power as a function of the AGN bolometric luminosity. Light and dark blue squares correspond to the values derived from the parametric method using v\({}_{\rm s}\) and v\({}_{\rm max}\), respectively. Green and orange circles are the values from the flux-weighted and peak-weighted non-parametric methods. The values from Fiore et al. (2017) are shown as grey triangles for comparison.
considered. Here we are using the integrated luminosities at 5 GHz from Bessiere et al. (2012), measured from 5 arcsec resolution FIRST data, and we also checked that the results are the same when we use peak luminosities instead. The lack of correlation might be due to the limited range of radio luminosities that our sample is probing: log(L\({}_{\rm 5GHz}\))=22.1-24.7 W Hz\({}^{-1}\), with an average value of 22.80\(\pm\)0.80 W Hz\({}^{-1}\) and a median of 22.35 W Hz\({}^{-1}\) (see Table 1 and Section 6.2).
## 6 Discussion
In this work we characterized the ionized outflow properties of a sample of 19 QSO2s at redshift 0.3\(<\)z\(<\)0.41 using three different analysis methods. We compared the results, and looked for correlations between the outflow properties and various AGN and galaxy properties. Here we discuss these results and put them in context with others from the literature.
### Demographics and energetics of ionized outflows
We found signatures of ionized outflows in 18 of the 19 QSO2s using the parametric method (Section 3.1), based on the presence of at least one Gaussian component with FWHM\(>\)800 km s\({}^{-1}\) and generally blueshifted, as in Villar-Martin et al. (2011, 2016). Using the peak-weighted non-parametric method, we find that all the QSO2s have signatures of outflowing gas: 18 in the form
Figure 6: Correlation matrix between the QSO2 outflow properties derived from the [OIII] emission line and using the flux-weighted non-parametric method, and their AGN and host galaxies properties. The Spearman’s rank correlation coefficients (\(\rho\)) are shown in the top right half of the matrix (blue and red colors) and the corresponding p-values in the bottom left half (grey colors).
of blueshifted asymmetries, and one redshifted. This implies an outflow incidence rate of \(\sim\)95-100%, similar to the results found in other QSO2 studies (Villar-Martin et al., 2011, 2016; Fischer et al., 2018; Dall'Agnol de Oliveira et al., 2021) and higher than those generally reported for low-to-intermediate luminosity AGN (see e.g., Riffel et al., 2023 and references therein).
Some of the QSO2s show extreme [OIII] kinematics, as for example J0924+01 (see Figures 1-3). From the flux-weighted non-parametric analysis of all the QSO2s, we find emission line widths ranging from 530 to 1645 km s\({}^{-1}\) (average W\({}_{80}\)=940\(\pm\)280 km s\({}^{-1}\)) and asymmetry values ranging from -235 to 50 km s\({}^{-1}\) (average a=-120\(\pm\)130 km s\({}^{-1}\)). Regarding the outflow velocities, the fastest that we measure come from the flux-weighted non-parametric method (average v\({}_{\rm OF}\)=-1120\(\pm\)510 km s\({}^{-1}\)) and from the parametric method when we compute the maximum velocity (average v\({}_{\rm{max}}\)=-1800\(\pm\)1400 km s\({}^{-1}\)). These velocities are typical of AGN-driven ionized outflows detected in quasars with these bolometric luminosities (Mullaney et al., 2013; Zakamska & Greene, 2014).
We computed outflow physical properties, such as the outflow mass, outflow mass rate, and kinetic power, from the direct outflow measurements derived through each kinematic analysis method (see Section 3). These quantities are key for investigating the outflow impact on the surrounding environment of their hosts. From the definition of these quantities (see Section 4), it is clear that both the electron density and the outflow radius play a critical role in their determination, along with the outflow flux and velocity derived via each method. Since with our data we cannot constrain either n\({}_{e}\) or R\({}_{\rm{OF}}\), we assumed a gas density of n\({}_{e}\)=200 cm\({}^{-3}\) and a radius of 1 kpc for all the QSO2s, as in Fiore et al. (2017). By assuming these values, we can just focus on the differences that are inherent to the method used to measure the outflow flux and velocity.
From our analysis of the outflow energetics using the three different methods we find that, on average, the parametric method results in the highest outflow masses (log M\({}_{\rm{OF}}\)(M\({}_{\odot}\))=6.47\(\pm\)0.50), since it considers the integrated flux of the broad and intermediate components. The flux-weighted non-parametric method produces larger outflow mass rates and kinetic powers (\(\dot{\rm M}_{\rm OF}\)=4.0\(\pm\)4.4 M\({}_{\odot}\) yr\({}^{-1}\) and log(\(\dot{\rm E}_{\rm kin}\))=41.9\(\pm\)0.6 erg s\({}^{-1}\)) than the peak-weighted non-parametric method and the parametric method using v\({}_{\rm s}\) (see Figure 4 and Table 2). This happens because by definition it always includes a contribution from both the blue and red emission line wings. This effect is more pronounced in the kinetic powers, because of their dependence on the outflow velocity, \(\dot{\rm E}_{\rm kin}\)\(\propto\)\(\dot{\rm M}_{\rm OF}\)v\({}_{\rm{OF}}^{2}\)\(\propto\) v\({}_{\rm{OF}}^{3}\). Nevertheless, we find the highest values of the outflow mass rate and kinetic power when we use the parametric method and v\({}_{\rm{max}}\) instead of v\({}_{\rm s}\) (\(\dot{\rm M}_{\rm OF}\)=23\(\pm\)35 M\({}_{\odot}\) yr\({}^{-1}\) and log(\(\dot{\rm E}_{\rm kin}\))=42.9\(\pm\)0.6 erg s\({}^{-1}\)).
A comparison between the mass outflow rates and kinetic powers derived from the three methods and the values reported by Fiore et al. (2017) for a sample of 94 AGN is shown in Figure 5. The values derived from the parametric v\({}_{\rm max}\) method are fully consistent with Fiore et al. (2017), as expected since they used the same velocity definition to measure the outflow physical properties. However, we argue that these maximum velocities are not representative of outflowing gas, since they only trace the highest velocity gas instead of the bulk of the outflowing gas mass. This biases the results towards large values of the outflow physical properties. Other works using higher electron densities for the winds and/or lower outflow velocities such as v\({}_{\rm s}\) report outflow properties that are from one to three orders of magnitude lower (Holt et al., 2011; Villar-Martin et al., 2016; Rose et al., 2018; Spence et al., 2018; Davies et al., 2020; Holden et al., 2023). Besides, Davies et al. (2020) pointed out that the relatively low scatter found by Fiore et al. (2017) when plotting the data shown in Figure 5, despite the wide range of luminosity, is partly due to the adoption of common values of the outflow density and radius.
The median coupling efficiencies that we measure for the outflows using the three methods are 0.001%, 0.02%, and 0.01%, and when we use the parametric method and v\({}_{\rm{max}}\), the median goes up to 0.22%. The former three values are lower than those reported in recent literature (Fiore et al., 2017; Baron & Netzer, 2019; Speranza et al., 2021), while the parametric v\({}_{\rm{max}}\) value is similar. For example, both Fiore et al. (2017) and Speranza et al. (2021) reported a coupling efficiency of \(\sim\)0.2% for a sample of AGN at z\(<\)0.5 and log(L\({}_{\rm{bol}}\))\(\sim\)45.5 erg s\({}^{-1}\), and of 3CR radio galaxies at z\(<\)0.3 with log(L\({}_{\rm{bol}}\))=42.46 erg s\({}^{-1}\), respectively. Again, this is a consequence of using maximum outflow velocities and assuming a low value of the density.
### Lack of correlations between the nuclear ionized outflow properties and galaxy properties
To evaluate the outflow impact on star formation and their possible relation with other AGN and/or host galaxy properties, in Section 5 we investigated the correlation matrix using results from the flux-weighted non-parametric method.
We only found moderate correlations between the AGN bolometric luminosity and the outflow masses and mass rates, both with a Spearman coefficient of \(\rho\) = 0.52. More massive and powerful outflows are usually found in more luminous AGN (Hainline et al., 2014; Fiore et al., 2017; Revalski et al., 2018). Indeed, since here we are considering AGN bolometric luminosities derived from the extinction-corrected [OIII] luminosities, it is not surprising to find correlations, albeit weak, with the outflow masses measured from the [OIII] emission line. Likewise, high outflow mass rates are expected for luminous AGN (Fiore et al., 2017), although here we only find a modest correlation. This can be a consequence of the relatively small luminosity range probed by our sample, of log(L\({}_{\rm{bol}}\))=44.9-46.7 erg s\({}^{-1}\).
We have not found any significant correlations between the outflow and the host galaxy properties considered here. Since here we are assuming fixed values of the outflow density and radius for all the targets, the outflow physical quantities are not precise. Therefore, the different correlations that we evaluate here might change if individual radii and densities were measured. This is not the case for the direct outflow measurements, such as the velocity, W\({}_{80}\), asymmetry, etc. The outflow density is one of the parameters having the strongest impact on the derived outflow physical quantities. The outflow mass rate and kinetic power vary by one order of magnitude when we assume a density of 2000 cm\({}^{-3}\) instead of 200 cm\({}^{-3}\). However, we find the same lack of correlation between the outflow properties derived from the flux-weighted non-parametric method and different AGN and host galaxy properties when precise density measurements are used for the individual targets in the QSOFEED sample of QSO2s (Bessiere et al. in prep.).
Regarding the impact of the outflows on recent star formation, we find that 4 of the 5 QSO2s lacking a YSP with age \(<\)100 Myr (Bessiere et al., 2017) present disturbed kinematics with large asymmetry values (|a|\(>\)100). On the other hand, only 5 of the 14 QSO2s with a YSP present large asymmetry values. Despite the small size of our sample, these results might be indicating that recent star formation (i.e., the one having the same dynamical timescales as the outflows) is being suppressed more efficiently in the QSO2s with most disrupted kinematics.
Nevertheless, there are also QSO2s with disrupted kinematics that have YSPs. This could be due either to positive feedback (Klamer et al., 2004; Gaibler et al., 2012; Zubovas et al., 2013; Cresci et al., 2015), to the different spatial scales considered for the stellar population analysis (\(\sim\)8 kpc) and for the outflow measurements (\(\sim\)3.9 kpc), and/or to the time that the winds might need to suppress star formation. Using integral field spectroscopic data of the QSO2 Mrk 34, Bessiere & Ramos Almeida (2022) showed that both positive and negative feedback can simultaneously occur in different parts of the same galaxy, depending on the amount of energy and turbulence that the outflows inject in the ISM. This illustrates the complexity of the outflow-ISM interplay, and demonstrates the importance of spatially resolved studies of AGN to evaluate the impact of feedback.
Given that interactions between galaxies disrupt the stellar and gas content of the galaxies involved, we could expect higher gas turbulence in mergers than in undisturbed galaxies. However, we do not find any trend between the [OIII] kinematics and the optical morphologies of the QSO2s or with their environment. This is most likely due to the different timescales involved: 1-3 Gyr for the mergers (Conselice et al., 2003) and 1-100 Myr for the AGN-driven outflows (Zubovas & King, 2016). A major or minor merger can efficiently transport gas towards the center of the galaxy during hundreds of Myr, and this gas supply can be intermittent, leading to different phases of nuclear activity (King & Nixon, 2015). This makes it challenging to look for correlations between the large-scale galaxy morphology and the AGN-driven outflows.
Finally, we investigated the role of jets in driving or contributing to driving outflows in our QSO2s. We do not find any significant correlation between the outflow properties and either the integrated or peak 5 GHz luminosities from FIRST. Using spectra from SDSS, Mullaney et al. (2013) concluded that the width of the [OIII]\(\lambda\)5007 line shows a maximum in AGN with radio luminosities of log(L\({}_{1.4\rm GHz}\))=23-25 W Hz\({}^{-1}\), i.e., moderate radio luminosities such as those of our QSO2s. The lack of correlation between these parameters is probably due to the small radio luminosity range probed by our QSO2s, of log(L\({}_{1.4\rm GHz}\))=22.1-24.7 W Hz\({}^{-1}\). However, the dominant origin of this radio emission is still a matter of on-going debate (Zakamska & Greene, 2014), as it might be produced by non-thermal AGN emission (Jarvis et al., 2019), star formation (Bessiere et al. in prep.), and/or shocks induced by the quasar winds/outflows (Fischer et al., 2023).
## 7 Conclusions
We characterized the ionized gas kinematics and energetics of a sample of 19 QSO2s at 0.3\(<\)z\(<\)0.41, using three different methods, parametric and non-parametric, to analyze the nuclear (\(\sim\)3.9 kpc) [OIII] emission-line profiles. The main conclusions of the work are the following.
* We detect ionized gas outflows in the form of asymmetric and broad emission line profiles in 95-100% of the sample using the three methods. 18 of the 19 QSO2s show [OIII] profiles with blueshifted wings, whilst the other one shows a redshifted wing.
* The average physical outflow properties (e.g., outflow mass, mass rate, and kinetic energy) that we derived from the three methods are consistent within the errors. The parametric method results in the highest outflow masses, and the flux-weighted non-parametric and especially the parametric method using v\({}_{\rm max}\) provide the highest mass outflow rates and kinetic powers.
* We measure outflow mass rates ranging between 0.2 and 20 M\({}_{\odot}\) yr\({}^{-1}\) and kinetic powers between 10\({}^{38.3}\) and 10\({}^{42.9}\) erg s\({}^{-1}\) for the QSO2s. For the parametric method using v\({}_{\rm max}\) the highest values go up to 151 M\({}_{\odot}\) yr\({}^{-1}\) and 10\({}^{44.0}\) erg s\({}^{-1}\). These values are most likely upper limits, considering that we assumed fixed values of 200 cm\({}^{-3}\) and 1 kpc for the outflow density and radius, respectively.
* We find a modest correlation between the AGN bolometric luminosity and the outflow mass and mass rate, but we do not find any correlation with the host galaxy properties considered here.
* Four of the five QSO2s lacking a YSP with age \(<\)100 Myr present disturbed kinematics with large asymmetry values (|a|\(>\)100). On the other hand, only 5 of the 14 QSO2s with a YSP present large asymmetry values. Despite the small sample size, these results may indicate that recent star formation is being suppressed more efficiently in the QSO2s with the most disrupted kinematics.
Here we showed for the first time a comparison between outflow measurements using three different methods commonly used in the literature. By assuming a fixed outflow density and radius, we can focus on the differences introduced by how the flux and velocity are calculated. We conclude that, although the average physical outflow properties derived from the three methods are consistent within the errors, the commonly adopted parametric measurements using maximum outflow velocities provide the highest values, which are not representative of the average outflow velocity. Finally, we argue that the lack of correlations between the outflow and the AGN and galaxy properties considered here is most likely due to the small luminosity ranges probed by our sample, and to the different timescales of the outflows and galaxy-wide properties.
###### Acknowledgements.
CRA and GS acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement No 860744 (BiD4BES). CRA, GS, JAP, and PSB acknowledge the project "Feeding and feedback in active galaxies", with reference PD2010-106027GB-C42, funded by MCIN/AEI/10.13039/501100011033. CRA also acknowledges the project "Quantifying the impact of quasar feedback on galaxy evolution", with reference EUR2020-112266, funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. CRA thanks the Kavli Institute for Cosmology of the University of Cambridge for their hospitality while working on this paper, and the IAC Severo Ochoa Program for the corresponding financial support. The authors thank the anonymous referee for useful and constructive suggestions.
|
2309.16103 | Non-equilibrium molecular dynamics of steady-state fluid transport
through a 2D membrane driven by a concentration gradient | We use a novel non-equilibrium algorithm to simulate steady-state fluid
transport through a two-dimensional (2D) membrane due to a concentration
gradient by molecular dynamics (MD) for the first time. We confirm that, as
required by the Onsager reciprocal relations in the linear-response regime, the
solution flux obtained using this algorithm agrees with the excess solute flux
obtained from an established non-equilibrium MD algorithm for pressure-driven
flow. In addition, we show that the concentration-gradient solution flux in
this regime is quantified far more efficiently by explicitly applying a
transmembrane concentration difference using our algorithm than by applying
Onsager reciprocity to pressure-driven flow. The simulated fluid fluxes are
captured with reasonable quantitative accuracy by our previously derived
continuum theory of concentration-gradient-driven fluid transport through a 2D
membrane [J. Chem. Phys. 151, 044705 (2019)] for a wide range of solution and
membrane parameters even though the simulated pore sizes are only several times
the size of the fluid particles. The simulations deviate from the theory
especially for strong solute--membrane interactions relative to the thermal
energy, for which the theoretical approximations break down. Our findings will
be beneficial for molecular-level understanding of fluid transport driven by
concentration gradients through membranes made from 2D materials, which have
diverse applications in energy harvesting, molecular separations, and
biosensing. | Daniel J. Rankin, David M. Huang | 2023-09-28T02:00:34Z | http://arxiv.org/abs/2309.16103v1 | Non-equilibrium molecular dynamics of steady-state fluid transport through a 2D membrane driven by a concentration gradient
###### Abstract
We use a novel non-equilibrium algorithm to simulate steady-state fluid transport through a two-dimensional (2D) membrane due to a concentration gradient by molecular dynamics (MD) for the first time. We confirm that, as required by the Onsager reciprocal relations in the linear-response regime, the solution flux obtained using this algorithm agrees with the excess solute flux obtained from an established non-equilibrium MD algorithm for pressure-driven flow. In addition, we show that the concentration-gradient solution flux in this regime is quantified far more efficiently by explicitly applying a transmembrane concentration difference using our algorithm than by applying Onsager reciprocity to pressure-driven flow. The simulated fluid fluxes are captured with reasonable quantitative accuracy by our previously derived continuum theory of concentration-gradient-driven fluid transport through a 2D membrane [_J. Chem. Phys._**151**, 044705 (2019)] for a wide range of solution and membrane parameters even though the simulated pore sizes are only several times the size of the fluid particles. The simulations deviate from the theory especially for strong solute-membrane interactions relative to the thermal energy, for which the theoretical approximations break down. Our findings will be beneficial for molecular-level understanding of fluid transport driven by concentration gradients through membranes made from 2D materials, which have diverse applications in energy harvesting, molecular separations, and biosensing.
## I Introduction
Transport of liquid mixtures and solutions through porous membranes is pivotal to many applications, including water desalination and purification,[1] chemical separations, energy generation[2] and storage[3], and biological and chemical sensing.[4] But inadequate performance of current membranes limits widespread adoption of these technologies.[5] Membranes made from two-dimensional (2D) materials, such as graphene, molybdenum disulfide (MoS\({}_{2}\)), or hexagonal boron nitride (hBN), hold great promise for tackling these challenges.[6; 7; 8; 9] But gaps in knowledge of fundamental aspects of transport processes in 2D membranes, particularly those driven by the concentration gradients[10] that are central to many applications, hinder predictive design of 2D membranes.
The atomic-scale thickness of 2D membranes confers fundamentally different properties compared with conventional membranes that are highly beneficial. For example, 2D membranes can circumvent the permeability-selectivity trade-off that often plagues conventional desalination or filtration membranes,[6] whereby selectivity against transport of an unwanted component of a mixture is generally associated with reduced permeability to a desired component, with computer simulations of 2D graphene membranes achieving almost complete salt rejection along with water fluxes orders of magnitude larger than those of current desalination membranes.[11] The extreme thinness of 2D membranes also means that transmembrane gradients of e.g. pressure, concentration, or electric potential can be enormous, resulting in huge driving forces for fluid transport. The orders-of-magnitude higher power densities for osmotic power generation of single-layer MoS\({}_{2}\) membranes compared with conventional membranes was attributed partly to the massive salt gradient.[12] In nanopore-based sensing or sequencing of macromolecules, which provides a cheap, fast, and portable method of chemical or biological sensing, the single-atom thickness of 2D membranes offers the possibility of discriminating single monomers in biopolymers due to the membrane thickness being comparable to the inter-monomer spacing.[6; 9] Achieving this goal, however, requires precise quantification of ion fluxes both in the absence and presence of the biomolecule.
Although numerical simulations of fluid transport across 2D membranes have been carried out in the presence of concentration gradients using both continuum[13; 14] and molecular[11; 15; 16; 17; 18; 19] models, until our recent work,[20] no analytical theory existed to quantify the relationship between the solution and solute flow through a 2D membrane driven by a solute concentration gradient and basic parameters such as the membrane pore size and strength and range of solute-membrane interactions. This theory was derived from a continuum hydrodynamic model of the fluid and its quantitative accuracy was verified by comparison with computational fluid dynamics simulations.[20] However, the continuum fluid approximation can break down when the dimensions of the confining pores become comparable to the size of the fluid molecules[21], which could be an issue for 2D membranes even with large pores, due to their atomic-scale thickness. Furthermore, continuum models cannot easily account for the effects of mechanical flexibility of the membrane, which can be significant for 2D membranes[22; 16; 23]. They also do not readily predict from first principles some phenomena that can strongly impact nano-scale fluid transport, such as finite solid-fluid friction, inhomogeneous fluid properties, non-electrostatic
ion-specific interactions, and non-ideal solutions.
But modelling fluid transport due to a concentration difference across a porous membrane using molecular simulations is not straightforward. This is because the periodic boundary conditions that are generally used in such simulations would create an artificial concentration jump across the periodic boundary when a concentration difference is applied across the membrane. Without constraints, mixing of the solution would occur across the periodic boundary instead of across the membrane. Until recently, a generally suitable molecular algorithm for simulating steady-state transport of liquid mixtures driven by concentration gradients did not exist. To the best of our knowledge, all molecular simulations of concentration-gradient-driven transport through 2D membranes until now have used a non-equilibrium molecular dynamics (NEMD) method in which the solute concentrations on either side of the membrane are set to be initially different and the resulting transient fluxes measured over time.[11; 12; 15; 16; 17; 18; 19] Such simulations in general do not enable the solution and solute fluxes for a given transmembrane concentration difference to be accurately quantified, since the concentrations change measurably over time in the relatively small systems that can be practically simulated. This can be especially problematic for electrolyte solutions, since electrostatic screening depends on concentration, which could have a significant impact for the large concentration gradients that occur across 2D membranes.
Methods for calculating concentration-gradient-driven transport from equilibrium simulations in the absence of a concentration gradient[24] also have deficiencies, particularly in the case of electrolytes, since they do not account for the effects of spatial variations of concentration-dependent properties such as electrostatic screening. On the other hand, stochastic algorithms that combine molecular dynamics (MD) with a grand canonical Monte Carlo (GCMC) method for maintaining constant reservoir concentrations by inserting, deleting, or exchanging particles in the reservoirs[25; 26; 27; 28] suffer from low acceptance rates of stochastic particle moves at the high densities in liquids and unphysical particle insertions can perturb the flow.[29] Deterministic MD algorithms have also been developed to simulate steady-state flow in a periodic system due to a concentration gradient by applying a supplementary force to solute and/or solvent molecules to mimic a chemical potential gradient[30; 31; 32; 33; 34], either throughout the system[33] or in small transition regions far from the membrane.[30; 31; 32; 34] Significant advantages of these methods over stochastic algorithms are that they are efficient even at liquid densities and they do not involve unphysical particle insertion or deletion steps. However, most of these algorithms have significant deficiencies that limit their utility, such as not being applicable to complex geometries such as 2D membranes,[33] not explicitly modeling the concentration difference (and thus not accounting for spatial variations in concentration-dependent properties),[33] or not enabling independent control of differences in solute concentration and pressure across the membrane,[30; 31; 32] which can lead to spurious effects, particularly at high solute concentrations, as discussed later on. One recent algorithm[34] has been developed that does not suffer from these issues, but it controls the solute chemical potential difference across the membrane, whereas the solute concentration difference is a more useful control parameter for direct comparison with experiments.
In this work, we have modified a previous deterministic NEMD algorithm[31] for constraining the solute concentration difference across a porous membrane in a periodic system to enable independent control of the transmembrane pressure difference. As a proof-of-principle of its ability to simulate and accurately quantify steady-state concentration-gradient-driven fluid transport through 2D membranes, we have applied this algorithm to a model system comprising a binary Lennard-Jones (LJ) liquid mixture and 2D LJ membrane. Using this model system, we have systematically investigated the effects of key parameters such as the membrane pore size, average solute mole fraction, and the strength and range of the solute-membrane interactions on the solution and solute fluxes. We have used these molecular simulation results to evaluate the accuracy of our previously developed continuum theory for predicting concentration-gradient-driven fluid transport through these atomically thin membranes.
## II Computational methods
### System details
All MD simulations were performed using LAMMPS (3 Mar 2020 GPU-accelerated version),[35; 36] with initial simulation configurations constructed using Moltemplate (version 2.16.1)[37] and visualization of simulation trajectories carried out using OVITO (version 3.0.0).[38] The system comprised a binary liquid mixture (solute and solvent) and a single-layer planar solid membrane parallel to the \(x,y\) plane containing an approximately circular hole centered at the origin (Figure 1(a)). The solid particles were placed in a single layer of the (111) surface of a close-packed face-centred cubic (fcc) lattice (lattice constant \(\sqrt{2}\sigma\)), and their positions were fixed throughout the simulations. The Lennard-Jones (LJ) potential,
\[u_{ij}(r_{ij})=4\epsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12 }-\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right], \tag{1}\]
was used for the interactions between particles, where \(r_{ij}\) is the distance between particles \(i\) and \(j\), and \(\epsilon_{ij}\) and \(\sigma_{ij}\) are parameters quantifying the strength and range, respectively, of their interactions. The solute-solute, solvent-solvent, and solute-solvent interaction parameters \(\epsilon_{ij}\) and \(\sigma_{ij}\) were set to the same values, \(\epsilon\) and \(\sigma\), respectively, in all simulations, while the parameters for the interactions between solute and solid (wall) particles,
\(\epsilon_{\text{uw}}\) and \(\sigma_{\text{uw}}\), respectively, were varied for different simulations. Thus, the solute and solvent particles were identical in all respects except for their interactions with the membrane. The potential was cut-off at a distance of \(4\sigma\) and all particles had mass \(m\). LJ units are used throughout this work, with masses, distances, energies, temperatures, pressures, and times in units of \(m\), \(\sigma\), \(\epsilon\), \(\epsilon/k_{\text{B}}\), \(\epsilon/\sigma^{3}\), and \(\tau=\sqrt{m\sigma^{2}/\epsilon}\), respectively, where \(k_{\text{B}}\) is the Boltzmann constant. In all simulations, time integration was done with the velocity-Verlet integrator with a time step of \(0.005\tau\) and periodic boundary conditions were applied in all dimensions. Unless otherwise stated, simulations were carried out in the canonical (NVT) ensemble at a temperature of \(\epsilon/k_{\text{B}}\) using a Nose-Hoover [39; 40] style thermostat,[41] with only the velocity components perpendicular to the flow (\(z\)) direction thermostatted in NEMD simulations.
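For reference, the pair interaction of Eq. (1) and the reduced-unit conventions above can be sketched as follows. This is a minimal illustration only (the production runs evaluate the interactions inside LAMMPS with the stated \(4\sigma\) cutoff), and the example parameter values are simply taken from the ranges studied here.

```python
import numpy as np

def lj_potential(r, eps_ij=1.0, sigma_ij=1.0, r_cut=4.0):
    """Truncated Lennard-Jones pair potential of Eq. (1) in reduced LJ units."""
    r = np.asarray(r, dtype=float)
    u = 4.0 * eps_ij * ((sigma_ij / r) ** 12 - (sigma_ij / r) ** 6)
    return np.where(r < r_cut, u, 0.0)

# Fluid-fluid interactions use eps_ij = sigma_ij = 1; solute-wall interactions use
# independently varied eps_uw and sigma_uw, e.g. eps_uw = 1.5, sigma_uw = 1.2.
print(lj_potential(2.0 ** (1.0 / 6.0)))             # minimum of the fluid-fluid potential, -1.0
print(lj_potential(1.5, eps_ij=1.5, sigma_ij=1.2))  # one of the solute-wall cases
```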
### Constrained concentration- and pressure-difference algorithm
To carry out NEMD simulations of steady-state flow due to concentration and/or pressure gradients, we adapted an algorithm by Khalili-Araghi _et al._[31] designed to maintain unequal solute concentrations across a membrane in a system with periodic boundary conditions. Specifically, we modified it to enable independent control of concentration and pressure differences. Here, we use the convention that the membrane lies in the \(x,y\)-plane at \(z=0\) in the primary simulation box.
The algorithm in Ref. [31] applies a supplementary constant force \(f_{i}\) in the direction perpendicular to the membrane to particles of type \(i\) in a small transition region of width \(d\) far from the membrane, as illustrated in Fig. 1(b). The algorithm was called the nonperiodic energy step method, since the addition of this force is equivalent to applying a nonperiodic energy step or ramp of size \(\Delta\varepsilon_{i}=-f_{i}d\) across the transition region to the particles, which induces a concentration difference between the two sides of the membrane. The energy step \(\Delta\varepsilon_{i}\) needed to achieve a target ratio of the concentrations \(c_{i0}^{+}\) and \(c_{i0}^{-}\) of species \(i\) in the upper and lower fluid reservoirs, respectively, can be estimated from the relationship for a system of non-interacting (ideal) particles at infinite dilution, \(c_{i0}^{+}/c_{i0}^{-}=\exp\left(\Delta\varepsilon_{i}/(k_{\text{B}}T)\right)\). The applied force required to maintain this concentration ratio in this case is
\[f_{i}^{\text{ideal}}=-\frac{k_{\text{B}}T}{d}\ln\left(\frac{c_{i0}^{+}}{c_{i0 }^{-}}\right), \tag{2}\]
which is used as the initial value of the applied force in the algorithm, i.e. \(f_{i}(t=0)=f_{i}^{\text{ideal}}\). In general, an analytical relationship between the force and concentration difference or ratio does not exist for interacting particles. To account for the effect of particle-particle interactions, the forces are adjusted dynamically to converge to the target concentration ratio according to
\[f_{i}^{\text{KA}}(t+\Delta t)=f_{i}^{\text{KA}}(t)+\frac{\Delta t}{\alpha \tau_{\text{c}}}\Delta f_{i}^{\text{KA}}(t) \tag{3}\]
with
\[\Delta f_{i}^{\text{KA}}(t)=\frac{k_{\text{B}}T}{d}\left[\left\langle\ln \left(\frac{c_{i}^{+}}{c_{i}^{-}}\right)\right\rangle-\ln\left(\frac{c_{i0}^{ +}}{c_{i0}^{-}}\right)\right], \tag{4}\]
where \(\Delta t\) is the simulation time step, \(\alpha\) and \(\tau_{\text{c}}\) are tunable parameters, \(c_{i}^{+}\) and \(c_{i}^{-}\) are the instantaneous concentrations in the upper and lower reservoirs, respectively, and \(\left\langle\cdots\right\rangle\) denotes a time average over the interval \((t-\tau_{\text{c}},t]\), i.e. over a duration \(\tau_{\text{c}}\) immediately preceding the current time step. (We use the superscript "KA" to distinguish the equations in Ref. [31] from those in our modified algorithm, which are described below.) The instantaneous concentrations are measured in control regions each of width \(d_{\text{b}}\) on either side of the membrane and far from it (shown in yellow in Fig. 1(b)) that are sufficiently wide to calculate the average in Eq. (4) accurately. From Eq. (3), the concentration ratio is expected to converge to the target ratio \(c_{i0}^{+}/c_{i0}^{-}\) over a duration on the order of \(\alpha\tau_{\text{c}}\).
To avoid calculating the average in Eq. (3) every time step and to reduce the chance of \(\Delta f_{i}(t)\) becoming undefined due to \(c_{i}^{+}\) or \(c_{i}^{-}\) being zero, we have modified the force update algorithm from that in Ref. [31] such that the applied force is updated every \(\tau_{\text{c}}\) time steps instead of every time step and the instantaneous concentrations are averaged over time before taking the logarithm. Thus,
Figure 1: (a) Snapshot of a typical MD simulation system, consisting of a binary LJ liquid mixture (solvent particles are translucent) and a single-layer planar fcc (111) lattice membrane with a circular pore. (b) Schematic of constrained concentration- and pressure-difference algorithm: a force \(f_{i}\) is applied perpendicular to the membrane to each particle of type \(i\) (solute or solvent) within a transition region (red) of width \(d\); the concentration ratio and pressure difference are measured between control regions (yellow) of width \(d_{\text{b}}\) on either side of the membrane a distance \(l_{\text{b}}\) from the transition region; the applied forces are dynamically adjusted to converge the concentration ratio and pressure difference to target values.
we replace Eqs. (4) and (3) by
\[f_{i}(t+\tau_{\rm c})=f_{i}(t)+\frac{\Delta f_{i}(t)}{\alpha} \tag{5}\]
and
\[\Delta f_{i}(t)=\frac{k_{\rm B}T}{d}\left[\ln\left(\frac{\langle c_{i}^{+} \rangle}{\langle c_{i}^{-}\rangle}\right)-\ln\left(\frac{c_{i0}^{+}}{c_{i0}^{- }}\right)\right], \tag{6}\]
respectively, where the time average \(\langle\cdots\rangle\) is taken over the interval \([n\tau_{\rm c},(n+1)\tau_{\rm c})\) for \(n\leq t/\tau_{\rm c}<n+1\) and \(n\in\mathbb{Z}\). As in Ref. [31], Eq. (2) is used to initialize the applied force and convergence to the target concentration ratio is expected to occur over a duration on the order of \(\alpha\tau_{\rm c}\).
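A minimal sketch of this modified solute force update (Eqs. (2), (5), and (6)) is given below. The function and variable names are illustrative, and the time-averaged control-region concentrations are assumed to be supplied by the MD engine at the end of each interval of length \(\tau_{\rm c}\).

```python
import math

def initial_solute_force(kT, d, target_ratio):
    """Ideal-gas estimate of the applied force, Eq. (2)."""
    return -(kT / d) * math.log(target_ratio)

def update_solute_force(f_u, kT, d, alpha, c_plus_avg, c_minus_avg, target_ratio):
    """One force update, applied every tau_c, following Eqs. (5) and (6).

    c_plus_avg, c_minus_avg: solute concentrations in the upper/lower control
    regions, time-averaged over the preceding interval of length tau_c.
    """
    delta_f = (kT / d) * (math.log(c_plus_avg / c_minus_avg) - math.log(target_ratio))
    return f_u + delta_f / alpha

# Example in reduced LJ units with the parameters used here (d = 2, alpha = 50):
f_u = initial_solute_force(kT=1.0, d=2.0, target_ratio=20.0)
f_u = update_solute_force(f_u, kT=1.0, d=2.0, alpha=50.0,
                          c_plus_avg=0.18, c_minus_avg=0.012, target_ratio=20.0)
```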
In principle, the force update scheme specified by Eqs. (3) and (4) from Ref. [31] or our modified version specified by Eqs. (5) and (6) can be used to constrain the concentration difference across the membrane for any or all species in a multicomponent mixture. In Ref. [31], the external force was only applied to solute species (electrolyte ions in that case), whereas no external force was applied to the solvent (water). However, in general, applying an external force to solutes in the transition region without doing the same to the solvent will induce a hydrostatic pressure difference across the membrane that will affect the fluid fluxes, as we show below. Thus, an external force must also be applied to the solvent particles to achieve a desired pressure difference. Instead of constraining the solvent concentration using Eq. (6) to achieve this goal, we use a force update scheme for the solvent that directly controls the pressure difference, which more closely mimics how applied fields would be controlled experimentally. Thus, while we use Eqs. (5) and (6) to update the applied force on the solute particles (which we label as particles of type \(i={\rm u}\)), for the solvent particles (type \(i={\rm v}\)), we replace Eq. (6) by
\[\Delta f_{\rm v}(t)=\frac{A}{N_{\rm v}(t+\tau_{\rm c})}\left(\langle\Delta P \rangle-\Delta P_{0}\right), \tag{7}\]
where \(A\) is the cross-sectional area of the simulation box, \(\Delta P_{0}\) is the target pressure difference, \(N_{\rm v}(t+\tau_{\rm c})\) is the instantaneous number of solvent particles in the transition region at time \(t+\tau_{\rm c}\), \(\Delta P=P^{+}-P^{-}\) is the instantaneous pressure difference between the control regions on either side of the membrane, and the time average \(\langle\cdots\rangle\) is computed identically to that in Eq (6). (Note that \(\langle N_{\rm v}\rangle\) may be a better choice than \(N_{\rm v}(t+\tau_{\rm c})\) in Eq. (7) as its use would reduce the fluctuations in the applied force.) \(P^{\pm}\) is calculated from the diagonal components of the per-atom stress tensor summed over atoms in the control regions using the stress/atom compute in LAMMPS.[42] We note that this method is known to be inaccurate for computing the local pressure in inhomogeneous systems,[43; 44] for which more accurate but more computationally expensive and less widely implemented alternatives[43; 44; 45] exist. However, by keeping the control regions away from strong inhomogeneities in the fluid, this problem can be mitigated. For the initial applied force on the solvent particles, \(f_{\rm v}(t=0)=0\) is used.
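The corresponding solvent update (Eq. (7), used together with Eq. (5)) drives the measured pressure difference towards the target \(\Delta P_{0}\) and could be sketched as below. The names and numbers are illustrative, and the time-averaged control-region pressure difference is assumed to come from the per-atom stress sums described above.

```python
def update_solvent_force(f_v, area, n_solvent_transition, alpha, dP_avg, dP_target=0.0):
    """One solvent force update, applied every tau_c, following Eqs. (5) and (7).

    dP_avg: pressure difference P+ - P- between the control regions, time-averaged
    over the preceding interval of length tau_c.
    n_solvent_transition: instantaneous number of solvent particles in the
    transition region at the update time.
    """
    delta_f = (area / n_solvent_transition) * (dP_avg - dP_target)   # Eq. (7)
    return f_v + delta_f / alpha                                     # Eq. (5)

# The solvent force starts from zero, f_v(t=0) = 0, and is nudged until <dP> -> dP_target.
f_v = update_solvent_force(f_v=0.0, area=2500.0, n_solvent_transition=4000,
                           alpha=50.0, dP_avg=0.02)
```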
### Simulation details
Two system sizes were simulated. Unless otherwise stated, a \(50\times 50\) unit-cell membrane and a total of \(396\,000\) fluid particles were used. In addition, some non-equilibrium simulations were carried out with a larger \(80\times 80\) unit-cell membrane and a total of \(1\,622\,016\) fluid particles to verify that the simulations of the smaller system did not suffer from finite-size effects.
For each system size, fluid particles were placed on two cubic lattices of equal size on either side of the solid surface, which initially contained no pore, with the \(z\) dimension of the box sufficiently large that the particles did not overlap. All fluid particles were initially set to be solvent particles. The system was initially equilibrated in the isothermal-isobaric (NPT) ensemble for \(10^{6}\) time steps at a temperature of \(\epsilon/k_{\rm B}\) and pressure of \(\epsilon/\sigma^{3}\) using a Nose-Hoover[39; 40] style thermostat and barostat[41] with only the \(z\) dimension barostatted for the fluid particles. The average box length in the \(z\) dimension measured over the last \(10^{5}\) time steps, by which time the instantaneous box length had plateaued, was used as the box length in all subsequent constant-volume simulations, which were at a temperature of \(\epsilon/k_{\rm B}\). The \(z\) dimension of the box was deformed at constant velocity over \(10^{3}\) time steps to reach this value. All solid atoms within a distance \(a\) of the origin were deleted to create an approximately circular pore of radius \(a\) in the membrane, as illustrated in Fig. S1 of the supplementary material.
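As a rough illustration of this membrane construction (a sketch only; the production systems were assembled with Moltemplate as described above), a single fcc (111) layer with an approximately circular pore could be generated as follows.

```python
import numpy as np

def fcc111_layer_with_pore(n_rows=50, lattice_const=np.sqrt(2.0), pore_radius=6.0):
    """Single (111) layer of an fcc lattice, i.e. a triangular lattice with
    nearest-neighbour spacing lattice_const/sqrt(2), centred on the origin,
    with all atoms within pore_radius of the origin removed (cf. Fig. S1)."""
    a_nn = lattice_const / np.sqrt(2.0)                 # nearest-neighbour spacing (= sigma here)
    i, j = np.meshgrid(np.arange(n_rows), np.arange(n_rows), indexing="ij")
    x = a_nn * (i + 0.5 * (j % 2))                      # alternate rows offset by half a spacing
    y = a_nn * (np.sqrt(3.0) / 2.0) * j
    pts = np.column_stack([x.ravel(), y.ravel()])
    pts -= pts.mean(axis=0)                             # centre the layer on the origin
    return pts[np.linalg.norm(pts, axis=1) > pore_radius]

membrane_xy = fcc111_layer_with_pore()                  # z = 0 for all membrane atoms
```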
Equilibrium MD and NEMD simulations were carried out for various combinations of the solute-membrane interaction parameters \(\epsilon_{\rm uw}\) and \(\sigma_{\rm uw}\), pore radius \(a\), and average solute mole fraction \(\bar{\chi}\), with \(\epsilon_{\rm uw}/\epsilon=0.5\), \(0.8\), \(1.2\) or \(1.5\), \(\sigma_{\rm uw}/\sigma=0.8\), \(1.2\) or \(1.5\), \(a/\sigma=0\) (equilibrium simulations only), \(3\), \(4\), \(6\), or \(8\), and \(\bar{\chi}=0.05\) or \(0.2\). In addition to NEMD simulations of concentration-gradient-driven flow, simulations of pressure-driven flow without a concentration gradient were also carried out for selected systems using the algorithm in Ref. [46], which is similar in some respects to our constrained concentration- and pressure-difference algorithm, but in which a constant and equal force (\(f_{\rm u}=f_{\rm v}\)) is applied to solute and solvent molecules in the transition region. Most simulations with a constrained concentration difference used a target transmembrane solute concentration ratio of \(c_{\rm u0}^{+}/c_{\rm u0}^{-}=20\), but ratios of \(2\), \(3\), and \(5\) were also used for selected systems to verify linear response of the fluid fluxes to the applied driving force. Unless otherwise stated, the transition region width \(d\), control region width \(d_{\rm b}\), and distance \(l_{\rm b}\) between the transition and control regions in the NEMD simulations were all \(2\sigma\) (see Fig. 1) and the target pressure difference \(\Delta P_{0}\) was zero. Details of the simulated systems and their properties are given in Tables S1-S5 of the supplementary material.
Starting from the final simulation configuration from the previous NPT equilibration step, fluid particles were randomly converted into solute particles to give the desired average solute mole fraction. Additionally, for the
NEMD simulations, the fluid particles were converted to achieve the maximum target solute concentration ratio of 20. Then, for each system geometry, a NEMD simulation was carried out using the constrained concentration- and pressure-difference algorithm with \(\epsilon_{\mathrm{uw}}=\epsilon\) and \(\sigma_{\mathrm{uw}}=\sigma\), reducing the target solute concentration ratio at intervals of \(\sim 8\times 10^{6}\) time steps to obtain steady-state simulation configurations at each of the desired concentration ratios. Fig. 2 shows the variation of the solute concentration and pressure in the control regions with time from one of these simulations, which illustrates the ability of the algorithm to converge and maintain the concentration and pressure difference at the target values. The final configuration at each target concentration ratio was used as the starting configuration for simulations with other values of \(\epsilon_{\mathrm{uw}}\) and \(\sigma_{\mathrm{uw}}\) at that target ratio. All these simulations used \(\alpha=50\) and \(\tau_{\mathrm{c}}=50\tau\).
For the NEMD simulations, an automated equilibration detection method [47] was used to determine when each system had reached a non-equilibrium steady state and to estimate the effective number of uncorrelated samples in order to calculate steady-state averages and statistical uncertainties (at the 95% confidence level) of fluctuating variables. Distribution functions such as solute concentration profiles, fluid density profiles, and pressure profiles were calculated only using data after the first \(8\times 10^{6}\) time steps of each simulation, which ensured the system was at equilibrium or in a non-equilibrium steady state.
It should be noted that changing solvent particles into solute particles with different solute-membrane interaction parameters results in deviations of the bulk pressure in the fluid reservoirs from the target pressure of \(\epsilon/\sigma^{3}\) in the NPT equilibration simulations, particularly at high concentrations, with average pressures in the equilibrium simulations varying from 0.93 to \(1.00\epsilon/\sigma^{3}\) across the range of systems studied (Table S3 of the supplementary material). However, the bulk solution density remained approximately constant, varying from 0.782 to \(0.787\sigma^{-3}\) across the range of solute-membrane interactions.
## III Results and Discussion
### Application of constrained concentration- and pressure-difference algorithm
Fig. 3 depicts the solute concentration, centerline solute concentration (calculated for solute particles within a distance \(\sigma\) of the axis passing through the pore center), and total (solute + solvent) fluid density profiles perpendicular to the membrane for a system to which our constrained concentration- and pressure-difference algorithm was applied, either with or without enforcing the pressure constraint \(\Delta P_{0}=0\) using Eq. (7). (Results for the highest concentration ratio without the pressure constraint are not shown because the concentration ratio never converged to a steady state.) The method without the pressure constraint is equivalent to the original constrained concentration-difference algorithm of Khalili-Araghi _et al._[31] The system in Fig. 3 had a pore radius \(a=6\sigma\), solute mole fraction \(\bar{\chi}=0.2\), and overall repulsive solute-membrane interactions, as indicated by the solute depletion near the membrane in Fig. 3(a). Qualitatively similar results were obtained for other systems, as illustrated in the supplementary material for a system with same solute-membrane interactions but lower (\(\bar{\chi}=0.05\)) solute mole fraction in Fig. S2 and for a system with the same solute mole fraction but with attractive effective solute-membrane interactions in Fig. S3.
The solute concentration profiles with and without the pressure constraint in Fig. 3(a) and (b) are dramatically different. At first glance, the total fluid density profiles in Fig. 3(c) are similar. But closer inspection, as shown in the inset, reveals a transmembrane density difference without the pressure constraint, which is induced by the net force exerted on the solution by the applied force on the solute particles used to constrain the concentration difference. The net force is manifested in a transmembrane
Figure 2: (a) Solute concentration \(c_{\mathrm{u}}\) and (b) pressure \(P\) in the upper (\(+\)) and lower (\(-\)) control regions vs time when applying the constrained concentration- and pressure-difference algorithm for a target concentration ratio \(c_{\mathrm{u0}}^{+}/c_{\mathrm{u0}}^{-}=20\) and target pressure difference \(\Delta P_{0}=0\), starting from a configuration in which solutes were uniformly distributed on either side of the membrane at the target concentration ratio (\(a=6\sigma\), \(\epsilon_{\mathrm{uw}}=\epsilon\), \(\sigma_{\mathrm{uw}}=\sigma\), \(\bar{\chi}=0.2\)).
pressure difference, as shown in the pressure profiles for the same system in Fig. 4 when the pressure difference is not constrained. On the other hand, constraining the pressure difference to \(\Delta P_{0}=0\) results in equal pressures and total fluid densities on either side of the membrane. Fluid flow driven by the pressure difference polarizes the solute concentration near the membrane, resulting in the differing solute concentration profiles in Fig. 3(a) and (b) with and without the pressure constraint. As shown by the fitted curves in Fig. 3(b), the centerline solute concentration when both the concentration and pressure difference are constrained is consistent with theoretical predictions under conditions in which there is a concentration difference but no pressure difference and the solute-membrane interaction range is small compared with the pore radius, in which the concentration profile is expected to be an inverse tangent function of the axial coordinate. [20]
Fluid fluxes were determined by counting the number of solute and solvent particles crossing the membrane as a function of time, as illustrated in Fig. 5 for two systems to which the constrained concentration- and pressure-difference algorithm was applied with zero pressure difference. (In practice, we measured fluxes at the boundary of the simulation box, which was in the center of the transition region (see Fig. 1), which gives the same result as any plane parallel to the membrane at steady state due to particle conservation.) The effective solute-membrane interactions in Fig. 5(a) and (b) are repulsive and attractive, respectively, i.e. solute is depleted and enhanced near the membrane relative to the bulk, respectively. The linearity of the curves vs time shows both systems are in the steady state with constant fluid fluxes for most of the simulation. (Similar behavior was observed for all systems studied.) Consistent with expectations for systems with a transmembrane concentration difference but no pressure difference and for which solute diffusion dominates advection (Peclet number \(\mathrm{Pe}<1\)), the solute flux is in the direction of decreasing concentration, while the total solution flux due to concentration-gradient-driven diffusioosmosis is opposite in direction for membranes that repel and attract the solute, with flow towards increasing and decreasing concentration, respectively. [10]
The flux of particles of type \(i\) was calculated as a
Figure 3: (a) Solute concentration, (b) centerline solute concentration, and (c) total fluid density vs \(z\) coordinate in non-equilibrium simulations with (\(f_{\mathrm{v}}\neq 0\), solid lines) and without (\(f_{\mathrm{v}}=0\), dashed lines) the transmembrane pressure difference constrained to be zero for \(a=6\sigma\), \(\epsilon_{\mathrm{uw}}=0.5\epsilon\), \(\sigma_{\mathrm{uw}}=0.8\sigma\), \(\bar{\chi}=0.2\), and various target solute concentration ratios \(c_{\mathrm{u0}}^{+}/c_{\mathrm{u0}}^{-}\). The inset in (a) zooms in on \(z\) values near the membrane, whereas the inset in (c) zooms in on density values around the bulk density. The dotted black lines in (b) are least-squares fits of the solid lines to a function of the form \(b_{0}+b_{1}\tan^{-1}(z/b_{2})\) with fit parameters \(b_{0}\), \(b_{1}\), and \(b_{2}\).
Figure 4: Pressure profile in non-equilibrium simulations with (\(f_{\mathrm{v}}\neq 0\), solid lines) and without (\(f_{\mathrm{v}}=0\), dashed lines) the transmembrane pressure difference constrained to be zero for the same conditions in Fig. 3.
numerical derivative, \(\dot{N}_{i}=\Delta N_{i}/\Delta t\), of the cumulative particle number \(N_{i}\) crossing the membrane vs time \(t\) in the steady state. The simulation trajectory was divided into intervals of \(2\times 10^{5}\) timesteps = \(10^{3}\tau\) for which the flux was calculated, from which the average flux with statistical uncertainties was obtained using the method described in Sec. II.3. (We verified that the calculated uncertainties were insensitive to halving or doubling this time interval.) Fig. 6 shows the total (\(\dot{N}=\dot{N}_{\rm u}+\dot{N}_{\rm v}\)) and solute (\(\dot{N}_{\rm u}\)) fluxes vs the target concentration ratio for a system with repulsive effective solute-membrane interactions for low (\(\bar{\chi}=0.05\)) and high (\(\bar{\chi}=0.2\)) solute mole fractions with or without the \(\Delta P_{0}=0\) pressure constraint. Consistent with expectations for this system when no transmembrane pressure difference is applied, the total flux is towards increasing concentration whereas the solute flux is towards decreasing concentration when the pressure constraint is enforced. On the other hand, both the total and solute flux are towards decreasing concentration without the pressure constraint, due to pressure-driven flow as a result of the non-zero transmembrane pressure difference. The relative discrepancy between the total solution fluxes with and without the pressure constraint is similar for both solute mole fractions and different concentration ratios, highlighting the general importance of applying the pressure constraint to obtain accurate fluxes in constrained concentration-difference simulations. The solute flux is significantly different with and without the pressure constraint, but the relative discrepancy is much smaller at the lower solute mole fraction, due to the greater importance of solute diffusion over advection (lower Peclet number) at the lower mole fraction, suggesting that there may be circumstances in which the pressure constraint may not greatly affect the solute flux.
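A minimal sketch of this flux estimate from the cumulative crossing counts is given below; the names are illustrative, and the blocking interval matches the \(10^{3}\tau\) used here.

```python
import numpy as np

def steady_state_flux(times, cumulative_crossings, block=1.0e3):
    """Per-block particle fluxes dN/dt from the cumulative number of membrane
    crossings vs time, using finite differences over intervals of length `block`
    (in tau); their mean and spread give the steady-state flux and its uncertainty."""
    edges = np.arange(times[0], times[-1], block)
    n_at_edges = np.interp(edges, times, cumulative_crossings)
    return np.diff(n_at_edges) / block

# Synthetic example: a constant flux of 0.05 particles per tau.
t = np.linspace(0.0, 4.0e4, 2001)
print(steady_state_flux(t, 0.05 * t).mean())   # ~0.05
```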
The pressure difference was not constrained in the original constrained concentration-difference algorithm in Ref. [31]. The study for which the algorithm was developed focused on measuring the ionic current through biological membrane channels due to a concentration difference for a relatively dilute aqueous KCl electrolyte. A concentration ratio of \(0.1\):\(1\,\mathrm{mol\,L^{-1}}\), pore radius \(a\approx 0.55\,\mathrm{nm}\), and pore length \(L\approx 6\,\mathrm{nm}\) were considered for the OmpF porin that was studied. To maintain the concentration ratio across the pore, an external force \(f=0.557\,\mathrm{kcal\,mol^{-1}\,\AA^{-1}}\) was applied to the ions within a \(d=2.5\,\mathrm{\AA}\) transition region. As explained in the next section, this force would
create a pressure difference of magnitude \(|\Delta P|=N_{\rm u}f/A\) across the membrane, where \(N_{\rm u}\) is the number of ions in the transition region and \(A\) is its cross-sectional area. Using \(A=123.5\,\rm\AA\times 123.5\,\rm\AA\) and the average ion concentration of \(\tilde{c}_{\rm u}=1.1\,\rm mol\,L^{-1}\) in these simulations gives \(N_{\rm u}\approx 50\), and thus \(|\Delta P|\approx 2.5\,\rm kPa\). Ignoring the hydraulic resistance of the pore ends for simplicity, which would reduce the flux further, the total solution flux due to this pressure difference can be estimated from the Hagen-Poiseuille equation,[48]\(Q=\pi a^{4}\Delta P/(8\eta L)\), where \(\eta\) is the solution shear viscosity, which we have taken to be that of pure water, \(\eta=8.94\times 10^{-4}\,\rm Pa\,s\).[49] Using the parameters above gives an estimate of the convective ion flux of \(\tilde{c}_{\rm u}Q\approx 10^{4}\,\rm s^{-1}\). The lowest total ionic current that was measured in Ref. [31] was \(\approx 10\,\rm pA\), which gives a lower bound (corresponding to a perfectly ion-selective channel) on the total ion flux of \(\approx 6\times 10^{7}\,\rm s^{-1}\). Thus, the convective ion flux due to the induced pressure difference would have been negligible compared with that due to the applied concentration difference in this study, and so the application of a pressure constraint would have made little difference to the results.
### Transport coefficients: verification of linear response and Onsager reciprocity
From now on, we focus on NEMD simulations using our constrained concentration- and pressure-difference algorithm in which the pressure difference has been constrained to be zero. To quantify the concentration-gradient-driven fluid fluxes for all of the systems studied, we define two transport coefficients - the diffusioosmotic mobility,
\[\kappa_{\rm DO}\equiv-\frac{Q}{\Delta\Pi/(k_{\rm B}T)}, \tag{8}\]
and the solute permeance,
\[\mathcal{P}_{\rm s}\equiv-\frac{J_{\rm u}}{\Delta\Pi/(k_{\rm B}T)}, \tag{9}\]
which characterize the total volumetric solution flux \(Q\) and solute flux \(J_{\rm u}=\dot{N}_{\rm u}\), respectively, for a given transmembrane osmotic pressure difference \(\Delta\Pi\) at temperature \(T\). These definitions follow the notation in our previous work,[20] in which we derived a theory of concentration-gradient-driven flow through 2D membranes for dilute solutions, but the equations above generalize them to arbitrary solute concentrations.[35; 50] Eqs. (8) and (9) reduce to the corresponding equations (Eqs. (33) and (34)) in Ref. [20] in the dilute solution limit, where \(\Delta\Pi=k_{\rm B}T\Delta c_{\rm u}\).[50] The solution flux can be determined from the simulations as \(Q=\frac{\dot{N}}{\bar{\rho}}\), where \(\bar{\rho}\) is the bulk total fluid density, which we calculated as the average of the total fluid density in the upper and lower control regions, i.e. \(\bar{\rho}=(\rho^{+}+\rho^{-})/2\).
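Given the steady-state fluxes and the osmotic driving force, Eqs. (8) and (9) reduce to one-line evaluations; a minimal sketch in reduced LJ units with illustrative input values:

```python
def transport_coefficients(Ndot_total, Ndot_solute, rho_bulk, dPi, kT=1.0):
    """Diffusioosmotic mobility (Eq. 8) and solute permeance (Eq. 9).

    Ndot_total, Ndot_solute: steady-state total and solute particle fluxes.
    rho_bulk: average bulk fluid density from the two control regions.
    dPi: transmembrane osmotic pressure difference.
    """
    Q = Ndot_total / rho_bulk           # volumetric solution flux
    kappa_DO = -Q / (dPi / kT)          # Eq. (8)
    P_s = -Ndot_solute / (dPi / kT)     # Eq. (9)
    return kappa_DO, P_s

kappa_DO, P_s = transport_coefficients(Ndot_total=0.02, Ndot_solute=-0.05,
                                       rho_bulk=0.785, dPi=0.2)
```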
As described recently for a similar NEMD simulation algorithm,[34] the transmembrane osmotic pressure difference \(\Delta\Pi\) and hydrostatic pressure difference \(\Delta P\) can be calculated by considering the balance of applied forces of the fluid particles in the transition region. Decomposing the applied force on each solute particle and each solvent particle as \(f_{\rm u}=f+\delta f_{\rm u}\) and \(f_{\rm v}=f+\delta f_{\rm v}\), respectively, such that \(N_{\rm u}\delta f_{\rm u}+N_{\rm v}\delta f_{\rm v}=0\), the force due to the osmotic pressure difference is \(-A\Delta\Pi=N_{\rm u}\delta f_{\rm u}=-N_{\rm v}\delta f_{\rm v}\), while the force due to the hydrostatic pressure difference is \(-A\Delta P=(N_{\rm u}+N_{\rm v})f=Nf\), where \(N_{\rm u}\) and \(N_{\rm v}\) are the number of solute and solvent particles, respectively, in the transition region, \(N\) is the total number of fluid particles in the transition region, and \(A\) is its cross-sectional area.[34] (Note that, following the convention in Sec. II.2 we have defined \(\Delta\Pi\) and \(\Delta P\) as differences across the membrane rather than across the transition region as was done in Ref. [34], so the sign in the previous equations is opposite that in the equivalent equations in Ref. [34].) From these equations, \(\Delta\Pi\) and \(\Delta P\) can be calculated in terms of the number of particles and applied force on particles of each type in the transition region as
\[\Delta\Pi=-\frac{1}{A}\left(\frac{N_{\rm u}N_{\rm v}}{N_{\rm u}+N_{\rm v}} \right)\left(f_{\rm v}-f_{\rm u}\right) \tag{10}\]
and
\[\Delta P=-\frac{1}{A}\left(N_{\rm u}f_{\rm u}+N_{\rm v}f_{\rm v}\right). \tag{11}\]
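Eqs. (10) and (11) are straightforward to evaluate from the applied forces and the transition-region occupancies; a minimal sketch with illustrative numbers:

```python
def transition_region_force_balance(f_u, f_v, N_u, N_v, area):
    """Transmembrane osmotic (Eq. 10) and hydrostatic (Eq. 11) pressure differences
    from the force balance on the particles in the transition region."""
    dPi = -(1.0 / area) * (N_u * N_v / (N_u + N_v)) * (f_v - f_u)   # Eq. (10)
    dP = -(1.0 / area) * (N_u * f_u + N_v * f_v)                    # Eq. (11)
    return dPi, dP

# Illustrative values in reduced LJ units:
dPi, dP = transition_region_force_balance(f_u=-1.5, f_v=0.03, N_u=800, N_v=4000, area=2500.0)
```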
Alternatively, \(\Delta\Pi\) and \(\Delta P\) can be calculated from the solute concentration and pressure, respectively, in the control regions. In this case, \(\Delta P=P^{+}-P^{-}\), i.e. the pressure difference evaluated in the constrained concentration- and pressure-difference algorithm. On the other hand, \(\Delta\Pi\) can be estimated using the osmotic pressure of an incompressible ideal binary mixture, which can be derived from the entropy of mixing at any solute concentration (assuming the same solute and solvent molecular volume \(v\)) to be[51; 52; 53]
\[\Pi=-\frac{k_{\rm B}T}{v}\ln(1-\chi)=-\rho k_{\rm B}T\ln(1-\chi) \tag{12}\]
where \(\chi\) is the solute mole fraction and \(\rho=1/v\) is the total fluid density (which is assumed to be independent of \(\chi\)). This equation reduces to the standard van't Hoff equation, \(\Pi=c_{\rm u}k_{\rm B}T\), for \(\chi\ll 1\). We have evaluated \(\Delta\Pi=\Pi^{+}-\Pi^{-}\) from Eq. (12) using the solute mole fraction, \(\chi^{+}\) and \(\chi^{-}\), and bulk fluid density, \(\rho^{+}\) and \(\rho^{-}\), in the upper and lower control regions, respectively. We have compared \(\Delta\Pi\) and \(\Delta P\) calculated from the applied force balance in the transition region and from the concentration/density or pressure in the control regions in Figs. S6 and S7, respectively, in the supplementary material for all our non-equilibrium simulations. This includes simulations in which the osmotic pressure difference was non-zero and the pressure difference was constrained to be zero, simulations in which the osmotic pressure difference was non-zero and the pressure difference was unconstrained and thus non-zero, and simulations in which the
osmotic pressure difference was zero and pressure difference was non-zero. This comparison shows perfect agreement between \(\Delta P\) calculated using either method, but \(\Delta\Pi\) calculated from the control regions using Eq. (12) overestimates (by up to \(\approx\)15%) the value obtained from the transition region force balance using Eq. (10) for larger \(\Delta\Pi\). The origin of this discrepancy is likely the absence of a well-defined "bulk" solute concentration in the upper or lower fluid reservoirs to unambiguously define the osmotic pressure in Eq. (12) in simulations with a non-zero concentration difference, since the concentration varies throughout the system (see e.g. Fig. 3). By contrast, even with a transmembrane pressure difference, the pressure profile is essentially flat in either reservoir except in the immediate vicinity of the transition region or membrane (see e.g. Fig. 4), so the \(\Delta P\) between the control regions is representative of the transmembrane pressure difference.
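For completeness, the control-region estimate based on Eq. (12) can be sketched as below (a hedged illustration; as noted above, this route tends to overestimate \(\Delta\Pi\) when the reservoir concentrations are not well defined):

```python
import math

def ideal_mixture_osmotic_pressure(chi, rho, kT=1.0):
    """Osmotic pressure of an incompressible ideal binary mixture, Eq. (12);
    reduces to the van't Hoff form rho*chi*kT for chi << 1."""
    return -rho * kT * math.log(1.0 - chi)

# Illustrative control-region mole fractions and densities:
dPi_control = (ideal_mixture_osmotic_pressure(chi=0.19, rho=0.786)
               - ideal_mixture_osmotic_pressure(chi=0.012, rho=0.785))
```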
Using \(\Delta\Pi\) from the force balance in the transition region, we have verified that the transport coefficients in Eqs. (8) and (9) were independent of the system size and transition region width for selected systems in Fig. S8. We have also verified that the transport coefficients were independent of \(\Delta\Pi\), i.e. the systems were in the regime of linear response of the fluid fluxes to the applied osmotic driving force, for selected systems as shown in Fig. 7 for \(\kappa_{\rm DO}\) and in Fig. S9 of the supplementary material for \(\mathcal{P}_{\rm s}\). The three selected systems encompassed those most likely to deviate from linear response, namely those with the three highest solution flux magnitudes (which includes those with the two highest solute flux magnitudes) at the highest target concentration ratio \(c_{\rm u0}^{+}/c_{\rm u0}^{-}=20\) for the pore radius \(a=6\sigma\) that was used in most of the simulations. As indicated by solid symbols in these figures, \(\kappa_{\rm DO}\) and \(\mathcal{P}_{\rm s}\) are independent of \(\Delta\Pi\) up to the highest \(\Delta\Pi\), except for the system with the largest fluxes (\(\epsilon_{\rm uw}=1.5\epsilon\), \(\sigma_{\rm uw}=1.5\sigma\), \(\chi=0.2\)), for which linear response appears to hold up to \(\Delta\Pi\approx 0.15\epsilon/\sigma^{3}\) (\(c_{\rm u0}^{+}/c_{\rm u0}^{-}=3\)). Although a few simulations were carried out for systems with a larger pore radius of \(a=8\sigma\), for which the fluid fluxes were greater than those with \(a=6\sigma\) for the same solution and membrane properties, the fluxes were well within the range in which the system with the largest fluxes was still in the linear-response regime. Thus, we can be confident that all systems besides that one were in the linear-response regime for the conditions simulated.
In the linear response regime, the fluid fluxes due to transmembrane differences in the hydrostatic pressure \(\Delta P\) and osmotic pressure \(\Delta\Pi\) (or, equivalently, chemical potential, \(\Delta\mu\)), are given by [53]
\[\left[\begin{array}{c}Q\\ J_{\rm u}-\bar{c}_{\rm u}Q_{\rm v}\end{array}\right]=\left[\begin{array}{cc} \Lambda_{11}&\Lambda_{12}\\ \Lambda_{21}&\Lambda_{22}\end{array}\right]\left[\begin{array}{c}-\Delta P \\ -\Delta\mu\end{array}\right]\,, \tag{13}\]
or, using \(\Delta\mu=\Delta\Pi/\bar{c}_{\rm u}\),[34] with \(L_{11}=\Lambda_{11}\), \(L_{12}=\Lambda_{12}/\bar{c}_{\rm u}\), \(L_{21}=\Lambda_{21}/\bar{c}_{\rm u}\), and \(L_{22}=\Lambda_{22}/\bar{c}_{\rm u}^{2}\),
\[\left[\begin{array}{c}Q\\ J_{\rm u}/\bar{c}_{\rm u}-Q_{\rm v}\end{array}\right]=\left[\begin{array}{ cc}L_{11}&L_{12}\\ L_{21}&L_{22}\end{array}\right]\left[\begin{array}{c}-\Delta P\\ -\Delta\Pi\end{array}\right]\,, \tag{14}\]
where \(\Lambda_{12}=\Lambda_{21}\) and \(L_{12}=L_{21}\) by the Onsager reciprocal relations,[53; 54] \(Q_{\rm v}=J_{\rm v}/\bar{c}_{\rm v}=\dot{N}_{\rm v}/\bar{c}_{\rm v}\) is the volumetric solvent flux, and \(\bar{c}_{\rm u}\) and \(\bar{c}_{\rm v}\) are the average bulk solute and solvent concentrations, respectively, which we have calculated as the average of the concentrations in the upper and lower control regions, i.e. \(\bar{c}_{i}=(c_{i}^{+}+c_{i}^{-})/2\). Note that a number of previous studies on concentration-gradient-driven transport [55; 56; 57; 33; 50; 58; 34] have not distinguished between the total volumetric solution flux \(Q\) and volumetric solvent flux \(Q_{\rm v}\) in Eqs. (13) or (14), although this distinction is clear in the derivation by de Groot and Mazur [53] and in equations used in other studies. [59; 60] For the dilute solutions investigated in most of these studies, this distinction is not important, since \(Q\approx Q_{\rm v}\) in this regime, but \(Q\) and \(Q_{\rm v}\) can differ significantly at high solute concentrations such as studied here, especially when the solute and solvent fluxes are in opposite directions.
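In the linear-response regime, the off-diagonal coefficients follow directly from the two kinds of NEMD runs via Eq. (14); a minimal sketch of how \(L_{12}\) and \(L_{21}\) are extracted (illustrative inputs):

```python
def L12_from_concentration_gradient_run(Q, dPi):
    """Off-diagonal coefficient from a run with dP = 0 and dPi != 0, Eq. (14)."""
    return -Q / dPi

def L21_from_pressure_driven_run(J_u, Ndot_v, c_u_bar, c_v_bar, dP):
    """Off-diagonal coefficient from a run with dPi = 0 and dP != 0, Eq. (14)."""
    Q_v = Ndot_v / c_v_bar                    # volumetric solvent flux
    return -(J_u / c_u_bar - Q_v) / dP

# Onsager reciprocity requires L12 == L21 within the statistical uncertainties.
L12 = L12_from_concentration_gradient_run(Q=0.025, dPi=0.2)
L21 = L21_from_pressure_driven_run(J_u=0.02, Ndot_v=0.1, c_u_bar=0.157, c_v_bar=0.628, dP=0.05)
```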
From the definitions of the transport coefficients in Eqs. (8) and (14), \(L_{12}=\kappa_{\rm DO}/(k_{\rm B}T)\), and given that \(T=\epsilon/k_{\rm B}\) in all our simulations, this means that \(L_{12}\) and \(\kappa_{\rm DO}\) have the same numerical value in reduced LJ units in this study. For two of the systems for which \(L_{12}\) was measured in Fig. 7, we also measured \(L_{21}\) using the definition in Eq. (14) in simulations of pressure-driven flow in the absence of a transmembrane concentration difference for several pressure differences using the NEMD algorithm in Ref. [46]. These results are presented in Fig. 7, and show that the \(L_{12}=L_{21}\) reciprocal relation is verified, at least for the lowest applied pressures, with the consistency between our algorithm and the previously established and widely applied method in Ref. [46] demonstrating the validity of our method for quantifying non-equilibrium concentration-gradient-driven transport.
Fig. 7 also shows that the pressure-driven flow simulations deviate from linear response at much lower values of \(\Delta P\) than the \(\Delta\Pi\) values at which deviations occur in the concentration-gradient-driven flow simulations. This means that much longer simulations were required to obtain roughly comparable statistical uncertainties in the linear-response regime for the pressure-driven flow simulations compared with the concentration-gradient-driven flow simulations, highlighting the computational benefits of directly measuring the diffusioosmotic mobility in NEMD simulations with an applied concentration gradient, at least for 2D membrane systems.
### Comparison of simulation vs theory
We have previously derived a theory of fluid transport through a circular pore in an infinitesimally thin planar membrane due to a transmembrane concentration difference [20] by solving the continuum hydrodynamic (Stokes, advection-diffusion, and continuity) equations for low-Reynolds-number steady-state flow of a dilute solution of an incompressible Newtonian fluid, under the assumptions that solute diffusion dominates solute advection (Peclet number \(\mathrm{Pe}\ll 1\)) and that the effective solute-membrane interaction potential \(U\) is small compared with the thermal energy \(k_{\mathrm{B}}T\). This theory is straightforwardly generalized to arbitrary solute concentrations, by analogy with a related theory derived for concentration-gradient-driven fluid transport parallel to a planar surface at high solute concentration: [33; 50] the equations derived in Ref. [20] for the transport coefficients quantifying the fluid fluxes at low concentration apply at high concentration, while the distinction between low and high solute concentrations is manifested in the concentration dependence of the osmotic pressure driving force in Eqs. (8) and (9). Thus, the diffusioosmotic mobility \(\kappa_{\mathrm{DO}}\) quantifying the total solution flux is [20]
\[\kappa_{\mathrm{DO}}=\frac{2k_{\mathrm{B}}Ta^{3}}{\pi\eta}\int_{0}^{1}\mathrm{ d}\zeta\,\zeta^{2}\int_{0}^{\infty}\mathrm{d}\nu\,\left(\frac{e^{-U/(k_{ \mathrm{B}}T)}-1}{1+\nu^{2}}\right), \tag{15}\]
and the solute permeance \(\mathcal{P}_{\mathrm{s}}\) quantifying the solute flux (evaluated at the pore mouth at \(z=0\)) is [20]
\[\mathcal{P}_{\mathrm{s}}=2D\int_{0}^{a}\mathrm{d}r\frac{re^{-U/(k_{\mathrm{B} }T)}}{\sqrt{a^{2}-r^{2}}}, \tag{16}\]
where \(a\) is the pore radius, \(\eta\) is the solution shear viscosity, \(D\) is the solute diffusivity, and the oblate-spheroidal coordinates \(\zeta\) and \(\nu\) are defined in terms of the radial and axial coordinates by \(r=a\sqrt{(1+\nu^{2})(1-\zeta^{2})}\) and \(z=a\nu\zeta\), respectively. The effective solute-membrane interaction potential \(U\), which includes contributions both from direct solute-membrane interactions and indirect solvent-mediated interactions, is defined by [20]
\[c(\zeta,\nu)\equiv c_{\mathrm{u}\infty}(\zeta,\nu)e^{-U(\zeta,\nu)/(k_{ \mathrm{B}}T)}, \tag{17}\]
where \(c(\zeta,\nu)\) is the solute concentration distribution and \(c_{\mathrm{u}\infty}(\zeta,\nu)\) is a hypothetical solute concentration distribution for the same boundary conditions but with \(U=0\).
The theory yields simple scaling relationships for the transport coefficients as a function of the pore radius \(a\) and strength \(\epsilon\) and range \(\lambda\) of \(U\) in the limits of weak interactions (\(\epsilon\ll k_{\mathrm{B}}T\)) and a small (\(\lambda\ll a\)) or large (\(\lambda\gg a\)) interaction range relative to the pore size. [20] Although none of our simulations correspond strictly to the \(\lambda\ll a\) or \(\lambda\gg a\) limits, \(\lambda\) is significantly smaller than \(a\) for all but the smallest pore studied. Our simulation results are consistent with the predicted scaling in the \(\lambda\ll a\) limit, in which \(\kappa_{\mathrm{DO}}\) and \(\mathcal{P}_{\mathrm{s}}\) are both expected to be proportional to the pore radius \(a\), [20] as shown in Fig. 8. Fig. 8 also shows that \(\kappa_{\mathrm{DO}}\) is approximately independent of the average solute mole fraction for the simulated conditions, whereas \(\mathcal{P}_{\mathrm{s}}\) depends significantly on the average solute mole fraction. The behavior of \(\mathcal{P}_{\mathrm{s}}\) appears to be due to a subtle interplay of the opposing diffusive and advective solute fluxes for the systems in Fig. 8, which is not captured by the theory as it assumes that solute advection is negligible.
The dependences of \(\kappa_{\mathrm{DO}}\) and \(\mathcal{P}_{\mathrm{s}}\) on the solute-membrane interaction strength and range parameters, \(\epsilon_{\mathrm{uw}}\) and \(\sigma_{\mathrm{uw}}\), are given in the supplementary material in Figs. S10 and S11, respectively. It should be noted that the strength and range of \(U\) are not simply proportional to these parameters, due to the complex many-body contributions to the effective interactions. Interestingly, Fig. S10 shows a linear dependence of \(\kappa_{\mathrm{DO}}\) and \(\mathcal{P}_{\mathrm{s}}\) on \(\epsilon_{\mathrm{uw}}\) for fixed \(\sigma_{\mathrm{uw}}\), which is consistent with the predicted scaling with the effective solute-membrane interaction strength \(\epsilon\) for \(\epsilon\ll k_{\mathrm{B}}T\), even though the direct solute-membrane interactions are certainly not weak in all cases. The dependence of \(\kappa_{\mathrm{DO}}\) and \(\mathcal{P}_{\mathrm{s}}\) on \(\sigma_{\mathrm{uw}}\) for fixed \(\epsilon_{\mathrm{uw}}\) shown in Fig. S11 is more complex, in part because \(\sigma_{\mathrm{uw}}\) controls not only the range but also the strength of the direct solute-membrane interactions, since a given solute particle interacts with more membrane particles as \(\sigma_{\mathrm{uw}}\) increases; thus, \(\mathcal{P}_{\mathrm{s}}\) varies non-monotonically while \(\kappa_{\mathrm{DO}}\) even changes sign with increasing \(\sigma_{\mathrm{uw}}\).
We have used Eqs. (15)-(17) to predict \(\kappa_{\mathrm{DO}}\) and \(\mathcal{P}_{\mathrm{s}}\) for all the simulated systems without using any information from the NEMD simulations. We used analytical equations fitted to equilibrium MD simulation data for the diffusivity and shear viscosity of the LJ fluid over a wide range of density and temperature for the same interaction cutoff distance as in our simulations [61] to obtain \(D=0.0697\sigma^{2}/\tau\) and \(\eta=1.84\epsilon\tau/\sigma^{3}\) for the temperature \(T=\epsilon/k_{\mathrm{B}}\) and total bulk density \(\rho\approx 0.787\sigma^{-3}\) in all the simulations. To obtain \(U\) from Eq. (17), we used the solute concentration profile from an equilibrium MD simulation with the same solute-membrane interaction parameters, pore radius, and average solute mole fraction as the NEMD simulation for which the transport coefficients were being predicted. In this case, \(c_{\mathrm{u}\infty}(\zeta,\nu)\) in Eq. (17) is a constant and equal to the bulk solute concentration in the equilibrium simulation. Distributions of the solute concentration and total fluid density in all the equilibrium simulations, in one dimension (1D) as a function of the axial (\(z\)) or radial (\(r\)) coordinate and in two dimensions (2D) as a function of both \(r\) and \(z\), are given in Figs. S29-S38 of the supplementary material. The integrals in Eqs. (15) and (16) were computed using the quad and nquad functions, respectively, in the SciPy Python package, and 2D solute concentration distributions were interpolated using a bivariate spline with the RectBivariateSpline SciPy function.
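For readers wishing to reproduce this step, a minimal Python sketch of the nested quadrature in Eq. (15) is shown below; the model potential \(U(\zeta,\nu)\), pore radius, and viscosity are placeholder assumptions for illustration only, whereas in the actual calculations \(U\) was obtained from the interpolated equilibrium concentration distribution via Eq. (17).

```
import numpy as np
from scipy.integrate import quad

# Reduced LJ units: k_B*T = 1; eta matches the fitted value quoted above, a is an assumed example value.
kT = 1.0
eta = 1.84   # shear viscosity, epsilon*tau/sigma^3
a = 8.0      # pore radius in sigma (assumed example value)

def U(zeta, nu):
    """Placeholder effective solute-membrane potential in oblate-spheroidal coordinates.
    In practice U(zeta, nu) is obtained from the equilibrium concentration profile via Eq. (17)."""
    z = a * nu * zeta                    # axial distance from the membrane plane
    return -0.5 * np.exp(-z / 1.0)       # assumed short-ranged attraction (illustrative only)

def inner(zeta):
    # nu-integral of (exp(-U/kT) - 1) / (1 + nu^2) at fixed zeta
    integrand = lambda nu: (np.exp(-U(zeta, nu) / kT) - 1.0) / (1.0 + nu**2)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Eq. (15): kappa_DO = (2 kT a^3 / (pi eta)) * integral over zeta of zeta^2 * inner(zeta)
outer, _ = quad(lambda zeta: zeta**2 * inner(zeta), 0.0, 1.0)
kappa_DO = 2.0 * kT * a**3 / (np.pi * eta) * outer
print(f"kappa_DO = {kappa_DO:.4f} (reduced LJ units)")
```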
The pore radius \(a\) in Eqs. (15) and (16) corresponds to where the hydrodynamic boundary conditions are applied in the continuum theory and does not necessarily correspond to the definition of the pore radius used up to this point, which was the distance from the pore center within which the centers of solid atoms were absent. The Gibbs dividing surface from equilibrium MD simulations has previously been shown to describe the hydrodynamic boundary position in NEMD simulations of fluid flow accurately, [62] and so we have used this prescription to define an effective pore radius \(a_{\rm h}\) to replace the actual pore radius \(a\) in Eqs. (15) and (16), given by
\[\int_{0}^{a_{\rm h}}r\left[\rho_{\infty}-\rho(r,z=0)\right]\,{\rm d}r=\int_{a _{\rm h}}^{\infty}r\rho(r,z=0)\,{\rm d}r, \tag{18}\]
where the total fluid density \(\rho(r,z=0)\) in the plane of the membrane pore at \(z=0\) was obtained from the 2D equilibrium distribution \(\rho(r,z)\) by the same bivariate spline interpolation described above for the solute concentration distribution. (In practice the second integral was calculated up to a finite value of \(r\) beyond which the total fluid density \(\rho(r,z=0)\) was zero.) As shown in Fig. S39 of the supplementary material, \(a_{\rm h}\) is within a few percent of \(a\), so either \(a\) or \(a_{\rm h}\) could be used in Eqs. (15) and (16) with little difference. (The dependence of \(a_{\rm h}\) on the solute-membrane interaction strength and range parameters, \(\epsilon_{\rm uw}\) and \(\sigma_{\rm uw}\), for fixed \(a\) is also given in the supplementary material in Figs. S40 and S41.)
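To make the prescription of Eq. (18) concrete, the sketch below finds \(a_{\rm h}\) by root-finding from a tabulated radial density profile \(\rho(r,z=0)\); the smooth synthetic profile is an assumption used purely for illustration.

```
import numpy as np
from scipy.optimize import brentq

rho_inf = 0.787                                    # bulk total fluid density (sigma^-3)
r = np.linspace(0.0, 12.0, 1201)                   # radial coordinate in the membrane plane (sigma)
rho = rho_inf / (1.0 + np.exp((r - 8.0) / 0.3))    # assumed density drop-off near the pore rim

def gibbs_imbalance(a_h):
    """Left-hand side minus right-hand side of Eq. (18) for a trial radius a_h."""
    inside = r <= a_h
    lhs = np.trapz(r[inside] * (rho_inf - rho[inside]), r[inside])
    rhs = np.trapz(r[~inside] * rho[~inside], r[~inside])
    return lhs - rhs

a_h = brentq(gibbs_imbalance, r[1], r[-2])
print(f"effective hydrodynamic pore radius a_h = {a_h:.3f} sigma")
```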
Fig. 9 compares the diffusioosmotic mobility \(\kappa_{\rm DO}\) from the simulations with that calculated from the theory for all the simulated systems (which includes variations in pore radii, average solute mole fraction, and the strength and range of solute-membrane interactions) except for those with the strongest attractive solute-membrane interactions (\(\epsilon_{\rm uw}=1.5\epsilon\), \(\sigma_{\rm uw}=1.5\sigma\)). The latter data are not included because they occur on a very different scale from the rest of the data, making visualization on the same figure difficult, and because the assumption in the theory of weak solute-membrane interactions certainly breaks down for these systems. A comparison of all the simulated systems in the linear-response regime is given in Fig. S12 of the supplementary material. The simulations and theory are also compared in Fig. S13 of the supplementary material for calculations using the actual pore radius \(a\) instead of the effective pore radius \(a_{\rm h}\) in the theory, showing that this choice makes little quantitative difference.
There is good quantitative agreement between the theory and simulations for most of the data in Fig. 9, although deviations are evident for larger magnitudes of \(\kappa_{\rm DO}\), especially for positive \(\kappa_{\rm DO}\). Discrepancies between the theory and simulations are not entirely surprising, given the number of approximations made in the theory, namely weak effective solute-membrane interactions, a small Peclet number, and an infinitesimally thin planar membrane. The discrepancies appear to be largely due to the breakdown of the assumption of weak effective solute-membrane interactions, as indicated by the correlation between the deviation of the theory from the simulation results and the surface solute excess \(\Gamma\) used to color the data points in Fig. 9, which quantifies the degree of adsorption (\(\Gamma>0\)) or depletion (\(\Gamma<0\)) of the solute at the membrane surface relative to the bulk. \(\Gamma\) in Fig. 9 was calculated from the solute concentration profile \(c_{\rm u}(z)\) perpendicular to a membrane containing no pore in an equilibrium MD simulation of a system with otherwise identical fluid and membrane properties to the
NEMD simulation using
\[\Gamma=\int_{0}^{\infty}\left(\frac{c_{\rm u}(z)}{c_{\rm u\infty}}-1\right)\,{\rm d }z, \tag{19}\]
where, in practice, the upper integration limit in Eq. (19) was taken to be the maximum value of \(z\) in the simulation box (this choice was not crucial as \(c_{\rm u}(z)\to c_{\rm u\infty}\) within the simulation box).
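The surface solute excess of Eq. (19) is a one-dimensional integral of the tabulated concentration profile; a minimal sketch, assuming a simple exponential adsorption profile, is given below.

```
import numpy as np

c_inf = 0.05                                   # bulk solute concentration (example value)
z = np.linspace(0.0, 15.0, 1501)               # distance from the membrane surface (sigma)
c = c_inf * (1.0 + 2.0 * np.exp(-z / 0.8))     # assumed adsorption profile (illustrative only)

# Eq. (19): the integrand decays to zero once c(z) has relaxed to its bulk value
gamma = np.trapz(c / c_inf - 1.0, z)
print(f"surface solute excess Gamma = {gamma:.3f} sigma")   # Gamma > 0 indicates adsorption
```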
On the other hand, there does not appear to be a clear correlation between the discrepancies between the theory and simulation for \(\kappa_{\rm DO}\) and other potentially relevant parameters such as the Peclet number \({\rm Pe}\), pore radius \(a\), or solute-membrane interaction strength and range parameters, \(\epsilon_{\rm uw}\) and \(\sigma_{\rm uw}\), as shown in Figs. S14-S17 of the supplementary material, in which the data points have been colored by the value of these parameters. If anything, the discrepancies show the opposite trend vs \({\rm Pe}\) to that expected, with the deviation between theory and simulation increasing with decreasing \({\rm Pe}\). We estimated \({\rm Pe}\), which measures the relative magnitude of solute advection to solute diffusion, by
\[{\rm Pe}=\left|\frac{\bar{c}_{\rm u}Q_{\rm v}}{J_{\rm u}-\bar{c}_{\rm u}Q_{ \rm v}}\right|, \tag{20}\]
where we have used the solvent volumetric flux \(Q_{\rm v}\) instead of the total volumetric flux \(Q\) to quantify the advective solute flux because the total flux \(Q\) includes the diffusive component. The assumption of an infinitesimally thin membrane is expected to become less accurate as the aspect ratio of the pore decreases, which corresponds to decreasing pore radius. As noted above, although the effective solute-membrane interaction strength depends on \(\epsilon_{\rm uw}\) and \(\sigma_{\rm uw}\), the dependence on either parameter is not straightforward, and thus the lack of correlation of the discrepancies in \(\kappa_{\rm DO}\) with either parameter is not unexpected.
Fig. 10 shows a similar comparison between theory and simulation to Fig. 9, but for the solute permeance \({\cal P}_{\rm s}\). Since solute advection is assumed to be negligible (\({\rm Pe}\ll 1\)) in the theory, but is clearly significant in many of the simulations, for which \(0.02\lesssim{\rm Pe}\lesssim 0.5\), in Fig. 11 we have also compared \({\cal P}_{\rm s}\) from the theory with the solute permeance from the simulations calculated from the diffusive flux only, which we define as
\[{\cal P}_{\rm s,diff}\equiv-\frac{(J_{\rm u}-\bar{c}_{\rm u}Q_{\rm v})}{\Delta \Pi/(k_{\rm B}T)}. \tag{21}\]
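The quantities in Eqs. (20) and (21) follow directly from the measured fluxes; the short sketch below uses placeholder flux values purely to illustrate the arithmetic.

```
# Placeholder flux values in reduced LJ units (not taken from the actual simulations).
J_u = 0.012           # total solute flux through the pore
Q_v = 0.080           # solvent volumetric flux
c_u_bar = 0.05        # average solute concentration
dPi_over_kT = 0.50    # osmotic pressure difference divided by k_B*T

advective = c_u_bar * Q_v
diffusive = J_u - advective

Pe = abs(advective / diffusive)          # Eq. (20): advective vs diffusive solute transport
P_s_diff = -diffusive / dPi_over_kT      # Eq. (21): permeance based on the diffusive flux only

print(f"Pe = {Pe:.3f}, P_s,diff = {P_s_diff:.4f}")
```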
As with Fig. 9, we have excluded data for the strongest attractive solute-membrane interactions (\(\epsilon_{\rm uw}=1.5\epsilon\), \(\sigma_{\rm uw}=1.5\sigma\)) from Figs. 10 and 11, but the corresponding plots containing all the simulation data can be found in Figs. S18 and S19, respectively, in the supplementary material. As for \(\kappa_{\rm DO}\), the use of the actual pore radius \(a\) or effective pore radius \(a_{\rm h}\) in the theory makes little difference, as shown in Fig. S20 in the supplementary material.
The agreement between the theory and simulations is not as good for the solute permeance as for \(\kappa_{\rm DO}\), but the agreement appears to improve by excluding the advective flux from the simulation definition of the solute permeance. As for \(\kappa_{\rm DO}\), the discrepancies between the theory and simulation appear to be most strongly correlated with the strength of the effective solute-membrane interactions as quantified by the surface solute excess used to color the symbols in Figs. 10 and 11 (similar plots for all the simulated systems colored by the Peclet number Pe, pore radius \(a\), or solute-membrane interaction strength and range parameters, \(\epsilon_{\rm{uw}}\) and \(\sigma_{\rm{uw}}\), are given in Figs. S21-S28 in the supplementary material).
## IV Conclusion
We have developed a constrained concentration- and pressure-difference algorithm for non-equilibrium molecular dynamics simulations of steady-state fluid transport driven by concentration and/or pressure differences across a porous membrane in a system with periodic boundary conditions. Our algorithm adapts a previous algorithm by Khalili-Araghi _et al._, which controls the transmembrane concentration difference by applying an external force to solute particles in a transition region far from the membrane, by also applying an external force to solvent particles in the transition region to control the transmembrane pressure difference. Applying this algorithm to a model system comprising a binary Lennard-Jones liquid mixture and a 2D Lennard-Jones membrane containing a circular pore, we have simulated steady-state concentration-gradient-driven fluid transport across a 2D membrane with molecular resolution for the first time, enabling accurate quantification of the solution and solute fluxes due to a given applied concentration difference. We have shown that the application of the pressure-difference constraint has a significant effect on the solute concentration distribution across the membrane and the steady-state solution flux due to a transmembrane concentration difference for both low and high average solute concentrations, although the solute flux is less affected by the pressure-difference constraint at low solute concentrations. We have also shown that the solution flux due to an applied concentration difference generated by our algorithm is consistent with Onsager reciprocity in the linear-response regime by comparison with fluid fluxes due to an applied pressure difference. Furthermore, we have shown that directly simulating a transmembrane concentration difference is far more efficient for quantifying the concentration-gradient-driven solution flux in the 2D membrane systems studied than the indirect approach of applying the Onsager reciprocal relations to pressure-driven flow simulations, because only very small pressure differences can be applied before the pressure-driven fluid fluxes deviate from linear behavior. Finally, we have shown that our recently developed theory of concentration-gradient-driven flow across a 2D membrane, [20] although derived for a continuum fluid model, gives reasonably good quantitative agreement with the molecular simulations, especially for the total fluid flux, demonstrating its utility for quantifying the fluid transport even in molecular systems. Nevertheless, deviations from the simulation results are evident particularly for strong solute-membrane interactions, for which the assumptions of the theory break down.
## Supplementary material
The supplementary material contains details of the parameters and properties of all the simulated systems, additional non-equilibrium simulation results (concentration, density, and pressure distributions; comparison of transmembrane osmotic pressure and hydrostatic pressure differences calculated by different methods; verification of linear response for the solute permeance; verification of the independence of measured transport coefficients of system size and transition region width; additional plots of transport coefficients vs system parameters; additional comparisons of transport coefficients obtained from simulation and theory), and solute concentration and total density distributions from equilibrium simulations.
###### Acknowledgements.
This work was supported by the Australian Research Council under the Discovery Projects funding scheme (Grant No. DP210102155). This research was undertaken with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government, and from the University of Adelaide's Phoenix High-Performance Computing service.
Figure 11: Solute permeance from simulations (calculated from diffusive flux only) vs theory for all systems for \(c_{\rm{u0}}^{+}/c_{\rm{u0}}^{-}=20\) except for those with the strongest attractive solute–membrane interactions (\(\epsilon_{\rm{uw}}=1.5\epsilon\), \(\sigma_{\rm{uw}}=1.5\sigma\)). Symbols are colored by the surface solute excess \(\Gamma\) (units: \(\sigma\)) and different symbol shapes distinguish high (\(\bar{\chi}=0.2\), circles) and low (\(\bar{\chi}=0.05\), squares) average solute mole fractions. The simulation and theory values are equal along the solid line.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
Daniel J. Rankin: Methodology (equal); Investigation (supporting); Formal Analysis (supporting); Writing - original draft (equal); Writing - review & editing (supporting). David M. Huang: Conceptualization (lead); Methodology (equal); Investigation (lead); Data curation (lead); Formal Analysis (lead); Funding acquisition (lead); Project administration (lead); Supervision (lead); Writing - original draft (equal); Writing - review & editing (lead)
## Data Availability
Moltemplate [37] input scripts for creating the initial simulation configurations and sample LAMMPS [35; 36] input scripts for running each type of MD simulation in this study can be found at [https://doi.org/10.25909/17139593](https://doi.org/10.25909/17139593). Other data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.06310 | Using conservative voltage reduction and dynamic thermal rating for
congestion management of power network | Increasing the amount of electric power that is used on the demand side has
brought more attention to the peak-load management of the distribution network
(DN). The creation of infrastructures for smart grids, the efficient
utilization of the distributed network's components, and the appropriate
administration of the distributed network would result in a valuable solution
for the operators of the distributed network. As a result, a framework for
peak-load management is given in this research. Within this framework, the
real-time rating of the components and the voltage-dependent characteristics of
the electric loads work together to assist the DN operator in effectively
navigating peak periods. The combination of the conservation voltage reduction
(CVR) and the dynamic thermal rating (DTR) of the components that make up the
DN produces outcomes that are more helpful than any of these factors alone
could provide. This is true even though each of these factors contributes to
the efficient functioning of the DN. According to the findings, as compared to
the individual implementation of CVR, the simultaneous utilization of DTR and
CVR results in a cost-savings rise at peak events which is 58.75 percentage
points more than the individual implementation. In addition, a discussion is
offered concerning the current difficulties that are being experienced by the
feeders that are providing the voltage-dependent constant-power loads during
the utilization of the CVR, which are handled by the dynamic rating of the
components that make up the DN. | Ramin Nourollahi, Rasoul Esmaeilzadeh | 2023-09-12T15:21:21Z | http://arxiv.org/abs/2309.06310v1 | Using conservative voltage reduction and dynamic thermal rating for congestion management of power network
###### Abstract
Increasing the amount of electric power that is used on the demand side has brought more attention to the peak-load management of the distribution network (DN). The creation of infrastructures for smart grids, the efficient utilisation of the distributed network's components, and the appropriate administration of the distributed network would result in a valuable solution for the operators of the distributed network. As a result, a framework for peak-load management is given in this research. Within this framework, the real-time rating of the components and the voltage-dependent characteristics of the electric loads work together to assist the DN operator in effectively navigating peak periods. The combination of the conservation voltage reduction (CVR) and the dynamic thermal rating (DTR) of the components that make up the DN produces outcomes that are more helpful than any of these factors alone could provide. This is true even though each of these factors contributes to the efficient functioning of the DN. According to the findings, as compared to the individual implementation of CVR, the simultaneous utilisation of DTR and CVR results in a cost-savings rise at peak events that is 58.75 percentage points more than the individual implementation. In addition, a discussion is offered concerning the current difficulties that are being experienced by the feeders that are providing the voltage-dependent constant-power loads during the utilisation of the CVR, which are handled by the dynamic rating of the components that make up the DN.
Conservation voltage reduction, dynamic thermal rating, distribution network, demand response programs
## I Introduction
The significant expansion of electric cars, distributed generation, and urban development necessitates the establishment of communication and control infrastructures to effectively manage the distribution networks (DNs) [1]. Moreover, the yearly rise in load demands and the phenomenon of global warming pose a significant risk to the DNs and their associated equipment, particularly during hot summer days. Three primary challenges to improving equipment capacity, whether via replacement or addition, are the privatisation of distribution businesses, financial constraints, and excessive prices. This challenge has compelled the owners of DNs to contemplate the optimisation of their existing infrastructures, which may necessitate the use of peak management strategies by the DN operator during periods of heavy loading. The establishment of communication and control infrastructures presents an advantageous prospect for operators to use advanced DN management systems in order to efficiently oversee the network during periods of heavy usage. A comprehensive peak management system has a range of components, among which demand response programmes (DRPs) and voltage control programmes are particularly significant. In addition, the transformer and line facilities of the DN are often constrained by their temperature limits in order to mitigate the risk of overheating incidents. Alongside the constraint of line capacity, these thermal constraints might impose some restrictions on the voltage control capabilities of DNs. Additionally, real-time monitoring of the system's state may significantly contribute to the effective management of the DN's operation, particularly during critical periods [2]. Therefore, the use of dynamic thermal rating (DTR), as opposed to traditional static rating, has potential for the DN in optimising energy management at elevated load levels, particularly during peak periods.
### _Literature review_
Voltage regulation in distribution networks (DN) refers to a control approach used to ensure that the voltage inside the DN remains within a certain range. The function of voltage regulation in the DN varies depending on the short-term and long-term circumstances of the network. This regulation involves adjusting the voltage levels, either by raising or lowering them, to fulfil certain objectives within the DN [3]. Voltage regulation may be carried out in order to regulate reactive power or to avoid voltage drops at DN buses during periods of heavy loading. Regarding load management, voltage control may be implemented by conservation voltage reduction (CVR) in order to mitigate peak load levels and achieve energy savings [4]. Experimental investigations have examined the significant impact of CVR on the reduction of energy consumption and demand levels in DNs [5]. The economic and engineering advantages of the CVR in DN projects have been assessed in [6] alongside its technical merits. Furthermore, the paper [7] presents a cost-benefit analysis of CVR, specifically focusing on the investment return derived from implementing CVR on distribution feeders. Furthermore, the advantages of the CVR have been substantiated in the context of unfavorable power system issues, such as the mitigation of power loss as examined in
the study conducted by [8]. The aforementioned advantages of CVR mostly pertain to the decrease in system demand and energy consumption resulting from voltage lowering. Another potential use of the CVR technique is in the decrease of peak loads in systems that are running in close proximity to their thermal limits [9]. Various approaches were presented by [4] to measure the impacts of the CVR (Conservation Voltage Reduction) at peak periods in the distribution network. All of the approaches used in the study demonstrated the substantial demand-saving effects of CVR at peak times, resulting in significant economic gains.
Transitioning from the traditional static rating approach to the real-time dynamic rating technique increases the usable current capacity of distribution networks, particularly for transformers and lines. Multiple scholarly articles have presented thermal models that may be used for the real-time monitoring required by the DTR [10]. The DTR was first proposed by the International Council on Large Electric Systems (CIGRE) as a means to enhance the capacity of power transmission lines in a cost-effective way, particularly in comparison to other expensive approaches such as network expansion planning [11]. The findings of a concrete study on the DTR performed by the Electric Power Research Institute are elucidated in [12]. According to the study conducted by Douglass et al. (1996), the use of DTR techniques has been seen to enhance the capacity of transmission lines by around 1-5%. The DTR is influenced by several weather-related factors, such as wind speed and direction, ambient temperature, and solar irradiation. In [13], the uncertainties associated with these parameters are addressed via a fuzzy-based technique. Furthermore, [14] employs three variants of polynomial regression to develop a time-series-based approach for forecasting the DTR values of power lines. The DTR also significantly impacts the reliability analysis of power lines.
In addition, the DRP provides the necessary programmes for managing peak loads in the distribution network [15]. The DN's resilience, profitability, and dependability are all increased by the capacity of DRP programmes to reduce power consumption at critical periods [16]. According to [17], the load of the customers already connected to the network gives the network operator a safe and efficient resource for controlling the peak load. Reference [18] explores grid-operator involvement in the loads of smart-grid residential customers for critical peak-load management using customer-engagement plans. Incentive-based interruptible/curtailable DRPs have also been applied to peak-load management, in which consumers are penalised if they do not respond to load-reduction requests, so that peak-load management can be performed effectively.
Moreover, limited research has coordinated the CVR, DTR, and DRP methods in the peak load management. In [19], DRP and CVR are coordinated into an advanced DN management system to increase DN efficiency by minimizing the cost of power consumption in the day-ahead market. Furthermore, most of the reviewed DTR literature is related to the transmission network level, while very limited research has studied the DTR in DN. In [20], a methodology has been proposed for analyzing the potential advantages of the real-time-monitoring-based DTR in the DN from the reliability point of view.
From the reviewed literature, it can be concluded that the application of CVR in DN peak-load management has not been adequately explored. In addition, the DTR in the DN has not received the attention it deserves, particularly for the peak-load management of the DN, which remains largely unaddressed. The goal of this research is to address the CVR-based peak management of distribution networks operating under overloaded conditions. Older distribution networks may experience overloading of their substation transformers on hot summer days, when the power consumed by the network increases significantly. In such situations, the distribution network operator is required to implement emergency DRPs, which increases the overall expense incurred by the distribution company; the increased consumption also raises the cost of purchasing electricity from the upstream network. To avoid this issue and reduce both operational and DRP expenses, the CVR is designed to achieve the lowest possible cost while not violating any of the system indices.
Thus, the contributions of this paper can be summarized as below:
* This paper addresses the peak management issue through voltage regulation and the available thermal capacity of the network components, which, to the best of our knowledge, have not been considered together for peak management in the literature.
* Implementation of the CVR method provides the opportunity for peak management at a lower cost than the no-CVR condition.
* Considering the thermal rating of the distribution network components and their dynamic characteristics, the CVR method combined with the DTR releases the components' capacity and dramatically reduces the costs of peak management.
* The relevant power flow algorithm based on backward-forward sweep is introduced considering voltage reduction limits and dynamic line rating.
The remainder of the article is structured as follows. Section 2 describes the problem in more detail and provides the mathematical formulation. Section 3 covers the optimisation approach and the power flow method applied to solve the problem. Section 4 validates the suggested methodology, and Section 5 presents the conclusions.
## II Problem formulation
Within the scope of this article, a peak-load management framework for the DN is proposed. The suggested framework comprises three methods, DTR, DRP, and CVR, all of which can be implemented in the majority of DNs. At the critical peak times, the emergency DRP option of mandatory load reduction is considered as a means of reducing the overall stress on the network. The mathematical formulation of these approaches is presented in this section.
### _Conservative voltage reduction_
The electrical loads of all feeders in a DN depend on the voltage level of the connecting bus, and this holds for both the active and reactive components. Moreover, different electrical loads in the DN respond differently to fluctuations in the network voltage. Several functions have been proposed in the literature to represent the voltage dependence of the DN's active and reactive loads, including the exponential model and the ZIP model. In this research, the ZIP model is used to describe the changes of the active and reactive loads under voltage regulation. The mathematical formulation of the ZIP model is given below [21].
\[P_{n,h}=P_{n,h}^{0}\left[C_{z_{p}}\Bigg{(}\frac{V_{n,h}}{V_{n,h}^{0}}\Bigg{)} ^{2}+C_{i_{p}}\left(\frac{V_{n,h}}{V_{n,h}^{0}}\right)+C_{p_{p}}\right] \tag{1}\]
\[Q_{n,h}=Q_{n,h}^{0}\left[C_{z_{q}}\Bigg{(}\frac{V_{n,h}}{V_{n,h}^{0}}\Bigg{)} ^{2}+C_{i_{q}}\left(\frac{V_{n,h}}{V_{n,h}^{0}}\right)+C_{p_{q}}\right] \tag{2}\]
In (1) and (2), \(P_{n,h}^{0}\), \(Q_{n,h}^{0}\), and \(V_{n,h}^{0}\) represent the hourly active power, reactive power, and voltage of the DN buses before implementation of the CVR, while \(P_{n,h}\), \(Q_{n,h}\), and \(V_{n,h}\) respectively refer to the hourly active power, reactive power, and voltage of bus \(n\) under voltage reduction. The impact of voltage reduction on the active power of the DN is determined by the parameters \(C_{z_{p}}\), \(C_{i_{p}}\), and \(C_{p_{p}}\), which are the active-power constants of the ZIP loads. Likewise, the voltage-dependent behavior of the reactive loads is determined by the constants \(C_{z_{q}}\), \(C_{i_{q}}\), and \(C_{p_{q}}\). The voltage dependency of the DN's loads leads to network-wide changes, from power consumption to branch currents, which are evaluated in the following sections through power flow analysis.
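As a minimal illustration of Eqs. (1) and (2), the following Python sketch evaluates the ZIP model for a given per-unit voltage; the coefficient values are assumed for the example only, and each triplet should sum to one.

```
def zip_load(P0, Q0, V, V0, zp=(0.4, 0.3, 0.3), zq=(0.5, 0.3, 0.2)):
    """Voltage-dependent ZIP load, Eqs. (1)-(2).
    zp = (Czp, Cip, Cpp) and zq = (Czq, Ciq, Cpq) are example coefficients (each triplet sums to 1)."""
    v = V / V0
    P = P0 * (zp[0] * v**2 + zp[1] * v + zp[2])
    Q = Q0 * (zq[0] * v**2 + zq[1] * v + zq[2])
    return P, Q

# Example: a 5% voltage reduction on a bus with a 100 kW / 40 kvar baseline load
P, Q = zip_load(P0=100.0, Q0=40.0, V=0.95, V0=1.0)
print(f"P = {P:.2f} kW, Q = {Q:.2f} kvar")
```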
### _Dynamic thermal rating_
The traditional static thermal rating of DN equipment is set conservatively based on worst-case weather parameters, such as the highest ambient temperature and the lowest wind speed for overhead lines, and is usually established separately for each season. The operators of some DNs manually adjust the equipment rating based on their own experience and the current weather conditions in order to make more efficient use of the available network capacity. Utilising DTR to unlock and exploit the DN capacity is one of the fundamental approaches enabled by smart-grid infrastructures. In other words, by using the DTR, the DN operator can permit more power to be transported through the network components, which may result in less load curtailment during the on-peak hours. The IEEE standards [22, 23] and [24] are used in this study to model the DTR of the components that make up the DN. The DTR calculation requires the time constants of the components: the thermal capacitances determine the time constants, and the transient responses correspond to the thermal resistances of the components. The time constant is defined as the time interval required for the temperature to change from its initial value (before a step change in the load) by 63.2 percent of the difference to its final value (reached some time after the step change). The time constants of the three component types of the DN differ: transformers, overhead lines, and underground cables have thermal time constants of approximately 4 hours, 15 minutes, and 8 hours, respectively. A shorter time constant allows a component to cool down more quickly, whereas a longer time constant makes it less sensitive to short-term load variations. Figure 1 illustrates a typical thermal model of an underground cable; the thermal equivalent circuits of the other DN components can be modelled similarly. After calculating the time constant of each component, its hourly rating can be estimated using the following formulae such that the hottest-spot temperature of the component remains below its thermal limit (for example, ninety degrees Celsius).
\[\theta_{j}(t_{I})=\theta_{d,j}+\theta_{amb}(t_{I})+\sum_{k}\theta_{j,k}(t_{I}) \tag{3}\]
\[\theta_{j,k}(t_{I})=\left[\theta_{j,k}(t_{I-1})-T_{j,k}W_{c}(\theta_{c}(t_{I-1}))\right]e^{-\frac{t_{I}-t_{I-1}}{\tau_{k}}}+T_{j,k}W_{c}(\theta_{c}(t_{I-1})) \tag{4}\]
In (3), \(\theta_{j}(t_{I})\) is the temperature of node \(j\) among the nodes indicated in Fig. 1 at time \(t_{I}\). Moreover, \(\theta_{d,j}\) represents the temperature rise arising from the dielectric losses at node \(j\), while \(\theta_{amb}\) and \(\theta_{j,k}\) respectively indicate the ambient temperature and the temperature difference between nodes \(j\) and \(k\). The nodal time constant is denoted by \(\tau_{k}\). In (4), the parameter \(T_{j,k}\) controls the exponential terms, where the subscripts \(j\) and \(k\) represent the considered node and the thermal loop of the ladder circuit, respectively. Finally, \(W_{c}(\theta_{c}(t_{I-1}))\) is the conductor loss at the conductor temperature \(\theta_{c}(t_{I-1})\) at time \(t_{I-1}\).
The real-time DTR computation of the hourly rating of the DN's components involves accessing the previous hour's data as well as the initial thermal state of the components and the environmental conditions such as ambient temperature, wind speed, and solar radiation. Hence, in accordance with the dynamic changes in the loading and environmental conditions of a component, its thermal resistances and capacitances are updated using (3) and (4). The updated ratings of the overhead lines and transformers for the next hour are computed using IEEE Standard 738 [22] and IEEE Standard C57.91-2011 [23], which take into account the initial and previous-hour thermal status as well as environmental conditions including ambient temperature, solar irradiation, and wind.
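A minimal sketch of the hourly thermal update implied by Eqs. (3) and (4) is given below for a single thermal loop; the time constant, loss model, and thermal parameters are assumed example values and do not correspond to any specific cable or transformer.

```
import math

def conductor_losses(theta_c, I, R_ref=1.0e-4, alpha=0.004, theta_ref=20.0):
    """Joule losses W_c at conductor temperature theta_c (simple temperature-corrected resistance)."""
    return I**2 * R_ref * (1.0 + alpha * (theta_c - theta_ref))

def thermal_step(theta_prev, theta_amb, I, dt_h, tau_h=2.0, T_jk=2.5, theta_d=2.0):
    """One time step of the exponential nodal response, Eq. (4), for a single thermal loop."""
    W_c = conductor_losses(theta_prev, I)
    rise = theta_prev - theta_amb - theta_d                        # current temperature rise of the loop
    rise_new = T_jk * W_c + (rise - T_jk * W_c) * math.exp(-dt_h / tau_h)
    return theta_d + theta_amb + rise_new                          # Eq. (3): total node temperature

# Example: track the hottest-spot temperature over a few hours of constant loading
theta, I, theta_amb = 35.0, 400.0, 30.0
for hour in range(6):
    theta = thermal_step(theta, theta_amb, I, dt_h=1.0)
    print(f"hour {hour + 1}: hottest-spot temperature = {theta:.1f} C")
```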
### _Objective function_
The distribution network operator seeks to manage the peak load such that the cost incurred by applying curtailment is minimised while the minimum acceptable system indices are met. Consider a day on which the network is overloaded during the peak-hour period \(\Omega_{h}=\{h_{1},...,h_{n}\}\), referred to as the event hours. The total cost function \(\Gamma\) over this period and its relevant constraints are given in (5)-(10).
\[\Gamma=\sum_{h\in\Omega_{h}}\rho_{h}^{G}P_{h}^{G}+\sum_{h\in\Omega_{h}}\sum_{i \in\Omega_{c}}\rho_{i,k}^{cur}\chi_{i,h}P_{i,h}^{0} \tag{5}\]
Subject to
\[P_{h}^{G}=\sum_{n\in\Omega_{L}}(1-\chi_{n,h})P_{n,h}+S_{base}\sum_{S\in\Omega _{S}}r_{s}I_{s,h}^{2} \tag{6}\]
\[[\mathbf{V_{h}},\mathbf{I_{h}}]=\mathbf{f}(\chi,\mathbf{P_{h}^{0}},\mathbf{Q _{h}^{0}},\mathbf{Z},V_{h}^{Sub}) \tag{7}\]
\[V^{\min}\leq V_{n,h}\leq V^{\max} \tag{8}\]
\[I_{s,h}\times I_{base}<I_{s}^{R}\,u_{s}(\theta,w,\varphi,h),\qquad I_{h}^{Tr}<I_{s}^{Tr,R}\,u_{Tr}(\theta,w,\varphi,h) \tag{9}\]
\[\chi_{i,h}\leq MCL \tag{10}\]
In (5), \(\rho_{h}^{G}\) is the hourly market energy price and \(P_{h}^{G}\) is the hourly power purchased from the upstream grid. The set \(\Omega_{c}\) denotes the load points that participate in the curtailment program, and \(\rho_{i,k}^{cur}\) is the penalty price for every kW of load curtailment, which depends on the type of customer, e.g. residential or industrial. The variable \(\chi_{i,h}\) determines the curtailment amount of each load and \(P_{i,h}^{0}\) is the hourly baseline load. Equation (6) states that the hourly purchased power is the sum of the CVR-affected loads in the presence of curtailment and the total network loss. \(\Omega_{L}\) and \(\Omega_{S}\) are respectively the sets of load points and sections, \(S_{base}\) is the base power, \(r_{s}\) is the per-unit section resistance and \(I_{s,h}\) is the per-unit section current. The function \(f(\cdot)\) gives the bus voltages and section currents through the power flow for the substation voltage \(V_{h}^{sub}\), and \(\mathbf{Z}\) is the impedance vector of the distribution network. Constraints (8)-(9) state the limitations on the bus voltages, section currents and HV/MV transformer current \(I_{h}^{Tr}\). Parameters \(V^{\min}\), \(V^{\max}\) and \(I_{base}\) respectively denote the voltage bounds and base current. \(I_{s}^{R}\) and \(I_{s}^{Tr,R}\) respectively stand for the rated current limits of the feeders and the HV/MV transformer. Both \(u_{s}(\theta,w,\varphi,h)\) and \(u_{Tr}(\theta,w,\varphi,h)\) are functions of temperature, wind speed, solar irradiation and hour. Inequality (10) states that the curtailment amount is restricted by a maximum curtailment level (MCL) and must not exceed this value.
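The sketch below evaluates the event-hour cost of Eq. (5), with the purchased power assembled as in Eq. (6); the loads, prices, curtailment fractions, and losses are placeholder numbers, and in the full framework the power-flow quantities come from the solver described in the next section.

```
import numpy as np

def event_cost(price_grid, P0, chi, price_cur, losses):
    """Total cost over the event hours, Eq. (5).
    price_grid[h]: market energy price; P0[h, i]: baseline load of point i at hour h;
    chi[h, i]: curtailment fraction; price_cur[i]: curtailment penalty; losses[h]: network losses."""
    P_grid = ((1.0 - chi) * P0).sum(axis=1) + losses          # Eq. (6): served load plus losses
    energy_cost = (price_grid * P_grid).sum()
    curtail_cost = (price_cur * (chi * P0)).sum()
    return energy_cost + curtail_cost

# Toy example: 3 event hours, 4 load points
rng = np.random.default_rng(1)
P0 = rng.uniform(50, 150, size=(3, 4))        # kW
chi = np.full((3, 4), 0.05)                   # 5% curtailment everywhere
cost = event_cost(price_grid=np.array([0.12, 0.15, 0.14]),
                  P0=P0, chi=chi,
                  price_cur=np.array([1.0, 1.0, 2.0, 2.0]),
                  losses=np.array([8.0, 9.0, 8.5]))
print(f"event cost = {cost:.2f}")
```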
## III Solving algorithm
As demonstrated in the preceding section, the peak management problem is nonlinear and involves intricate power-flow constraints. To solve it, the present study employs the particle swarm optimisation (PSO) algorithm, a metaheuristic approach that is particularly suitable for problems with continuous variables. The fundamental principle underlying PSO is to generate an initial population of candidate solutions, referred to as particles, which move iteratively towards the optimal solution. Convergence is achieved through the updating equations for the velocity and position of the particles given in Eqs. (11) and (12).
\[Vel_{n,t+1}=\omega Vel_{n,t}+c_{1}r_{1}(pbest_{n,t}-Y_{n,t})+c_{2}r_{2}(gbest_{t}-Y_{n,t}) \tag{11}\]
\[Y_{n,t+1}=Y_{n,t}+Vel_{n,t+1} \tag{12}\]
In these equations, \(Vel_{n,t}\) and \(Y_{n,t}\) denote the velocity and position of the n'th particle in the t'th iteration, respectively. The variables \(pbest\) and \(gbest\) denote the best position found by each individual particle and the overall best position, respectively. The symbol \(\omega\) represents the inertia weight, \(r_{1}\) and \(r_{2}\) are uniformly distributed random numbers, and \(c_{1}\) and \(c_{2}\) are the acceleration coefficients. For the peak management problem, the content of each particle depends on the operator's chosen method: in the no-CVR method the particles contain only the curtailment amounts, whereas in the CVR and DTR methods they also determine the optimal per-unit voltage at the substation. The structure of the particles for each event hour is shown in Figure 2.
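A minimal NumPy sketch of the velocity and position updates of Eqs. (11) and (12) is given below; the fitness function and bounds are placeholders standing in for the cost of Eq. (5) evaluated through the power flow.

```
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, bounds=(0.9, 1.05),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic PSO implementing Eqs. (11)-(12); each particle could hold, e.g., the per-unit
    substation voltage and curtailment variables of one event hour."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    Y = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
    V = np.zeros_like(Y)                                    # velocities
    pbest, pbest_val = Y.copy(), np.array([fitness(y) for y in Y])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(Y.shape), rng.random(Y.shape)
        V = w * V + c1 * r1 * (pbest - Y) + c2 * r2 * (gbest - Y)   # Eq. (11)
        Y = np.clip(Y + V, lo, hi)                                   # Eq. (12) with bound handling
        vals = np.array([fitness(y) for y in Y])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = Y[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder fitness: prefer voltages near 0.95 p.u. (stands in for the cost of Eq. (5))
best, val = pso(lambda y: np.sum((y - 0.95) ** 2), dim=3)
print(best, val)
```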
The CVR problem necessitates the utilisation of power flow analysis in order to determine the voltages at each bus and the currents flowing through each branch, as well as to identify any instances of voltage or current violation within the network. Due to the presence of curtailment in the loads, the use of
linear power flow is rendered ineffective, giving rise to a complex nonlinear constraint. Therefore, mathematical programming optimisation models are not well suited to this problem, whereas heuristic optimisation methods can be highly advantageous in resolving such issues. The backward-forward sweep method is an iterative approach commonly employed for the numerical solution of power flow problems in radial distribution networks. The power flow solution technique presented in this study is based on the bus-injection-to-branch-current (BIBC) matrix \(\Psi\), which has dimension \(N_{S}\times N_{B}\) [25]. This binary matrix relates the current of each section of the network to the injected currents of the load points located downstream of that section, and is structured as follows. It should be noted that, when constructing the BIBC matrix, the first column is excluded in order to obtain a square matrix.
\[\Psi=\begin{pmatrix}0&1&\cdots&1\\ 0&1&\ddots&0\\ 0&0&\cdots&1\end{pmatrix} \tag{13}\]
In order to determine the BIBC matrix of a radial distribution network, a straightforward approach is presented in the research paper by Teng et al. (2003) [25]. However, it is imperative to have a consistent numbering system for both nodes and sections, commencing with the substation and extending to the nodes further downstream. In order to address this limitation, a comprehensive algorithm is introduced in Algorithm 1, which facilitates the generation of the BIBC matrix. Algorithm 1 utilises the adjacency matrix \(A\) to represent the network, where the substation nodes are uniformly designated with the value 1. Algorithm 1 is founded on the identification of leaf nodes \(\Xi\) and their corresponding parent nodes \(s\) in the updated tree, achieved by the iterative process illustrated in Figure 3.
Once the BIBC matrix has been generated, the backward-forward sweep can be executed using Algorithm 2. In this iterative approach, all variables and parameters are expressed in per-unit. The technique starts by constructing a diagonal matrix \(Z_{D}\) whose elements are the impedances of the respective sections, as indicated in equation (14). Next, the square matrix \(\Upsilon\) is generated using equation (15). The bus voltages are calculated with respect to the per-unit substation voltage, denoted \(V^{sub}\). The functions \(g^{P}\) and \(g^{Q}\) defined in equations (1) and (2) are used to determine the active and reactive power of the ZIP loads in every iteration, and the curtailment of the load points is taken into account in the power flow through the vector variable \(\chi\). The current injection vector of the loads is then computed using the Hadamard product, denoted \(\odot\). At the end of each iteration, the bus voltages are updated using the equation specified in line 8 of the algorithm.
\[Z_{D}=\begin{pmatrix}z_{1}&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&z_{N}\end{pmatrix} \tag{14}\]
\[\Upsilon=\Psi^{T}.Z_{D}.\Psi \tag{15}\]
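To make the sweep concrete, the following Python sketch carries out the iteration described above on a small three-bus radial feeder; the impedances, loads, ZIP coefficients, and substation voltage are illustrative assumptions rather than data from the studied network.

```
import numpy as np

# Small radial example: substation -> bus1 -> bus2 -> bus3 (all quantities per-unit)
Psi = np.array([[1, 1, 1],       # BIBC: each section carries the injections of all downstream buses
                [0, 1, 1],
                [0, 0, 1]])
Z_D = np.diag([0.01 + 0.02j, 0.015 + 0.03j, 0.02 + 0.04j])       # section impedances, Eq. (14)
Upsilon = Psi.T @ Z_D @ Psi                                       # Eq. (15)

S0 = np.array([0.02 + 0.008j, 0.015 + 0.006j, 0.025 + 0.010j])   # baseline complex bus loads
chi = np.array([0.0, 0.05, 0.0])                                  # curtailment fractions
V_sub = 0.97                                                      # substation per-unit voltage (CVR setpoint)

V = np.full(3, V_sub, dtype=complex)
for _ in range(20):                        # fixed-point iteration of the sweep
    v_mag = np.abs(V)                      # per-unit voltage magnitude entering the ZIP polynomial
    S = (1.0 - chi) * S0 * (0.3 * v_mag**2 + 0.4 * v_mag + 0.3)   # assumed ZIP coefficients
    I_inj = np.conj(S / V)                 # load current injections
    V = V_sub - Upsilon @ I_inj            # forward update of bus voltages
I_sections = Psi @ I_inj                   # branch currents from the BIBC relation
print(np.abs(V), np.abs(I_sections))
```

A handful of fixed-point iterations is typically sufficient here because the per-unit impedances and load levels are small.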
```
Input:  adjacency matrix A representing the network topology (the substation node is numbered 1)
Output: BIBC matrix Psi
Initialisation:
  construct the adjacency matrix A
  create an empty cell array phi with one cell per network node
  n = N_S
Loop process:
  repeat
    Xi = { non-substation nodes i with sum_j A(i,j) = 1 }   % current leaf nodes
    for i = 1 to |Xi| do
      r = Xi{i}
      phi{r} = [phi{r}, r]
      Psi(n, phi{r}) = 1
      s = find(A(:, r) = 1)        % parent node of leaf r
      A(s, r) = 0;  A(r, s) = 0    % prune leaf r from the tree
      phi{s} = [phi{s}, phi{r}]
      n = n - 1
    end for
  until Xi = {}
  return Psi
```
**Algorithm 1** Algorithm of BIBC matrix construction
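A compact Python sketch in the spirit of Algorithm 1 is shown below; for simplicity, each section is indexed by its downstream node rather than by the decreasing counter n of the pseudocode, node 0 is taken as the substation, and the five-node example tree is purely illustrative.

```
import numpy as np

def build_bibc(A):
    """Construct the BIBC matrix by iteratively pruning leaf nodes (cf. Algorithm 1).
    A is the symmetric adjacency matrix of the radial network; node 0 is the substation.
    Returns Psi with one row per section, indexed by the section's downstream node 1..N-1."""
    A = A.copy()
    n_nodes = A.shape[0]
    Psi = np.zeros((n_nodes - 1, n_nodes - 1), dtype=int)
    downstream = {i: [i] for i in range(1, n_nodes)}   # phi: nodes fed through each node
    remaining = n_nodes - 1
    while remaining > 0:
        # leaves: non-substation nodes with exactly one remaining connection
        leaves = [i for i in range(1, n_nodes) if A[i].sum() == 1]
        for r in leaves:
            s = int(np.flatnonzero(A[:, r])[0])        # parent node of leaf r
            for d in downstream[r]:
                Psi[r - 1, d - 1] = 1                  # section into r carries all loads below r
            A[s, r] = A[r, s] = 0                      # prune the leaf from the tree
            if s != 0:
                downstream[s].extend(downstream[r])
            remaining -= 1
    return Psi

# Example tree: edges 0-1, 1-2, 1-3, 3-4 (node 0 = substation)
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (1, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
print(build_bibc(A))
```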
Figure 4 illustrates the comprehensive flowchart for addressing the issue of peak management within an overloaded distribution network. The ideal solution, which includes the substation voltage and curtailment parameters, is obtained for each event hour. In the event that the solutions produced by the Particle Swarm Optimisation (PSO) algorithm fail to adhere to the prescribed voltage and current constraints, a penalty is applied to the cost function.
Figure 2: Structure of particles for each event hour
Figure 3: Updated tree of BIBC matrix
## IV Simulation and Results
### _System under study_
The Finnish 144-bus DN is used as the test system for examining the case studies of this work under overloading conditions. Figure 5 depicts a single-line representation of the system under study. The annual peak load of the test system is 11 MW. Additional information on the system, including interruption charges, bus data, branch data, line sizes, and current ratings, can be found in the reference describing this test system. As seen in Figure 5, the 144-bus Finnish DN has a single primary distribution substation operating at 110/20 kV, which supplies 144 secondary substations operating at 20/0.4 kV, and the overall configuration of the system is radial. In addition, the weather data, such as ambient temperature, solar irradiation, and wind speed, were sourced from [26]. The weather data for the hottest day of summer 2019 are used in the simulation procedure.
### _Numerical results_
The obtained results are represented in three case studies as below.
* Case 1: Operation of DN in normal mode without CVR and DTR
* Case 2: Operation of DN by implementation of CVR without considering DTR during the peak times (hour 10- hour 21)
* Case 3: Operation of DN by implementation of CVR considering DTR during the peak times (hour 10 to hour 21)
Fig. 5: Single-line representation of the test system under study.
Fig. 4: Solving algorithm of the problem.
Furthermore, the problems related to the increase in line currents caused by implementing the CVR in the presence of constant-power loads are discussed along with the results.
#### IV-A1 Cost results
The primary benefit of the proposed CVR-DTR framework, integrated with the DRP, is the reduction of the distribution network's operational expenses. The cost results of the introduced case studies are depicted in Fig. 6(a). Based on Fig. 6(a), the operational cost of the DN during the peak hours (from hour 10 to hour 21) is $476,000, $293,000, and $51,400 for cases 1 to 3, respectively. It follows that deploying the CVR alone reduces the system operation cost during peak times by 38.45%, while this cost reduction increases to 89.2% when the DTR and CVR are implemented concurrently. The smaller cost reduction in case 2 compared with case 3 can be explained by the results shown in Fig. 6(b), which are important for interpreting the results correctly. Fig. 6(b) shows that in case 2 a limitation exists that hinders further lowering of the voltage; this constraint is associated with certain buses that have substantial constant-power loads, for which an excessive drop in the substation voltage would increase the currents through the substation transformer and its associated branches beyond their thermal limits. The ability of the substation transformer and branches to adjust their thermal rating dynamically therefore enables a greater reduction in the substation voltage in case 3, as depicted in Fig. 6(b). Given the significance of this issue, it is examined in more detail in the following subsections.
#### IV-A2 Substation power and load curtailment
To demonstrate the utilisation of the substation and line capacity in the different scenarios, the power delivered from the 110/20 kV substation is depicted in Fig. 7. As noted above, applying the DTR together with the CVR yields a larger cost reduction as a consequence of the enhanced flexibility in voltage reduction. This cost decrease can be attributed to the lower load curtailment in case 3 compared with case 2: the DTR reduces the load curtailment during the events, and the enhanced thermal capacity of the DN's components enables electricity to be supplied to a greater number of loads, as shown in Fig. 7. Based on Fig. 7, the dynamic thermal rating of the DN's components, specifically the branches and the substation transformer considered in this study, leads to a substantial increase in the load supplied through the substation in case 3 compared with case 2. To showcase the effects of the CVR and DTR on peak-duration management, Fig. 8 depicts the load curtailment in each case study. Fig. 8 shows a modest decrease in curtailed loads in case 2 compared with case 1, and a notable further decrease in case 3 owing to the effects of the DTR.
#### IV-A3 Voltages and currents analysis
Fig. 9 illustrates the bus voltages in the different case studies during two event hours, hour 10 and hour 14, which differ in terms of temperature, wind speed, and irradiation. Based on Fig. 9, it is apparent that the application of CVR results in a noticeable decrease in voltage in case 2 compared with case 1. The voltage reduction observed at hours 10 and 14 is nearly the same in cases 1 and 2, as depicted in Fig. 9. In case 3, on the other hand, the different local weather conditions (wind, solar radiation, and temperature) at hour 10 and hour 14 provide an increased potential for voltage reduction in the majority of buses. Additionally, the branch currents in cases 2 and 3 are depicted in Fig. 10, which shows that in case 3 larger currents are allowed to flow through the branches, particularly those close to the substation, mostly as a result of the potential unlocked by the DTR.
Fig. 6: Cost and voltage results of each case study.
Fig. 7: Carried power from the 110/20 kV substation during the event.
#### IV-B4 Effects of DN's demand factor on CVR potential
As previously stated, the potential of the CVR in a DN depends on the loading condition of the entire network, since it is constrained by the thermal limits of the DN's components. Fig. 11 illustrates the CVR potential for various demand factors, which is of particular significance when examining the efficacy of CVR for managing peak loads in the DN. Increasing the DN demand factor raises the minimum voltage achievable by the CVR, as depicted in Fig. 11, with a considerable increase in the minimum CVR voltage observed at a demand factor of 95%. Therefore, it can be inferred from Fig. 11 that the potential of CVR-based peak-load management is substantially reduced as the loading of the DN increases.
#### IV-B5 Current changes of DN under CVR considering the DTR
To effectively demonstrate the variations in currents of the feeders resulting from the adoption of CVR, a visual representation in the form of Figure 11 is utilised, employing a colour map for enhanced clarity. As anticipated, a higher magnitude of current variation is typically observed in the feeders closest to the substation. Figure 11 visually emphasises the initial feeders using warm colours, representing the more pronounced fluctuations in current inside these particular feeders. Based
Fig. 8: Bus voltages in each case study.
Fig. 10: Effects of demand factor on CVR.
Fig. 7: Curtailed loads of buses in each case study.
Fig. 9: Branch currents in each case study.
on the information presented in Figure 11, it can be observed that only a subset of the feeders experience significant changes in current, and it is these feeders that constrain the implementation of CVR.
## V Conclusion
This article investigates three peak load control strategies for the distribution network (DN): CVR, DTR, and DRP. CVR has been extensively examined in past scholarly works, but comparatively little attention has been paid to the application of DTR to peak load management in distribution networks. In addition to examining the direct impact of DTR on peak load management of the DN, our investigation extends beyond prior individual studies by exploring the integration of DTR with CVR, which offers further advantages in terms of voltage reduction potential within the DN. According to the findings presented, using CVR alone results in a cost reduction of 38.45% during peak hours, whereas implementing CVR together with DTR leads to a significantly higher cost reduction of 89.2% over the same peak duration. Moreover, the results confirm that implementing CVR increases the feeder currents; DTR has therefore been examined as a potential resolution to this issue. The suggested peak management framework reduces load curtailment during peak events through the simultaneous utilisation of the CVR and DTR techniques. In addition, the findings show that the power procured from the energy market increases during periods of high demand when the CVR and DTR strategies are applied. It is noteworthy that there was a significant level of load curtailment prior to the implementation of CVR and DTR; their introduction has effectively decreased this curtailment, allowing power to be supplied to a greater number of loads.
|
2309.09745 | Topological light guiding and trapping via shifted photonic crystal
interfaces | Photonic crystals (PCs) are periodic dielectric structures that serve as an
excellent platform to manipulate light. A conventional way to guide/trap light
via PCs is to introduce a line or point defect by removing or modifying several
unit cells. Here we show that the light can be effectively guided and trapped
in the glided photonic crystal interfaces (GPCIs). The projected band gap of
GPCIs, which depends on the glide parameter, is characterized by a Dirac mass.
Interestingly, the GPCIs with zero Dirac mass is a glide-symmetric waveguide
featured with excellent transmission performance even in the presence of sharp
corners and disorders. Moreover, placing two GPCIs with opposite Dirac mass
together results in a photonic bound state due to the Jackiw-Rebbi theory. Our
work provides an alternative way towards the design of ultracompact photonic
devices such as GPCIs-induced coupled cavity-waveguide system and waveguide
splitter. | Zi-Mei Zhan, Peng-Yu Guo, Wei Li, Hai-Xiao Wang, Jian-Hua Jiang | 2023-09-18T13:20:19Z | http://arxiv.org/abs/2309.09745v2 | # Topological light guiding and trapping via shifted photonic crystal interfaces
###### Abstract
The exploration of topological states in photonic crystals has inspired a number of intriguing discoveries, which in turn provide new mechanisms for manipulating light in unprecedented ways. Here we show that light can be effectively guided and trapped at shifted photonic crystal interfaces (SPCIs). The projected band gap of an SPCI, which depends on the shift parameter, is characterized by a Dirac mass. Interestingly, an SPCI with zero Dirac mass acts as a glide-symmetric waveguide featuring gapless interface states that exhibit excellent transmission performance even in the presence of disorder and sharp corners. Moreover, placing two SPCIs with opposite Dirac masses together results in a photonic bound state, in accordance with Jackiw-Rebbi theory. Our work provides an alternative route towards the design of ultracompact photonic devices such as robust waveguides and cavities, as well as coupled cavity-waveguide systems that can serve as high-performance building blocks of miniature integrated topological photonic circuits.
_Introduction.--_ The past decades have witnessed the rapid development of various topological states in photonic systems [1; 2; 3]. Typical photonic topological phases, including photonic quantum anomalous Hall effects [4; 5; 6; 7], photonic Floquet topological insulators [8; 9; 10], and photonic quantum spin Hall insulators [11; 12; 13; 14; 15; 16], support topologically protected edge states that are appealing for guiding light with suppressed back scattering. Recently, the concept of Wannier-type higher-order topological insulators [17] was proposed to give rise to topological corner or hinge states, which provide an alternative way to trap light at lower-dimensional boundaries [18; 19; 20; 21; 22; 23; 24; 25; 26]. The basic idea is to engineer the Wannier configuration within the unit cell through lattice deformation. For example, the breathing kagome lattice generates a nontrivial Wannier configuration and manifests its higher-order topology through gapped edge states and in-gap corner states [20; 25; 26]. In fact, placing together photonic systems with different Wannier configurations provides an effective way to guide and trap light [27; 28; 29; 30; 31], which may find applications in photonic devices such as topological lasers [21] and rainbow light trapping [32]. On the other hand, it is recognized that glide symmetry, a composite symmetry operation consisting of a mirror reflection and a translation, is beneficial for engineering topological band structures. For example, glide symmetry provides a powerful tool for synthesizing Kramers degeneracy in topological classical wave systems [33; 34; 35; 36; 37]. In particular, it was shown that a glide-symmetric interface can induce topological Wannier cycles from which gapless spin-Hall-like interface states emerge in the bulk band gap [34]. Very recently, a glide-symmetric phononic crystal interface that supports wide-bandwidth, single-mode topological acoustic waves was proposed [38; 39] and suggested to have potential applications in extreme sensing and isolation [40]. It is believed that such a topological acoustic wave originates from the Wannier configuration at the interface rather than from the bulk topology, which differs from a conventional topological insulator [32]. Nevertheless, the glide-symmetric photonic crystal (PC) interface remains unexplored, and it is unknown whether such an interface can support localized modes similar to the Wannier-type higher-order topological corner states.
In this letter, we demonstrate that shifted photonic crystal interfaces (SPCIs) can support both gapless interface states and in-gap bound states, with potential applications in robust light guiding and trapping. When the shift parameter equals half a lattice constant, the SPCI has a nontrivial Wannier configuration and supports gapless interface states that can be utilized as a waveguide with excellent transmission performance even in the presence of sharp corners. Moreover, when two SPCIs with opposite shift parameters are placed together, in-gap localized states emerge as Jackiw-Rebbi solitons of the gapped interface states. Finally, we discuss the coupled cavity-waveguide systems formed by the SPCIs-induced waveguide and localized states, showing that such systems can serve as elementary units for tunable, high-performance integrated photonic circuits.
_SPCIs.--_We start from a two-dimensional PC with circular air holes arranged in a square lattice in a dielectric background [see Fig. 1(a)]. The radius of each circular air hole is \(r=0.5a\), where \(a\) is the lattice constant. Throughout this work, we use silicon (\(\epsilon=11.9\)) as the dielectric background and consider only the transverse-magnetic (TM) mode. This setup is simple and effective, as shown by the results below. Moreover, the same design can be applied over a broad frequency range, from optical to microwave frequencies. All simulations here are carried out with the commercial software COMSOL Multiphysics.
We first present the photonic band structure in Fig. 1(b), where the grey and light yellow regions indicate the states above the light cone and the complete band gap, respectively. Since we focus only on the first band gap, it is worth pointing out that the Wannier centers of the first band are located at the corners of the primitive cell [red dots in Fig. 1(a)]. Generally, the Wannier center position is connected to the bulk topological polarization and hence can be employed as a bulk topological index for crystalline insulators [see the Supplementary Material for details of the bulk topological polarization calculation].
To describe the SPCI, a geometric parameter \(g\) is employed to characterize the relative displacement between the upper and lower PCs at the interface. As indicated by the dashed rectangular box in Fig. 1(c), an SPCI is formed by moving the lower half of the PC along the \(x\)-direction by a distance \(g\) while the upper half remains unchanged. In general, the parameter \(g\) ranges from \(-a/2\) to \(+a/2\) due to the periodicity of the PC, with positive and negative signs referring to rightward and leftward displacements, respectively. Obviously, the initial PC corresponds to \(g=0\). For \(g=\pm a/2\), the SPCI can be regarded as an interface formed by square PCs with two different Wannier configurations. As depicted by the green and red dots in Fig. 1(c), the Wannier centers in the lower half PC are shifted compared to those in the upper half PC. Remarkably, such a simple shift results in two interface bands crossing linearly at \(k_{x}=\pi/a\) [see the solid colored lines in Fig. 1(d)]. To understand the formation of such a band crossing (also known as a Dirac point), we construct an anti-unitary operator \(\Theta_{x}=G_{x}\star T\), where \(G_{x}:(x,y)\rightarrow(x\pm a/2,-y)\) and \(T\) are the glide operator and time-reversal symmetry, respectively. Acting with \(\Theta_{x}\) twice on a photonic Bloch state gives an additional phase factor \(e^{-ik_{x}\cdot a}\). In other words, \(\Theta_{x}\) transforms \((k_{x},k_{y})\) into \((-k_{x},k_{y})\) and we have
\[{\Theta_{x}}^{2}=e^{-ik_{x}a}|_{k_{x}=\pi/a}=-1, \tag{1}\]
which, as an analog of the Kramers theorem for fermions, guarantees that all bands are doubly degenerate, forming pairs at \(k_{x}=\pi/a\). Such gapless interface states protected by the synthetic Kramers theorem exhibit excellent wave-guiding performance [see also Fig. 2] and have been utilized as the glide-symmetric waveguide (GSWG) [38; 39; 40] in the context of phononic crystals.
_Topological light guiding.--_Inspired by the GSWG, which is in fact an SPCI with \(g=\pm a/2\), we construct L-shaped (U-shaped) SPCIs [see Supplementary Material for the construction details] by shifting the PCs along both the \(x\) and \(y\) directions by half a lattice constant. We first plot the transmission of the GSWG and the L-shaped (U-shaped) SPCIs, indicated by the black and solid red (blue) lines, respectively, in Fig. 2(a). All three cases exhibit near-unity transmission along the SPCIs within the whole bulk band gap. Furthermore, the electric field patterns of the GSWG, L-shaped, and U-shaped SPCIs in Figs. 2(b-d), respectively, at a frequency of \(0.289(c/a)\), show that electromagnetic waves propagate along the L-shaped and U-shaped SPCIs with negligible back scattering even in the presence of sharp corners, outperforming conventional PC waveguide bends. We emphasize that the high transmission in these cases originates from the nontrivial Wannier configuration of the interface rather than from the bulk topology, which differs from the Wannier-type higher-order topological PCs [see the transmission of the Wannier-type higher-order topological PCs in the Supplementary Material]. Moreover, we also demonstrate that the SPCIs exhibit high fabrication tolerance through numerical simulations of the GSWG with disorder [see details in the Supplementary Material].
_Topological light trapping.--_On the other hand, when \(g\neq\pm a/2\), the synthetic Kramers theorem in Eq. 1 is no longer valid and hence the degeneracy of the two interface bands at \(k_{x}=\pi/a\) is lifted. As expected, two interface bands separated by a frequency gap emerge within the bulk band gap for the case of \(g=0.3a\) [see the dashed colored lines in Fig. 1(d)]. Going one step further, Fig. 3(a) presents the frequency ranges (indicated by the orange and blue areas) of the two projected bands versus the shift parameter \(g\). It is observed that the frequency ranges of
Figure 1: (a) Schematic of the two-dimensional PC consisting of circular air holes with \(r=0.5a\) arranged in a square lattice. The positions of the Wannier centers for the first band are indicated by the red dots. (b) The photonic band structure of the proposed PC. Inset: the first Brillouin zone. (c) Schematic of an SPCI formed by moving the lower half PC by the shift parameter \(g\). The red and green dots indicate the positions of the Wannier centers of the PCs above and below the SPCI with \(g=a/2\). (d) Left panel: the projected band structure of the SPCIs, where the solid (dashed) lines refer to interface states with \(g=\pm 0.5a\) (\(g=\pm 0.3a\)). Right panel: the eigen electric fields of the interface states at \(k_{x}=\pi/a\) with odd and even parities.
the two projected interface bands are symmetric about \(g=0\), since two SPCIs with opposite \(g\) share an identical eigen spectrum. Considering \(g\in[0,a/2]\), the frequency gap between the two projected interface bands equals the bulk gap at \(g=0\), gradually decreases, and finally closes at \(g=a/2\). Meanwhile, the frequency ranges of the projected interface bands gradually increase and finally fill the whole bulk band gap at \(g=a/2\). It is worth noting that, as \(g\) varies, the frequency gap between the two projected interface bands (described by a Dirac mass \(m\)) undergoes a process of opening and reclosing. This Dirac mass \(m\) is proportional to the overlap integral of the electromagnetic fields of the two interface states at \(k_{x}=\pi/a\), i.e., \(m\propto\int d\mathbf{r}(\mathbf{E}_{+}\cdot\hat{\epsilon}\cdot\mathbf{E}_{-}^{*}+c.c.)\), where the subscripts "\(\pm\)" denote the upper and lower interface bands. For \(g=\pm a/2\), the parities of \(E_{\pm}\) are well defined, namely, they have opposite parities about \(x=0\) [see also the right panel of Fig. 1(d)], which makes the overlap integral vanish, i.e., \(m=0\). Remarkably, a frequency gap with positive (negative) Dirac mass is induced by an SPCI with \(g\) (\(-g\)) [see details in the Supplementary Material]. Hence, one can build a mass domain-wall system by placing two SPCIs with opposite \(g\) (\(g\neq\pm a/2\)) together.
In general, there are two configurations of the mass domain-wall system. As depicted in Fig. 3(b), the first configuration (denoted type-I, see the upper panel) is formed by shifting the PCs of region II (III) with positive (negative) \(g\), while the second configuration (denoted type-II, see the lower panel) is formed by shifting the PCs of region II (III) with negative (positive) \(g\). According to Jackiw-Rebbi theory [41], a photonic bound state (PBS) is localized at the boundary between two SPCIs with opposite \(g\), as schematically indicated by the red and blue stars in Fig. 3(b). Note that Fig. 3(a) also plots the frequency of the PBS (indicated by the red and blue lines) versus the displacement of the PCs between regions I and II. As expected, for both type-I and type-II configurations, the PBS emerges within the frequency gap of the projected interface bands. Specifically, we also present the eigen spectra of the type-I and type-II mass domain-wall systems, formed by SPCIs with \(g=\pm 0.3281a\), in Figs. 3(c) and 3(d), respectively. The corresponding electric field patterns \(E_{z}\) are shown in the insets. The PBS in the type-I mass domain-wall system is even-symmetric, while that of type-II is odd-symmetric. Remarkably, these PBSs induced by two SPCIs with opposite \(g\) can act as photonic cavity modes, which offers an alternative way to realize cavity-based photonic devices.
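To make the Jackiw-Rebbi origin of the PBS concrete, the following minimal one-dimensional sketch (illustrative only, not the COMSOL model used in this work) constructs the analytic zero mode bound to a Dirac mass domain wall and verifies numerically that it is annihilated by \(H=-i\sigma_{x}\partial_{x}+m(x)\sigma_{z}\); the wall profile and mass scale are arbitrary choices.

```python
import numpy as np

# Minimal 1D Jackiw-Rebbi sketch: a Dirac mass m(x) that changes sign across a
# domain wall binds a zero-energy mode localized at the wall,
#   psi(x) ~ exp(-int_0^x m(x') dx').
m0, xi = 1.0, 0.5                      # assumed asymptotic mass and wall width
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
m = m0 * np.tanh(x / xi)               # smooth mass domain wall: m(-inf) < 0 < m(+inf)

# Analytic zero mode: f(x) = exp(-int_0^x m dx'), spinor chi with sigma_y chi = +chi
int_m = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * dx)])
f = np.exp(-int_m)
f /= np.sqrt(np.sum(f**2) * dx)        # normalize the profile
chi = np.array([1.0, 1.0j]) / np.sqrt(2.0)
psi = np.outer(f, chi)                 # spinor wavefunction, shape (Nx, 2)

# Check that H psi ~ 0 for H = -i sigma_x d/dx + m(x) sigma_z (central differences)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Hpsi = -1j * np.gradient(psi, dx, axis=0) @ sx + (m[:, None] * psi) @ sz
print("localization length ~ 1/m0 =", 1.0 / m0)
print("max residual |H psi| =", np.abs(Hpsi).max())
```

The mode decays over a length set by the inverse Dirac mass, in qualitative analogy with the in-gap PBS trapped at the boundary between the two SPCIs.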
_Topological coupled cavity-waveguide systems.--_ As an example, we propose a coupled cavity-waveguide (CCW) system by combining the SPCIs-induced waveguide and PBS in a single system [see Figs. 4(a) and 4(b)]. Starting from the mass domain-wall systems with type-I and type-II configurations, we further introduce a GSWG by shifting the PCs of region I by half a lattice constant along the \(x\)-direction, thereby forming type-I (upper panel) and
Figure 2: (a) Transmission of the GSWG (black line), L-shaped (red line), and U-shaped (blue line) SPCIs versus frequency. The light yellow region indicates the bulk band gap. (b-d) The electric field patterns of the (b) GSWG, (c) L-shaped, and (d) U-shaped SPCIs, respectively, at a frequency of \(0.289(c/a)\).
Figure 3: (a) The eigen spectra of the SPCIs versus the shift parameter \(g\). The orange and blue areas represent the interface states, while the grey areas refer to the bulk states. The red line refers to the SPCIs-induced PBS. (b) Schematic of the formation of a PBS by combining two SPCIs with \(|g|\) and \(-|g|\). Upper panel: the SPCI with \(|g|\) is on the left side of the one with \(-|g|\), termed the type-I domain-wall system. Lower panel: the SPCI with \(|g|\) is on the right side of the one with \(-|g|\), termed the type-II domain-wall system. (c,d) Eigen spectra of the (c) type-I and (d) type-II mass domain-wall systems with \(g=0.3281a\).
type-II (lower panel) SPCIs-induced CCW systems. The transmissions of the type-I and type-II CCW systems are displayed in Figs. 4(c) and 4(d), respectively. The transmission of both systems is nearly unity except around the resonance frequency, where it exhibits a Lorentzian dip. Around the resonance frequency, the half-maximum width of the transmission dip of the type-I CCW is much larger than that of the type-II CCW, indicating that the Q-factor of the type-I SPCIs-induced PBS is smaller than that of the type-II SPCIs-induced PBS. Moreover, we display the normalized electric field patterns of the type-I and type-II CCW systems around the resonance frequency in Figs. 4(e) and 4(f), respectively.
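The Lorentzian dip and the relation between its width and the Q-factor can be illustrated with temporal coupled-mode theory; the sketch below assumes a single cavity mode side-coupled to the waveguide, with illustrative resonance frequency and Q values that are not fitted to Fig. 4.

```python
import numpy as np

# Temporal coupled-mode-theory sketch of a Lorentzian transmission dip:
#   T(w) = ((w - w0)^2 + g0^2) / ((w - w0)^2 + (g0 + ge)^2),
# where g0 and ge are the intrinsic and waveguide-coupling decay rates.
w0 = 0.289                       # resonance frequency in units of c/a (illustrative)
Q0, Qe = 2.0e4, 2.0e3            # assumed intrinsic and coupling Q factors
g0, ge = w0 / (2 * Q0), w0 / (2 * Qe)

w = np.linspace(w0 - 0.002, w0 + 0.002, 4001)
T = ((w - w0)**2 + g0**2) / ((w - w0)**2 + (g0 + ge)**2)

# loaded Q estimated from the full width of the dip at half depth
half_depth = 0.5 * (1.0 + T.min())
in_dip = w[T <= half_depth]
fwhm = in_dip[-1] - in_dip[0]
print(f"T at resonance = {T.min():.3f}, loaded Q ~ w0/FWHM = {w0 / fwhm:.0f}")
```

In this picture a stronger cavity-waveguide coupling (smaller \(Q_e\)) widens the dip, consistent with the broader resonance and lower Q observed for the type-I CCW.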
Last but not least, it is worth noting that the CCW systems formed by SPCIs are robust against perturbations. As shown in Figs. 5(a) and 5(b), typical perturbations, including two radius disorders, two location disorders, and a vacancy defect, indicated by the red, green, and blue boxes, respectively, are randomly introduced into the type-I and type-II SPCIs-induced CCW systems. For comparison, we also introduce the same perturbations into a conventional CCW system based on the square PC [see Fig. 5(c)], in which the cavity and waveguide are formed by changing the radius of an air hole to \(0.229a\) (see the red circle) and replacing one row of air holes with a row of dielectric rods (the permittivity of the replaced rods is 4.2, see the dark green circles). For both the type-I and type-II SPCIs-induced CCW systems with perturbations, the normalized electric field patterns in Figs. 5(d) and 5(e) remain almost unchanged compared with the unperturbed cases in Figs. 4(e) and 4(f), implying that the coupling at the resonance dip is not disrupted by nearby defects. In contrast, for the conventional CCW system with perturbations, the normalized electric field pattern at the resonance dip in Fig. 5(f) differs dramatically from the unperturbed case [see the inset of Fig. 5(f)], indicating that the cavity mode is destroyed by the defect.
Moreover, we display the transmission of the type-I and type-II SPCIs-induced and conventional CCW systems, with and without perturbations, around the resonance dip in Figs. 5(g-i). In all these cases, the resonance dip experiences a blue shift due to the introduction of the defects. Nevertheless, the frequency shifts of the type-I and type-II SPCIs-induced CCW systems, namely \(0.0002(c/a)\) and \(0.0011(c/a)\), are much smaller than that of the conventional CCW, namely \(0.0047(c/a)\), indicating that the resonance frequency of the SPCIs-induced CCW is also stable against perturbations. The robustness of the SPCIs-induced CCW against both disorder and defects makes it a potential candidate for high-performance building blocks
Figure 4: (a,b) Schematic of the formation of (a) type-I, and (b) type-II coupled cavity-waveguide systems. (c,d) Transmission of the (c) type-I, and (d) type-II coupled cavity-waveguide systems. (e,f) The electric field patterns of (e) type-I, and (f) type-II coupled cavity-waveguide systems around the resonance frequencies.
Figure 5: (a-c) Schematic of (a) type-I SPCIs-induced, (b) type-II SPCIs-induced, (c) conventional CCW systems with perturbation. The red, green and blue boxes refer to the radius disorder, location disorder and vacancy defect. (d-f) The normalized electric field patterns at the resonance dip corresponding to (a-c). The inset in (f) refers to the zoom-in normalized electric field pattern at the resonance dip without perturbation. (g-i) The corresponding transmission with (black line) and without (red line) perturbations of (a-c) around resonance frequency. Inset of (i) refers to the zoom-in transmission lineshape around the resonance frequencies.
of miniature integrated topological photonic circuits.
_Conclusion.--_ To conclude, SPCIs formed simply by shifting the PC are proposed to guide and trap light. The properties of the SPCIs depend closely on the shift parameter \(g\). We demonstrate that an SPCI with \(g=\pm a/2\) forms a GSWG and exhibits excellent wave-guiding performance even in the presence of sharp bends and disorder. Moreover, placing two SPCIs with opposite Dirac masses together results in a PBS via the Jackiw-Rebbi mechanism, which can act as a photonic cavity mode. Our work provides an alternative route towards the design of ultracompact integrated photonics.
_Acknowledgement.--_This work is supported by the Natural Science Foundation of Guangxi Province (Grant No. 2023GXNSFAA026048) and the project of all-English course construction for graduate students at Guangxi Normal University.
|
2309.06476 | Machine Learning the Dark Matter Halo Mass of Milky Way-Like Systems | Despite the Milky Way's proximity to us, our knowledge of its dark matter
halo is fairly limited, and there is still considerable uncertainty in its halo
mass. Many past techniques have been limited by assumptions such as the Galaxy
being in dynamical equilibrium as well as nearby galaxies being true satellites
of the Galaxy, and/or the need to find large samples of Milky Way analogs in
simulations. Here, we propose a new technique based on neural networks that
obtains high precision ($<0.14$ dex mass uncertainty) without assuming halo
dynamical equilibrium or that neighboring galaxies are all satellites, and
which can use information from a wide variety of simulated halos (even those
dissimilar to the Milky Way) to improve its performance. This method uses only
observable information including satellite orbits, distances to nearby larger
halos, and the maximum circular velocity of the largest satellite galaxy. In
this paper, we demonstrate a proof-of-concept method on simulated dark matter
halos; in future papers in this series, we will apply neural networks to
estimate the masses of the Milky Way's and M31's dark matter halos, and we will
train variations of these networks to estimate other halo properties including
concentration, assembly history, and spin axis. | Elaheh Hayati, Peter Behroozi, Ekta Patel | 2023-09-12T18:00:01Z | http://arxiv.org/abs/2309.06476v2 | # Machine Learning the Dark Matter Halo Mass of Milky Way-Like Systems
###### Abstract
Despite the Milky Way's proximity to us, our knowledge of its dark matter halo is fairly limited, and there is still considerable uncertainty in its halo mass. Many past techniques have been limited by assumptions such as the Galaxy being in dynamical equilibrium as well as nearby galaxies being true satellites of the Galaxy, and/or the need to find large samples of Milky Way analogs in simulations. Here, we propose a new technique based on neural networks that obtains high precision (\(<0.14\) dex mass uncertainty) without assuming halo dynamical equilibrium or that neighboring galaxies are all satellites, and which can use information from a wide variety of simulated halos (even those dissimilar to the Milky Way) to improve its performance. This method uses only observable information including satellite orbits, distances to nearby larger halos, and the maximum circular velocity of the most massive satellite galaxy. In this paper, we demonstrate a proof-of-concept method on simulated dark matter halos; in future papers in this series, we will apply neural networks to estimate the masses of the Milky Way's and M31's dark matter halos, and we will train variations of these networks to estimate other halo properties including concentration, assembly history, and spin axis.
Subject headings: Galaxy: halo
## 1. Introduction
In the current \(\Lambda\)CDM paradigm, dark matter is the dominant type of matter. For example, we expect that the Milky Way is surrounded by a dark matter halo that makes up most of its total mass. Because dark matter is not visible, it has been difficult to directly measure this mass around the Milky Way (MW), and hence there have been many studies that have attempted to estimate the Milky Way's dark matter content via other means (e.g., Oort, 1926; Morrison et al., 2000; Yanny et al., 2000; Battaglia et al., 2005; Frinchaboy and Majewski, 2008; Li and White, 2008; Busha et al., 2011; van der Marel et al., 2012; King et al., 2015; Lowing et al., 2015; Patel et al., 2017; McMillan, 2017; Patel et al., 2018).
Recently, Wang et al. (2020) reviewed the most common techniques that have been used to measure the Milky Way's halo mass, which we summarize here:
1. _Estimating the Galactic escape velocity using high-velocity objects_: High-velocity stars do not remain in the Milky Way's potential well for a long time, and therefore the velocity distribution of MW stars rapidly decreases above the escape velocity. Since the escape velocity is related to the halo mass profile, it is then possible to estimate halo mass from the measured stellar velocity distribution (e.g., Smith et al., 2007; Piffl et al., 2014; Williams et al., 2017; Monari et al., 2018; Deason et al., 2019; Grand et al., 2019).
2. _Measuring the rotation curve_: Circular velocities can be measured for gas in the interstellar medium (ISM) as well as maser sources and disk stars. In dynamical equilibrium, these are related to the enclosed mass via \(M_{\rm enc}\propto V^{2}R/G\), with the constant of proportionality dependent on the assumed asphericity of the mass distribution (e.g., Klypin et al., 2002; McMillan, 2011; Pawlowski et al., 2012; Irrgang et al., 2013; McMillan, 2017; Nesti and Salucci, 2013; Cautun et al., 2020).
3. _Modeling tracers (halo stars, globular clusters, and satellite galaxies) with the Spherical Jeans equation_: For regions beyond the Galactic disk, one can measure the radial velocity dispersion and velocity anisotropy of tracers and infer the enclosed mass using the Jeans equation. This method requires an assumption for the density profile, which has been determined to have a power-law form locally; this form is typically assumed valid to very large distances. The radial velocity dispersion is often measured observationally by assuming that it is the same as the line-of-sight velocity dispersion. The velocity anisotropy is determined by proper motion measurements of the tracers, which is a key uncertainty in this method since it is difficult to obtain high-quality proper motion data for tracers at large distances (e.g., Battaglia et al., 2005; Dehnen et al., 2006; Xue et al., 2008; Watkins et al., 2010; Gnedin et al., 2010; Bhattacharjee et al., 2014; Huang et al., 2016; Ablimit and Zhao, 2017; Sohn et al., 2018; Zhai et al., 2018; Fritz et al., 2020).
4. _Modeling tracers (halo stars, globular clusters, and satellite galaxies) with phase-space distribution functions_:
Using the assumption of steady state structure as well as an assumption about the shape of the potential, one can calculate phase-space distribution functions, i.e., the observed distributions of orbital energy and angular momentum for tracers of the potential. Via forward modeling of the true observations, it is then possible to reverse this process to infer the underlying gravitational potential well and the halo mass (e.g., Zaritsky et al., 1989; Kochanek, 1996; Wilkinson and Evans, 1999; Sakamoto et al., 2003; Deason et al., 2012; Eadie et al., 2015, 2017; Eadie and Juric, 2019).
5. _Simulating and modeling the dynamics of stellar streams_: Stellar stream shapes around the Galaxy provide information about galactic evolution and the underlying gravitational potential. The path of the stream and the different orbital speeds of objects along the streams tell us about the tidal forces that the object experienced, which can then be related to the potential well shape and the halo mass (e.g., Lin et al., 1995; Law et al., 2005; Newberg et al., 2010; Gibbons et al., 2014; Kupper et al., 2015; Hendel et al., 2018; Malhan and Ibata, 2019; Erkal et al., 2019).
6. _Modeling the motion of the Milky Way, M31, and other distant satellites under the framework of the Local Group timing argument_: Despite the expansion of the Universe, Andromeda and the Milky Way are approaching each other because of their gravitational pull. Under the assumption that the two galaxies are in a Keplerian orbit, one may infer their total mass by measuring other orbital properties including their relative velocity, their distance, and the age of the Universe (e.g., Kahn and Woltjer, 1959; Zaritsky et al., 1989; Li and White, 2008; van der Marel et al., 2012; Sohn et al., 2013; Zaritsky et al., 2020; Zhai et al., 2020; Chamberlain et al., 2023).
7. _Measurements made by linking the brightest Galactic satellites to their counterparts in simulations_: In this method, one uses a Bayesian framework to measure the mass of the Milky Way by selecting simulated halos (i.e., from a dark matter simulation) that have satellites that are most similar to the satellites of the Milky Way. To select the best matches, it is important to have the proper motion of the satellites, as it has been shown that specific angular momentum is often a better constraint than knowing only the position, radial velocity, or orbital energy of the satellites (e.g., Busha et al., 2011; Cautun et al., 2014; Patel et al., 2017; Li et al., 2017; Patel et al., 2018).
Each of the above techniques requires assumptions, which contribute to systematic uncertainties in constraining the Milky Way's dark matter halo mass. Most of the techniques above assume dynamical equilibrium for the Milky Way's halo. Dynamical equilibrium is known to be violated at small radii due to the passage of the Large Magellanic Cloud (e.g., Laporte et al., 2018; Garavito-Camargo et al., 2019) near the center of the Milky Way, and at large radii by continued accretion onto the halo (e.g., McBride et al., 2009; Behroozi et al., 2013). Nonetheless, dynamical equilibrium techniques share a strength that observations from arbitrary numbers of tracers can be combined.
The techniques that do not assume dynamical equilibrium rely on \(\Lambda\)CDM simulations. While these methods can be designed to avoid systematic biases from out-of-equilibrium systems, they are limited in the amount of data they can combine: the more observational data one has, the more difficult it is to find simulated halos that match all the observational constraints simultaneously (see discussion in, e.g., Patel et al., 2018).
Here, we use a new approach for measuring the Milky Way's dark matter halo mass. We train a neural network on simulated galaxies to learn the transformation for linking observable galaxy properties (starting with the specific angular momenta of satellites of the Milky Way) to halo masses. This method has the following benefits:
1. No dynamical equilibrium assumptions are made.
2. No assumptions about most nearby galaxies being satellites are made.
3. The approach can learn about relationships between observables (e.g., satellite orbits) and mass even from halos that do not match the MW or M31, leading to greater constraining power.
4. Arbitrary constraints from the local or larger-scale environment (e.g., distance and/or velocity offsets to the nearest larger halo) can be self-consistently included.
This paper is the first in a series that will explore the ability of neural networks to constrain the properties of the Local Group's dark matter distribution. While beyond the scope of the current paper, neural networks in the future will also provide the advantages of:
1. Being able to use arbitrary non-dark matter tracers (e.g., gas rotation curves in hydrodynamical simulations) as input features to neural networks to achieve the most accurate mass constraints.
2. Being able to use domain adaptation techniques (e.g., Ciprijanovic et al., 2022) to identify mass-observable relationships that are independent of baryonic physics differences across hydrodynamical simulations.
3. Being able to estimate other halo properties as well, just by changing the training target to other halo properties. Such properties could include the halo spin axis, halo concentration, and halo mass assembly history, with minimal additional effort.
In this paper, we use dark matter halo simulations to train neural networks to estimate masses across a broad halo mass range (\(10^{8}-10^{14}\,M_{\odot}\)). Inputs to the neural networks are based on observables including neighboring galaxy orbits, maximum circular velocity of the largest satellite, and distances to nearby more massive halos. In this paper, we take the limit of perfect information, assuming that no observational errors exist. In the second paper in this series, we will convolve simulated halo and galaxy properties with realistic observational errors, re-train the network, and use observed satellite orbits from _Gaia_ DR3 to estimate the mass of the Milky Way's and Andromeda's dark matter halos. In the third paper in this series, we will extend the analysis to predict Milky Way halo properties beyond mass, including concentration, spin axis, and assembly history.
This paper is organized as follows. In Section 2, we describe the training process and dark matter simulations; in Section 3,
we illustrate the performance of the resulting neural networks; and we discuss these results and provide conclusions in Section 4. We assume a flat, \(\Lambda\)CDM universe with \(\Omega_{m}=0.307\), \(\Omega_{\Lambda}=0.693\), \(n_{s}=0.96\), \(h=0.68\), and \(\sigma_{8}=0.823\). We adopt the virial halo mass definition \(M_{\rm vir}\) from Bryan & Norman (1998), i.e., the total mass (dark + baryonic) within a radius \(R_{\rm vir}\) of a density peak.
## 2. Methods
### Dark Matter Simulation
For this work, we use the public Very Small MultiDark Planck (VSMDPL) simulation with \(3840^{3}\) dark matter particles, each of mass \(6.2\times 10^{6}M_{\odot}/h\). The simulation is based on a flat, \(\Lambda\)CDM universe with \(\Omega_{m}=0.307\), \(\Omega_{\Lambda}=0.693\), \(n_{s}=0.96\), \(h=0.68\) and \(\sigma_{8}=0.823\). It evolves matter from \(z=150\) to \(z=0\) within a periodic cube of side length \(160\) comoving Mpc/\(h\). There are \(151\) snapshots with identified halos between \(z=0\) and \(z=25\). Halos are identified using Rockstar (Behroozi et al., 2013), and merger trees are identified using the Consistent Trees algorithm (Behroozi et al., 2013). Each halo is identified in the merger trees as a central halo or as a satellite halo (i.e., a halo contained within the virial radius of a larger halo). We adopt the virial halo mass definition \(M_{\rm vir}\), i.e., the total mass (dark + baryonic) within a radius \(R_{\rm vir}\) of a density peak, such that the average density enclosed is \(\rho_{\rm vir}\) from Bryan & Norman (1998).
### Intuition for Using Specific Angular Momenta
One of the principal inputs to our neural networks is the set of specific angular momenta of neighboring galaxies. Under our halo definition, both the halo radius and the halo circular velocity (\(\sqrt{GM/R}\)) scale as halo mass to the one-third power.
As a result, the characteristic distances and velocities of the satellite halos with respect to the host halo (which by dimensional analysis are proportional to the halo radius and circular velocity) both scale as host halo mass to the one-third power. The characteristic specific angular momenta of satellites then depend on halo mass to the two-thirds power:
\[j=(R\times V)\propto R_{\rm vir}\times v_{\rm circ,vir}\propto M_{\rm vir}^{2 /3}. \tag{1}\]
This characteristic scaling is evident across a broad mass range for all central halos in our simulation in Fig. 1, which demonstrates the average specific angular momenta of the \(30\) largest neighbors versus central halo mass.
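As a quick numerical illustration of this scaling (a toy model with illustrative numbers, not the VSMDPL catalog), mock satellites whose distances and speeds follow the virial scalings above recover a logarithmic slope close to 2/3:

```python
import numpy as np

# Toy numerical check of Eq. (1): mock satellites with distances ~ R_vir ~ M^(1/3)
# and speeds ~ v_circ = sqrt(G M / R_vir) ~ M^(1/3) give j ~ M^(2/3).
# The overdensity and scatter below are illustrative choices, not the
# Bryan & Norman (1998) definition used in the paper.
G = 4.30091e-6                 # gravitational constant in kpc (km/s)^2 / Msun
rho_vir = 2.8e4                # Msun / kpc^3, roughly 200x the critical density

rng = np.random.default_rng(0)
M_vir = 10.0**rng.uniform(10, 14, 10000)                      # host masses in Msun
R_vir = (3.0 * M_vir / (4.0 * np.pi * rho_vir))**(1.0 / 3.0)  # kpc
v_circ = np.sqrt(G * M_vir / R_vir)                           # km/s

# mock satellites: random radii, speeds near v_circ, random orbital orientation
r = rng.uniform(0.2, 1.0, M_vir.size) * R_vir
v = rng.normal(1.0, 0.3, M_vir.size).clip(0.2) * v_circ
sin_theta = rng.uniform(0.2, 1.0, M_vir.size)
j = r * v * sin_theta                                         # kpc km/s

slope = np.polyfit(np.log10(M_vir), np.log10(j), 1)[0]
print(f"fitted slope d(log j)/d(log M) = {slope:.2f} (expected ~0.67)")
```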
As discussed in Patel et al. (2018), the specific angular momentum of the satellite galaxies provides strong constraints on host halo mass. As shown in Fig. 1, the \(M_{\rm vir}^{2/3}\) scaling is evident for a very wide range of halo masses. Only halos above \(10^{13.5}M_{\odot}\) start to show a bend in the scaling relation, due to more radial orbits for massive halos. Additionally, halos below \(M_{\rm vir}=10^{12.5}M_{\odot}\) show scatter towards high specific angular momenta, which occurs for lower-mass halos that are near much more massive halos.
In this paper, we do not assume advanced knowledge of which nearby galaxies are satellites and which are not. Nonetheless, satellite angular momenta are approximately conserved throughout their orbits (Patel et al., 2018). Hence, even when bound and unbound galaxies are mixed in a given vicinity of a halo, the bound galaxies' orbits will appear as an overdensity in the specific angular momentum distribution of the neighboring galaxies, and so specific angular momenta still provide useful information about host halo mass.
### Halo Selection and Input Features
To train our deep neural networks, we first select halos with peak masses (i.e., their largest historical halo mass) larger than \(10^{8}M_{\odot}\) from the VSMDPL simulation, as the simulation does not resolve lower-mass halos well. These are also the only halos expected to host galaxies for which proper motions can be measured, due to the atomic cooling limit suppressing star formation in lower-mass halos (e.g., O'Shea et al., 2015). In contrast to past studies, we place no additional prior or selection on host halo masses, as this information comes from observables alone in our method.
Past studies to infer mass have typically assumed that all nearby galaxies are satellites of the Milky Way, which places a strong prior on host halo mass. Because we do not know this to be the case in reality, we drop this assumption in this study, instead using the orbital properties (including specific angular momentum \(j\), radial distance \(R\), and relative velocity \(V\)) of the largest neighboring halos out to a fixed distance as our main input features. For this paper, we select neighboring halos out to \(300\) kpc from central halos, corresponding approximately to the distance out to which proper motions can be measured for Milky Way satellite candidates with _Gaia_.
In particular, we do not make any cuts on whether the neighbors are bound or not, as this information is not known _a priori_ from the observations. Past studies, including Patel et al. (2018), used the specific angular momenta of \(\sim\)10 satellites to infer the mass of the Milky Way's halo, whereas the _Gaia_ mission has now provided 6D phase space information (and therefore angular momenta) for \(\sim\)50 satellites (Li et al., 2021; Fritz et al., 2018; McConnachie & Venn, 2020) within \(300\) kpc. Hence, we train a 10-neighbor neural network to compare our approach with past approaches, and we also train a 30-neighbor neural network to show the improvement possible with our new approach.
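A minimal sketch of how the per-neighbor orbital features could be assembled from a halo catalog is shown below; the function and array names are illustrative, and positions and velocities are assumed to already be in physical units relative to the simulation box.

```python
import numpy as np

def orbital_features(host_pos, host_vel, nbr_pos, nbr_vel, nbr_vpeak,
                     n_neighbors=30, r_max=300.0):
    """Return (j, R, V) for the n_neighbors neighbors with the highest peak vmax
    within r_max (kpc) of the host, with no cut on whether they are bound."""
    dr = nbr_pos - host_pos                          # relative positions, kpc
    dv = nbr_vel - host_vel                          # relative velocities, km/s
    R = np.linalg.norm(dr, axis=1)
    keep = R < r_max
    dr, dv, R, vpeak = dr[keep], dv[keep], R[keep], nbr_vpeak[keep]
    order = np.argsort(vpeak)[::-1][:n_neighbors]    # largest neighbors first
    dr, dv, R = dr[order], dv[order], R[order]
    V = np.linalg.norm(dv, axis=1)
    j = np.linalg.norm(np.cross(dr, dv), axis=1)     # specific angular momentum, kpc km/s
    return j, R, V
```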
In tests, we found that dropping the assumption of satellite membership made it very difficult for networks that used angular momenta alone to reliably estimate host halo mass. As discussed in later sections, the neighbors of low-mass halos
Figure 1.— The average specific angular momenta of the 30 largest satellites (selected by highest peak \(v_{\rm max}\)) versus central halo mass, for dark matter halos in the VSMDPL simulation. The expected dependence on halo mass (\(j\propto M_{\rm halo}^{2/3}\)) is shown by the red line, which is generally tightly followed by the simulated halos.
(\(<10^{11}M_{\odot}\)) do not have specific angular momentum distributions that correlate with halo mass; because low-mass halos are much more numerous than high-mass halos, training results in networks that try to limit the worst-case performance for low-mass halos, rather than improve the best-case performance for higher-mass halos. However, adding some observable information that correlates broadly with host halo mass can help networks discriminate between the cases where the specific angular momentum of neighboring halos correlates with halo mass and where it does not.
In this work, we use the maximum circular velocity, \(v_{\rm max}\), of the most massive satellite (the Large Magellanic Cloud in the case of the Milky Way, or M33 in the case of Andromeda) to help the networks distinguish between whether they are in the low-mass (neighbor angular momenta uncorrelated with host halo mass) or high-mass (neighbor angular momenta correlated with host halo mass) regimes. Using \(v_{\rm max}\) of the largest satellite in this way follows from past studies that have also done so (see, e.g., Busha et al., 2011; Patel et al., 2017, 2018; Patel & Mandel, 2023).
From Fig. 1, we know that nearby massive halos can influence the angular momentum distributions of satellites. Hence, we also include input features corresponding to the distance to the nearest larger halo (\(D_{\rm larger}\)) and the distance to the nearest larger halo with \(M_{\rm vir}\geq 10^{14}M_{\odot}\) (\(D_{14}\)). At high mass, these quantities converge by definition.
### Network Training
Neural networks consist of interconnected nodes organized into layers, and they are capable of learning intricate patterns and relationships from data. We have used a deep neural network (NN) for our regression task of estimating halo mass from galaxy observables. Deep NNs are commonly used for image- and language-related tasks, but they can also be applied to arbitrary structured data, as in this paper.
The hyper-parameters and structure that we used in our neural networks are as follows (a minimal code sketch of this architecture is given after the list):
1. Input Size: Our input layer has 3 features for the orbital properties (\(j\) [specific angular momentum], \(R\) [distance from halo center], \(V\) [velocity offset from halo center]) of each neighboring halo, as well as an additional 3 features for the target halo's environment (\(v_{\rm max}\) of most massive satellite, distance to nearest larger halo, and distance to nearest \(10^{14}M_{\odot}\) halo). For the 10-neighbor network, this totals 33 input features, and for the 30-neighbor network, this totals 93 input features.
2. Layer Architecture: We use 5 fully-connected hidden layers. Each hidden layer (i.e., a layer in between the input and output layers) contains neurons that apply a nonlinear transformation to the input features, which are taken from the outputs of the previous layer. Fully connected layers are those in which every neuron in a given layer receives an input from every neuron in the previous layer. Initially, we have 10 neurons in the first hidden layer. Progressing through the network, we decrease the number of neurons in each subsequent layer (8, 6, 4, 2). This is known as a decreasing architecture, and it helps in reducing the complexity of the information passed through each layer as we go deeper into the network.
3. Activation Function: We have used Rectified Linear Unit (ReLU) activation functions in our hidden layers. ReLU is a common choice because it introduces nonlinearity into the model while being computationally efficient. Nonlinearity is essential in neural networks-otherwise the action of the neural network could be represented by a linear transform (i.e., a matrix multiplication), which would prevent it from learning complex, nonlinear relationships between the input and output data.
4. Output Layer: We have a single neuron in the output layer, since our network is performing regression to predict a single output (i.e., the mass of a central halo).
5. Loss Function: We have chosen Mean Squared Error (MSE) as our loss function, i.e., the metric by which we judge the neural network's performance. MSE is commonly used for regression tasks and calculates the average of the squared differences between predicted and actual values. It penalizes larger errors more heavily.
6. Optimizer: We have chosen the Adam optimizer. Adam is an adaptive learning rate optimization algorithm that combines the benefits of two other popular optimizers, RMSprop and Momentum, in that it adaptively chooses how far to proceed along the gradient of the loss function for each update to the neural network parameters. It is well-suited for a wide range of problems and often converges faster than traditional stochastic gradient descent (SGD).
7. Learning Rate: Our learning rate is set to 0.001. This parameter controls the initial step size during optimization. The value of 0.001 is a common starting point, but its value can be tuned depending on the specific problem and data set.
8. Batch Size: Our batch size is 64. This determines the number of input data points used in each update of the neural network's weights during training. Smaller batch sizes can lead to noisier updates but are more computationally efficient, while larger batch sizes provide smoother updates but require more time to compute each update.
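A minimal PyTorch sketch consistent with the architecture and optimizer settings listed above is shown below; the framework and exact implementation used in this work are not specified here, so this is an illustrative reconstruction, and predicting the logarithm of the halo mass is an assumption.

```python
import torch
import torch.nn as nn

class HaloMassNet(nn.Module):
    """Fully connected network: hidden layers of 10, 8, 6, 4, 2 neurons with ReLU,
    and a single output neuron for the (log) halo mass."""
    def __init__(self, n_inputs=93):                 # 30-neighbor case: 3*30 + 3 features
        super().__init__()
        widths = [n_inputs, 10, 8, 6, 4, 2]
        layers = []
        for w_in, w_out in zip(widths[:-1], widths[1:]):
            layers += [nn.Linear(w_in, w_out), nn.ReLU()]
        layers.append(nn.Linear(2, 1))               # single regression output
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = HaloMassNet(n_inputs=93)
criterion = nn.MSELoss()                             # mean squared error loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# training would iterate over mini-batches of 64 standardized feature vectors
```

The 10-neighbor network is obtained simply by setting `n_inputs=33`.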
For training, we select all central halos with at least \(N\) neighbors within 300 kpc (with \(N=10\) or 30, as appropriate). As above, we place no prior on central halo mass, so these halos range from \(\sim 10^{8}-10^{15}M_{\odot}\). We use three orbital parameters (\(j\), \(R\), and \(V\)) for each of the \(N\) neighbors with the highest peak \(v_{\rm max}\) as inputs to the neural network, as a proxy for the brightest galaxies (Reddick et al., 2013). We also use the \(v_{\rm max}\) of the most massive satellite (corresponding to the \(v_{\rm max}\) of the Large Magellanic Cloud for the Milky Way), the distance to the nearest larger halo (corresponding to the distance to M31 for the Milky Way), and the distance to the nearest \(10^{14}M_{\odot}\) or larger halo (corresponding to the Virgo Cluster for the Milky Way) as input parameters. As above, the 10-neighbor network has 33 input features, and the 30-neighbor has 93 input features.
We used simulation snapshots from \(z=0\) to \(z=0.25\) from the VSMDPL simulation to increase the diversity of neighboring halo orbital configurations available for training. We found that including training data from earlier snapshots did not cause a measurable bias in median predicted masses
for \(z=0\) halos, suggesting that the distribution of orbital configurations has not changed significantly over this redshift interval. Halos are split into a training sample (63%) and a test sample (37%) according to whether the halos have an X-coordinate less than or greater than 96 Mpc/\(h\) (compared to an overall box length of 160 Mpc/\(h\)). This division is made to capture the uncertainties arising both from Poisson statistics and larger-scale cosmic variance.
To pre-process, we ordered neighboring halos by increasing specific angular momenta, took the logarithms of all input features, subtracted the mean values across all neighbors, and scaled to unit variance. We then trained two 5-layer fully connected neural networks on the 10- and 30-neighbor input feature vectors to predict the masses of the corresponding central halos. The details of the network structure are shown in Fig. 2, and the details of the hyper-parameters are shown in Table 1.
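A minimal sketch of this pre-processing and of the train/test split by box coordinate is given below; array names and the exact standardization convention (here, per-feature statistics from the training set) are illustrative assumptions.

```python
import numpy as np

def preprocess(j, R, V, vmax_sat, d_larger, d14):
    """Build one input vector: neighbors ordered by increasing j, all features logged."""
    order = np.argsort(j)                     # order neighbors by specific angular momentum
    return np.concatenate([np.log10(j[order]), np.log10(R[order]), np.log10(V[order]),
                           np.log10([vmax_sat, d_larger, d14])])

def split_by_x(x_coord_mpch, features, targets):
    """63%/37% train/test split by halo x-coordinate in the 160 Mpc/h box."""
    train = x_coord_mpch < 96.0
    return (features[train], targets[train]), (features[~train], targets[~train])

def standardize(train_X, test_X):
    """Subtract the mean and scale to unit variance (statistics from the training set)."""
    mu, sigma = train_X.mean(axis=0), train_X.std(axis=0)
    return (train_X - mu) / sigma, (test_X - mu) / sigma
```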
We varied several different hyper-parameters for the training process: the number of layers, the learning rate, the number of nodes per layer, the loss function, and the batch size. We used a hand search to tune the learning rate, batch size, and loss function. For the rest of the hyper-parameters, we started with a simple network and increased the size until the mean-squared error did not improve further.
We did not find any substantial improvements over the fiducial choice of parameters in Table 1, and in some cases found worse performance. For example, when using optimizers such as RMSprop or Adagrad, we observed that the network exhibited a loss of prediction accuracy, particularly at the high mass end. This suggests that these optimizer choices may have gotten stuck in local minima, as performance for the vast majority of the halo sample (i.e., low mass halos) was prioritized over performance for high-mass halos.
## 3. Results
### Performance of the neural network approach
We measure the performance of the neural network approach by applying the trained network to halos that it has never seen before (i.e., halos in our test set). The variance of the predicted halo masses at fixed actual halo mass then corresponds to the expected uncertainties of the network when applied to new data, such as for the Milky Way and M31. Hereafter, we quote network uncertainties at an actual halo mass of \(10^{12}M_{\odot}\) to represent the expected performance for the Milky Way and M31.
Fig. 3 summarizes the results of our work, demonstrating that the specific angular momenta of neighboring galaxies can be used to accurately infer the masses of central halos. The medians of the neural networks' predicted masses (in bins of actual halo mass) closely match actual halo masses, with typical median offsets of \(\lesssim 0.07\) dex at halo masses of \(10^{12}M_{\odot}\). However, the uncertainty in the predicted masses is significantly larger for low-mass halos (below a threshold of \(\sim 10^{11.7}M_{\odot}\)) compared to high-mass halos. The size of the uncertainty is primarily influenced by whether the neighboring galaxies within 300 kpc are satellites or not. We investigate this aspect further in the next subsection.
The bottom plots in Fig. 3 show the RMS magnitudes of the errors across the full range of predicted masses. Specifically for MW-mass halos (again considering a threshold of \(M_{\rm vir}\gtrsim 10^{11.7}M_{\odot}\)), the typical errors are \(\sim 0.2\) dex when using 10 neighboring halos, and they are \(\sim 0.14\) dex when using 30 neighboring halos, corresponding to a 30% reduction in uncertainty. Since the ratio of these errors is less than expected from Poisson statistics (\(0.2/0.14\sim 1.4<\sqrt{30/10}\sim 1.7\)), this likely indicates the presence of correlated orbits among some of the neighbors, such as satellites coming in along the same filaments or even some satellites being satellites of other satellites (e.g., Patel et al., 2020; Erkal et al., 2020; Battaglia et al., 2022).
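A minimal sketch of this binned evaluation (median offset and RMS error of the predicted log-masses in bins of true mass) is given below; the bin width and variable names are illustrative.

```python
import numpy as np

def binned_performance(log_m_true, log_m_pred, bin_width=0.25):
    """Print the median offset and RMSE of predicted log-masses per true-mass bin."""
    bins = np.arange(log_m_true.min(), log_m_true.max() + bin_width, bin_width)
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (log_m_true >= lo) & (log_m_true < hi)
        if sel.sum() < 10:
            continue
        err = log_m_pred[sel] - log_m_true[sel]
        print(f"{lo:5.2f}-{hi:5.2f}: median offset {np.median(err):+.3f} dex, "
              f"RMSE {np.sqrt(np.mean(err**2)):.3f} dex  (N={sel.sum()})")
```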
### Understanding what information constrains halo masses
To analyze the relationships between satellite specific angular momenta (\(j\)), relative velocities (\(V\)), and radial distances (\(R\)) with respect to halo mass, we present Figures 4 and 5. These figures illustrate the distributions of neighboring halos' orbital properties, where the left-hand panels are color-coded by the most massive satellite's maximum circular velocity (\(v_{\rm max,sat}\)), and the right-hand panels are color-coded by \(D_{14}\), the distance to the nearest massive halo (\(M_{\rm vir}>10^{14}{\rm M}_{\odot}\)).
The overall distributions of \(j\), \(R\), and \(V\) exhibit distinct patterns, particularly with larger spreads observed for low-mass halos compared to high-mass halos. This can be attributed to the neighbors of high-mass halos being predominantly satellites of the high-mass halo, so the high-mass halo has a strong influence on its neighbors' orbits. However, neighbors of low-mass halos are typically not satellites and hence the presence of the low-mass halo does not strongly influence their orbits. Therefore, the distributions of neighbors' \(j\) and \(V\) are much more correlated with halo mass for high-mass than low-mass halos.
The color coding in the left-hand plots shows a smooth progression with actual halo mass, demonstrating a strong correlation between halo mass and the maximum circular velocity of the most massive satellite (\(v_{\rm max,\,sat}\)). Hence, the neural networks can effectively utilize satellite orbit information for large halo masses (when orbit information correlates with halo mass), while relying on the most massive satellite's maximum circular velocity as the best estimate when neighboring objects are not satellites.
The right-hand plots reveal that the presence of massive nearby halos biases neighbors' orbits, especially for low-mass halos. This outcome is expected since tidal forces from high-mass halos exert influence on the orbits of all neighboring halos, resulting in increased relative velocities between the low-mass halo and its neighbors. Additionally, massive halos
Figure 2.— The neural network geometry we use to predict halo masses. Input features include neighboring halos’ specific angular momenta (\(j\)), radial distances (\(R\)), and relative velocities (\(V\)), as well as the maximum circular velocity of the most massive satellite (\(v_{\rm max,sat}\)), the distance to the nearest larger halo (\(D_{\rm larger}\)), and the distance to the nearest halo with \(M_{\rm vir}>10^{14}M_{\odot}\) (\(D_{14}\)). For all networks (regardless of the number of inputs), there are 5 hidden layers gradually decreasing from 10 nodes to 2 nodes, with one output layer corresponding to the predicted halo mass.
have high satellite velocities, and because the orbits of satellite halos often extend beyond halos' virial radii (where they become known as "backsplash" or "flyby" halos; see, e.g., Diemer 2021; O'Donnell et al. 2021), some neighboring halos around low-mass halos will have orbits that are strongly influenced by their high-mass neighbors.
Figures 6 and 7 show the relationship between three variables: distance to the nearest \(M_{\rm vir}>10^{14}M_{\odot}\) halo (\(D_{14}\)), distance to the nearest larger halo (\(D_{\rm larger}\)), and \(v_{\rm max}\) of the most massive satellite (\(v_{\rm max,sat}\)), with respect to halo mass. The parameter \(v_{\rm max,sat}\) has a very strong correlation with host halo mass, as larger halos typically host larger satellites.
The parameter \(D_{\rm larger}\) also exhibits a correlation with halo mass. However, the relationship is weaker and exhibits a different shape from that of \(v_{\rm max,sat}\). Larger halos are relatively less common, which directly implies that the distances between large halos tend to be larger than the distances between small halos, despite the fact that larger halos are more biased relative to the underlying dark matter distribution. We also note that there is a kink in the median relation between \(D_{\rm larger}\) and halo mass at \(M_{\rm vir}\sim 10^{11}M_{\odot}\), which occurs because we are selecting halos with at least 30 neighbors within a 300 kpc radius; for low-mass halos, this preferentially selects halos in dense environments, i.e., for which the distance to surrounding halos is significantly decreased.
Finally, unlike \(v_{\rm max,sat}\) and \(D_{\rm larger}\), \(D_{14}\) does not exhibit much correlation with halo mass. This indicates that halos of varying masses are present across different environments, leading to a wide range of \(D_{14}\) values irrespective of halo mass.
To confirm our interpretation that neighbors of low-mass halos are not providing any information about host halo masses, we trained a network with just three parameters (\(D_{14}\), \(D_{\rm larger}\), and \(v_{\rm max,sat}\)), and found similar errors for low-mass halos as compared to the network provided with full information about satellites (Figure 8). At the same time, the errors from this network (\(>0.3\) dex) imply that, for halos with \(M_{\rm vir}>10^{11.7}M_{\odot}\), adding orbital information for neighboring halos reduces the variance in predicted masses by \(>90\%\).
Figure 3.— Predicted halo mass versus actual halo mass for the neural networks in this paper applied to dark matter simulations. Input features to the networks correspond to observables, primarily including neighboring galaxies’ specific angular momenta and other orbital properties. The left figure shows the result from halos with at least 30 neighboring galaxies, with reduced errors compared to the right figure, which used halos having at least 10 neighboring galaxies. In each figure, the bottom panels show the root mean square error (RMSE) as a function of actual halo mass. Typical errors are very good in both cases, about 0.21 dex for Milky Way-mass halos for the network using 10 neighbors and 0.14 dex for Milky Way-mass halos for the network using 30 neighbors. Error bars show the standard deviations of the predicted halo masses as a function of actual halo mass. The black line shows medians of predicted halo masses in bins of actual halo mass. The red line serves as a reference to indicate where the predicted mass would be equal to the actual mass.
Hence, although \(v_{\rm max,sat}\) is helpful to establish a broad prior on host halo mass, most of the information leading to the final predicted mass for MW-mass and larger halos is coming from neighboring halos' orbits.
Since we have shown that nearby massive halos impact neighboring halos' orbital distributions, we also consider a network trained on isolated halos (Fig. 9). Since the Milky Way and M31 are \(\sim 11\) Mpc/\(h\) from the Virgo Cluster (e.g., Mei et al., 2007), we trained a separate network using only halos with \(D_{14}>10\) Mpc/\(h\). This network performed only marginally better (0.133 dex vs. 0.138 dex errors for \(10^{12}M_{\odot}\) halos) than the network with no selection on \(D_{14}\), suggesting that the network with no selection is nonetheless able to compensate well for the presence of a larger nearby halo.
## 4. Discussion and Conclusions
We find that applying a neural network with information from neighboring halo orbits can place tight constraints on the masses of Milky Way-like halos, with typical errors as low as 0.14 dex.
Figure 4.— **Left:** the median specific angular momentum of neighboring halos as a function of halo mass, for halos that have at least 30 neighbors within 300 kpc. Halos are color-coded by the most massive satellite’s maximum circular velocity, which correlates with host halo mass. Here, the neighbors of high-mass halos are much more likely to be satellites and thus have orbits with correlated specific angular momenta. In contrast, low-mass halos usually have non-satellite neighbors, which are less influenced by the low-mass halo’s presence. So, the distributions of neighbors’ specific angular momenta are much more correlated with halo mass for high-mass than low-mass halos. **Right:** the median specific angular momentum of halos’ neighbors, now color-coded by the distance to the nearest massive halo (\(M_{h}>10^{14}M_{\odot}\)). Gravitational forces from high-mass halos impact the orbits of all nearby halos, leading to higher relative velocities between low-mass halos and their neighbors. Moreover, massive halos have satellites that possess high velocities, and as these satellites’ orbits can extend beyond the virial radii of the massive halos, they can pass nearby other lower-mass halos even as they have very large specific angular momentum offsets. Hence, the largest median specific angular momenta typically occur near massive halos.
Figure 5.— **Left:** the median relative velocities of neighboring halos as a function of halo mass, for those halos with 30 neighbors within 300 kpc from their centers. Halos are color-coded by the most massive satellite’s maximum circular velocity, which correlates with host halo mass. Here, the neighbors of high-mass halos are much more likely to be satellites and thus have orbits with correlated relative velocities. In contrast, low-mass halos usually have non-satellite neighbors. As in Fig. 4, the distributions of neighbors’ relative velocities are much more correlated with halo mass for high-mass than low-mass halos. **Right:** median relative velocities of halos’ neighbors, now color-coded by the distance to the nearest massive halo (\(M_{h}>10^{14}M_{\odot}\)). As in Fig. 4, the largest median neighbor relative velocities typically occur near massive halos.
In our analysis, using information from 30 neighboring galaxies yields more accurate predictions of central halo masses compared to using only 10 neighboring galaxies, for which the uncertainties rise to \(\sim 0.2\) dex. This finding is consistent with the result reported by Patel et al. (2018), in that incorporating specific angular momenta as input variables allows for tight constraints in predicting central halo masses.
Our approach offers several advantages over previous methods, addressing certain limitations and paving the way for future advancements. First, we have shown that it is not necessary to assume dynamical equilibrium or to assume satellite status to achieve tight constraints on halo masses, at least for halos with enough nearby satellites. Secondly, past simulation-based methods, such as those employed in Patel et al. (2018) and others' previous works, may have slightly underestimated errors due to correlations between satellite orbits, regardless of whether the measurement errors are included or not. In our case, we find that going from 10 satellites to 30 satellites gives a factor of \(\sqrt{2}\) improvement in uncertainties, whereas Poisson statistics would suggest a factor of \(\sqrt{3}\). Part of the barrier in achieving lower (Poisson-limited) uncertainties could be due to correlations between satellite orbits, such as satellites arriving along the same filament. However, part of the barrier could also be limitations in characterizing the environment. For example, we showed that nearby high-mass halos cause contamination in satellite orbits, but other aspects of the environment could correlate with satellite orbits in as yet unexplored/unknown ways.
This study did not investigate the impact of observational errors, in part because we wished to understand the maximal amount of information present in satellite orbits. For a study that is applicable to the Milky Way and/or M31 systems, one would need to account for observational errors that correlate with heliocentric distance.
Figure 6.— **Left:** There is a strong correlation between the maximum circular velocity of the most massive satellite and the host halo mass. The color coding indicates the distance to the nearest larger halo, which is also correlated with host halo mass, but more weakly than the maximum circular velocity of the most massive satellite. **Right**: This plot shows the correlation between the distance to the nearest larger halo and the host halo mass. Larger halos are less prevalent, which leads to larger distances between them when compared to smaller halos. So, the distribution of larger halos contributes to a distinct pattern for \(D_{\rm larger}\), different from that of \(v_{\rm max,sat}\). There is a noticeable kink in the median relation between \(D_{\rm larger}\) and halo mass around \(M_{\rm vir}\sim 10^{11}\,M_{\odot}\). This kink arises due to our selection criteria, where we focus on halos with a minimum of 30 neighbors within a 300 kpc radius.
Figure 7.— There is little correlation between the distance to the nearest massive halo and the host halo mass. Halos of all masses can be found near massive halos, which in turn can significantly impact orbital properties of their neighboring halos.
This is the next planned step in our paper series, which will involve training a neural network on simulations with realistic observational errors and then using the resulting network to measure the masses of the Milky Way and Andromeda. Furthermore, our current work, similar to many previous studies, did not extensively test the method on hydrodynamical simulations. We recognize the importance of investigating the effectiveness of our approach on non-dark matter-only simulations, and we also plan to perform such tests. In particular, we plan to cross-validate the method by training on one hydrodynamical simulation and testing on another hydrodynamical simulation with a different physics implementation.
Beyond halo mass, we also plan to train new neural networks to estimate additional parameters such as the halo's spin axis, concentration, and assembly history. This would provide important context to our understanding of our own halo, including orbit modeling for satellites, as present halo models tend to assume a static mass and concentration history for the Milky Way.
## Acknowledgments
EH and PB were funded through a Fellowship from the Packard Foundation, Grant #2019-69646. EP acknowledges financial support provided by a grant for _HST_ archival program AR-16628 through the Space Telescope Science Institute (STScI). EP also acknowledges financial support provided by NASA through the Hubble Fellowship grant # HST-HF2-51540.001-A awarded by STScI. STScI is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
This research is based upon High Performance Computing (HPC) resources supported by the University of Arizona TRIF, UITS, and Research, Innovation, and Impact (RII) and maintained by the UArizona Research Technologies department. The University of Arizona sits on the original homelands of Indigenous Peoples (including the Tohono O'odham and the Pascua Yaqui) who have stewarded the Land since time immemorial.
EH thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work.
The VSMDPL simulation was performed by Gustavo Yepes on the SuperMUC supercomputer at LRZ (Leibniz-Rechenzentrum) using time granted by PRACE, project number 012060963 (PI Stefan Gottloeber).
|
2305.20002 | Representer Point Selection for Explaining Regularized High-dimensional
Models | We introduce a novel class of sample-based explanations we term
high-dimensional representers, that can be used to explain the predictions of a
regularized high-dimensional model in terms of importance weights for each of
the training samples. Our workhorse is a novel representer theorem for general
regularized high-dimensional models, which decomposes the model prediction in
terms of contributions from each of the training samples: with positive
(negative) values corresponding to positive (negative) impact training samples
to the model's prediction. We derive consequences for the canonical instances
of $\ell_1$ regularized sparse models, and nuclear norm regularized low-rank
models. As a case study, we further investigate the application of low-rank
models in the context of collaborative filtering, where we instantiate
high-dimensional representers for specific popular classes of models. Finally,
we study the empirical performance of our proposed methods on three real-world
binary classification datasets and two recommender system datasets. We also
showcase the utility of high-dimensional representers in explaining model
recommendations. | Che-Ping Tsai, Jiong Zhang, Eli Chien, Hsiang-Fu Yu, Cho-Jui Hsieh, Pradeep Ravikumar | 2023-05-31T16:23:58Z | http://arxiv.org/abs/2305.20002v2 | # Representer Point Selection for Explaining Regularized High-dimensional Models
###### Abstract
We introduce a novel class of sample-based explanations we term _high-dimensional representers_, that can be used to explain the predictions of a regularized high-dimensional model in terms of importance weights for each of the training samples. Our workhorse is a novel representer theorem for general regularized high-dimensional models, which decomposes the model prediction in terms of contributions from each of the training samples: with positive (negative) values corresponding to positive (negative) impact training samples to the model's prediction. We derive consequences for the canonical instances of \(\ell_{1}\) regularized sparse models, and nuclear norm regularized low-rank models. As a case study, we further investigate the application of low-rank models in the context of collaborative filtering, where we instantiate high-dimensional representers for specific popular classes of models. Finally, we study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets. We also showcase the utility of high-dimensional representers in explaining model recommendations.
## 1 Introduction
Sample-based explanations aim to explain a machine learning model's prediction by identifying the most influential training samples that led to the prediction. This is usually done by measuring the influence of each training sample on the model's prediction scores. The explanations not only assist users in understanding the rationale behind the prediction, but also allow model designers to debug or de-bias the training data [37, 52].
To measure the impact of each training sample on the prediction score, a classical technique is to compute the derivative of the prediction score with respect to each training instance using implicit function theory, an approach also known as influence functions [10, 36]. However, computing the influence function requires the inversion of the Hessian matrix, causing significant scalability issues when handling large models. To compute sample-based explanations in an efficient manner, another method called **Representer Point Selection** has been developed [61]. This method is based on the classical representer theorem [50], which states that a regularized empirical risk minimizer over a reproducing kernel Hilbert space (RKHS) can be decomposed into a linear combination of kernel functions evaluated on each training sample. While functions parameterized with neural networks do not necessarily lie in a pre-specified RKHS, Yeh et al. [61] propose to treat the last layer of a neural network as a linear machine and the remaining part as a fixed feature encoder. Upon fine-tuning the last layer with \(\ell_{2}\) regularization, the representer theorem can then be applied, allowing us to obtain importance scores of the training data. In this development, the \(\ell_{2}\) regularization served as an RKHS norm with respect to linear kernels, which was key to invoking the representer theorem.
However, \(\ell_{2}\) regularizers are not always suitable for high-dimensional models where the number of parameters might even be larger than the number of samples, and where the model parameters might lie in a lower dimensional sub-space. In such settings, in order for the resulting estimators to have strong statistical guarantees, it is often critical to employ
high-dimensional regularizations that encourage the model parameter to lie in such lower-dimensional structured subspaces [44]. Two canonical instances of such high-dimensional regularizers include the \(\ell_{1}\) norm regularization that encourages parameter vectors to have sparse structure, and the nuclear norm regularization imposes low-rank structure on parameter matrices. The caveat however is that these regularizations cannot typically be cast as RKHS norms, and thus the classical representer theorem does not apply. Therefore, it remains unclear how to select representer points for high-dimensional models, despite the widespread use of high-dimensional models in practical applications such as compressed sensing [15] and recommender systems [6, 48].
We first present a general theorem that provides a representer theorem for regularized high-dimensional models, where we leverage the rich structure of the regularization sub-differentials, as well as the analytical framework of Negahban et al. [45] that associates the regularization functions with a collection of structured low-dimensional subspaces. We term the resulting sample-based explanations for these high-dimensional models as _high-dimensional representers_. As with the original representer points for \(\ell_{2}\) regularized models, there is a global importance score per training sample, as well as a local importance score that measures the similarity between the test point and the training sample. But unlike the \(\ell_{2}\) regularized case, the representer theorem entails that this local similarity is measured after an appropriate linear projection of the test input and the training sample to the structured model parameter subspace. Thus, even in cases where the model parameters might be quite high-dimensional, the local similarity is quite meaningful, as well as scalable and efficient since it is computed over a much lower dimensional structured subspace.
Given the general theorem, we then derive its consequences for the important settings of sparse vectors with \(\ell_{1}\) regularization, and low-rank matrices with nuclear norm regularization, leading to sample-based explanation methods under those high-dimensional regularizers. Equipped with the results, we explore the use of our technique in the context of collaborative filtering, including various specific model instances such as collaborative matrix factorization models [38]. We also investigate deep neural network variations of these models, the two-tower models [42, 41], by treating the final interaction layer is treated as a bilinear matrix factorization model and the other layers are fixed encoders when applying our method. This cannot be done with the \(\ell_{2}\) representer methods as the final layer is a product of two matrices. Lastly, we evaluate the empirical performance of the high-dimensional representers on three real-world binary classification datasets and two recommender system datasets. We also demonstrate the practical utility of high-dimensional representers in explaining the recommendations generated by our models.
## 2 Related Work
Prominent approaches for estimating training data influence on a test point include influence functions [36], representer point selection [61], and TracIn [47]. Influence functions [59, 35, 2] estimate training sample importance by measuring "how the model's prediction changes if we remove a particular training sample and retrain the model." However, computing influence functions requires calculating the inverse of the Hessian matrix. Exact estimation requires time complexity at least quadratic in the number of parameters and is thus unsuitable for large or high-dimensional models [22, 25, 49].
TracIn quantifies training data importance by measuring similarities between gradients at training and test samples over training trajectories [62, 8]. However, their approach only applies to models trained with stochastic gradient descent, which may not be an efficient way for high-dimensional model training. Also, TracIn requires storing and accessing checkpoints of models during training and is not applicable to off-the-shelf models. The most relevant work to ours is the (\(\ell_{2}\)) representer point selection: Brophy et al. [4] extends it to explain decision trees using supervised tree kernels. Sui et al. [51] improves it with local Jacobian expansion. Another line of sample-based explanations relies on repeated retraining [21, 33, 39, 19], which is more costly than the methods mentioned above since it requires retraining models multiple times.
On the other hand, representer theorems [50] in machine learning have targeted non-parametric regression in RKHS. Bohn et al. [3] connect representer theorems and composition of kernels. Unser [56] derives general representer theorems for deep neural networks and makes a connection with deep spline estimation. Unser et al. [57] also propose representer theorems for \(\ell_{1}\) regularization, but their theorems have a different formulation for a different purpose: they attribute model parameters to basis elements on the nonzero coordinates to show that the minimizer is sparse. In our work, we consider a simpler task of explaining regularized high-dimensional models and develop novel representer theorems for this purpose.
## 3 Preliminary
Before providing our general framework for high-dimensional representers, it is instructive to recall classical machinery in high-dimensional estimation. As Negahban et al. [45] show, we can think of structure in high-dimensional models as being specified by collections of lower-dimensional subspaces.
**Example: Sparse Vectors:** Consider the set of \(s\)-sparse vectors in \(p\) dimensions. For any particular subset \(S\subseteq\{1,\ldots,p\}\), with cardinality \(s\), define the subspace: \(A(S)=\{\theta\in\mathbb{R}^{p}\,:\,\theta_{j}=0,\quad\forall j\not\in S\}\). It can then be seen an \(s\)-sparse vector lies in one of the collection of low-dimensional subspaces \(\{A(S)\}_{S\subseteq[p]}\).
**Example: Low-Rank Matrices:** For any matrix \(\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\), let \(\text{col}(\Theta)\subseteq\mathbb{R}^{d_{1}}\) denote its column space, and \(\text{row}(\Theta)\subseteq\mathbb{R}^{d_{2}}\) denote its row space. For a given pair \((U,V)\) of \(k\)-dimensional subspaces \(U\subset\mathbb{R}^{d_{1}}\) and \(V\subseteq\mathbb{R}^{d_{2}}\), we can define the subspaces: \(A(U,V)=\{\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\,:\,\text{col}(\Theta)\subseteq U, \,\text{row}(\Theta)\subseteq V\}\). It can then be seen that any low-rank matrix \(\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\) of rank \(k\leq\min(d_{1},d_{2})\) lies in one of the collection of low-dimensional subspaces above.
A critical question in such high-dimensional settings is how to automatically extract and leverage such low-dimensional subspace structure. Negahban et al. [45] showed that so long as regularization functions \(r(\cdot)\) satisfy a property known as decomposability with respect to one of the collections of subspaces, regularized empirical loss minimizers yield solutions that lie in a low-dimensional subspace within that collection. Towards defining this, they require another ingredient: a collection of subspaces of parameters that are orthogonal to the model subspaces while retaining the same structure. For sparse vectors, the orthogonal subspace is \(B(S)=A(S)^{\perp}\). For low-rank matrices, the orthogonal subspace is \(B(U,V)=\{\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\,:\,\text{col}(\Theta)\subseteq U^{\perp},\,\text{row}(\Theta)\subseteq V^{\perp}\}\). It can be seen that in this case, we have that \(B(U,V)\subseteq A^{\perp}(U,V)\), since we do not simply want all parameters orthogonal to the structured subspace, but want orthogonal parameters which are also structured with respect to the collection of subspaces. A regularization \(r(\cdot)\) is said to be decomposable with respect to a collection of subspaces if for any such structured subspace pair \((A,B)\), we have that \(r(u+v)=r(u)+r(v)\) for all \(u\in A,v\in B\). For the case of sparse vector subspaces, the \(\ell_{1}\) norm \(r(\theta)=\|\theta\|_{1}\), and for the case of low-rank matrices, the nuclear norm \(r(\Theta)=\|\Theta\|_{*}\), can be shown to be decomposable [45].
The sub-differential of the regularization function can be written as: \(\partial r(\theta)=\{u\,|\,r(\theta^{\prime})-r(\theta)\geq\langle u,\theta^ {\prime}-\theta\rangle,\forall\theta^{\prime}\in\Theta\}\). In the case of structured parameters above, the sub-differential in turn has additional structure. Suppose \((A,B)\) is the subspace pair corresponding to the structured parameter \(\theta\). Then, for any \(g\in\partial r(\theta)\), we have that \(g=u_{\theta}+v\), where \(u_{\theta}\in A\) has a unique representation that depends on \(\theta\), and \(v\in B\). Moreover, there exists a (non-unique) inverse transform \((\partial_{\theta}r)^{+}\) of the partial differential, so that \((\partial_{\theta}r)^{+}(g)=\theta\), for all \(g\in\partial r(\theta)\), with the property that \((\partial_{\theta}r)^{+}\) is a positive-definite linear operator, with range within the structured subspace \(A\).
## 4 Representer Theorem for High-Dimensional Models
We are interested in regularized empirical risk minimizers. Given \(n\) training samples \((\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\cdots,(\mathbf{x}_{n},y_{n})\)\(\in\mathcal{X}\times\mathbb{R}\), a loss function \(\ell(\cdot,\cdot):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\), and parameters of a linear model \(\theta\in\mathbf{\Theta}\), where \(\mathbf{\Theta}\subseteq\mathcal{X}\) we consider the following optimization problem:
\[\hat{\theta}=\operatorname*{argmin}_{\theta\in\mathbf{\Theta}}\frac{1}{n}\sum_{i= 1}^{n}\ell(y_{i},\langle x_{i},\theta\rangle)+\lambda r(\theta). \tag{1}\]
In the sequel, we assume that the regularization function \(r(\cdot)\) is decomposable with respect to some collection of low-dimensional structured subspace, as briefly reviewed in Section 3. Its role is to encourage the model parameter \(\theta\) to have the appropriate low-dimensional structure, while the hyper-parameter \(\lambda\) balances loss and regularization.
**Theorem 1**.: _(high-dim representer theorem) The minimizer \(\hat{\theta}\) of Eqn.(1) can be written as_
\[\hat{\theta}=\sum_{i=1}^{n}\left(-\frac{1}{n\lambda}\ell^{\prime}(y_{i}, \langle x_{i},\hat{\theta}\rangle)\right)\left((\partial_{\theta}r)^{+}x_{i} \right), \tag{2}\]
_where \(\ell^{\prime}=\partial\ell/\partial(\langle x_{i},\hat{\theta}\rangle)\) denotes the partial derivative of \(\ell\) with respect to its second input variable, and \((\partial_{\hat{\theta}}r)^{+}\) is the (non-unique) inverse transform of the regularization sub-differential. For any given test sample \(x^{\prime}\in\mathcal{X}\), its prediction can be decomposed according to training samples:_
\[\langle x^{\prime},\hat{\theta}\rangle=\sum_{i=1}^{n}\underbrace{-\frac{1}{n \lambda}\ell^{\prime}(y_{i},\langle x_{i},\hat{\theta}\rangle)}_{\text{global importance}}\underbrace{\langle(\partial_{\theta}r)^{\frac{1}{2}}x_{i},( \partial_{\theta}r)^{\frac{1}{2}}x^{\prime}\rangle}_{\text{local importance}}, \tag{3}\]
_where \((\partial_{\hat{\theta}}r)^{\frac{1}{2}}\) is the square-root of the sub-differential inverse transform._
Eqn.(3) provides the attribution of each training sample \(x_{i}\) to a test sample \(x^{\prime}\), which can be decomposed into the global importance and local importance. The global importance is a measure of how sensitive the training sample \(x_{i}\) is to the objective and depends on the derivative of the loss function. The local importance measures the similarity between the training sample \(x_{i}\) and the test sample \(x^{\prime}\).
The local importance similarity focuses on the projection of the data points onto a structured low-dimensional subspace \(A\), since the range of the sub-differential inverse transform \((\partial_{\hat{\theta}}r)^{\frac{1}{2}}\) is the structured subspace within which the parameter lies. We can thus think of such high-dimensional model estimation as specifying the local kernel \(k(x,x^{\prime})=\langle(\partial_{\hat{\theta}}r)^{\frac{1}{2}}x,(\partial_{\hat{\theta}}r)^{\frac{1}{2}}x^{\prime}\rangle\). A crucial difference with \(\ell_{2}\) regularized models [61], where the local importance is simply an inner product between \(x_{i}\) and \(x^{\prime}\), is that high-dimensional representers ignore the features in the orthogonal space \(B\), since they have no impact on test predictions.
The theorem is derived from solving the first-order optimality condition on the low-dimensional subspace \(A\), i.e., one subgradient of the objective at the minimizer equals zero. Next, we utilize the fact that the sub-differential \(\partial r(\hat{\theta})\) has a unique representation in the model subspace \(A\). This allows us to develop the inverse transform operator \((\partial_{\hat{\theta}}r)^{+}\) and use it to recover the model parameter.
In cases where the inverse transform is non-unique, we would obtain multiple local importance, one for each inverse transform, and we can then take an average of these when computing the local importance.
While the above development was quite abstract, in the following sections, we derive its consequences for the important settings of sparse vectors with \(\ell_{1}\) regularization, and low-rank matrices with nuclear norm regularization.
### \(\ell_{1}\)-regularized Linear Optimization
Based on the general theorem, we derive the representer point selection method for \(\ell_{1}\) regularization. We consider the following special case of Eqn.(1):
\[\hat{\theta}=\operatorname*{argmin}_{\theta\in\mathbb{R}^{p}}\frac{1}{n}\sum_ {i=1}^{n}\ell(y_{i},\langle x_{i},\theta\rangle)+\lambda\|\theta\|_{1}, \tag{4}\]
where the \(\ell_{1}\) regularization encourages the model to be sparse. Some examples of Eqn.(4) include \(\ell_{1}\)-regularized generalized linear models [53], compressed sensing [15], and sparse estimation of Gaussian graphical models [63, 20].
We develop the representer theorem for \(\ell_{1}\) regularized problems using Theorem 1. In this case, the structural model subspace is specified by the sparse model parameter \(\hat{\theta}\), \(A(S(\hat{\theta}))=\{\theta\in\mathbb{R}^{p}:\theta_{j}=0,\forall j\notin S\}\), where \(S(\hat{\theta})\) denotes a set of coordinates that \(\hat{\theta}\) has non-zero values. The orthogonal subspace \(B(S(\hat{\theta}))\) is in turn a set of vectors in \(\mathbb{R}^{p}\) whose coordinates on \(S(\hat{\theta})\) are zero.
Next, the sub-differential of the \(\ell_{1}\) norm is \(\partial\|\hat{\theta}\|_{1}=\{g\in\mathbb{R}^{p}|g_{i}=\text{sign}(\hat{ \theta})\text{ if }\hat{\theta}_{i}\neq 0,\text{ and }|g_{i}|\leq 1\text{ if }\hat{\theta}_{i}=0\}\), which has a unique representation, \(\text{sign}(\hat{\theta})\), in \(A(S(\hat{\theta}))\). Next, the inverse transform can be developed by reconstructing \(\hat{\theta}\) in the model subspace and zeroing out the sub-differential in the orthogonal space, where the model parameters are zero. Specifically, we use \((\partial_{\theta}r)^{+}(x)=|\hat{\theta}|\odot x,\forall x\in\mathbb{R}^{p}\), where \(|\cdot|\) denotes a coordinate-wise absolute value operator, and \(\odot\) denotes element-wise multiplication. Clearly, we have \((\partial_{\theta}r)^{+}(g)=\hat{\theta}\) for all \(g\in\partial\|\hat{\theta}\|_{1}\) since \(\hat{\theta}_{j}g_{j}=\hat{\theta}_{j}\) if \(j\in S(\hat{\theta})\) and \(\hat{\theta}_{j}g_{j}=0\) if \(j\notin S(\hat{\theta})\). By plugging these notations to Theorem 1, we obtain the following representer theorem for \(\ell_{1}\) regularized linear optimization problems.
**Corollary 2**.: _(high-dim representer theorem for \(\ell_{1}\)-regularization) The minimizer \(\hat{\theta}\) of Eqn.(4) can be written as_
\[\hat{\theta}=\sum_{i=1}^{n}\left(-\frac{1}{n\lambda}\ell^{\prime}(y_{i}, \langle x_{i},\hat{\theta}\rangle)\right)\left(|\hat{\theta}|\odot x_{i}\right), \tag{5}\]
_For any given test sample \(x^{\prime}\in\mathbb{R}^{p}\), its prediction can be decomposed according to training samples:_
\[\langle x^{\prime},\hat{\theta}\rangle=\sum_{i=1}^{n}\underbrace{-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle x_{i},\hat{\theta}\rangle)}_{\text{global importance}}\underbrace{\langle\sqrt{|\hat{\theta}|}\odot x_{i},\sqrt{|\hat{\theta}|}\odot x^{\prime}\rangle}_{\text{local importance}}, \tag{6}\]
_where \(\sqrt{\cdot}\) is a coordinate-wise square root operation._
With Corollary 2, we can quantify training data influence on a specific test sample \((x^{\prime},y^{\prime})\). The sign of \(\alpha_{i}\langle\sqrt{|\hat{\theta}|}\odot x_{i},\sqrt{|\hat{\theta}|}\odot x^{\prime}\rangle\), where \(\alpha_{i}=-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle x_{i},\hat{\theta}\rangle)\) is the global importance, indicates whether a training sample \((x_{i},y_{i})\) has positive or negative influence on the test sample. Also, if a training sample \((x_{i},y_{i})\) has a large importance value for a test sample \(x^{\prime}\), two conditions must be satisfied: (1) the global importance \(\alpha_{i}\) is large in magnitude, and (2) \(\sqrt{|\hat{\theta}|}\odot x_{i}\) is close to \(\sqrt{|\hat{\theta}|}\odot x^{\prime}\). That is, \(x_{i}\) and \(x^{\prime}\) are close on the coordinates where the model parameters \(\hat{\theta}\) have non-zero values.
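As a concrete illustration, the decomposition of Corollary 2 can be evaluated in a few lines of NumPy. This is a minimal sketch rather than the paper's implementation; the `loss_grad` callback (the derivative of the loss with respect to the prediction) and all variable names are assumptions.

```python
import numpy as np

def l1_representer_scores(X_train, y_train, theta_hat, x_test, loss_grad, lam):
    """Importance of every training sample for the prediction at x_test (Corollary 2).
    loss_grad(y, yhat) must return the derivative of the loss w.r.t. its second argument."""
    n = X_train.shape[0]
    margins = X_train @ theta_hat                       # <x_i, theta_hat>
    alpha = -loss_grad(y_train, margins) / (n * lam)    # global importances
    w = np.sqrt(np.abs(theta_hat))                      # sqrt(|theta_hat|), zero off-support
    local = (X_train * w) @ (w * x_test)                # <sqrt|θ|⊙x_i, sqrt|θ|⊙x'>
    return alpha * local                                # summing these recovers <x', theta_hat>

# Example with squared loss l(y, t) = (y - t)^2 / 2, so dl/dt = t - y:
# scores = l1_representer_scores(X, y, theta_hat, x_new, lambda y, t: t - y, lam=0.1)
```

Only coordinates in the support of \(\hat{\theta}\) contribute to the local term, matching condition (2) above.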
### Nuclear-norm Regularized Linear Optimization
We consider the following canonical nuclear norm regularized linear optimization problem with inputs and model parameters being matrices. Given \(n\) training samples \((X_{1},y_{1}),\cdots,(X_{n},y_{n})\in\mathbb{R}^{d_{1}\times d_{2}}\times\mathbb{R}\), a loss function \(\ell(\cdot,\cdot):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\), and parameters of a linear model \(\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\), we consider the following problem:
\[\hat{\Theta}=\operatorname*{argmin}_{\Theta\in\mathbb{R}^{d_{1}\times d_{2}}} \frac{1}{n}\sum_{i=1}^{n}\ell(y_{i},\langle X_{i},\Theta\rangle_{F})+\lambda \|\Theta\|_{*}, \tag{7}\]
where \(\langle\cdot,\cdot\rangle_{F}\) is the Frobenius inner product, and \(\|\cdot\|_{*}\) is the nuclear norm, defined as the sum of the singular values (i.e., the \(\ell_{1}\) norm of the vector of singular values). This formulation has been applied in matrix completion [7], matrix regression [60], and matrix compressed sensing [16] with low-rank constraints.
As in Negahban et al. [45], the low-rank model subspace \(A(U,V)\) is specified by a full singular value decomposition (SVD) of the model parameter \(\hat{\Theta}=U\Sigma V^{\top}\), where the columns of \(U\in\mathbb{R}^{d_{1}\times k}\) and \(V\in\mathbb{R}^{d_{2}\times k}\) are orthonormal, \(\Sigma\in\mathbb{R}^{k\times k}\) is a diagonal matrix, and \(k=\text{rank}(\hat{\Theta})\). The orthogonal subspace is \(B(U,V)=\{\Theta\in\mathbb{R}^{d_{1}\times d_{2}}\,:\,\text{col}(\Theta)\subseteq U^{\perp},\,\text{row}(\Theta)\subseteq V^{\perp}\}\).
The sub-differential of the nuclear norm [58] is \(\partial\|\hat{\Theta}\|_{*}=\{UV^{\top}+W:W\in\mathbb{R}^{d_{1}\times d_{2}},\|W\|_{2}\leq 1,WV=\mathbf{0},U^{\top}W=\mathbf{0}\}\), which can be decomposed as a unique representation in the model subspace (\(UV^{\top}\in A(U,V)\)) and \(W\in B(U,V)\) in the orthogonal space. In this case, the inverse transform of the sub-differential is not unique: it can be either \((\partial_{\hat{\Theta}}r)^{+}(X)=U\Sigma U^{\top}X\) or \((\partial_{\hat{\Theta}}r)^{+}(X)=XV\Sigma V^{\top}\) for any \(X\in\mathbb{R}^{d_{1}\times d_{2}}\). One can easily verify that the inverse transform recovers \(\hat{\Theta}\), i.e., \((\partial_{\hat{\Theta}}r)^{+}(\partial\|\hat{\Theta}\|_{*})=\hat{\Theta}\), using the fact that \(U^{\top}U=V^{\top}V=I_{k}\). By instantiating the inverse transform in Theorem 1, we obtain the following corollary.
**Corollary 3**.: _(high-dim representer theorem for nuclear-norm regularization) Let \(U\Sigma V^{\top}=\hat{\Theta}\) be a full SVD of the minimizer \(\hat{\Theta}\) of Eqn.(7). The minimizer of Eqn.(7) can be written as_
\[\hat{\Theta}=\sum_{i=1}^{n}-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle X_{i},\hat{\Theta}\rangle_{F})\left(U\Sigma U^{\top}X_{i}\right)=\sum_{i=1}^{n}-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle X_{i},\hat{\Theta}\rangle_{F})\left(X_{i}V\Sigma V^{\top}\right). \tag{8}\]
_For any given test sample \(X^{\prime}\in\mathbb{R}^{d_{1}\times d_{2}}\), its prediction can be decomposed according to training samples:_
\[\langle X^{\prime},\hat{\Theta}\rangle_{F}= \sum_{i=1}^{n}-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle X_{i},\hat{\Theta}\rangle_{F})\langle\sqrt{\Sigma}U^{\top}X_{i},\sqrt{\Sigma}U^{\top}X^{\prime}\rangle_{F} \tag{9}\] \[= \sum_{i=1}^{n}-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle X_{i},\hat{\Theta}\rangle_{F})\langle X_{i}V\sqrt{\Sigma},X^{\prime}V\sqrt{\Sigma}\rangle_{F}, \tag{10}\]
_where \(\sqrt{\Sigma}=\text{diag}[\sqrt{\Sigma_{11}},\cdots,\sqrt{\Sigma_{kk}}]\)._
Again, the first term in Eqn.(9) and Eqn.(10), \(-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle X_{i},\hat{\Theta}\rangle_{F})\), is the _global importance_ and the second inner product terms are the _local importance_. We first project input matrices \(X_{i}\) and \(X^{\prime}\) onto the column or row spaces by multiplying with \(\sqrt{\Sigma}U^{\top}\) or \(V\sqrt{\Sigma}\), respectively, and then computing the Frobenius inner product. This term measures local similarities between a test sample and training samples in the column or row spaces of the minimizer \(\hat{\Theta}\).
Unlike Corollary 2, Eqn.(9) and Eqn.(10) provide two distinct ways to decompose the learned model, leading to two different ways for data attribution. We refer to Eqn.(9) as _column-based attribution_ and Eqn.(10) as _row-based attribution_, since they compute local importance on the column/row spaces of \(\hat{\Theta}\), respectively. The interpretation of these two attributions may depend on applications. For example, as we will show in Corollary 4, the two attributions correspond to user-based attribution and item-based attribution when \(U\) and \(V\) are user and item embeddings in recommender systems. In other cases, we may take the average of the two local importances.
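A minimal NumPy sketch of the column-based attribution of Eqn.(9) is given below; the row-based attribution of Eqn.(10) is analogous with projections \(X_{i}V\sqrt{\Sigma}\). The stacking of training matrices into a three-dimensional array, the numerical rank cutoff, and the `loss_grad` callback are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def nuclear_representer_scores(X_train, y_train, Theta_hat, X_test, loss_grad, lam):
    """Column-based attribution (Eqn.(9)) of each training matrix X_i to the
    prediction at X_test.  X_train has shape (n, d1, d2)."""
    n = X_train.shape[0]
    U, s, Vt = np.linalg.svd(Theta_hat, full_matrices=False)
    k = int(np.sum(s > 1e-10))                           # numerical rank of Theta_hat
    U, s = U[:, :k], s[:k]
    preds = np.einsum('nij,ij->n', X_train, Theta_hat)   # <X_i, Theta_hat>_F
    alpha = -loss_grad(y_train, preds) / (n * lam)       # global importances
    proj = np.sqrt(s)[:, None] * U.T                     # sqrt(Sigma) U^T, shape (k, d1)
    P_train = np.einsum('kd,ndm->nkm', proj, X_train)    # sqrt(Sigma) U^T X_i
    P_test = proj @ X_test                               # sqrt(Sigma) U^T X'
    local = np.einsum('nkm,km->n', P_train, P_test)      # Frobenius inner products
    return alpha * local
```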
### Computation of High-dimensional Representers
In this section, we introduce the computation of high-dimensional representers. To explain a model's prediction on \(x^{\prime}\), one needs to compute the high-dimensional representers for the test sample \(x^{\prime}\) with respect to all training samples \(\{(x_{i},y_{i})\}_{i=1}^{n}\). In practice, we could pre-process the training data to accelerate the computation. Recall that high-dimensional representers in Eqn.(3) consist of two components: a global importance \(\alpha_{i}=-\frac{1}{n\lambda}\ell^{\prime}(y_{i},\langle x_{i},\hat{\theta}\rangle)\), and a local importance \(\langle(\partial_{\hat{\theta}}r)^{\frac{1}{2}}x_{i},(\partial_{\hat{\theta}}r)^{\frac{1}{2}}x^{\prime}\rangle\). At the pre-processing step, we compute the global importances \(\alpha\) for all training samples and their projections onto the low-dimensional model space, i.e. \((\partial_{\hat{\theta}}r)^{\frac{1}{2}}x_{i}\) for all \(i\in[n]\).
Note that global importances can be obtained by inferring all training data and calculating their derivatives. The projection operator can usually be obtained from the training stage since the model parameter \(\hat{\theta}\) is available in the \(\ell_{1}\) case, and the full SVD can usually be obtained from the training stage [43] in the nuclear norm case. The pre-processing step requires \(O(np)\) and \(O(nkd_{1}d_{2})\) time for the \(\ell_{1}\)-norm and nuclear norm cases respectively. We note that in the nuclear-norm case, the pre-processing step typically takes no longer than training the regularized models with a single epoch. This is because the training samples typically need to be projected to the low-dimensional space to calculate the update formula [31].
Next, to explain a test prediction, we need to (1) project the test sample to the model subspace and (2) compute the inner product between the test and training samples in the model subspace. While step (1) only needs to tackle one sample, step (2) takes \(O(np)\) and \(O(n\max(d_{1},d_{2})k)\) time for the \(\ell_{1}\)-norm and nuclear norm cases respectively.
In many applications of sample-based explanations, such as generating human-understandable explanations, we only care about the top influential samples for a test prediction. This can be significantly sped up by approximate nearest neighbor search algorithms which can be run in sublinear time since we only need to find training samples with the highest inner product values.
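The pre-processing/query split described above can be summarized in a short sketch: assuming the global importances and projected training features have already been cached at pre-processing time, a single test prediction is explained by one matrix-vector product followed by a ranking step. The helper name and argument layout are assumptions for illustration.

```python
import numpy as np

def top_influential(alpha, P_train, p_test, k=10):
    """Given cached global importances alpha (n,) and projected training features
    P_train (n, q), return the k most positive and k most negative influences
    on the test point with projected features p_test (q,)."""
    scores = alpha * (P_train @ p_test)      # Eqn.(3) term for every training sample
    order = np.argsort(scores)
    return order[::-1][:k], order[:k]        # indices: top positive, top negative
```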
## 5 Applications to Collaborative Filtering (CF)
With the widespread deployment of recommender systems across various online platforms, the significance of explainable recommender systems has grown substantially [64]. Studies have indicated that users prefer recommendations that are explainable, and explanation tools are vital for debugging recommendation models [54]. In this section, we showcase how high-dimensional representers can effectively explain collaborative filtering models and (deep) recommender systems.
Notations: Given a set of users \(\mathcal{U}\), a set of items \(\mathcal{I}\), and a set of user-item interactions \(\mathcal{D}=\{(i,j)\,:\,i\in\mathcal{U},\,j\in\mathcal{I},\,y_{ij}\text{ is observed}\}\), CF aims to learn a \(k\)-dimensional embedding for each user and item, and utilizes inner products between user and item embeddings to predict unknown entries of the rating matrix.
### Matrix Factorization (MF) with Nuclear Norm Regularization
Matrix factorization with nuclear norm regularizations [7, 5] is a successful model in CF. Given an incomplete rating matrix \(Y\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\) with each entry \(Y_{ij}=y_{ij},\ \forall\ (i,j)\in\mathcal{D}\), the model assumes that the rating matrix \(Y\) is low-rank and solves the following optimization problem:
\[\hat{\Theta}=\operatorname*{argmin}_{\Theta\in\mathbb{R}^{|\mathcal{U}|\times |\mathcal{I}|}}\frac{1}{|\mathcal{D}|}\sum_{(i,j)\in\mathcal{D}}\ell(y_{ij}, \Theta_{ij})+\lambda\|\Theta\|_{*}, \tag{11}\]
where \(\hat{\Theta}\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\) is a predicted low-rank rating matrix, \(\ell(\cdot,\cdot)\) is a loss function such as square loss, and \(\lambda\) is the regularization parameter.
We apply Corollary 3 to Eq.(11) to obtain sample-based explanations. We represent each training pair \((i,j)\) by a matrix \(X\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\) which contains only one nonzero entry \(X_{ij}=1\), so that \(\langle X,\Theta\rangle_{F}=\Theta_{ij}\). The resulting corollary is given below:
**Corollary 4**.: _(high-dim representers for matrix factorization) Let \(\hat{\Theta}\) be the minimizer of Eqn.(11) with \(\text{rank}(\hat{\Theta})=k\). Let \(U\Sigma V^{\top}=\hat{\Theta}\) be its full SVD decomposition. For any test sample \((i^{\prime},j^{\prime})\) with \(1\leq i^{\prime}\leq|\mathcal{U}|\) and \(1\leq j^{\prime}\leq|\mathcal{I}|\), its prediction can be decomposed according to training samples:_
\[\hat{\Theta}_{i^{\prime}j^{\prime}} =\sum_{i:(i,j^{\prime})\in\mathcal{D}}-\frac{1}{\lambda|\mathcal{D}|}\ell^{\prime}(y_{ij^{\prime}},\hat{\Theta}_{ij^{\prime}})\langle\sqrt{\Sigma}U_{i},\sqrt{\Sigma}U_{i^{\prime}}\rangle \tag{12}\] \[=\sum_{j:(i^{\prime},j)\in\mathcal{D}}-\frac{1}{\lambda|\mathcal{D}|}\ell^{\prime}(y_{i^{\prime}j},\hat{\Theta}_{i^{\prime}j})\langle\sqrt{\Sigma}V_{j},\sqrt{\Sigma}V_{j^{\prime}}\rangle, \tag{13}\]
_where \(\sqrt{\Sigma}=\text{diag}[\sqrt{\Sigma_{11}},\cdots,\sqrt{\Sigma_{kk}}]\), \(U_{i}\in\mathbb{R}^{k\times 1}\) and \(V_{j}\in\mathbb{R}^{k\times 1}\) denote \(i^{th}\) and \(j^{th}\) row of \(U\) and \(V\) respectively._
Corollary 4 shows that the predicted score between user \(i^{\prime}\) and item \(j^{\prime}\), \(\hat{\Theta}_{i^{\prime}j^{\prime}}\), can be represented as the sum of attributions to each observed interaction \((i,j)\in\mathcal{D}\). Specifically, Eqn.(12) decomposes predictions according to other users who interacted with the same item \(j^{\prime}\), while Eqn.(13) decomposes predictions according to other items that the same user \(i^{\prime}\) interacted with. They are referred to as _user-based attributions_ and _item-based attributions_, respectively. Also, we can observe that a test sample \((i^{\prime},j^{\prime})\) is only relevant to training samples with the same user \(i^{\prime}\) or the same item \(j^{\prime}\). Combining the two attributions, we define the importance score of each training sample to a test sample \((i^{\prime},j^{\prime})\) as follows:
**Definition 1**.: _(high-dim representers for CF) The importance of a training point \((i,j)\in\mathcal{D}\) to a test sample \((i^{\prime},j^{\prime})\), \(\mathbf{I}((i,j),(i^{\prime},j^{\prime}))\), is given by_
\[\begin{cases}-\frac{1}{\lambda|\mathcal{D}|}\ell^{\prime}(y_{ij},\langle\tilde{U }_{i},\tilde{V}_{j}\rangle)\ \langle\tilde{U}_{i},\tilde{U}_{i^{\prime}}\rangle&\text{if $j=j^{ \prime}$.}\\ -\frac{1}{\lambda|\mathcal{D}|}\ell^{\prime}(y_{ij},\langle\tilde{U}_{i}, \tilde{V}_{j}\rangle)\ \langle\tilde{V}_{j},\tilde{V}_{j^{\prime}}\rangle&\text{if $i=i^{ \prime}$.}\\ 0&\text{otherwise.}\end{cases} \tag{14}\]
_where \(\tilde{U}=U\sqrt{\Sigma}\) and \(\tilde{V}=V\sqrt{\Sigma}\) are normalized embedding matrices for user and item respectively._
Note that we replace \(\hat{\Theta}_{ij}\) with \(\langle\tilde{U}_{i},\tilde{V}_{j}\rangle\) as they are equivalent. If a training sample \((i,j)\) has a large importance score, three conditions must be satisfied: (1) It has the same user or item as the test sample. (2) \(|\ell^{\prime}(y_{ij},\langle\tilde{U}_{i},\tilde{V}_{j}\rangle)|\) must be large. When the loss function \(\ell(\cdot,\cdot)\) is strongly convex, it implies that the training sample incurs a large loss. (3) Their normalized user (or item) embeddings are close.
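Definition 1 translates directly into a short scoring routine. The sketch below is illustrative, with assumed variable names; `loss_grad` is the derivative of the point-wise loss, and the constant \(1/(\lambda|\mathcal{D}|)\) is kept here although it can be dropped when only relative importances matter, as is done later for general MF models.

```python
import numpy as np

def cf_representer_scores(D, y, U_tilde, V_tilde, i_test, j_test, loss_grad, lam=1.0):
    """Importance of every observed interaction (i, j) in D for the prediction
    at (i_test, j_test), following Definition 1."""
    c = -1.0 / (lam * len(D))
    scores = {}
    for (i, j), y_ij in zip(D, y):
        pred = U_tilde[i] @ V_tilde[j]                    # <U~_i, V~_j>
        if j == j_test:                                   # user-based attribution
            scores[(i, j)] = c * loss_grad(y_ij, pred) * (U_tilde[i] @ U_tilde[i_test])
        elif i == i_test:                                 # item-based attribution
            scores[(i, j)] = c * loss_grad(y_ij, pred) * (V_tilde[j] @ V_tilde[j_test])
        # all other interactions have zero importance
    return scores
```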
### General Matrix-factorization-based Models
Instead of using the nuclear norm, many matrix factorization methods directly reparameterize the rating matrix \(\Theta\) as the product of two low-rank matrices \(U\) and \(V\) [38, 42], corresponding to user and item embeddings. They then directly solve the following optimization problem:
\[\hat{U},\hat{V}=\operatorname*{argmin}_{U\in\mathbb{R}^{|\mathcal{U}|\times k },V\in\mathbb{R}^{|\mathcal{I}|\times k}}\sum_{(i,j)\in\mathcal{D}}\ell(y_{ ij},\langle U_{i},V_{j}\rangle), \tag{15}\]
where the loss function \(\ell(\cdot,\cdot)\) is point-wise, and training data \(\mathcal{D}\) may include negative samples for implicit CF. Popular choices include binary cross-entropy (BCE) [28], mean square error (MSE) [18], and triplet loss [13].
Corollary 3 does not apply to this formulation since it does not have nuclear norm regularization. While it is possible to replace \(UV^{\top}\) with \(\Theta\) and retrain the model with nuclear norm regularization, the retrained model may behave differently compared to the given model. However, the formulation does enforce hard low-rank constraints on the rating matrix through reparameterization. Therefore, to conduct sample-based attribution, we assume Eqn.(15) is implicitly regularized and use Definition 1 to obtain the high-dimensional representer. For this formulation, we drop the constant term, \(1/\lambda|\mathcal{D}|\), since \(\lambda\) is unavailable and does not affect relative importance among training samples. The process of computing the high-dimensional representer for CF and its time complexity analysis are provided in Section E in the supplementary material.
### Two-tower models
Two-tower networks are widely used in deep recommender systems [30, 12, 42, 41]. They encode user information and item information with two separate neural networks, which are called towers. The user tower maps each user (e.g., user history, features, and id) to a \(k\)-dimensional user embedding, while the item tower maps each item (e.g., product description and id) to the same embedding space. The prediction score is then calculated by the inner product of the user and item embeddings. Formally, let the two separate towers be \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\). The training objective function can be written as:
\[\hat{\theta}_{1},\hat{\theta}_{2}=\operatorname*{argmin}_{\theta_{1},\theta_{ 2}}\sum_{(i,j)\in\mathcal{D}}\ell(y_{ij},\langle f_{\theta_{1}}(u_{i}),g_{ \theta_{2}}(v_{j})\rangle), \tag{16}\]
where \(u_{i}\) and \(v_{j}\) denote features of user \(i\) and item \(j\). Again, we focus on models trained with point-wise loss functions.
To explain two-tower models, we consider the final interaction layers as a bilinear matrix factorization model and the remaining layers as fixed feature encoders. Then we apply the same explanation technique as MF models to explain them. Specifically, we concatenate embeddings of all users and items to form a user matrix and an item matrix, i.e.
\[\hat{U} =[f_{\hat{\theta}_{1}}(u_{1});\cdots;f_{\hat{\theta}_{1}}(u_{| \mathcal{U}|})]\in\mathbb{R}^{|\mathcal{U}|\times k}\] \[\text{and}\ \hat{V} =[g_{\hat{\theta}_{2}}(v_{1});\cdots;g_{\hat{\theta}_{2}}(v_{| \mathcal{I}|})]\in\mathbb{R}^{|\mathcal{I}|\times k}. \tag{17}\]
Then we use Definition 1 to obtain its sample-based explanations.
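A sketch of this reduction is given below, assuming the two towers are trained PyTorch modules and that all user and item features fit in memory; the resulting embedding tables of Eqn.(17) can then be passed to the scoring of Definition 1. Names and batching are illustrative assumptions.

```python
import torch

@torch.no_grad()
def embedding_tables(user_tower, item_tower, user_feats, item_feats):
    """Build the user/item embedding matrices of Eqn.(17) from trained towers,
    so that Definition 1 can be applied to a two-tower model."""
    U = torch.stack([user_tower(u) for u in user_feats])   # |U| x k user embeddings
    V = torch.stack([item_tower(v) for v in item_feats])   # |I| x k item embeddings
    return U, V
```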
## 6 Experimental Results
We perform experiments on multiple datasets to validate that the proposed method is a preferable choice compared with other sample-based explanation methods such as \(\ell_{2}\) representer point selection and influence function, under the high dimensional setting. Moreover, we showcase the utility of the high-dimensional representer in understanding predictions of recommender systems. We also provide another use case for improving negative sampling strategies for collaborative filtering in Appendix 6.5 and additional comparisons with other approaches in Appendix F.
### Evaluation Metrics
For quantitative evaluations, we use _case deletion diagnostics_[62, 26, 11] as our primary evaluation metric. This metric measures the difference in models' prediction score at a particular test sample \(z^{\prime}\) after removing (a group of) influential training samples and retraining whole models. This metric helps validate the efficacy of sample-based explanation methods and provides a quantitative measurement.
We denote two metrics as \(\text{DEL}_{+}(z^{\prime},k,\mathbf{I})\) and \(\text{DEL}_{-}(z^{\prime},k,\mathbf{I})\) separately. These two metrics measure _the difference between models' prediction scores when we remove top-\(k\) positive (negative) impact samples given by method \(\mathbf{I}\) and the prediction scores of the original models._ We expect \(\text{DEL}_{+}\) to be negative and \(\text{DEL}_{-}\) to be positive since models' prediction scores should decrease (increase) when we remove positive (negative) impact samples.
To evaluate deletion metric at different \(k\), we follow Yeh et al. [62] and report area under the curve (AUC):
\[\text{AUC-DEL}_{+}=\sum_{i=1}^{m}\frac{\text{DEL}_{+}(z^{\prime},k_{i}, \mathbf{I})}{m},\text{AUC-DEL}_{-}=\sum_{i=1}^{m}\frac{\text{DEL}_{-}(z^{ \prime},k_{i},\mathbf{I})}{m},\]
where \(k_{1}<k_{2}<\dots<k_{m}\) is a predefined sequence of \(k\).
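The deletion metrics amount to simple bookkeeping around a retraining routine. The sketch below assumes a user-supplied helper `retrain_and_predict(removed_ids, z_test)` that retrains the model without the listed training samples and returns its prediction score at the test point; everything else follows the definitions above.

```python
import numpy as np

def auc_del(retrain_and_predict, z_test, ranked_ids, ks):
    """Average prediction change (AUC-DEL) after removing the top-k ranked training
    samples and retraining, over the grid of k values `ks`."""
    base = retrain_and_predict([], z_test)                 # original model's score
    deltas = [retrain_and_predict(list(ranked_ids[:k]), z_test) - base for k in ks]
    return float(np.mean(deltas))

# AUC-DEL_+ ranks samples by most positive influence (expected to be negative),
# AUC-DEL_- ranks samples by most negative influence (expected to be positive).
```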
### Quantitative Evaluation on \(\ell_{1}\)-regularized Models
In this section, we evaluate the effectiveness of the high-dimensional representer in explaining \(\ell_{1}\)-regularized logistic regression.
#### 6.2.1 Experimental Settings
Datasets and models being explained: We use the following three datasets on binary classification. **(1) 20 newsgroups1**: This dataset contains roughly \(20,000\) newsgroup posts on 20 topics. It contains \(19,996\) samples with \(1,355,191\) features. We randomly hold out \(10\%\) of the data as the test set. **(2) Gisette [23]:** It is a handwritten digit recognition problem, which contains the highly confusable digits '4' and '9'. It contains \(6,000/1,000\) training/testing samples, each with \(5,000\) features. **(3) Rcv1 [40]:** It is a benchmark dataset on text categorization. It has \(20,242/677,399\) samples for training/testing. We use bag-of-words features with dimension \(47,236\). We train logistic regression models with \(\ell_{1}\) regularization using LIBLINEAR [17] on the three datasets. The accuracy of the models on all three datasets is above \(97\%\).
Footnote 1: [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/)
Baselines: We compare the high-dimensional representer with the \(\ell_{2}\) representer, the influence function (IF), and random deletion. Given a test sample \(x^{\prime}\), the \(\ell_{2}\) representer calculates the importance score of a training point \((x_{i},y_{i})\) for the test sample \(x^{\prime}\) with the following formula:
\[\mathbf{I}_{\ell_{2}}((x_{i},y_{i}),x^{\prime})=-\ell^{\prime}(y_{i},\langle x_{i},\hat{\theta}\rangle)\langle x_{i},x^{\prime}\rangle.\]
For the influence function, we adopt the formula in Proposition 5.3 of Avella-Medina [1]. Assume only the first \(q\leq p\) entries of the minimizer \(\hat{\theta}\) are nonzero, the influence function, \(\mathbf{I}_{IF}((x_{i},y_{i}),x^{\prime})\), is given by
\[-(\frac{1}{n}\nabla_{\theta_{1:q}}\ell(y_{i},\langle x_{i},\hat{\theta}\rangle )+\lambda\text{sign}(\hat{\theta})_{1:q})^{\top}H_{\theta_{1:q}}^{-1}x_{1:q}^ {\prime},\]
where \(H_{\hat{\theta}_{1:q}}=\sum_{i=1}^{n}\nabla_{\theta_{1:q}}^{2}\ell(y_{i}, \langle x_{i},\hat{\theta}\rangle)\in\mathbb{R}^{q\times q}\). The calculation of the influence function can be simply viewed as first projecting features \(x\), \(x^{\prime}\), and the model parameter \(\hat{\theta}\) to nonzero entries of \(\hat{\theta}\) and then computing the influence function normally. Notice that the naive implementation takes \(O(nq^{3}+np)\) time complexity to compute inverse hessian matrix, while the high-dim and \(\ell_{2}\) representers only take \(O(np)\) to compute importance scores of all training samples to a test prediction.
To compute AUC-DEL scores, we set \(k_{i}=0.01iN\) for \(1\leq i\leq 5\). We remove \(1\%\) to \(5\%\) of positive (negative) impact training samples and report the averaged prediction difference after removing these samples. Each metric is reported over \(40\) trials with each trial containing \(40\) test samples.
#### 6.2.2 Results
The results of the four methods are presented in Table 1. We also report the averaged runtime of computing the importance of one test prediction to all training data on a single CPU. The results show that the high-dimensional representer outperforms the other three methods and is over 25x faster than the influence function. Also, the \(\ell_{2}\) representer is slightly faster than the high-dimensional representer since inner product is fast when the training data is sparse, and the high-dimensional representer requires one extra step to project vectors to low-dimensional model subspace.
### Quantitative Evaluation on Collaborative Filtering
In this section, we evaluate the effectiveness of the high-dimensional representer on explaining CF models in recommender systems.
#### 6.3.1 Experimental Settings
Datasets: (1) Movielens-1M [27]: It contains about 1M ratings (1-5) from 6,040 users on 3,706 movies. (2) Amazon review (2018) [46]: This dataset contains reviews and ratings (1-5) of products on Amazon. Since the whole dataset is too large, we use data in the video games category, which contains 284,867 ratings from 15,517 users to 37,077 items. We follow the preprocessing procedure in Cheng et al. [9]. We filter out users and items with less than 10 interactions. For every user, we randomly held out two items' ratings to construct the validation and test sets. Also, we normalize all ratings to \([-1,1]\).
Models being explained: We test the high-dimensional representer on three different models: (1) Matrix factorization with nuclear norm regularization (MF w. nuclear norm) as in Eqn.(11). We do not run this model on the Amazon review dataset because the rating matrix is too large. (2) Matrix Factorization (MF) as in Eqn.(15). (3) YoutubeNet [12], which uses a deep neural network to encode user features and is one of the representative deep two-tower models.
| Datasets | 20 newsgroups | Gisette | Rcv1 |
| --- | --- | --- | --- |
| **AUC-DEL\({}_{+}\)** | | | |
| High-dim Rep. | \(\mathbf{-3.733\pm 0.093}\) | \(\mathbf{-1.000\pm 0.081}\) | \(\mathbf{-3.208\pm 0.060}\) |
| \(\ell_{2}\) Rep. | \(-2.472\pm 0.067\) | \(-0.577\pm 0.073\) | \(-2.780\pm 0.057\) |
| IF | \(-2.583\pm 0.043\) | \(-0.531\pm 0.011\) | \(-2.652\pm 0.040\) |
| Random | \(0.006\pm 0.014\) | \(0.010\pm 0.022\) | \(0.009\pm 0.005\) |
| **AUC-DEL\({}_{-}\)** | | | |
| High-dim Rep. | \(\mathbf{7.478\pm 0.194}\) | \(\mathbf{3.116\pm 0.110}\) | \(\mathbf{3.170\pm 0.077}\) |
| \(\ell_{2}\) Rep. | \(5.214\pm 0.143\) | \(2.118\pm 0.063\) | \(2.726\pm 0.067\) |
| IF | \(4.894\pm 0.086\) | \(0.523\pm 0.013\) | \(3.065\pm 0.082\) |
| Random | \(0.003\pm 0.014\) | \(0.007\pm 0.024\) | \(0.007\pm 0.005\) |
| **Runtime (ms)** | | | |
| High-dim Rep. | \(61.35\pm 0.59\) | \(87.34\pm 0.71\) | \(10.61\pm 0.13\) |
| \(\ell_{2}\) Rep. | \(\mathbf{59.47\pm 0.58}\) | \(130.16\pm 0.34\) | \(\mathbf{6.14\pm 0.22}\) |
| IF | \(2678.38\pm 3.19\) | \(3628.70\pm 2.007\) | \(263.90\pm 1.01\) |
Table 1: Case deletion diagnostics for removing positive (negative) impact training samples on various datasets and models and run time comparison. \(95\%\) confidence interval of averaged deletion diagnostics on \(40\times 40=1,600\) samples is reported. Averaged runtimes over \(100\) samples are also reported. Smaller (larger) AUC-DEL\({}_{+}\) (AUC-DEL\({}_{-}\)) is better.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Models} & \multirow{2}{*}{Metrics} & \multicolumn{3}{c|}{Methods} \\ \cline{4-6} & & & High-dim Rep. & FIA & Random \\ \hline
\multirow{6}{*}{MovieLens-1M} & \multirow{2}{*}{MF w. nuclear norm} & AUC-DEL\({}_{+}\) & \(\mathbf{-0.225\pm 0.006}\) & - & \(-0.002\pm 0.002\) \\
 & & AUC-DEL\({}_{-}\) & \(\mathbf{0.160\pm 0.004}\) & - & \(-0.002\pm 0.002\) \\ \cline{2-6}
 & \multirow{2}{*}{MF} & AUC-DEL\({}_{+}\) & \(\mathbf{-0.196\pm 0.006}\) & \(-0.101\pm 0.004\) & \(-0.002\pm 0.002\) \\
 & & AUC-DEL\({}_{-}\) & \(\mathbf{0.169\pm 0.004}\) & \(0.072\pm 0.004\) & \(-0.001\pm 0.002\) \\ \cline{2-6}
 & \multirow{2}{*}{YoutubeNet} & AUC-DEL\({}_{+}\) & \(\mathbf{-0.227\pm 0.008}\) & \(-0.096\pm 0.006\) & \(-0.001\pm 0.004\) \\
 & & AUC-DEL\({}_{-}\) & \(\mathbf{0.214\pm 0.007}\) & \(0.113\pm 0.007\) & \(0.006\pm 0.004\) \\ \hline
\multirow{4}{*}{Amazon reviews 2018 (video games)} & \multirow{2}{*}{MF} & AUC-DEL\({}_{+}\) & \(\mathbf{-0.184\pm 0.012}\) & \(-0.123\pm 0.011\) & \(-0.070\pm 0.011\) \\
 & & AUC-DEL\({}_{-}\) & \(\mathbf{0.080\pm 0.012}\) & \(-0.009\pm 0.012\) & \(-0.077\pm 0.011\) \\ \cline{2-6}
 & \multirow{2}{*}{YoutubeNet} & AUC-DEL\({}_{+}\) & \(\mathbf{-0.234\pm 0.014}\) & \(-0.056\pm 0.013\) & \(-0.032\pm 0.011\) \\
 & & AUC-DEL\({}_{-}\) & \(\mathbf{0.294\pm 0.011}\) & \(0.069\pm 0.013\) & \(-0.032\pm 0.011\) \\ \hline
\end{tabular}
\end{table}
Table 2: Case deletion diagnostics for removing positive (negative) impact training samples on various datasets and models. \(95\%\) confidence interval of averaged deletion diagnostics on \(40\times 40=1,600\) samples is reported. Smaller (larger) AUC-DEL\({}_{+}\) (AUC-DEL\({}_{-}\)) is better.
All models are trained with squared loss. We use the soft-impute [43] algorithm to train Model (1). Models (2) and (3) are optimized by stochastic gradient descent. Hyper-parameters and model structures are detailed in Appendix D.
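For reference, a compact, textbook-style sketch of the soft-impute iteration used for Model (1) is shown below (not the exact training code; the stopping rule and regularization strength are illustrative). Each step fills the unobserved entries with the current estimate and soft-thresholds the singular values.

```python
import numpy as np

def soft_impute(R, mask, lam, n_iters=100, tol=1e-4):
    """Nuclear-norm-regularized matrix completion via Soft-Impute.

    R    : (num_users, num_items) rating matrix, arbitrary values where unobserved
    mask : boolean array, True where a rating is observed
    lam  : nuclear-norm regularization strength
    """
    Z = np.zeros_like(R, dtype=float)
    for _ in range(n_iters):
        # keep observed ratings, fill unobserved entries with the current estimate
        filled = np.where(mask, R, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s_thresh = np.maximum(s - lam, 0.0)          # soft-threshold singular values
        Z_new = (U * s_thresh) @ Vt
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
            return Z_new
        Z = Z_new
    return Z
```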
**Baselines:** We compare the high-dimensional representer with the following two baselines: (1) Fast influence analysis (FIA): since the influence function is not scalable to the size of common recommender system benchmarks, Cheng et al. [9] propose FIA as an approximation of the influence function for MF-based models. (2) Random deletion, in which we randomly delete training samples with the same user or item as the given test sample.
Notice that FIA is not applicable to the MF with nuclear norm model since it is only applicable to the MF models in Eqn.(15). Also, the \(\ell_{2}\) representer is not applicable to models with two separate encoders since such a model cannot be treated as a single linear mapping. We leave the comparison to TracIn [47], which is only applicable to models trained with SGD-based optimizers, to the supplementary material.
#### 6.3.2 Setup
We combine user-based and item-based explanations and sort them according to their importance scores. For MovieLens-1M, we drop \(k=10,20,30,40,50\) samples. For Amazon reviews, we drop \(k=3,6,9,12,15\) samples. Each metric is averaged over \(40\) trials with each trial having \(40\) test samples.
#### 6.3.3 Results
Table 2 summarises the results of the different methods. First, we observe that randomly removing samples has roughly no effect (a negative effect) on the models' predictions for MovieLens-1M (Amazon reviews), and all other methods outperform the random deletion baseline. Second, the high-dimensional representer outperforms FIA and random deletion in all settings, indicating that the high-dimensional representer is able to estimate the importance of each training sample more accurately.
### Use Case 1: Explaining Recommender Systems' Predictions
In this section, we show that the high-dimensional representer generates explanations based on users' historically interacted products for collaborative filtering models.
Table 3 shows an example of an explanation for movie recommendations. We use an MF model trained with squared loss to predict users' ratings from 1 to 5 on Movielens-100k, a smaller version of Movielens-1M. We first choose a user with \(87\) historical ratings, predict their rating on "Star Trek VI: The Undiscovered Country", calculate similarity scores with the high-dimensional representer on the user's past ratings, and then sort the items according to their absolute importance scores. The explanation can be interpreted as "the MF model predicts your rating on _Star Trek VI: The Undiscovered Country_ to be \(3.89\) mostly because of your ratings on the following six movies."
The explanation consists of movies with similar genres and prequels of "Star Trek VI". We see that the model learns the relations between movies from the explanation, since movie names and genres are not provided during training. Also, the user's past ratings of 2 or 3 negatively impact the prediction, and ratings of 5 have a positive influence. This is reasonable since the user's preference for similar movies would impact the model's predicted ratings. Notice that the high-dimensional representer can also be used to provide user-based explanations in terms of the influence of other users' ratings on the same movie. We do not show these explanations here since user information is lacking in most publicly available datasets. More examples can be found in Appendix C.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Movies & User’s rating & Movie genre & Importance \\ \hline
Men in Black & 3 & Action, Sci-Fi, Comedy, Adventure & -4.55 \\ \hline
Diabolique & 2 & Drama, Thriller & -4.03 \\ \hline
Independence Day (ID4) & 5 & Action, Sci-Fi, War & 3.52 \\ \hline
Star Trek IV: The Voyage Home &  & Action, Sci-Fi & 3.12 \\ \hline
Star Trek V: The Final Frontier & 2 & Action, Sci-Fi, Adventure & -2.86 \\ \hline
Star Trek: First Contact & 5 & Action, Sci-Fi, Adventure & 2.59 \\ \hline
\end{tabular}
\end{table}
Table 3: An example of item-based explanations. In the example, an MF model predicts one user’s rating for the movie “Star Trek VI: The Undiscovered Country” to be 3.89. The genres of the movie are action, sci-fi, and adventure.
### Use Case 2: Improving Negative Sampling Strategies
In this experiment, we show that high-dimensional representers can be used to improve negative sampling strategies that are widely used to train collaborative filtering models for implicit signals.
Motivation: Implicit CF learns from user behavior that implicitly reflects users' preferences. For example, it may learn from users' clicks or their watching history. In this setting, user-item interactions usually contain only positive interactions, and practitioners usually regard all other unobserved interactions as negative samples. However, these unobserved interactions may include false negatives. For instance, users may ignore items simply because they were not displayed to them, not necessarily because they dislike them. Such false negatives have been demonstrated to be harmful to models [14]. However, identifying false negatives is challenging since it is impossible to ask users to look over all items and mark their preferences.
Proposed approach: We propose to measure _aggregated importance scores_ of negative samples to identify these false negatives. These scores quantitatively measure the extent to which negative pairs contribute to the decrease in prediction scores for observed positive interactions. Larger aggregated importance scores indicate that the negative pair reduces the model's confidence in other known positive interactions, suggesting a higher likelihood of being a false negative.
Let \(\mathcal{D}=\mathcal{P}\cup\mathcal{N}\) be the training set comprising positive interactions \(\mathcal{P}\) and negative samples \(\mathcal{N}\) selected through a negative sampling strategy. The aggregated importance scores are defined as follows:
\[\mathbf{I}_{neg}((i,j))=\sum_{(i^{\prime},j^{\prime})\in\mathcal{P}}\mathbf{I }((i,j),(i^{\prime},j^{\prime})), \tag{18}\]
where \(\mathbf{I}(\cdot,\cdot)\) is the importance score provided by the high-dimensional representer as in Definition 1. \(\mathbf{I}_{neg}((i,j))\) can be interpreted as the sum of importance scores of a negative sample to all positive samples in the training set.
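A direct transcription of Eqn.(18) is shown below; the importance function \(\mathbf{I}(\cdot,\cdot)\) (the high-dimensional representer score of Definition 1) is passed in as a black-box callable, since its closed form is model-specific and not repeated here.

```python
import numpy as np

def aggregated_importance(negatives, positives, importance):
    """Eqn.(18): sum of a negative pair's importance scores to every observed positive pair.

    negatives, positives : lists of (user, item) index pairs
    importance           : callable ((i, j), (i_prime, j_prime)) -> float
    """
    scores = np.zeros(len(negatives))
    for n, (i, j) in enumerate(negatives):
        scores[n] = sum(importance((i, j), (ip, jp)) for (ip, jp) in positives)
    return scores
```

The double sum above costs \(O(|\mathcal{N}||\mathcal{P}|)\) importance evaluations; for representer-type scores that factorize into user- and item-embedding inner products, it can be vectorized with matrix products, though we keep the generic form here.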
#### 6.5.1 Experimental Setup
To validate the effectiveness of high-dimensional representers in improving negative sampling strategies, we first train a base model using a normal negative sampling strategy, and then retrain the model after removing identified negative samples. We use the change in the models' performance to measure the performance of the proposed method.
Datasets: We use a binarized MovieLens-100k dataset, which contains \(100,000\) ratings (1-5) from 943 users on 1,682 movies. We transform user ratings into binary signals by dropping user ratings lower than \(4\) and treating the remaining interactions as positive samples. In accordance with Toh & Yun [55], Jaggi & Sulovsky [32], we randomly select 50% of the ratings for training and use the rest as the test set.
Base models: We first train a matrix factorization model with uniformly selected negative samples. The model is trained with the binary cross-entropy loss function with the following formulation:
\[\operatorname*{argmin}_{\begin{subarray}{c}U\in\mathbb{R}^{|\mathcal{U}|\times k},\\ V\in\mathbb{R}^{|\mathcal{I}|\times k}\end{subarray}}-\sum_{(i,j)\in \mathcal{P}}\log(\sigma(\langle U_{i},V_{j}\rangle))-0.05\sum_{(i,j)\in \mathcal{N}}\log(1-\sigma(\langle U_{i},V_{j}\rangle)),\]
where \(\sigma(\cdot)\) denotes the sigmoid function, and \(\mathcal{N}=\mathcal{D}\backslash\mathcal{P}\) contains all unknown user-item interactions. We weight the loss terms of the negative samples by \(0.05\) since this improves the model's performance. After calculating the aggregated importance scores of all negative samples, we remove the top \(p\%\) of samples with the lowest scores from \(\mathcal{N}\) and train a new MF model with the same objective.
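A minimal PyTorch-style sketch of this objective is given below (illustrative only; the embedding dimension is an assumption, and \(\log(1-\sigma(z))\) is computed as \(\mathrm{logsigmoid}(-z)\) for numerical stability).

```python
import torch

class MF(torch.nn.Module):
    def __init__(self, n_users, n_items, k=32):
        super().__init__()
        self.U = torch.nn.Embedding(n_users, k)
        self.V = torch.nn.Embedding(n_items, k)

    def forward(self, users, items):
        return (self.U(users) * self.V(items)).sum(dim=-1)   # <U_i, V_j>

def weighted_bce_loss(model, pos_pairs, neg_pairs, neg_weight=0.05):
    """Objective above; pos_pairs and neg_pairs are LongTensors of shape (batch, 2)."""
    pu, pi = pos_pairs[:, 0], pos_pairs[:, 1]
    nu, ni = neg_pairs[:, 0], neg_pairs[:, 1]
    loss_pos = -torch.nn.functional.logsigmoid(model(pu, pi)).sum()
    # log(1 - sigma(z)) = logsigmoid(-z)
    loss_neg = -torch.nn.functional.logsigmoid(-model(nu, ni)).sum()
    return loss_pos + neg_weight * loss_neg
```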
Evaluation metrics: In order to assess the effectiveness of the proposed methods, we utilize the following two evaluation metrics:
1. Number of false negatives identified: Given the impracticality of labeling all user-item interactions, we consider only the positive interactions in the test set as potential false negatives. This metric evaluates the number of false negatives correctly identified by each method.
2. Performance improvement of the base model after retraining: We measure the change in performance of the base model after removing the top \(p\%\) of negative samples identified by each method. The models' performance is evaluated using the recall@20 metric on the test set [29].
These evaluation metrics enable us to assess the ability of the proposed methods to accurately identify false negatives and quantify the improvement achieved in the performance of the base model through retraining.
Baselines: We compare the high-dimensional representer with (1) fast influence analysis (FIA), (2) loss functions, and (3) random selection. For FIA, we use the importance scores provided by FIA to compute the aggregated importance scores in Eqn.(18). For loss functions, we filter out the top \(p\%\) of negative samples with the highest loss. For random selection, we randomly remove \(p\%\) of the negative samples from \(\mathcal{N}\).
#### 6.5.2 Experimental Results
The results of our experiments are presented in Figure 1(a) and Figure 1(b). We observe that the high-dimensional representer, loss functions, and FIA outperform random selection on both evaluation metrics. Notably, while the high-dimensional representer identifies slightly fewer false negatives compared to the loss functions and FIA, it identifies more influential false negatives that contribute the most to performance improvement. These findings indicate that the performance of implicit collaborative filtering can be enhanced by removing harmful samples. As a potential future direction, it would be interesting to explore the integration of the high-dimensional representer into negative sampling procedures.
## 7 Conclusion
In this paper, we present _high-dimensional representers_ to explain the predictions of high-dimensional models in terms of contributions from each of the training samples. We investigate their consequences for canonical instances of sparse models as well as low-rank models, together with a case study on collaborative filtering, in which we consider low-rank matrix-factorization-based models as well as their deep neural variants. In future work, it would be of interest to derive corollaries of our general result for additional instances of high-dimensional models such as group-structured models, as well as additional applications such as compressed sensing and sparse Gaussian graphical model estimation.
|
2302.14300 | On the consistency of $Λ$CDM with CMB measurements in light of the
latest Planck, ACT, and SPT data | Using Gaussian Processes we perform a thorough, non-parametric consistency
test of the $\Lambda$CDM model when confronted with state-of-the-art TT, TE,
and EE measurements of the anisotropies in the Cosmic Microwave Background by
the Planck, ACT, and SPT collaborations. Using $\Lambda$CDM's best-fit
predictions to the TTTEEE data from Planck, we find no statistically
significant deviations when looking for signatures in the residuals across the
different datasets. The results of SPT are in good agreement with the
$\Lambda$CDM best-fit predictions to the Planck data, while the results of ACT
are only marginally consistent. However, when using the best-fit predictions to
CamSpec -- a recent reanalysis of the Planck data -- as the mean function, we
find larger discrepancies between the datasets. Our analysis also reveals an
interesting feature in the polarisation (EE) measurements from the CamSpec
analysis, which could be explained by a slight underestimation of the
covariance matrix. Interestingly, the disagreement between CamSpec and
Planck/ACT is mainly visible in the residuals of the TT spectrum, the latter
favoring a scale-invariant tilt $n_s\simeq1$, which is consistent with previous
findings from parametric analyses. We also report some features in the EE
measurements captured both by ACT and SPT which are independent of the chosen
mean function and could be hinting towards a common physical origin. For
completeness, we repeat our analysis using the best-fit spectra to ACT+WMAP as
the mean function. Finally, we test the internal consistency of the Planck data
alone by studying the high and low-$\ell$ ranges separately, finding no
discrepancy between small and large angular scales. | Rodrigo Calderón, Arman Shafieloo, Dhiraj Kumar Hazra, Wuhyun Sohn | 2023-02-28T04:20:43Z | http://arxiv.org/abs/2302.14300v2 | On the consistency of \(\Lambda\)CDM with CMB measurements in light of the latest _Planck_, ACT and SPT data
###### Abstract
Using Gaussian Processes we perform a thorough, non-parametric consistency test of the \(\Lambda\)CDM model when confronted with state-of-the-art TT, TE and EE measurements of the anisotropies in the Cosmic Microwave Background by _Planck_, ACT, and SPT collaborations. We find no statistically significant deviation from \(\Lambda\)CDM's best fit predictions when looking for signatures in the residuals. The results of SPT are in good agreement with the \(\Lambda\)CDM best fit predictions to _Planck_ data, while the results of ACT are only marginally consistent. Interestingly, the slight disagreement between _Planck_/SPT and ACT is mainly visible in the residuals of the TT spectrum, the latter favoring a scale-invariant tilt \(n_{s}\simeq 1\), consistent with previous findings using parametric analyses. We also report some features in the EE measurements captured both by ACT and SPT which could be hinting towards a common physical origin, or unknown systematics in the data. Finally, we test the internal consistency of the _Planck_ data alone by studying the high and low-\(\ell\) ranges separately, finding no discrepancy between small and large angular scales. Apart from the mentioned mild inconsistencies in TT and EE, our results show the overall agreement between the various ground and space-based CMB experiments with the standard model of cosmology.
keywords: Cosmology: Dark Energy - Cosmology: Cosmic Microwave Background - Methods: Statistical
## 1 Introduction
Observations stemming from the anisotropies in the Cosmic Microwave Background (CMB) have played a major role in establishing the standard model of Cosmology -- the \(\Lambda\)CDM paradigm with power law primordial spectrum. Despite suffering from certain theoretical issues, this paradigm has been extremely successful in accounting for a wide variety of observations across many scales and epochs in the cosmic history. The _Planck_ satellite provided the most precise estimation of its 6 main cosmological parameters to date (Aghanim et al., 2020, 2020). In addition, other ground-based CMB experiments from the Atacama Cosmology Telescope (ACT) (Aiola et al., 2020; Choi et al., 2020) and South Pole Telescope (SPT) (Dutcher et al., 2021; Balkenhol et al., 2022) collaborations have recently provided complementary measurements of temperature and polarisation of the CMB anisotropies. These focus on smaller, sub-degree angular scales (larger multipoles \(\ell\gtrsim 650\)) and offer a new way of testing the robustness of \(\Lambda\)CDM with higher-resolution CMB maps and independently of _Planck_.
Despite the success of the standard model, increasingly precise (low-redshift) measurements have reported a few statistically significant discrepancies (Di Valentino et al., 2021, 2021). The most notable example is the \(\gtrsim 5\sigma\) discrepancy in the value of the Hubble constant \(H_{0}\), as measured by low-\(z\) probes using the distance ladder Riess et al. (2022) and high-\(z\) estimations, assuming \(\Lambda\)CDM. A milder (\(\sim 2\sigma\)) but longstanding discrepancy has also been reported between high and low-redshift estimations of the amplitude of matter fluctuations--characterized by \(S_{8}\equiv\sigma_{8,0}\sqrt{\Omega_{\rm m,0}/0.3}\)--the latter preferring lower values compared to the early universe predictions (Hikage et al., 2019; Asgari et al., 2020; Heymans et al., 2021; Abbott et al., 2022; Amon and Efstathiou, 2022). The \(\Lambda\)CDM model is also facing other (less-relevant) observational challenges, see _e.g._(Bull et al., 2016; Bullock and Boylan-Kolchin, 2017; Perivolaropoulos and Skara, 2021; Bernal et al., 2021).
Moreover, CMB measurements are known to have mild inconsistencies between the different angular scales (high vs low multipoles Addison et al. (2016)), and as measured by the different collaborations Handley and Lemos (2021). Indeed, even within _Planck_ data alone, the TT spectrum seems to favor a lensing amplitude \(A_{\rm L}>1\) and provides "evidence" for a non-vanishing (positive) spatial curvature (Aghanim et al., 2020; Di Valentino et al., 2019), although it has been argued that these are purely stemming from statistical fluctuations Efstathiou and Gratton (2021); see also (Handley, 2021; Valentino et al., 2021; Vagnozzi et al., 2021; Yang et al., 2022) and references therein for further discussions on this. Furthermore, the results from the ACT collaboration seem to prefer a scale-invariant spectrum of primordial fluctuations, with \(n_{s}\simeq 1\), while _Planck_ data excludes such a value at more than \(3\sigma\). If these inconsistencies are not coming from systematics, they may hint towards new physics
beyond the standard model.
In recent years, a lot of effort has gone into investigating extensions of \(\Lambda\)CDM to try and provide physical explanations for some of the aforementioned discrepancies; see _e.g._(Schoneberg et al., 2022; Abdalla et al., 2022). These often change the Universe's growth and/or expansion history at late times (Pogosian et al., 2022; Heisenberg et al., 2022), or introduce new physics at early times such that (i) the physical size of the sound horizon \(r_{d}\equiv r_{s}(z_{d})\) decreases with respect to \(\Lambda\)CDM (Poulin et al., 2019; Niedermann and Sloth, 2021; Hill et al., 2021; Cruz et al., 2022), (ii) the redshift of recombination is shifted (Jedamzik and Pogosian, 2020; Galli et al., 2022; Franchino-Vinas and Mosquera, 2021; Sekiguchi and Takahashi, 2021) or (iii) new features are introduced in the primordial spectrum of fluctuations (Hazra et al., 2022; Antony et al., 2022). While appealing from the theoretical standpoint, very few of the proposed solutions are actually able to simultaneously address these tensions. For example, it has been argued that no late-time modification to \(H(z)\) is able to raise the value of \(H_{0}\)(Knox and Millea, 2020; Keeley and Shafieloo, 2022), while modifications to the early universe might create or exacerbate the tensions with low-\(z\) observations; see _e.g._(Ivanov et al., 2020; Hill et al., 2020; Smith et al., 2021; Murgia et al., 2021; Niedermann and Sloth, 2021; D'Amico et al., 2021; Jedamzik et al., 2021; Simon et al., 2022) for discussions on this topic.
Given the fundamental role of the CMB in cosmological analyses, it is crucial to understand whether the differences between the latest observations are coming from either statistical fluctuations, unaccounted systematics or new physics beyond \(\Lambda\)CDM. In this work, we test their statistical consistency using Gaussian Processes (GP) - a non-parametric method which can effectively represent smooth deformations away from the model under consideration. If the differences between the datasets are entirely consistent with random fluctuations, then the GP regression should yield a curve consistent with zero. If not, then the GP can provide insights into what shape of deformation, either from systematics or theoretical inconsistency, is preferred by the data.
The structure of this paper is as follows. In Section 2, we describe in detail the method and the data used in the analysis. We then proceed to perform the consistency tests using the best fit \(\Lambda\)CDM predictions, by looking for structures in the residuals with respect to the mean function. We start by confronting \(\Lambda\)CDM to the most recent CamSpec data (with the highest sky fraction) in Section 3.1. We then repeat the analysis using ground-based measurements by the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT) to look for discrepancies between the experiments in Section 3.2. Finally, we test the robustness of our conclusions by using a different mean function in the analysis1 and investigate the consistency of the _Planck_ data alone by studying the low-\(\ell\) (\(\ell<650\)) and high-\(\ell\) ranges (\(\ell>650\)) separately in Appendix A.1 and A.2. In Appendix A.3, we assess whether an absolute scaling of the spectra (difference in calibration) can account for the mild inconsistencies between the different collaborations. The conclusion and future prospects are summarized in Section 4.
Footnote 1: Namely, we use \(\Lambda\)CDM’s best fit to ACT+WMAP data as mean function.
## 2 Method and Data
### Data
In this work, we confront the predictions of \(\Lambda\)CDM with different space and ground-based CMB experiments as a way of testing the consistency of the model and the robustness of the measurements. Namely, we consider the following data
* Temperature (TT), Polarisation (EE) and their cross-correlation (TE) from the final _Planck_ 2018 data release (Aghanim et al., 2020, 2020). More specifically, we use data from the latest (cleanest) CamSpec NPIPE PR4_v12.6 likelihood Rosenberg et al. (2022); see Efstathiou and Gratton (2021) for details on the CamSpec likelihood. These cover the range \(\ell\in[30,2500]\) in TT and \(\ell\in[30,2000]\) in TE and EE. We refer to this data simply as CamSpec.
* Similarly, we use the Atacama Cosmology Telescope (ACT) Temperature (TT), \(E\)-mode Polarisation (EE) and their cross-correlation (TE) from the ACTPolliteDR4 likelihood in the latest ACT data release (Aiola et al., 2020; Choi et al., 2020). We refer to these simply as ACT.
* Finally, we also include the latest results from the South Pole Telescope 2018 collaboration (SPT-3G) Balkenhol et al. (2022). These are updated measurements of both E-mode Polarisation (EE) and Temperature-Polarisation cross-correlation (TE) from Dutcher et al. (2021), but with the inclusion of TT measurements. These cover angular scales \(\ell\in[750,3000]\) for TT and \(\ell\in[350,3000]\) in TE and EE.
We should note that we use the minimum-variance-combined bandpowers for ACT and SPT data, which might not accurately reflect the full information contained in these datasets. We believe however that this can be seen as a zeroth-order approximation. A more rigorous analysis using the full (multi-frequency) likelihood might be needed for a more robust interpretation of the results.
Figure 1: TT, TE and EE residuals with respect to \(\Lambda\)CDM best fit spectra to CamSpec data. Solid red lines correspond to the differences in the best fit predictions from ACT+WMAP and CamSpec data.
### Gaussian Process Regression
Gaussian Processes (GP) Rasmussen and Williams (2006) have been extensively used in the literature to fit a smooth curve from noisy and/or sparse data without the need to write down an explicit parametric model. GP excels when the noise in the data is well approximated by a (multivariate) Gaussian distribution. It provides a posterior distribution of smooth functions given the data based on two assumptions on the functional form: the mean function (\(\mu(x)\)) and the kernel (\(k(x,x^{\prime})\)). In-depth analyses of GP's dependence on these assumptions are given in (e.g. Shafieloo et al., 2012, 2013; Hwang et al., 2022).
The mean of the GP posterior distribution evaluated at a set of 'test' points \(x_{\star}\) can be easily calculated through
\[\mathbf{\mu}=\mathbf{m_{\star}}+\mathbf{K_{\star}K^{-1}(\mathbf{y-m})} \tag{1}\]
where \(\mathbf{m_{\star}}=\mu(x_{\star})\), \(\mathbf{m}\equiv\mu(x)\), \(\mathbf{K_{\star}}\equiv k(x_{\star},x)\), \(\mathbf{K}\equiv k(x,x)+\Sigma\), where the observations \(\mathbf{y}\) are made at data points \(\mathbf{x}\) with the data covariance matrix \(\Sigma\). Similarly, the posterior of the covariance is obtained using
\[\mathbf{C}=\mathbf{K_{\star\star}}-\mathbf{K_{\star}K^{-1}K_{\star}^{T}} \tag{2}\]
In practice, the calculation of such quantities amounts to a matrix inversion of \(\mathbf{K}\). Computationally, a Cholesky decomposition is often preferred as it is a faster and numerically more stable procedure. Finally, the log-marginal likelihood (LML) under a GP is given by
\[\ln\mathcal{L}=-\frac{1}{2}\big{[}\mathbf{r}^{T}\mathbf{K^{-1}}\mathbf{r}+\ln|\mathbf{ K}|+N\ln\left(2\pi\right)\big{]} \tag{3}\]
where \(\mathbf{r}=\mathbf{y-m}\) is the residual vector, \(N\) is the number of (observed) datapoints and \(|\mathbf{K}|\) denotes the determinant of the full covariance matrix. The GP predictions depend on the choice of kernel describing the correlations between the data points. In this work, we use a _squared exponential_ (SE) kernel given by
\[k(x,x^{\prime};\sigma_{f},\ell_{f})=\sigma_{f}^{2}\ e^{-(x-x^{\prime})^{2}/2 \ell_{f}^{2}}, \tag{4}\]
where \(\sigma_{f}\) and \(\ell_{f}\) determine the amplitude and typical length-scale of the correlations, respectively. These hyperparameters are optimized by maximizing the log-marginal likelihood in (3); we refer to Rasmussen and Williams (2006) for a more detailed discussion.
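For concreteness, Eqs. (1)-(2) and the kernel of Eq. (4) can be implemented in a few lines. In the sketch below (an illustration only, not the analysis code used here), \(x\) would be the multipoles \(\ell\), \(y\) the measured band powers, \(\Sigma\) the data covariance, and `mean_fn` the best-fit \(\Lambda\)CDM spectrum.

```python
import numpy as np

def se_kernel(x1, x2, sigma_f, ell_f):
    """Squared-exponential kernel of Eq. (4)."""
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * d**2 / ell_f**2)

def gp_posterior(x, y, Sigma, x_star, mean_fn, sigma_f, ell_f):
    """Posterior mean and covariance of Eqs. (1)-(2) at test points x_star."""
    m, m_star = mean_fn(x), mean_fn(x_star)
    K = se_kernel(x, x, sigma_f, ell_f) + Sigma       # k(x, x) + data covariance
    K_star = se_kernel(x_star, x, sigma_f, ell_f)     # k(x_star, x)
    K_ss = se_kernel(x_star, x_star, sigma_f, ell_f)  # k(x_star, x_star)
    L = np.linalg.cholesky(K)                         # Cholesky in place of an explicit inverse
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - m))
    mu = m_star + K_star @ alpha
    V = np.linalg.solve(L, K_star.T)
    return mu, K_ss - V.T @ V
```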
In this work, we focus on testing the consistency of the \(\Lambda\)CDM model, and thus decide to work in residual space where the best-fit (\(\Lambda\)CDM) predictions have been subtracted from the data--effectively choosing \(\Lambda\)CDM as a GP mean function. More specifically, we decide to work in the space of \(\mathcal{D}_{\ell}=\ell\left(\ell+1\right)C_{\ell}/2\pi\), where the physical (oscillatory) features would be more pronounced. Having a closer look at Eq. (3), it is seen that if the mean function is a good (enough) fit to the data, the first and second (penalty) terms in (3) will tend to prefer _no extra-correlations_ (_i.e._\(\sigma_{f}\simeq 0\)) or _diverging correlation-lengths_ (\(\ell_{f}\rightarrow\infty\)), as encoded in the GP kernel (4). In the presence of hidden systematics or in the need for a modification of the mean function, however, a finite value for \((\sigma_{f},\ell_{f})\) might be statistically preferred. Therefore, inspecting the two-dimensional likelihood profile \(\mathcal{L}(\sigma_{f},\ell_{f})\) can yield valuable information on the model and the dataset under consideration (Shafieloo et al., 2013; Aghamousa et al., 2017; Keeley et al., 2020; Krishak and Hazra, 2021). Thus, if the likelihood is maximized for \(\sigma_{f}\to 0\) (or \(\ell_{f}\rightarrow\infty\)), the mean function is consistent with the data. On the other hand, any significant detection of \(\sigma_{f}\neq 0\) can be interpreted as hints of underlying structures or systematics in the data that cannot be properly accounted for by the model, given by a smooth deformation with a typical amplitude and correlation length given by the preferred values of \((\sigma_{f},\ell_{f})\).
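The two-dimensional likelihood profiles discussed in the next section then amount to evaluating Eq. (3) on a grid of hyperparameters (e.g., the uniform prior ranges quoted later in Table 1) and comparing against the \(\sigma_{f}\to 0\) (pure mean-function) limit. A schematic version is sketched below; it is illustrative only, not the pipeline used to produce the figures.

```python
import numpy as np

def log_marginal_likelihood(r, Sigma, ell, sigma_f, ell_f):
    """Eq. (3) for residuals r = y - m at multipoles ell, with data covariance Sigma."""
    d = ell[:, None] - ell[None, :]
    K = sigma_f**2 * np.exp(-0.5 * d**2 / ell_f**2) + Sigma   # SE kernel + data covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (r @ alpha + log_det + len(r) * np.log(2.0 * np.pi))

def delta_chi2_grid(r, Sigma, ell, log10_sigma_f, log10_ell_f):
    """Delta chi^2 = -2 (ln L_GP - ln L_LCDM) over a grid of (sigma_f, ell_f)."""
    # sigma_f -> 0 recovers the likelihood of the mean function (LCDM) alone
    lnL_mean = log_marginal_likelihood(r, Sigma, ell, sigma_f=1e-12, ell_f=1.0)
    grid = np.empty((len(log10_sigma_f), len(log10_ell_f)))
    for a, lsf in enumerate(log10_sigma_f):
        for b, llf in enumerate(log10_ell_f):
            lnL = log_marginal_likelihood(r, Sigma, ell, 10.0**lsf, 10.0**llf)
            grid[a, b] = -2.0 * (lnL - lnL_mean)
    return grid
```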
## 3 Results and Discussions
In this section, we confront the best-fit \(\Lambda\)CDM predictions with the various CMB observations. More specifically, we choose the \(\Lambda\)CDM best-fit to CamSpec PR4 data as a mean function in our Gaussian Process since it is obtained from the analysis of the latest and the most constraining data. Following Aghamousa et al. (2017), our goal is to test the consistency of the \(\Lambda\)CDM model, update the analysis to include the most recent CMB measurements described before, namely the final _Planck CamSpec-PR4v12.6_, ACT DR4 and SPT-3G data releases, and test the consistency between these datasets. In Section 3.1, we will extensively discuss the results using _Planck_ data. The results of the analysis using ground-based observations, namely ACT and SPT, will be discussed in Section 3.2. Finally, to explore possible systematics affecting the low-\(\ell\) part of the _Planck_ data, we use \(\Lambda\)CDM's best-fit predictions to ACT+WMAP as the mean function instead, effectively replacing _Planck_'s large scale constraints by WMAP. The results using this choice of mean function are discussed in Appendix A1. Furthermore, we investigate the consistency of the _Planck_ data alone by studying the low-\(\ell\) (\(\ell<650\)) and high-\(\ell\) ranges (\(\ell>650\)) separately in Appendix A2.
### Consistency of \(\Lambda\)CDM with _Planck_ PR4
We start by considering \(\Lambda\)CDM's best fit to the latest CamSpec data Rosenberg et al. (2022) as the mean function in our analysis. In Fig. 2, we show the two-dimensional likelihood profiles for the CamSpec residuals with respect to \(\Lambda\)CDM's best-fit \(\mathcal{D}_{\ell}\)'s, as a function of \((\sigma_{f},\ell_{f})\). The color bar shows the goodness of fit, where \(\Delta\chi^{2}=-2(\ln\mathcal{L}^{\rm GP}-\ln\mathcal{L}^{\Lambda\rm CDM})\) and \(\ln\mathcal{L}^{\rm GP}\) is the log-marginal likelihood (LML), defined in Eq. (3). Negative values of \(\Delta\chi^{2}\) (in blue) reflect regions in parameter space yielding an improvement in the fit with respect to \(\Lambda\)CDM. Conversely, red colored regions correspond to deviations from the mean function leading to a degraded fit to the data (\(\Delta\chi^{2}>0\)), whereas gray shaded regions represent no improvement at all. The black dot represents the set of hyperparameters \((\sigma_{f},\ell_{f})\) yielding the highest likelihood and the black solid line on the colour bar the corresponding improvement in fit (\(\Delta\chi^{2}\)). As mentioned before, if the mean function is a good (enough) fit to the data, the LML in (3) should peak at \(\sigma_{f}\to 0\) and/or \(\ell_{f}\rightarrow\infty\). In other words, no smooth deviations away from the best-fit \(\Lambda\)CDM are needed to explain the data. However, if the LML peaks at finite (possibly large) values of \((\sigma_{f},\ell_{f})\), it might point towards the need for a different mean function, or indicate the existence of hidden structures/systematics in the data.
In this case, as can be seen from Fig. 2, the \(\Lambda\)CDM model provides a very good fit to TT, TE, and EE data. The LML seems to prefer small deviations from the best fit \(\Lambda\)CDM spectra: \(\sigma_{f}\lesssim 1\) for TT and TE, and \(\sigma_{f}\lesssim 0.1\) for the case of EE.
\begin{table}
\begin{tabular}{c|c|c} \hline Parameter & log\({}_{10}\)\(\sigma_{f}\) & log\({}_{10}\)\(\ell_{f}\) \\ \hline TT & [\(-3,2\)] & [\(0,4\)] \\ \hline TE & [\(-3,0.5\)] & [\(0,4\)] \\ EE & [\(-3,0.5\)] & [\(0,4\)] \\ \hline \end{tabular}
\end{table}
Table 1: Uniform prior ranges in the hyperparameters for TT,TE and EE.
Any larger deviation from the mean function is highly penalized by the data, as can be seen by the color bar on the right (\(\Delta\chi^{2}>0\) meaning a degraded fit), and the improvement in fit by the GP is negligible in both TT and TE; see Table 2. Meanwhile, there is a noticeable improvement in fit in EE for \(\ell_{f}\lesssim 1\). Such a GP realisation is essentially white noise, where the values at each \(\ell\) are uncorrelated with each other. A preference for the inclusion of white noise indicates that the fluctuations in the data around the mean are larger than what is expected from the data covariance matrix. In other words, the covariance matrix may have been slightly underestimated for this EE data. This may alter the weights and hence the optimality of the cosmological parameter estimation, but is less likely to have a significant effect on the estimated parameter values.
In fact, the LML is expected to have a local minimum at some \(\sigma_{f}\) with \(\ell_{f}\ll 1\) about half of the time. Taking the limit \(\ell_{f}\to 0\) in (3), we found that the necessary and sufficient condition for such a minimum is given by \(\|\Sigma^{-1}\mathbf{r}\|^{2}>\mathrm{tr}\left(\Sigma^{-1}\right)\), where \(\mathbf{r}\) is the residual vector. Assuming that the mean and the covariance matrix are exact, the expected value of the left-hand side is equal to the right-hand side. When the residual vector is large enough either by statistical fluctuations and/or underestimation of the errors, then we expect to see a local minimum at some \(\sigma_{f}\) satisfying \(\|(\sigma_{f}^{2}I+\Sigma)^{-1}\mathbf{r}\|^{2}=\mathrm{tr}\left[(\sigma_{f} ^{2}I+\Sigma)^{-1}\right]\).
Despite this fact, it is still intriguing that we find \(\Delta\chi^{2}\) as low as \(-10.21\) in EE for CamSpec PR4. As we discuss in Appendix A2, most of these improvements in fit come from low-\(\ell\) data, and are notably _not_ present in the CamSpec Public Release 3 (PR3). Our non-parametric approach using GP indicates that the updated analysis pipeline of CamSpec PR4 has caused the residuals in the EE coadded spectrum to vary more than the expected amount given by the covariance matrix. Indeed, we confirm that the usual chi-squared statistic for the EE data in the range \(30\leq\ell\leq 650\) is \(\chi^{2}=709\), larger than the expected value of 620 by \(2.51\sigma\).2 This excess has been investigated in (Rosenberg et al., 2022, see e.g. Table 2) and is statistically consistent with random fluctuations. However, it is interesting that our GP analysis indicates that these excess variances appear to be uncorrelated random fluctuations rather than smooth deformations in the mean, unlike the cases of TT and TE. Nonetheless, the deviations from zero are \(\mathcal{O}(10^{-2})\), with relatively minor improvements in fit, suggesting \(\Lambda\)CDM is a good description of the _Planck_ data.
Footnote 2: For the full data range of \(30\leq\ell\leq 2000\), \(\chi^{2}=2023\), which is \(0.83\sigma\) above the expected value of 1971. Note that here we look at the TT+TE+EE best-fit values.
We then use the set of (\(\sigma_{f}\), \(\ell_{f}\)) maximizing the likelihood in Eq. (3) (shown as a black dot in Fig. 2) to obtain the mean (in orange), \(68\%\) and \(95\%\) C.L. (gray-shaded bands) shown in Fig. 3, using Eqs. (1) and (2), respectively. Again, we see that the reconstructions are perfectly consistent with zero across the entire multipole range covered by the data.
These results suggest that the \(\Lambda\)CDM model is consistent with the _Planck_ CMB data. This should not come as a surprise, since we have chosen the best-fit predictions to CamSpec data as the mean function in our analysis. However, as explained before, this serves as a consistency test for the different CMB measurements. In the presence of systematics, or physics beyond \(\Lambda\)CDM, some inconsistencies might appear when using a (possibly incorrect) \(\Lambda\)CDM mean function. Finally, we would like to mention that the _Planck_ analysis has been reproduced to \(0.1\sigma\) accuracy using the new pipeline for the Simons Observatory Li et al. (2021), providing an independent cross-check of the _Planck_ results.
Figure 3: GP reconstructions for the set of hyperparameters maximizing the log-marginal likelihood in Eq. (3) when using CamSpec PR4 data and the corresponding \(\Lambda\)CDM best-fit spectra as mean functions. The solid line and shaded regions correspond to the mean and \(2\sigma\) confidence intervals around it, respectively.
### Consistency of \(\Lambda\)CDM with ground-based experiments (ACT & SPT)
Next, we use the same mean function as before and look for potential structures in the residuals of ACT and SPT data. If \(\Lambda\)CDM is the correct model describing the CMB anisotropies up to \(\ell\simeq 4000\), and its parameters are accurately estimated by _Planck_, then the two-dimensional distributions of the hyperparameters \((\sigma_{f},\ell_{f})\) should not prefer any finite, non-vanishing values. Any unaccounted systematics, or discrepancies between the experiments, would however be reflected in the two-dimensional likelihood profiles.
#### 3.2.1 ACT DR4
In Fig. 4 we show the posteriors for \((\sigma_{f},\ell_{f})\) when using ACT DR4 data and \(\Lambda\)CDM's best-fit to _Planck_ as mean function. In this case, an interesting feature appears in the TT data. The LML peaks at \((\sigma_{f},\ell_{f})\simeq(9,2\times 10^{3})\) where the GP finds an improvement in fit with respect to the mean function (\(\Lambda\)CDM), corresponding to a \(\Delta\chi^{2}=-15.11\); see Table 2. Interestingly, the TE and EE posteriors show similar (bimodal) distributions, with a preference for non-vanishing values of \((\sigma_{f},\ell_{f})\), although the statistical significance of these deviations from \(\Lambda\)CDM is milder than in the TT case. The improvements in fit are reported in the middle column of Table 2. In Fig. 5, we show the mean and \(2\sigma\) reconstructions from the GP when using the set of hyperparameters maximizing the LML in Eq. (3), shown as black dots in Fig. 4. Note that the ACT reconstructions of the TT spectra seem to prefer lower amplitudes at \(\ell\lesssim 3000\) (at more than \(2\sigma\)) and a slightly larger amplitude at \(\ell\gtrsim 3000\) with respect to what is predicted by \(\Lambda\)CDM's best-fit to the CamSpec (_Planck_) data. This is yet another (non-parametric) indication that the ACT data seem to favor a scale-invariant, Harrison-Zel'dovich (\(n_{\rm S}\simeq 1\)) spectrum of fluctuations (Handley and Lemos, 2021; Jiang et al., 2022; Corona et al., 2022). At the same time, a larger value for \(n_{\rm S}\) might also imply an increased value of \(H_{0}\)3, through a reduction of the size of the sound horizon (Ye et al., 2021; Jiang and Piao, 2022).
Footnote 3: However, we should note that such a shift in the cosmological parameters, \(H_{0}\to 73~{}{\rm km/s/Mpc}\) and \(n_{s}\to 1\), would typically lead to larger values of \(S_{8}\), worsening the fit to low-redshift (weak-lensing) measurements of the clustering amplitude.
While the LML improvements at \(\ell_{f}\sim 2000\) relate to the overall scaling through \(n_{\rm S}\), the other mode in LML found at \(\ell_{f}\sim 90\) in TE and EE spectra may be closely related to the cosmological parameters affecting the width and height of the acoustic peaks. Roughly speaking, a realisation of a GP with \(\ell_{f}\sim 90\) has a typical full width at half maximum (FWHM) of \(\sim 210\) and is likely to have oscillations that mimic the acoustic peaks in the CMB (\(\Delta\ell\sim 300\)). Indeed, the GP reconstruction of Fig. 5 for EE spectra has oscillation scales similar to those of the differences in the best-fit predictions from ACT+WMAP and CamSpec data (red line in Fig. 1). The features present in our non-parametric reconstructions can therefore be a manifestation of the mild discordance in the \((\omega_{b},\omega_{c})\)-plane between Planck and ACT (seen for instance in Fig. 7 in Balkenhol et al. (2022)), or vice versa.
Figure 5: GP reconstructions for the set of hyperparameters maximizing the log-marginal likelihood in Eq. (3) when using ACT DR4 data and \(\Lambda\)CDM best-fit to CamSpec PR4 as mean function. Solid line and shaded regions correspond to the mean and \(2\sigma\) confidence intervals around it, respectively.
An interesting feature is also captured in the EE reconstructions around \(\ell\sim 400-700\), with milder oscillations extending up to \(\ell\sim 3000\). These might be a slight hint of new features, a manifestation of the mild disagreement in the estimated cosmological parameters, or some unaccounted systematics affecting the low/high-\(\ell\) part of the ACT and/or _Planck_ data. Together with the issue in TT, it might explain why recent analyses reach slightly different conclusions when considering ACT data alone, or performing cuts at a given \(\ell_{\rm max}\) in the _Planck_ data (e.g. Poulin et al., 2021; Hill et al., 2021). Our results seem to support previous findings in the context of \(\Lambda\)CDM and simple extensions (e.g. Handley and Lemos, 2021; Galli et al., 2022; Corona et al., 2022), suggesting that these mild discrepancies are mainly driven by the ACT data and in particular by the TT measurements. Whether such discrepancies arise from physical, systematic, or statistical origin, however, remains to be determined by upcoming (more precise) CMB observations. In particular, the ACT collaboration is soon expected to update their results with the ACT DR6 data release.
#### 3.2.2 SPT-3G
Similarly, we look for structures in the residuals of SPT-3G data, when subtracting \(\Lambda\)CDM's best fit to CamSpec PR4 data. The results are shown in Fig. 6. The posteriors of the temperature auto-correlation (TT) and temperature/polarisation cross-correlation (TE) are in good agreement with the \(\Lambda\)CDM predictions, and the GP finds negligible improvements with respect to \(\Lambda\)CDM; see also Fig. 7. However, this could also be explained by the larger uncertainties in the temperature measurements with respect to _Planck_ (see Fig. 1). The situation is slightly different for EE, suggesting again a bimodal distribution in the \((\sigma_{f}\,,\ell_{f})\)-plane, with typical deviations from a zero mean-function of the order \(\sigma_{f}\lesssim 1\) and with a preferred correlation length of \(\ell_{f}\simeq 30\), indicating that the improvement in fit is likely due to subtle oscillations around the \(\Lambda\)CDM best fit predictions. The improvements in fit with respect to \(\Lambda\)CDM are again reported in the last column of Table 2, with a maximum \(\Delta\chi^{2}=-8.196\) for EE, which is of the same order of magnitude as the improvement in fit found for the ACT data. Curiously, the SPT reconstructions also show a prominent feature at intermediate scales (\(\ell\sim 500-700\)). As discussed before, these oscillations might be linked to the mild differences in the cosmological parameters (such as \(\omega_{b}\) or \(\omega_{c}\)) affecting the width, height and position of the acoustic peaks with respect to the ones inferred by _Planck_. We should mention that the SPT-3G results are overall consistent with those of _Planck_ at the parameter level; see Table IV and Fig. 7 in Balkenhol et al. (2022). However, our results suggest that the very mild differences between the two are mostly driven by the EE measurements.
At this stage, the mild disagreement between the experiments is not statistically significant enough (\(\lesssim 2\sigma\)) to confidently claim a discrepancy, although it is interesting to see that both ACT and SPT seem to prefer additional features in EE with respect to the best-fit \(\Lambda\)CDM predictions to _Planck_, which could be pointing towards a common physical origin. Note that with the arrival of upcoming CMB surveys such as the Simons Observatory, CMB-S4, LiteBIRD and others, the situation might soon improve and might even shed light on the origin of these mild differences.
## 4 Conclusion
In this work, we used Gaussian Processes (GP) to test the consistency of \(\Lambda\)CDM with the most-recent CMB observations. In particular, we tested the robustness of the \(\Lambda\)CDM predictions against the final _Planck_ data release (Rosenberg et al., 2022; Efstathiou and Gratton, 2021; Aghanim et al., 2020) as well as with other ground-based temperature and polarisation measurements by the Atacama Cosmology Telescope (ACT) (Aiola et al., 2020; Choi et al., 2020) and South Pole Telescope (SPT-3G) (Dutcher et al., 2021; Balkenhol et al., 2022) collaborations.
Figure 7: GP reconstructions using the set of hyperparameters maximizing the log-marginal likelihood in Eq. (3) when using SPT-3G data and \(\Lambda\)CDM best-fit to CamSpec PR4 as mean function. Solid line and shaded regions correspond to the mean and \(2\sigma\) confidence intervals around it, respectively.
We find a mild inconsistency between the _Planck_/SPT and ACT results mainly seen in the TT spectra, where the GP finds a non-negligible improvement in fit with respect to the best fit \(\Lambda\)CDM predictions, with indications for a Harrison-Zel'dovich spectrum with \(n_{s}\simeq 1\). This is a non-parametric confirmation of previous results, which supports the idea that the ACT data seem to favor a scale-invariant primordial power spectrum. Additionally, the EE measurements from both ACT and SPT seem to require additional features at intermediate scales (\(\ell\sim 400-700\)), extending up to \(\ell\sim 2500\), which might be pointing towards a common physical origin or minor unknown systematics in the data.
Throughout the main body of this paper, we discussed the results when using \(\Lambda\)CDM's best-fit predictions to the cleanest CamSpec data as the mean function for our Gaussian Process. However, it is known that the conclusions drawn from a GP analysis are highly sensitive to the choice of mean function. Thus, for completeness and to explore possible systematics affecting _Planck_'s TT measurements, we repeated our analysis using \(\Lambda\)CDM's best-fit to the combination of ACT+WMAP data, effectively replacing _Planck_'s measurements with WMAP measurements. The results are presented in Appendix A.1 and our conclusions are stable under such a change in the mean function. Importantly, the TT posteriors for the CamSpec data, shown in Fig. A.3, require large deviations from the mean function (\(\Lambda\)CDM's best fit to ACT+WMAP) and the GP reconstruction yields a major improvement in fit (\(\Delta\chi^{2}\simeq-225\)), which reflects again the discrepancies between ACT and _Planck_ TT measurements under the assumption of a \(\Lambda\)CDM mean function; see also the corresponding reconstructions in Fig. A.1. The TT measurements from SPT-3G seem to support the _Planck_ results, as we find similar posterior distributions for \((\sigma_{f},\ell_{f})\), seen in the bottom left panel of Fig. A.3, suggesting that ACT+WMAP best fit predictions also conflict with TT measurements from SPT. Similarly, as can be seen from the lower panel in Fig. A.2, the EE measurements from ACT still require the aforementioned features around \(\ell\sim 400-700\), regardless of the chosen mean function, which might suggest a physical origin for such oscillations, and which cannot be properly accounted for by the \(\Lambda\)CDM model. Such features, however, are not (yet) statistically significant enough to draw robust conclusions.
To summarise, we tested the consistency of \(\Lambda\)CDM against an array of ground and space-based measurements of the temperature and polarisation anisotropies in the CMB. Overall, our analysis again confirms the robustness of the \(\Lambda\)CDM predictions when confronted with state-of-the-art CMB measurements; although we report a slight mismatch between ACT and _Planck_/SPT-3G results, mainly seen in the TT spectrum and using for the first time a non-parametric approach. The arrival of upcoming CMB experiments such as the Simons Observatory Ade et al. (2019), CMB-S4 Abazajian et al. (2019) and others, will allow for further, more careful exploration of these issues. The method discussed in this work can be readily applied to the upcoming data; hopefully determining whether the mild discrepancies reported here are actually coming from physical, systematic or statistical origin.
## Acknowledgements
The authors would like to thank Erik Rosenberg for providing us the latest CamSpec likelihood used in this analysis. RC would like to thank Adrien La Posta for useful discussions. This work was supported by the high-performance computing cluster Seondeok at the Korea Astronomy and Space Science Institute. DKH would like to acknowledge the support from CEFIPRA grant no. 6704-1.
|
2309.16665 | Modular quantum signal processing in many variables | Despite significant advances in quantum algorithms, quantum programs in
practice are often expressed at the circuit level, forgoing helpful structural
abstractions common to their classical counterparts. Consequently, as many
quantum algorithms have been unified with the advent of quantum signal
processing (QSP) and quantum singular value transformation (QSVT), an
opportunity has appeared to cast these algorithms as modules that can be
combined to constitute complex programs. Complicating this, however, is that
while QSP/QSVT are often described by the polynomial transforms they apply to
the singular values of large linear operators, and the algebraic manipulation
of polynomials is simple, the QSP/QSVT protocols realizing analogous
manipulations of their embedded polynomials are non-obvious. Here we provide a
theory of modular multi-input-output QSP-based superoperators, the basic unit
of which we call a gadget, and show they can be snapped together with LEGO-like
ease at the level of the functions they apply. To demonstrate this ease, we
also provide a Python package for assembling gadgets and compiling them to
circuits. Viewed alternately, gadgets both enable the efficient block encoding
of large families of useful multivariable functions, and substantiate a
functional-programming approach to quantum algorithm design in recasting QSP
and QSVT as monadic types. | Zane M. Rossi, Jack L. Ceroni, Isaac L. Chuang | 2023-09-28T17:58:51Z | http://arxiv.org/abs/2309.16665v1 | # Modular quantum signal processing in many variables
###### Abstract
Despite significant advances in quantum algorithms, quantum programs in practice are often expressed at the circuit level, forgoing helpful structural abstractions common to their classical counterparts. Consequently, as many quantum algorithms have been unified with the advent of quantum signal processing (QSP) and quantum singular value transformation (QSVT), an opportunity has appeared to cast these algorithms as modules that can be combined to constitute complex programs. Complicating this, however, is that while QSP/QSVT are often described by the polynomial transforms they apply to the singular values of large linear operators, and the algebraic manipulation of polynomials is simple, the QSP/QSVT protocols realizing analogous manipulations of their embedded polynomials are non-obvious. Here we provide a theory of modular multi-input-output QSP-based superoperators, the basic unit of which we call a _gadget_, and show they can be snapped together with LEGO-like ease at the level of the functions they apply. To demonstrate this ease, we also provide a Python package for assembling gadgets and compiling them to circuits. Viewed alternately, gadgets both enable the efficient block encoding of large families of useful multivariable functions, and substantiate a functional-programming approach to quantum algorithm design in recasting QSP and QSVT as monadic types.
## I Introduction
Quantum algorithms, persistently strange, remain difficult to design and interpret [1]; correspondingly, great effort has been spent to not only generate algorithms, but formalize the motifs of quantum advantage [2; 3; 4]. Both of these desires have been partially addressed by the advent of quantum signal processing (QSP) [5; 6; 7; 8] and its lifted version quantum singular value transformation (QSVT) [9]. These algorithms allow one to modify the singular values of large linear operators by precisely tunable polynomial functions, unifying and simplifying most known quantum algorithms [10], with good numerical properties [11; 12; 13; 14], and deep, fruitful connections to well-studied matrix factorization techniques [15].
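Before turning to the multivariable setting, it is useful to recall the single-variable primitive concretely. The following sketch (a generic QSP example in the standard Wx convention, not code from the package accompanying this work) builds the QSP unitary for a list of phases and verifies that the trivial all-zero phase sequence embeds a Chebyshev polynomial of the signal in its top-left entry.

```python
import numpy as np

def qsp_unitary(a, phases):
    """QSP product  e^{i phi_0 Z} prod_k [ W(a) e^{i phi_k Z} ]  in the Wx convention,
    with signal unitary W(a) = [[a, i sqrt(1-a^2)], [i sqrt(1-a^2), a]]."""
    s = np.sqrt(1.0 - a**2)
    W = np.array([[a, 1j * s], [1j * s, a]])
    Z = lambda phi: np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = Z(phases[0])
    for phi in phases[1:]:
        U = U @ W @ Z(phi)
    return U

# all-zero phases of length d + 1 embed the Chebyshev polynomial T_d(a) = cos(d arccos a)
d, a = 5, 0.3
P = qsp_unitary(a, np.zeros(d + 1))[0, 0]
assert np.isclose(P, np.cos(d * np.arccos(a)))
```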
In reducing diverse algorithmic problems to statements on the existence of classes of polynomial functions, QSP and QSVT encourage but do not substantiate interpreting quantum computations _functionally_. That is, it becomes tempting to treat these algorithms purely in terms of the functions they apply, directly working within the natural ring over polynomials (generated by addition and multiplication), or the monoid associated with polynomial composition. Such _function-first_ operations have clear semantic interpretation, offer connection to a wealth of preexisting mathematical literature, and connect neatly to existing classical formalisms for the design, verification, and optimization of programs [16; 17; 18].
Limited techniques for treating QSP and QSVT function-first have appeared, answering the following narrow question: given repeatable oracle access to a QSP protocol, and a description of another QSP protocol, can one instantiate the protocol achieving _the composition of the functions achieved by each protocol_ in black-box way? Independent lines of work [19; 20] have answered this positively, with the former enumerating necessary and sufficient conditions under which such composition is possible, and the latter examining recursive composition converging to a useful class of functions. In both works, however, only a single oracle unitary process is considered, and the resulting computations are described only by compositions of polynomials in a single variable, limiting the variety of achievable behavior and concomitant algorithmic use. We also note that such limited methods for self-embedding simple quantum subroutines has a long history [5; 21; 22; 23; 24], with roots in composite pulse theory [25; 26] and classical signal processing [27; 28], albeit without clear functional interpretation. Ultimately, a mature, function-first theory, closed over operations natural to multivariable polynomials (e.g., ring operations, the composition monoid) has not been realized.
In moving to the multiple oracle setting, we are seeking a more ambitious construction than previous work; that is, we want to be able to blithely snap together basic modules, with structure similar to that of QSP, at the level of the functions they apply. We will show this is surprisingly possible, with a bit of additional work, through a careful synthesis of two previous techniques: multivariable QSP (M-QSP) [29] and
an intricate series of powerful results on fractional queries from Sheridan, Maslov, and Mosca [30] extended beautifully to QSVT by Gilyen, Su, Low, and Weibe [9]. In turn, these M-QSP-based examples will then be generalized to a more encompassing theory of _gadgets_. Ultimately, we will be able to succinctly describe and subsume previous work on modular QSP-based algorithms [19; 20], the most obvious bridge between our work and previous work on the level of the aforementioned multivariable quantum signal processing (M-QSP) [29]. M-QSP (Def. II.1), despite having a well-defined series of conditions under which a transform is achievable, has proven unwieldy in practice, as multivariable analogues of the algebraic geometric results characterizing achievable polynomial transforms (chiefy the Fejer-Riesz Theorem [31; 32]) are unavoidably weaker in the multivariable setting. We will show that such barriers, however, do not rule out the existence of independently interesting subroutines strictly subsuming M-QSP, and which allow one to freely manipulate and combine polynomial functions. That is, to yield the same functional class as the one originally desired but disallowed in standard M-QSP, we show it is sufficient to generate algebraic structures over polynomials with rich closures; constructively achieving such closures by expanding the class of circuits considered is a central result of this paper.
The fundamental units of computation we construct are termed _gadgets_, which take as input (possibly multiple) unknown unitary processes, and produce as output (possibly multiple) unitary processes, where outputs depend entirely on multivariable polynomials applied to parameters of the inputs. We highlight an important subclass of gadgets, _atomic gadgets_, which are built directly using M-QSP, and subsequently combined with rich fractional query techniques [9; 30] to realize _snappable gadgets_, which, having been suitably _corrected_, can be freely combined. We provide a characterization of the _syntax_ for combining gadgets into composite gadgets (described by a two-level grammar, which under basic restrictions becomes context-free), as well as a description of valid _semantic_ manipulations for functions achieved by these gadgets (in the form of a monadic type over M-QSP protocols). In providing both a _syntactic_ and _semantic_ theory of gadgets, we provide a formal response to the temptation to view QSP- and QSVT-based modules purely functionally, and embed our constructions within established terminology from functional programming and category theory.
Unlike in the single-oracle setting, the conditions under which functional manipulations are possible within gadgets (for which the works of [19; 20] consider a special case) do not depend solely on simple circuit symmetries; consequently, the bulk of the appendices of this work are spent specifying detailed functional analytic arguments on the existence and performance of special _correction protocols_ by which the output of a given gadget can be _corrected_ such that it can be connected to the input of another gadget while preserving the desired functional manipulation. We show that such gadget connections are generally ill-conditioned without correction, and we prove that correction generically cannot be done exactly, only to arbitrary precision. In fact, it is this softened condition of approximate achievability that enables the huge variety of functional transforms. We emphasize that unlike with other methods for generating composite superoperators, like linear combination of unitaries (LCU), we maintain both stringent space requirements and rapid convergence (polylogarithmic in inverse error) to intended functional transforms. These constraints together constitute the core difficulty of this work, though once settled, they can be suppressed, with the resulting gadgets (and simple methods for computing their costs) relied on for algorithm design instead.
The results of this work can be taken at multiple levels of abstraction. At the lowest, we achieve highly space- and query-efficient block encodings of multivariable polynomials in commuting linear operators, asserting that QSVT can retain key advantages over alternative methods like linear combination of unitaries (LCU) [33; 34] in the multivariable setting (this fact is detailed in Appx. B). Slightly higher, this work solidifies and greatly expands the approach of [19] in treating QSP and QSVT protocols in the language of functional programming through the composite objects of _gadgets_. We can rephrase our results as instantiating a _monad_ over QSP and QSVT _types_, and situate this monad in quantum programming language theory, with possible compositions described by simple formal grammars. To support both of these interpretations, we provide and highlight a series of concrete, linked examples of achievable multivariable functions, as well as a (beta) Python package for assembling gadgets and compiling their circuits, linked here: [pyqsp].
Our presentation is generally divided into two components (the main body, Secs. I-V, and the appendices, A-I); in this main body, we provide the construction for _gadgets_ (Sec. II) (of which there are sub-types, Appx. D.3), define and discuss the connection of these gadgets into composite gadgets (Sec. III), and provide a series of linked concrete examples of useful functions achieved by assembling gadgets (Sec. IV), with detailed cost analysis (Appx. C). Within the second major division, we provide a series of appendices which detail the main proofs of theorems made in the main body (Appx. A), detailed analysis of the aforementioned correction protocols (Appxs. E and F), detailed analysis of the non-obvious method for computing gadget cost (Appx. C), discussion of how gadgets instantiate monadic functions and relate to functional programming (gadget _semantics_, Appxs. I.1 and I.2), and finally discussion of a formal language for gadgets constituted by an attribute grammar (gadget _syntax_, Appx. I.3).
## II Motivating and constructing gadgets
We have claimed that gadgets, properly constructed, can enable the efficient achievement of multivariable polynomial transformations and a modular approach to assembling quantum algorithms. This is shown explicitly in this section, by building from M-QSP and its combination with fractional queries to create _atomic gadgets_, a remarkably useful subclass of gadgets. We use and extend fractional query techniques [9; 30] to allow certain imperfections in atomic gadgets to be corrected, leading to the fundamental unit of our construction, the _snappable gadget_, which we suitably generalize beyond explicit reference to QSP. For the moment, however, we recall that QSP circuits prescribe how to take simply parameterized unknown unitary inputs and produce as output unitaries with properties non-linearly dependent on, and carefully tuneable with respect to, the input unitaries [9]. Crucially, within QSP, input unitaries are highly constrained (in fact rotations about a fixed, known axis), while the output unitaries are not; even worse, if one were to consider a QSP-like ansatz depending on multiple unknown unitaries as input, not only can the form of the output unitary depend _on the function applied_, but also on the _relation between each of the many unknown inputs_. At a high level it is this _mismatch of basis_ between input and output unitaries in QSP (and inherited in singular vector subspaces in QSVT) which prevents simple composition of protocols at the level of the functions they achieve.
The core result of this work (Thm. III.2) lies in a terse equivalence between general compositions of multivariable polynomials and general compositions of gadgets (themselves based on QSP-like components) achieving these polynomials. Toward gathering the components to state this equivalence, we first build off of a QSP-related circuit to generate a powerful subclass of gadgets, followed by identifying key issues in their compositions, and special protocols to resolve these issues.
To start off, we recall the general definition of an _M-QSP protocol_[29], of which a standard QSP protocol is a special case. For the moment we will not need to know any properties of this algorithm (covered in Appx. D for the curious) besides its simple circuit form, other than to note that the problems M-QSP protocols exhibit when we attempt to combine them as modules will showcase shortcomings eventually resolved by the definition of _gadgets_ and _correction protocols_ given later in this section. These problems also distinguish the general setting from the single variable setting, where the former is substantively more difficult to address.
**Definition II.1** (M-QSP protocol).: Let \(\Phi\equiv\{\phi_{0},\ldots,\phi_{n}\}\in\mathbb{R}^{n+1}\) and \(s\in[t]^{\times n},\,t\in\mathbb{N}\), where \([t]=\{0,\ldots,t-1\}\). Then the \(t\)-variable M-QSP protocol specified by \((\Phi,s)\) generates the unitary circuit
\[\Phi[U_{0},\cdots,U_{t-1}]\equiv e^{i\phi_{0}\sigma_{z}}\prod_{k=1}^{n}U_{s_{k}}e^{i\phi_{k}\sigma_{z}}, \tag{1}\]
where \(U_{0},\cdots,U_{t-1}\) are unitaries of the form \(U_{k}\equiv e^{i\theta_{k}\sigma_{x}},k\in[t]\) for unrelated \(\theta_{k}\) (see Def. II.2). Here \(\sigma_{z},\sigma_{x}\) are the standard Pauli matrices. We say an M-QSP protocol is _antisymmetric_ if both \((N\circ R)(\Phi)=\Phi\) and \(R(s)=s\), where \(R,N\) reverse and negate lists of scalars respectively, with equality taken elementwise (when \(\Phi\) contains zeros and the oracles can commute past each other, this definition can be replaced by one stating \(\Phi[U_{0},\cdots,U_{t-1}]=\Phi[U_{0},\cdots,U_{t-1}]^{\dagger}\)). Often \(s\) will be suppressed when denoting \((\Phi,s)\) as a superoperator (see Appx. D). An M-QSP protocol _achieves_ a function \(f\), where \(f\equiv\langle 0|\Phi[U_{0},\cdots,U_{t-1}]|0\rangle\) is a function over scalars (namely those \(x_{k}\equiv\cos(\theta_{k})\)) parameterizing the oracles.
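For concreteness, Eq. (1) is straightforward to simulate numerically for small instances. The sketch below is our own minimal helper (the names `rz`, `rx`, `mqsp_unitary`, and `achieved` are ours, not part of the accompanying package): it assembles the circuit of Eq. (1) from the phases, the index string, and the oracle angles, and reads off the achieved function as the top-left matrix element.

```python
# Minimal numerical sketch of Def. II.1 (helper names are ours, not the package API).
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def rz(phi):
    """e^{i phi sigma_z}."""
    return np.cos(phi) * I2 + 1j * np.sin(phi) * Z

def rx(theta):
    """e^{i theta sigma_x}: the constrained 'embeddable' oracle form."""
    return np.cos(theta) * I2 + 1j * np.sin(theta) * X

def mqsp_unitary(phases, s, thetas):
    """Circuit of Eq. (1): e^{i phi_0 Z} * prod_{k=1}^{n} U_{s_k} e^{i phi_k Z}."""
    U = [rx(t) for t in thetas]
    W = rz(phases[0])
    for k in range(1, len(phases)):
        W = W @ U[s[k - 1]] @ rz(phases[k])
    return W

def achieved(phases, s, thetas):
    """The achieved function f = <0| Phi[U_0, ..., U_{t-1}] |0>."""
    return mqsp_unitary(phases, s, thetas)[0, 0]
```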
To understand the problem eventually solved by gadgets, we recall one of the insights of previous work on embedding QSP-based circuits within themselves [19]. In this work, it was shown how to iteratively nest _antisymmetric_ single-variable QSP protocols so as to compose their achieved functions. Such compositions can be realized by successively using the QSP unitary produced by some protocol \(\Phi_{k}\) as the signal operator of another QSP protocol \(\Phi_{k+1}\), continuing perhaps for some collection of protocols \(\{\Phi_{0},\ldots,\Phi_{n-1}\}\). The admissibility of this recursion follows from the fact that the polynomial \(P\) achieved by an antisymmetric single-variable QSP protocol \(\Phi\) is invariant under _twisting_ of its oracle. More specifically, for any unitary \(U\) and any \(\sigma_{z}\)-rotation \(e^{i\varphi\sigma_{z}}\), \(\Phi[e^{i\varphi\sigma_{z}}Ue^{-i\varphi\sigma_{z}}]=e^{i\varphi\sigma_{z}} \Phi[U]e^{-i\varphi\sigma_{z}}\). Thus for a sequence of antisymmetric QSP protocols \(\{\Phi_{0},\ldots,\Phi_{n-1}\}\) the \((k+1)\)-th protocol automatically treats the function achieved by the \(k\)-th protocol as a variable for the function achieved by \(\Phi_{k+1}\). The output of each protocol remains in some sense _embeddable_, and the composability of functions follows. Below, we clarify this notion of embeddability (introducing a few named types).
**Definition II.2** (Variations on embeddable unitaries).: An SU(2) unitary \(U\) is _embeddable_ if it has the form \(e^{i\theta\sigma_{x}}\) for some \(\theta\in[0,\pi]\). A unitary is \(\varepsilon\)-embeddable if it is at most \(\varepsilon\)-far (in operator norm) from
an embeddable unitary. An SU(2) unitary \(U\) is _twisted embeddable_ (_half-twisted embeddable_) if it has the form \(e^{i\varphi\sigma_{z}/2}e^{i\theta\sigma_{x}}e^{-i\varphi\sigma_{z}/2}\) for some \(\theta\in[0,\pi]\) and \(\varphi\in[-\pi,\pi]\) (\(\varphi\in[-\pi/2,\pi/2]\)). A unitary \(U\) is \(\varepsilon\)-twisted embeddable (\(\varepsilon\)-half-twisted embeddable) if it is at most \(\varepsilon\)-far from a twisted embeddable (half-twisted embeddable) unitary over a specified domain. Given a twisted or half-twisted embeddable unitary of the form \(e^{i\varphi\sigma_{z}/2}e^{i\theta\sigma_{x}}e^{-i\varphi\sigma_{z}/2}\), we refer to \(e^{i\theta\sigma_{x}}\) as its _embeddable component_.
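As a concrete companion to Def. II.2, the parameters of a (nearly) twisted embeddable unitary can be read off from its matrix elements, since such a unitary has top-left entry \(\cos\theta\) and top-right entry \(i\sin\theta\,e^{i\varphi}\). The short sketch below (helper names are ours) recovers \((\theta,\varphi)\) and reports the distance to the \(\sigma_{x}\)-rotation with the recovered \(\theta\), which upper-bounds the distance to the nearest embeddable unitary.

```python
# Sketch (ours): recover (theta, phi) of a near-twisted-embeddable SU(2) unitary.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def twisted_params(U, tol=1e-9):
    theta = np.arccos(np.clip(U[0, 0].real, -1.0, 1.0))       # U[0,0] = cos(theta)
    st = np.sin(theta)
    phi = 0.0 if st < tol else np.angle(U[0, 1] / (1j * st))  # U[0,1] = i sin(theta) e^{i phi}
    return theta, phi

def embeddable_distance(U):
    """Operator-norm distance from U to e^{i theta sigma_x} with the recovered theta;
    this upper-bounds the distance to the nearest embeddable unitary (Def. II.2)."""
    theta, _ = twisted_params(U)
    target = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * X
    return np.linalg.norm(U - target, ord=2)
```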
In what follows we show that maintaining such embeddability is vital for a variety of desired computational tasks and discuss some subtleties of the definitions contained in Def. II.2. To begin, we highlight a pedagogical example: consider two antisymmetric, single-variable QSP protocols \(\Phi_{0}\) and \(\Phi_{1}\) and signal operators \(e^{i\theta_{0}\sigma_{x}}\), \(e^{i\theta_{1}\sigma_{x}}\). By Thm. D, \(\Phi_{0}[e^{i\theta_{0}\sigma_{x}}]\) and \(\Phi_{1}[e^{i\theta_{1}\sigma_{x}}]\) are twisted embeddable, with
\[\Phi_{0}[e^{i\theta_{0}\sigma_{x}}]=e^{i\varphi_{0}\sigma_{z}/2}e^{i\theta_{0 }^{\prime}\sigma_{x}}e^{-i\varphi_{0}\sigma_{z}/2}\quad\text{and}\quad\Phi_{1 }[e^{i\theta_{1}\sigma_{x}}]=e^{i\varphi_{1}\sigma_{z}/2}e^{i\theta_{1}^{ \prime}\sigma_{x}}e^{-i\varphi_{1}\sigma_{z}/2}. \tag{2}\]
Suppose \(\Phi_{0}\) and \(\Phi_{1}\) achieve functions \(f_{0}\) and \(f_{1}\), with \(\theta_{0}^{\prime}=f_{0}(\cos(\theta_{0}))\) and \(\theta_{1}^{\prime}=f_{1}(\cos(\theta_{1}))\). Given a two-variable antisymmetric M-QSP protocol \(\Phi\) achieving the function \(g(x_{0},x_{1})\), in the general case,
\[\left\langle 0\right|\Phi\left[\Phi_{0}[e^{i\theta_{0}\sigma_{x}}],\Phi_{1}[e ^{i\theta_{1}\sigma_{x}}]\right]\left|0\right\rangle\neq g\left(f_{0}(\cos( \theta_{0})),f_{1}(\cos(\theta_{1}))\right). \tag{3}\]
This is entirely due to the conjugation of the embeddable components of \(\Phi_{0}[e^{i\theta_{0}\sigma_{x}}]\) and \(\Phi_{1}[e^{i\theta_{1}\sigma_{x}}]\) by _different_ \(\sigma_{z}\)-rotations. Because \(\varphi_{0}\neq\varphi_{1}\), it is no longer possible to factor the \(\sigma_{z}\)-rotations out of the overall M-QSP protocol in the same way as in the single-variable case. In single-variable protocols, accumulating \(\sigma_{z}\)-conjugations by successively composing antisymmetric QSP protocols had no effect on the achieved function. Here, one finds
\[\Phi[e^{i\varphi\sigma_{z}}e^{i\theta\sigma_{x}}e^{-i\varphi\sigma_{z}}]=e^{i\varphi\sigma_{z}}\Phi[e^{i\theta\sigma_{x}}]e^{-i\varphi\sigma_{z}}\Longrightarrow\left\langle 0\right|\Phi[e^{i\varphi\sigma_{z}}e^{i\theta\sigma_{x}}e^{-i\varphi\sigma_{z}}]\left|0\right\rangle=\left\langle 0\right|\Phi[e^{i\theta\sigma_{x}}]\left|0\right\rangle. \tag{4}\]
Therefore, in the multivariable case, we require a technique for suppressing the disagreeing \(\sigma_{z}\)-conjugations of Eq. (2), leaving the relevant embeddable components such that there is no interference with the outer M-QSP protocol. Such a correction would turn Eq. (3) into an approximate equality. Consequently we identify a concrete question: is it possible to map a twisted embeddable unitary \(U\) with unknown \(\varphi\) and \(\theta\) to its embeddable component? In other words, can we _align_ the axes of rotation for two (or more) unitary oracles in an efficient, black-box way? We provide an affirmative answer to this question, up to slightly weakened conditions, and call this process of alignment _correction_.
**Lemma II.1** (Efficient correction of \(\varepsilon\)-twisted embeddable unitaries).: Let \(U\) be a \(\nu\)-twisted embeddable unitary, so \(U\) is \(\nu\)-close to \(U^{\prime}=e^{i\varphi\sigma_{z}/2}e^{i\theta\sigma_{x}}e^{-i\varphi\sigma_{z}/2}\). Suppose \(\cos(\theta)\in[-1+\delta,1-\delta]\). Let \(\varepsilon>0\). Then, given controlled access to \(U\), there exists a quantum circuit using \(\zeta=\mathcal{O}(\delta^{-1}\log(\varepsilon^{-2}\delta^{-1/2}))\) black-box calls to \(U\) and a single ancilla qubit which yields a \((\zeta\nu+\varepsilon)\)-embeddable unitary \(V\), which is at most \((\zeta\nu+\varepsilon)\)-far from \(e^{i\theta\sigma_{x}}\) (the embeddable component of \(U^{\prime}\)) with success probability at least \((1-(\zeta\nu+\varepsilon))^{2}\). Alternatively, given access to two ancilla qubits and uncontrolled oracle access to \(U\), it is possible to implement a circuit which achieves the same \((\zeta\nu+\varepsilon)\)-approximation with the same asymptotic query complexity and success probability, on the domain \(\cos(\theta)\in[\delta,1-\delta]\). Finally, if \(U\) is \(\nu\)-half-twisted embeddable, and satisfies the same conditions, with the additional constraint that \(\cos(\varphi)\in[\gamma,\sqrt{1-\gamma^{2}}]\) (or \(\varphi\in[-\pi/2+2\gamma,-2\gamma]\cup[2\gamma,\pi/2-2\gamma]\)), there exists a quantum circuit using \(\zeta^{\prime}=\mathcal{O}(\delta^{-1}\gamma^{-2}\log^{2}(\gamma^{-4}\varepsilon^{-2}\delta^{-1/2}))\) black-box calls to \(U\) and zero ancilla qubits which yields a \((\zeta^{\prime}\nu+\varepsilon)\)-embeddable unitary, at most \((\zeta^{\prime}\nu+\varepsilon)\)-far from the embeddable component \(e^{i\theta\sigma_{x}}\), with unit success probability. In all cases, the quantum circuits effectuating the corrections can be specified constructively and efficiently.
Lem. II.1 shows that given an approximately embeddable unitary, (1) the error accumulated during its correction is proportional to the error of the input, and (2) the constant of proportionality is linear in the length of the correction procedure. This scaling is not as poor as it first appears, mainly due to the ease with which \(\nu\) can be made small and the limited number of correction procedures that will be used by any physically reasonable protocols. We call such protocols _acceptable_ if their length scales at worst inverse polynomially in the desired output error, and argue for the reasonableness of this class of protocols in the context of the efficiency of Lem. II.1 in the expanded Rem. A.1. The resource costs for the correction procedures under different access model assumptions are elaborated upon further in Appxs. B and H (for a summary, see Table 1).
The correction procedure described enables us to take M-QSP protocols, apply corrective protocols to their output, and pass these outputs into other M-QSP protocols, such that the function achieved is the
multivariate composition of the functions achieved by the individual protocols. The closure over unitary superoperators achieved by arbitrarily composing sequences of M-QSP unitaries and corrections is what will be formalized by the generic notion of a _gadget_. Below, we give two definitions for gadgets; the first, the _atomic gadget_ (Def. II.3), is built directly from M-QSP protocols, while the second, the _gadget_ (Def. II.4), captures the most general relevant definition for our purposes. Under special conditions (discussed in detail in Sec. D.3), atomic gadgets form a strict subset of gadgets, and we will work with such atomic gadgets exclusively. In the following section, we show that such gadgets can be successively composed to yield composite gadgets describing complex, expressive quantum computations and achieving general and interpretable functional transforms. Moreover, gadget objects have a highly interpretable form; we describe their permitted combinations by attribute grammars (a _syntax_ for gadgets), and their functional action by monadic functions over QSP/QSVT types (a _semantics_ for gadgets).
**Definition II.3** (Atomic \((a,b)\) gadget).: Let \((\Xi,S)\) be a tuple with \(\Xi\equiv\{\Phi_{0},\ldots,\Phi_{b-1}\}\) where \(\Phi_{k}\in\mathbb{R}^{r_{k}+1},r_{k}\in\mathbb{N}\) and \(S\equiv\{s_{0},\ldots,s_{b-1}\}\) where \(s_{k}\in[a]^{r_{k}}\) and \([a]=\{0,\ldots,a-1\}\). Then the atomic \((a,b)\) gadget labelled by \((\Xi,S)\) refers to the parameterization of \(b\) M-QSP protocols using \(a\) single-qubit oracles \(U_{k}\), where each of the \(b\) output unitaries \(\Phi_{k}[U_{0},\ldots,U_{a-1}],k\in[b]\) has parameterization \((\Phi_{k},s_{k})\) following Def. II.1. A gadget is _antisymmetric_ if its constituting \((\Xi,S)\) generate only antisymmetric M-QSP protocols [19]. Note that not all _atomic gadgets_ are _gadgets_ (Def. II.4), as discussed in Sec. D.3, though atomic gadgets considered in this work (e.g., antisymmetric gadgets) _will_ be.
**Definition II.4** (\((a,b)\) gadget).: An \((a,b)\) gadget \(\mathfrak{G}\) is a superoperator which takes as input \(a\) single-qubit embeddable oracles \(U_{k}\) for \(k\in[a]\), and outputs \(b\) twisted-embeddable unitaries \(U^{\prime}_{k}\) for \(k\in[b]\), denoted by \(U^{\prime}_{k}=\mathfrak{G}[U_{0},\ldots,U_{a-1}]_{k}\). Note that _antisymmetric_ atomic \((a,b)\) gadgets (Def. II.3) are a strict subset of \((a,b)\) gadgets (Thm. D.7). However, there also exist other circuit-based methods for constructing gadgets from M-QSP (Def. D.2). A gadget \(\mathfrak{G}\) is said to achieve \(F\equiv\{f_{0},\ldots,f_{b-1}\}\) where \(f_{k}\equiv\langle 0|U^{\prime}_{k}|0\rangle\) is the top-left matrix element of the \(k\)-th output unitary given by applying \(\mathfrak{G}\) to a set of oracles. In the case that the input oracles are parameterized by variables \(x_{0},\ldots,x_{a-1}\), each output unitary \(U^{\prime}_{k}\) and each \(f_{k}\) can be treated as functions \(U^{\prime}_{k}(x_{0},\ldots,x_{a-1})\) and \(f_{k}(x_{0},\ldots,x_{a-1})\).
**Remark II.1** (Gadget terminology).: Going forward, if an \((a,b)\) gadget \(\mathfrak{G}\) is said to achieve output unitaries \(U^{\prime}_{k}(x_{0},\ldots,x_{a-1}),k\in[b]\) and functions \(f_{k}(x_{0},\ldots,x_{a-1}),k\in[b]\), it is meant that the gadget achieves these functions for input unitaries \(U_{k}=e^{i\theta_{k}\sigma_{x}},k\in[a]\), where each \(x_{k}=\cos(\theta_{k}),k\in[a]\) is possibly restricted to a fixed domain specified along with the gadget.
In many cases, a gadget will achieve its output unitaries \(U^{\prime}_{k}\) via some sequence of products and interspersed quantum (and possibly even classical) operations. The only criterion on the input oracles is that the gadget sees them as black-boxes, which can only be utilized through quantum circuit queries. When we refer to the _cost_ associated with a gadget \(\mathfrak{G}\), we are referencing the number of black-box queries that must be made to the input oracles to achieve some particular output unitary, possibly at a specified precision over a specified parameter domain. Detailed discussion of cost and its associated objects for gadgets is restricted to Appx. C.
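For readers who want to track gadgets classically while designing assemblages, a gadget in the sense of Def. II.4 can be bookkept by its leg counts together with the tuple of functions it achieves. The dataclass below is a minimal sketch of this bookkeeping (class and field names are ours); it records achieved functions only and says nothing about the realizing circuit, its domain restrictions, or its cost.

```python
# Bookkeeping sketch (ours) for Def. II.4: an (a, b) gadget viewed only through the
# functions F = (f_0, ..., f_{b-1}) it achieves on x in [-1, 1]^a.
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class Gadget:
    a: int                                  # number of input legs (oracles)
    b: int                                  # number of output legs
    F: Sequence[Callable[..., float]]       # achieved function per output leg

    def achieve(self, *x: float) -> Tuple[float, ...]:
        assert len(x) == self.a
        return tuple(f(*x) for f in self.F)

# e.g. the (2,1) multiplication gadget discussed later (Ex. IV.3), recorded by its function
mult = Gadget(a=2, b=1, F=[lambda x0, x1: (2 * x0**2 - 1) * x1])
```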
We now present the key theorem which describes how the correction protocol of Lem. II.1 enables _snappable gadgets_ to be constructed from general gadgets. Snappable gadgets are engineered such that their outputs are embeddable (not merely twisted-embeddable) and may be passed as input to another gadget in a black-box way while respecting achieved functions. Note that going forward we let \(\widetilde{\mathcal{O}}\) indicate asymptotic complexity up to leading order/logarithmic factors in each independent variable.
**Theorem II.1** (Efficient correction of unitaries produced by \((a,b)\) gadgets).: Let \(\varepsilon,\delta>0\) and \(\mathfrak{G}\) be an \((a,b)\) gadget whose output unitaries are guaranteed to be \(\nu\)-twisted-embeddable: they are \(\nu\)-close to unitaries \(e^{i\varphi\sigma_{z}/2}e^{i\theta_{k}\sigma_{x}}e^{-i\varphi\sigma_{z}/2}\) whose embeddable components encode \(\cos\theta_{k},k\in[b]\) within \([-1+\delta,1-\delta]\). Assuming an access model in which one can query the _controlled_ single-qubit unitaries achieved by the output legs of \(\mathfrak{G}\), then given \(k\in[b]\), there exists a computable \(\nu^{\prime}=\widetilde{\mathcal{O}}(\delta\varepsilon)\) and a quantum circuit using \(\zeta=\widetilde{\mathcal{O}}(\delta^{-1}\log(\varepsilon^{-1}))\) black-box calls to the controlled unitaries produced by executing \(\mathfrak{G}\) on the input oracles, as well as one ancilla qubit, such that if \(\nu\leq\nu^{\prime}\) then the circuit achieves an \(\varepsilon\)-approximation of \(U^{\prime}_{k}=e^{i\theta_{k}\sigma_{x}}\) (the embeddable component of the \(k\)-th output unitary of \(\mathfrak{G}\)), with success probability at least \((1-\varepsilon)^{2}\), for all \(k\). Alternatively, given access to two ancilla qubits and access only to uncontrolled \(U\) queries, the same outcome can be achieved with the same asymptotic complexity and success probability over the domain \(\cos(\theta_{k})\in[\delta,1-\delta]\). Finally, in the case that the output legs are all \(\nu\)-half twisted embeddable with \(\cos(\varphi)\in[\gamma,\sqrt{1-\gamma^{2}}]\), the same can be done with no extra space and unit success probability using \(\zeta=\widetilde{\mathcal{O}}(\delta^{-1}\gamma^{-2}\log^{2}(\varepsilon^{-1}))\) black-box calls to \(\mathfrak{G}\) and the guarantee that \(\nu\leq\nu^{\prime\prime}\), for a computable \(\nu^{\prime\prime}=\widetilde{\mathcal{O}}(\delta\gamma^{2}\varepsilon)\). Each protocol described above can be constructed efficiently and obliviously to the output unitaries of the gadget.
Essentially this theorem (proven in Appx. A) applies Lem. II.1 to each output leg of a gadget, obtaining our desired endpoint for this section: _snappable gadgets_. While the following definition is somewhat innocuous, in the following we show that such gadgets, when assembled, induce natural and simple-to-track algebraic manipulations of their achieved transforms.
**Definition II.5** (Snappable gadget).: A gadget is said to be \(\varepsilon\)-\(\delta\)-_snappable_ if each of its output legs, over all unitary inputs with the form of \(\sigma_{x}\)-rotations by an angle \(\theta\) such that \(\delta<|\cos{(\theta)}|<1-\delta\), produces a unitary which is \(\varepsilon\)-close to a \(\sigma_{x}\)-rotation in norm.
## III Composing QSP gadgets
The gadgets of Defs. II.3 and II.4 and the correction procedure of Thm. II.1 bring us to _snappable gadgets_. Such gadgets can finally be freely assembled into composite gadgets to build complex, useful functional transforms; in this section we formally define this assembly, and enumerate its properties. The desired endpoint of this section, the statement of Thm. III.2, shows a neat equivalence between a series of (partially) composed polynomials achieved by individual gadgets, and a structured network of those gadgets. This section begins by focusing on the linkages of these networks, captured in Thm. III.1, such that Thm. III.2 can be used to snap together a series of example gadgets with LEGO-like ease in Sec. IV.
As mentioned, we show in this section that gadgets permit a simple diagrammatic representation of their semantics, depicted in Fig. 1; in this way, complex functions can be built hierarchically out of simpler ones and visually reasoned about in an intuitive way. More specifically, we can think of so-called \((a,b)\) gadgets as boxes with _a input legs_, representing oracular unitary inputs, and _b output legs_, representing output unitaries (these boxes are shown in (c) of Fig. 1 for \(a=b=2\)). To flesh out this diagrammatic language, we present a theorem (Thm. III.1) enumerating the valid ways to link gadgets together, and translate the effect of such compositions into statements on algebraic manipulations over each gadget's achieved polynomial functions. Diagrammatically, linking gadgets is depicted by joining input and output legs. In this way, as exemplified in the _Rosetta Stone_ diagram of Fig. 1, one can freely reason about assemblages of gadgets in terms of any among (a) the circuits realizing them, (b) the functional manipulations they achieve, or (c) the linkages between boxes with semantically simple input and output legs. While not discussed in this section explicitly, the syntax of gadget assemblages follows a formal grammar (Appx. I.3), while the semantics of achievable functional manipulations is described by the instantiation of a monadic type (Appx. I.2). We finally note briefly that depicting quantum superoperators as such acyclic graphs is not entirely new, as previous works considering quantum combs (circuits with open slots) have expressed them in terms of simple _causal networks_[35, 36, 37]; the difference between those works' general prescription and ours is that our networks depict semantic properties of interactions between achieved functional transforms, and we require careful constraints on allowed unitary inputs. For this reason, as well as ease of description for higher-order operations over gadgets defined later, relying on causal networks rather than quantum combs to depict our superoperators leads to an expedient, clarified visual language.
We begin our description of gadget compositions with a definition.
**Definition III.1** (Interlink for gadgets).: Let \(\mathfrak{G}\) and \(\mathfrak{G}^{\prime}\) be \((a,b)\) and \((c,d)\) gadgets respectively. An _interlink_ between these gadgets specifies a way to validly connect them. Take \([b],[c]\) to be the length-\(b\) and length-\(c\) (zero-indexed) ordered lists of labels of the outputs of \(\mathfrak{G}\) and the inputs of \(\mathfrak{G}^{\prime}\) respectively. An _interlink_ is a three element list \(\mathfrak{I}\equiv(B,C,W)\) with the following prescription: \(B\) is a sublist of \([b]\) of size \(e\in\{0,1,\cdots,\min{(b,c)}\}\), \(C\) is a sublist of \([c]\) also of size \(e\), and \(W\) is a member of \(S_{e}\) the permutation group over \(e\) elements.
We can use this definition to precisely state the following theorem on the general combinations of gadgets. This theorem shows that gadgets can be composed in a nearly unconstrained way, snapped together like LEGOs, with well-understood associated cost and functional action.
**Theorem III.1** (Composing a gadget with an atomic gadget).: Let \(\varepsilon,\delta>0\), \(\mathfrak{G}\) be an \((a,b)\) gadget and \((\Xi,S)\) an antisymmetric atomic \((c,d)\) gadget, where \(\Xi\equiv\{\Phi_{0},\ldots,\Phi_{d-1}\}\) and \(S\equiv\{s_{0},\ldots,s_{d-1}\}\). Suppose the gadget \(\mathfrak{G}\) achieves
\[F(x)\equiv\{f_{0}(x_{0},\cdots,x_{a-1}),f_{1}(x_{0},\cdots,x_{a-1}),\ldots,f_ {b-1}(x_{0},\cdots,x_{a-1})\}\in(\mathbb{R}^{a}\to\mathbb{R}^{b}) \tag{5}\]
over \(x\in[-1,1]^{\times a}\), and the atomic gadget \((\Xi,S)\) achieves
\[G(y)\equiv\{g_{0}(y_{0},\cdots,y_{c-1}),g_{1}(y_{0},\cdots,y_{c-1}),\ldots,g_ {d-1}(y_{0},\cdots,y_{c-1})\}\in(\mathbb{R}^{c}\to\mathbb{R}^{d}) \tag{6}\]
over \(y\in[-1,1]^{\times c}\). Let \(\mathfrak{I}=(B,C,W)\) be an interlink between these gadgets. Then, there exists a gadget \(\mathfrak{G}^{\prime}\) which \(\varepsilon\)-approximately achieves
\[H(x,y^{\prime})\equiv\bigcup_{k\in[d]}g_{k}\left(\bigcup_{j\in B}f_{W(j)}(x_{0},\ldots,x_{a})\cup\bigcup_{k\notin C}y_{k}\right)\cup\bigcup_{k\notin B}f_{k}(x _{0},\ldots,x_{a}) \tag{7}\]
over \((F(x),y^{\prime})\in\mathcal{D}\), where \(y^{\prime}\) is the subset of \(y_{k}\) such that \(k\notin C\) and \(\mathcal{D}\) is a domain determined by the correction procedure utilized. The set union symbol is abused to mean concatenation of lists with respect to the pre-established order of the labels of the relevant input and output _lists_ (not sets), and where \(W(j)\) is the result of the application of the specified permutation applied to the _index_ of the subset member \(j\) in the ordered sublist \(C\) of \([c]\). Moreover, \(\mathfrak{G}^{\prime}\) can be constructed efficiently, and uses only a description of \((\Xi,S)\) and a total of \(\widetilde{\mathcal{O}}(d|\Xi|_{\infty}\,\zeta)\) black-box calls to the _unitaries produced by running_\(\mathfrak{G}\) to realize the function \(H\).
Note \(\zeta\), the cost of correcting the gadget \(\mathfrak{G}\), is precisely the cost to make \(\mathfrak{G}\) _snappable_, and this snappability enables the simple compositional form of Eq. (7). \(\zeta\) is one among two choices presented in Thm. II.1, with the variables \(\varepsilon\) and \(\delta\) carrying over. Generally, \(\zeta=\widetilde{\mathcal{O}}(\operatorname{polylog}(\varepsilon^{-1})\operatorname{poly}(\delta^{-1}))\), with possible polynomial scaling in a parameter \(\gamma\), given in Thm. II.1, as well. This choice also dictates whether \(\mathcal{O}(1)\) or zero ancilla qubits are required to perform this composition, and whether the success probability of achieving each individual function is at least \((1-\varepsilon)^{2}\), or \(1\). \(|\Xi|_{\infty}\) is the maximum length of lists within \(\Xi\). For a proof of this result, see Appx. A.
**Remark III.1** (Arbitrary gadget interlinks).: While it is possible to consider interlinks between _arbitrary_ gadgets, rather than assuming _a priori_ that the second gadget is atomic, it is not possible to make a general claim about the required complexity to implement such a composition. For a generic gadget, treated as a black box accepting and outputting unitaries, one cannot know the required query complexity of the input legs in order to achieve a given output, with some precision, without knowledge of exactly how the input legs are queried within the gadget: behavior which can, in principle, vary dramatically between different gadgets. We do know this internal structure for the special case of atomic gadgets (they are collections of M-QSP circuits). Nevertheless, characterization of interlinks between gadgets and atomic gadgets allows for full characterizations of arbitrary "gadget networks" of many interlinks along with associated corrections (as an atomic gadget which has been corrected may no longer strictly be an atomic gadget, due to the possible introduction of ancilla qubits). As a result, Thm. III.1 is sufficiently powerful for all practical purposes within the scope of this paper.
Figure 1: A gadget _Rosetta Stone_ giving three depictions of the same self-composition of a \((2,2)\) gadget (Def. II.3). Arrows indicate direct substitution, of (a) quantum circuits (time going left to right), (b) polynomials, and (c) gadget legs, where \(A,B\) are slots (places where an oracle unitary can be substituted) and \(\Phi\) are programmable, generally distinct \(\sigma_{z}\) rotations. The (Laurent) polynomial transforms achieved by the two protocols, \(P_{0},P_{1}\), are real-valued with real arguments, and the correction protocol of Thm. II.1 is implicitly interspersed between substitutions (represented explicitly by dots in Fig. 2). Note that the circuit (a) and functional (b) depictions are different from the _gadget_ (c) depiction given, e.g., in Figs. 2 and 4 and elsewhere; this difference is detailed in Rem. I.2.
Via Thm. III.1, an interlink \(\mathfrak{I}\) can be thought of as an operator over gadgets, mapping a pair of gadgets to a single gadget with a possibly new type
\[(a,b),(c,d)\to(a+c-e,b+d-e), \tag{8}\]
for any \(e\in\{0,1,\cdots,\min{(b,c)}\}\).
**Remark III.2**.: Given an interlink, \(\mathfrak{I}\), a gadget \(\mathfrak{G}\), and an antisymmetric atomic gadget \((\Xi,S)\), we refer to the gadget \(\mathfrak{G}^{\prime}\) resulting from the above operation as \(\mathfrak{I}[\mathfrak{G},(\Xi,S)]\). Moreover, we can think of an interlink as inducing an operation of pairs of functions themselves. Going forward, given an interlink \(\mathfrak{I}\), as well as functions \(F\) and \(G\) of the form of Eq. (5) and Eq. (6), we let \(G\circ_{\mathfrak{I}}F\) denote the function \(H\) of Eq. (7).
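To make the induced operation on functions concrete, the following purely classical sketch evaluates \(G\circ_{\mathfrak{I}}F\) for given achieved functions, under one plausible reading of the ordering conventions in Eq. (7); the pairing of \(B\) with \(C\) through \(W\) and the ordering of free inputs and outputs are our own bookkeeping choices rather than a verbatim transcription of Thm. III.1. The returned input count matches the type arithmetic of Eq. (8).

```python
# Classical sketch (ours) of the functional action of an interlink (Def. III.1).
def interlink_compose(F, a, G, c, B, C, W):
    """F: list of b functions of a arguments; G: list of d functions of c arguments.
    B (into F's outputs) and C (into G's inputs) have equal length e; W is a
    permutation of range(e). Returns (H, number_of_inputs_of_H)."""
    b, d, e = len(F), len(G), len(B)
    free_C = [i for i in range(c) if i not in C]            # uncoupled inputs of G

    def H(*args):
        x, y = args[:a], args[a:]                           # F's inputs, then G's free inputs
        assert len(y) == len(free_C)
        f_vals = [f(*x) for f in F]
        g_args = [None] * c
        for i, ci in enumerate(C):                          # coupled slots of G fed by F
            g_args[ci] = f_vals[B[W[i]]]
        for yi, ci in zip(y, free_C):                       # remaining slots stay free
            g_args[ci] = yi
        g_out = tuple(g(*g_args) for g in G)
        passthrough = tuple(f_vals[k] for k in range(b) if k not in B)
        return g_out + passthrough

    return H, a + c - e                                     # an (a+c-e, b+d-e) gadget, per Eq. (8)
```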
Graphically, an interlink is simple to describe, as is shown in Fig. 2, which also depicts the general action described in Thm. III.1. Beyond composition, one can define a variety of operators over gadgets; we note that, as these gadgets are themselves superoperators, we are free to duplicate and elide outputs at the possible cost of additional queries. As discussed in Appx. A, one can _augment_ an \((a,b)\) gadget to an \((a,b+c)\) gadget, as well as _elide_ an \((a,b)\) gadget to an \((a,b-c)\) gadget by ignoring outputs. Moreover, the input of an \((a,b)\) gadget can be _pinned_ to yield an \((a-c,b)\) gadget, or the output legs of an \((a,b)\) gadget can be _permuted_ to yield an \((a,b)\) gadget. Finally, note that the correction protocol itself can be thought of as an operator over gadgets, leaving the function achieved by the gadget approximately unchanged, but transforming its unitary output by a \(\sigma_{z}\) conjugation. The form and cost of these basic operations are covered in Appx. A and, once generally computed, these rules allow the algorithmist to reason about complex, multi-input/output functional operations agnostic to their realizing circuits, in a _functional style_.
Before continuing, we note that defining the closure over arbitrarily interlinking finite-size gadgets is not a simple task. For instance, decomposing a given polynomial in a single-variable into an optimal tower of composed, lower degree polynomials (a strictly simpler problem) was once assumed to be a computationally hard problem [38, 39, 40], and the conditions under which such decomposition is efficient in the multivariable case have only recently been generally understood [41] and analyzed in the approximate setting [42, 43, 44], with the generic problem known to be NP-hard [45]. Nevertheless such decompositions, if they do exist, are basically unique (a consequence of Ritt's theorem [46]), and packages exist within most computer algebra systems for computing them. Consequently, while we show that we can achieve the full algebraic and monoidal operations natural to multivariable polynomials, our constructions only furnish competitive upper bounds on the required space, gate, and query complexity to achieve a given transform, with lower bounds depending on deep results in algorithmic complexity theory [47, 48, 49] and polynomial decomposition theorems not exposited here. Despite this fact, we are still able to provide a compact _equivalence theorem_ for composite gadgets achieving polynomials which can be decomposed into a tower of lower-degree polynomials for which atomic gadgets are known.
**Theorem III.2** (Efficient equivalence of polynomial compositions and gadget assemblages).: Let \(\mathcal{L}(x_{1},\ldots,x_{n})\) denote the set of all polynomials achievable by atomic gadgets in the variables \(x_{1},\ldots,x_{n}\). Suppose \(P(x_{1},\ldots,x_{n})\) is a polynomial of degree \(D\) and can be split into a tower of \(m=\mathcal{O}(\log(D))\)_interlinked polynomials_ (Rem. III.2),
\[P=P^{(m-1)}\circ_{\mathfrak{I}_{m-2}}\left(P^{(m-2)}\circ_{\mathfrak{I}_{m-3} }\circ\left(\cdots\left(P^{(1)}\circ_{\mathfrak{I}_{0}}P^{(0)}\right)\cdots \right)\right) \tag{9}\]
Figure 2: A depiction of gadgets coupled by interlinks (Def. III.1) both in example (a-b), and generally (c). Given two gadgets, an interlink \((B,C,W)\) specifies a subset of output legs \(B\), input legs \(C\), and a permutation \(W\), joining gadgets according to Thm. III.1. Two possible interlinks are shown explicitly in (a-b) for \((2,2)\)-gadgets. In (c), possibly many legs are suppressed in dashed legs, and the sets \(B,C\) in Thm. III.1 are precisely those composing the composite leg \(z\); in this way (c) captures the most general coupling of two gadgets, per Thm. III.1. In all cases black dots indicate a correction protocol (II.1) necessary to properly couple legs.
such that \(P_{i}^{(j)}\in\mathcal{L}(x_{1},\ldots,x_{n})\) for all \(i\) and \(j\). Assume that the composition satisfies the domain condition (Rem. 11) with \(P_{k}\) supported on \(\mathcal{D}_{k}\) such that each \(\mathcal{D}_{k}\) is separated from singular points \(\pm 1\) (and possibly \(0\)) by some length \(\delta\in\mathcal{O}(1)\) for all \(k\). Then, there exists an assemblage of \(m\) atomic, snappable gadgets which \(\varepsilon\)-approximates the polynomial \(P\) on the domain yielded from the \(\mathcal{D}_{k}\), with cost \(\widetilde{\mathcal{O}}(\operatorname{poly}(D)\operatorname{polylog}(\varepsilon^{-1}))\).
For a proof of this theorem, see Appx. A. We remark that the discussion of query and gate complexity in Thm. III.1 and thus Thm. III.2, while simple, does not immediately extend to composite gadgets with complex internal structure. For this purpose we introduce a variety of independently interesting mathematical objects which allow for expedient calculation of the cost of implementing a gadget which achieves a desired function up to a desired precision over a desired range of inputs. The basic object for such costs is a _cost matrix_ (defined in Appx. C), which permits one to determine the query complexity of a gadget with respect to a given input leg for a given output leg, up to a specified precision over a specified range of inputs. With a few simple abstractions, computing composite gadget costs is thus reduced to a lightly augmented form of matrix multiplication over sub-gadget cost matrices. Details of this calculation are presented in Appx. C.
## IV Examples
The results of Sec. III together project a new path for building quantum subroutines: namely, the equivalence of Thm. III.2 ensures that we can think purely in terms of the polynomials achieved by a series of gadgets when snapping them together visually as modular pieces, while this LEGO-like construction is governed behind the scenes in the circuit picture by the simple rules of Thm. III.1. To highlight these principles together in action, we thus consider a series of linked examples making use of minimally complex pieces. We provide an overview of these examples in Fig. 3, grouping them in terms of the natural algebraic structures they constitute; in turn, these examples are exposited in Sec. IV.1, which covers common basic arithmetic, and Sec. IV.2, which combines these pieces to achieve a variety of familiar and utilitarian functional classes.
### Basic arithmetic with gadgets
One of the most immediately useful applications of the protocols derived in the previous section is the ability to perform basic arithmetic and interpolation, coherently, between two polynomial functions, each achieved by an independently specified QSP protocol.
**Example IV.1** (Inversion and negation).: We start with two simple \((1,1)\) gadgets with utility in the creation of composite functions. Each takes as input some \(x_{0}\in[-1,1]\) as usual. The first, termed _inversion_, pre-applies an \(iX\) (special unitary) to the oracle, producing \(\sqrt{1-x_{0}^{2}}\), which 'logically inverts' the input on \(\{0,1\}\). The second, termed _negation_, conjugates the oracle by the (special unitary) \(iY\), and achieves \(-x_{0}\). Note that these are not _atomic gadgets_ per Def. II.3, as their form is outside that of M-QSP protocols, but that they produce embeddable output (Def. II.2).
**Example IV.2** (Angle sum and difference).: To illustrate a useful but non-obviously antisymmetric gadget, we can consider the \((2,1)\) gadget resulting from multiplying two oracles in different variables with QSP phases all zero, e.g., \(U_{0}U_{1}\), which achieves \(x_{0}x_{1}-\sqrt{1-x_{0}^{2}}\sqrt{1-x_{1}^{2}}\). This corresponds to adding the angles associated with the oracle unitaries. This can be changed to an angle difference by conjugating one oracle by a \(\sigma_{z}\)-rotation, e.g., \(U_{1}\mapsto e^{i\pi\sigma_{z}/2}U_{1}e^{-i\pi\sigma_{z}/2}=U_{1}^{\dagger}\).
**Example IV.3** (Multiplication).: Let \(\mathfrak{G}_{\mathrm{mult}}\equiv(\Xi,S)\) be the \((2,1)\) atomic gadget with
\[\Xi\equiv\{\Phi\}=\{\{-\pi/4,\pi/4,-\pi/4,\pi/4\}\}\ \ \text{and}\ \ S=\{\{0,1,0\}\}. \tag{10}\]
This gadget achieves \(f(x_{0},x_{1})=T_{2}(x_{0})x_{1}\), which can be linked to gadget outputs to multiply functions (up to pre-application of \(T_{2}(x)=2x^{2}-1\), the second Chebyshev polynomial of the first kind, to the first argument).
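This claim is easy to spot-check numerically with the M-QSP simulator sketched after Def. II.1 (the `achieved`/`rz`/`rx` helpers there are our own names): for random oracle angles, the protocol of Eq. (10) should return exactly \(T_{2}(x_{0})x_{1}\).

```python
# Numeric spot-check (ours) of Ex. IV.3, reusing the `achieved` helper from the
# sketch after Def. II.1.
import numpy as np

phases = [-np.pi / 4, np.pi / 4, -np.pi / 4, np.pi / 4]
s = [0, 1, 0]
rng = np.random.default_rng(0)
for _ in range(5):
    t0, t1 = rng.uniform(0, np.pi, size=2)
    x0, x1 = np.cos(t0), np.cos(t1)
    f = achieved(phases, s, [t0, t1])
    assert np.isclose(f, (2 * x0**2 - 1) * x1)   # f = T_2(x_0) * x_1 (and is real)
```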
**Example IV.4** (Sub-normalization).: Let \((\Xi,S)\) be the \((2,1)\) atomic gadget of Ex. IV.3. Let \(\mathfrak{G}_{\mathrm{subnorm}}(a)\) be the \((1,1)\) gadget obtained from pinning (Def. A.1) the index-\(0\) input leg to the unitary \(U_{0}=e^{i\arccos(x_{0})\sigma_{x}}\) where \(x_{0}=\sqrt{(1+a)/2}\), for \(a\in[-1,1]\). Then \(\mathfrak{G}_{\mathrm{subnorm}}(a)\) obtains the function \(f(x)=T_{2}(\sqrt{(1+a)/2})x=ax\), for all \(x\in[-1,1]\).
**Example IV.5** (Scaling).: Take the \((1,1)\) atypical gadget constructed in Ex. D.2, which achieves an arbitrary bounded real polynomial \(P\) of definite parity. Consider the polynomial approximation of \(P\) from Lemma 30 of [9], which \(\varepsilon\)-approximates the linear function \(ax\) for \(a>1\) on the interval \(x\in[(-1+\delta)/a,(1-\delta)/a]\). Then the QSP phases \(\Phi\) of the protocol achieving \(P\) can be inserted into the prescription of Ex. D.2 to yield a \((1,1)\) gadget which multiplies its input by a scalar greater than \(1\) with a clipped output: \(\mathfrak{G}_{\mathrm{scale}}(a)\). Moreover, the cost of this gadget is \(\mathcal{O}([a/\delta]\log{[a/\varepsilon]})\) queries to \(x_{0}\), by [9], and requires no additional space.
**Example IV.6** (Addition).: Let \(\mathfrak{G}_{\mathrm{add}}\equiv(\Xi,S)\) be the \((2,1)\) atomic gadget with
\[\Xi\equiv\{\Phi\}=\{\{0,\pi/4,0,-\pi/4,0\}\}\ \ \text{and}\ \ S=\{\{0,1,1,0\}\}. \tag{11}\]
This protocol will achieve the function \(f(x_{0},x_{1})=T_{2}(x_{0})T_{2}(x_{1})\) for \(x_{0},x_{1}\in[-1,1]\). Let \(U_{0}\) and \(U_{1}\) be embeddable, with \(U_{0}=e^{i\arccos(x_{0})\sigma_{x}}\) and \(U_{1}=e^{i\arccos(x_{1})\sigma_{x}}\). The products
\[V_{0} =U_{0}U_{1}=e^{i(\arccos(x_{0})+\arccos(x_{1}))\sigma_{x}}, \tag{12}\] \[V_{1} =U_{0}e^{i\pi\sigma_{z}/2}U_{1}e^{-i\pi\sigma_{z}/2}=U_{0}U_{1}^{\dagger}=e^{i(\arccos(x_{0})-\arccos(x_{1}))\sigma_{x}}, \tag{13}\]
define simple \((2,1)\) gadgets. We can achieve a \((2,1)\) gadget by first duplicating the input legs corresponding to \(U_{0}\) and \(U_{1}\), passing a pair \((U_{0},U_{1})\) into each of the \(V_{0}\) and \(V_{1}\) gadgets, then passing these two output legs into \((\Xi,S)\). However, note that \(V_{0}\) and \(V_{1}\), while being \(\sigma_{x}\)-rotations, can generally have their rotation angle in \([-\pi,\pi]\), rather than \([0,\pi]\). Thus, either \(V_{0}\) is embeddable _or_\(\sigma_{z}V_{0}\sigma_{z}\) is embeddable, with the same holding true for \(V_{1}\). However, as can be easily checked, this \(\sigma_{z}\)-ambiguity will not matter: these extra \(\sigma_{z}\)-rotations (when absorbed into the phase sequence \(\Phi\) as extra \(\pi/2\)-rotations) will have no effect on the output
Figure 3: A summary of functional transforms achieved in this work, grouped (above the line) by their constitution of common mathematical structures over polynomials. Addition, polynomial multiplication, and scalar multiplication form a polynomial ring, where \((\ddagger)\) indicates we consider (possibly clipped) polynomials over \(x\in[-1,1]\) of definite parity, maximum norm one, and taking real-values; these properties are preserved under the specified operations. We also achieve a monoid generated by single-variable polynomial composition, as well as its generalized form capturing tuples of multivariable polynomials, described by the grammar and language of gadgets (Appx. I). We stress that in the general case we cannot achieve these operations exactly, only to arbitrary precision and with an associated cost. These operations, together with the ability to generate arbitrary single-variable bounded, definite parity, real polynomials using QSP, and a special subset \((\dagger)\) of such polynomials with M-QSP [29], permit all highlighted examples of Sec. IV (enumerated below the line).
polynomial: this gadget will always achieve
\[f(x_{0},x_{1}) =T_{2}\left(\cos(\arccos(x_{0})+\arccos(x_{1}))\right)T_{2}\left( \cos(\arccos(x_{0})-\arccos(x_{1}))\right) \tag{14}\] \[=\cos\left(2(\arccos(x_{0})+\arccos(x_{1}))\right)\cos\left(2( \arccos(x_{0})-\arccos(x_{1}))\right)\] (15) \[=\frac{\cos(4\arccos(x_{0}))+\cos(4\arccos(x_{1}))}{2}=\frac{T_{4} (x_{0})+T_{4}(x_{1})}{2}. \tag{16}\]
See Fig. 4 for a graphical depiction of the interlinks constituting this gadget.
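The construction can likewise be spot-checked numerically, reusing the `rx` and `rz` helpers from the sketch after Def. II.1 (names ours): building \(V_{0}\) and \(V_{1}\) directly as matrices and running them through the atomic gadget of Eq. (11) should reproduce \([T_{4}(x_{0})+T_{4}(x_{1})]/2\).

```python
# Numeric spot-check (ours) of Ex. IV.6, reusing `rx` and `rz` from the earlier sketch.
import numpy as np

def cheb(n, x):                        # T_n(x) via the cosine definition
    return np.cos(n * np.arccos(x))

phases, s = [0, np.pi / 4, 0, -np.pi / 4, 0], [0, 1, 1, 0]
rng = np.random.default_rng(1)
t0, t1 = rng.uniform(0, np.pi, size=2)
U0, U1 = rx(t0), rx(t1)
V = [U0 @ U1, U0 @ U1.conj().T]        # the angle-sum and angle-difference gadgets
W = rz(phases[0])
for k in range(1, len(phases)):
    W = W @ V[s[k - 1]] @ rz(phases[k])
x0, x1 = np.cos(t0), np.cos(t1)
assert np.isclose(W[0, 0], (cheb(4, x0) + cheb(4, x1)) / 2)
```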
**Example IV.7** (Constant shift).: It is possible to pin the addition gadget to perform a "constant shift" (up to the fourth Chebyshev polynomial): \(x\mapsto\frac{1}{2}T_{4}(x)+b\) for \(b\in[-1/2,1/2]\). This gadget, denoted \(\mathfrak{G}_{\mathrm{shift}}(b)\), is achieved by taking the gadget of Ex. IV.6 and pinning its second input (Def. A.1) to \(e^{i\arccos(x_{1})\sigma_{x}}\), with \(T_{4}(x_{1})=2b\). Note it is possible to combine the sub-normalization and constant shift protocols into an _affine shift_ of the form \(x\mapsto\frac{1}{2}T_{4}(ax)+b\).
The above examples, implementing basic arithmetic, have the constraint that they apply a Chebyshev polynomial to at least one of the inputs. As a final point, note that all of the provided examples can be (approximately) freed of these residual Chebyshev polynomials via composition with the inverse Chebyshev protocol of Thm. IV.1, stated below.
**Theorem IV.1** (Inverses of Chebyshev polynomials).: Given \(0<\delta\leq 1\) and \(0<\varepsilon\leq 1/2\), there exist \((1,1)\) gadgets of depth and cost \(\mathcal{O}(\delta^{-1}\log(\varepsilon^{-2}\delta^{-1/4}))\) which \(\varepsilon\)-approximately achieve the functions \(T_{2^{n}}^{-1}(x)\) (right-inverses) for \(x\in[-1+\delta,1-\delta]\), using a single qubit (no ancillae).
For a proof of this result, see Appx. H. Using this protocol (in particular, by interlinking the inverse Chebyshev gadget with the arithmetic gadgets), it is possible to realize the following transformations of the functions achieved above, approximately, on restricted domains:
\[f(x_{0},x_{1})=T_{2}(x_{0})x_{1} \Longrightarrow f^{\prime}(x_{0},x_{1})=x_{0}x_{1}, \tag{17}\] \[f(x_{0},x_{1})=\frac{T_{4}(x_{0})+T_{4}(x_{1})}{2} \Longrightarrow f^{\prime}(x_{0},x_{1})=\frac{x_{0}+x_{1}}{2}. \tag{18}\]
This, of course, will come at the expense of added circuit depth. To this end, we have the following theorems.
**Theorem IV.2** (Arbitrary approximate multiplication).: Given \(0<\delta\leq 1\) and \(0<\varepsilon\leq 1/2\), there exists a \((2,1)\) gadget of depth/cost \(\mathcal{O}(\delta^{-1}\log(\varepsilon^{-2}\delta^{-1/4}))\) which \(\varepsilon\)-approximately achieves the function \(f(x_{0},x_{1})=x_{0}x_{1}\) for all \(x_{0}\in[-1+\delta,1-\delta]\) and \(x_{1}\in[-1,1]\) using no ancilla qubits.
**Theorem IV.3** (Arbitrary approximate addition).: Given \(0<\delta\leq 1\) and \(0<\varepsilon\leq 1/2\), there exists a \((2,1)\) gadget of depth/cost \(\mathcal{O}(\delta^{-1}\log(\varepsilon^{-2}\delta^{-1/4}))\) which \(\varepsilon\)-approximately achieves the function \(f(x_{0},x_{1})=(x_{0}+x_{1})/2\) for all \(x_{0},x_{1}\in[-1+\delta,1-\delta]\) using no ancilla qubits.
We refer to Appx. H for proofs of these results, and more details about the cost of performing these transformations. With these basic operations instantiated, it is now easy to see how Thm. III.2 immediately implies that all multivariable polynomials, up to bound and parity constraints, are approximately achievable by _some_ composite gadget mirroring the structure of the algebraic manipulations achieved. While the question of the _most efficient_ gadget realization is more difficult, we can use the gadgets above sparingly in the next section to instantiate a series of familiar and useful functions.
### Familiar multivariable functions from gadgets
Having highlighted basic examples of arithmetic operations with respect to two variables, we leverage these operations to construct more complicated functional transformations. We begin with a short remark.
**Remark IV.1** (Algebra of gadget operations).: Given the operations of addition and multiplication presented in Eq. (17), Eq. (18), as well as the scaling operations of Ex. IV.4 and Ex. IV.5, and the general ability to compose gadgets arbitrarily, it is possible to achieve approximations of arbitrary multivariate polynomials on mildly restricted domains. This is a somewhat surprising result in and of itself: sufficiently high-depth
M-QSP protocols (up to the possible use of ancilla qubits during correction) can approximately achieve all multivariate polynomials, modulo certain constraints imposed in the examples of Sec. IV.1. In general, these naive, "ground-up" constructions of polynomials will have poor scaling, due to the need to perform the highly non-linear inverse Chebyshev and corrective protocols. Nevertheless, there are also other examples of short protocols which achieve useful transformations _without_ the added depth of performing square roots or many nested correction protocols. This fact is one of the fundamental takeaways of the following results we wish to emphasize: while it is _possible_ to build all transformations from the repeated composition of only a few low-degree gadgets, in order to achieve experimentally reasonable protocols, it is much more advantageous to use such basic gadgets sparingly, relying on functions efficiently achieved by standard M-QSP protocols where possible.
To begin our discussion of more advanced examples, we present a brief discussion of techniques for constructively achieving step and bandpass functions through gadget composition.
**Example IV.8** (Basic step function of [20]).: The protocol introduced in [20], referred to as _r-QSP_, is an example of an \(n\)-fold composition of \((1,1)\) atomic gadgets. In particular, the authors (implicitly) construct a family of \((1,1)\) atomic gadgets \(\mathfrak{G}^{(\ell)}_{\mathrm{step}}\equiv(\Xi^{(\ell)},S^{(\ell)})=(\{\Phi^{(\ell)}_{\mathrm{step}}\},\{\{0,\ldots,0\}\})\) with \(|\Phi^{(\ell)}_{\mathrm{step}}|=2\ell+1\) such that the \(n_{\ell}\)-fold full composition (Thm. III.1) \(\mathfrak{G}^{(\ell)}_{\mathrm{step}}\circ\cdots\circ\mathfrak{G}^{(\ell)}_{\mathrm{step}}\) achieves an \(\varepsilon\)-approximation of the function \(f(x)=\mathrm{sign}(x)\) on the interval \([-1,-\delta]\cup[\delta,1]\) when the length of the resulting gadget, \(\zeta=(2\ell+1)^{n_{\ell}}\), satisfies
\[\zeta=\mathcal{O}\left(\delta^{-2(\nu_{\ell}+1)}\log^{1+\nu_{\ell}}\left( \varepsilon^{-1}\right)\right),\quad\nu_{\ell}=\frac{\log(2\ell+1)}{\log( \ell+1)}-1. \tag{19}\]
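For quick bookkeeping when choosing \(\ell\) and the number of self-compositions, the exponent \(\nu_{\ell}\) and the exact composite length \(\zeta=(2\ell+1)^{n_{\ell}}\) can be tabulated directly; the tiny helper below (ours) simply evaluates these two expressions from Ex. IV.8.

```python
# Helper (ours) evaluating the bookkeeping quantities of Ex. IV.8 / Eq. (19).
import math

def step_composition_stats(l, n):
    nu = math.log(2 * l + 1) / math.log(l + 1) - 1     # exponent nu_ell
    zeta = (2 * l + 1) ** n                            # length of the n-fold composition
    return nu, zeta

print(step_composition_stats(1, 4))                    # l=1: nu ~ 0.585, zeta = 81
```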
Making use of the basic step function construction of [20], as well as the arithmetic operations of the previous section, it is possible to construct a simple, infinite family of bandpass filters.
**Example IV.9** (Simple bandpass family).: Given \(a\in[\delta,1/\sqrt{2}]\), there exists a family of \((1,1)\) gadgets \(\mathfrak{G}^{(\ell)}_{\mathrm{bandpass}}(a)\), such that \(\mathfrak{G}^{(\ell)}_{\mathrm{bandpass}}(a)\)\(\varepsilon\)-approximately achieves a function \(f\) satisfying \(|f|\leq 1\) as well as
\[f(x)\in\begin{cases}[1-\varepsilon,1]&\text{if }x\in[-a+\delta,a-\delta]\\ [-1,-1+\varepsilon]&\text{if }x\in\left[-\frac{1}{\sqrt{2}},-a-\delta\right]\cup\left[a+\delta,\frac{1}{\sqrt{2}}\right],\end{cases} \tag{20}\]
where the total depth of the gadget, \(\zeta^{\prime}\), satisfies
\[\zeta^{\prime}=\mathcal{O}\left((2a\delta)^{-4(\nu_{\ell}+1)}\log^{1+\nu_{ \ell}}\left(\varepsilon^{-1}\right)\right). \tag{21}\]
Moreover, given the phase sequence \(\Phi^{(\ell)}_{\mathrm{step}}\) of Ex. IV.8, the gadget \(\mathfrak{G}^{(\ell)}_{\mathrm{bandpass}}(a)\) can be described analytically in terms of these phases.
Figure 4: Component breakdown of the sum gadget (Ex. IV.6). The sum gadget (a) is achieved by the composition of multiple \((1,1)\) and \((2,1)\) gadgets as shown, specifically \(A\) (the square root gadget from Thm. IV.1 with \(T_{2}\)), \(B\) (angle sum gadget, Ex. IV.2), \(C\) (angle subtraction gadget, Ex. IV.2), and \(D\) (product gadget, Ex. IV.3). The achieved composite operation \((b)\) can be treated as a \((2,1)\) gadget labelled by \(E\), whose partial composition with itself \((c)\) has the expected semantic behavior.
For a proof of this result, see Appx. H. This example provides a good illustration of the fundamental tension between the generality of the classes of functions that can be achieved via composition of gadgets, and the asymptotic scaling of resource requirements. Here, we have shown that it is possible to achieve a _restricted_ class of bandpass filters via the gadget formalism. It is, generally speaking, possible to drop the restrictions of this example (in particular, the constraint that the function approximately achieves only the values \(\pm 1\)), at the expense of more queries/corrective protocol applications (see Ex. IV.11).
There are many other examples of low-depth protocols which can be achieved via gadget composition, and the simple arithmetic operations discussed previously. We highlight some of these examples below.
**Example IV.10** (\(2^{n}\) mean).: Let \(\mathfrak{G}\) be the \((2,1)\) gadget achieving \((x_{0}+x_{1})/2\). Then, using the correction protocol, one can build a composite \((2^{n},1)\) gadget \(\mathfrak{G}^{\prime}\) achieving \(2^{-n}(x_{0}+x_{1}+\cdots+x_{2^{n}-1})\) using \((2^{n}-1)\) instances of \(\mathfrak{G}\) and successively pooling pairs of variables. E.g., \((x_{0}+x_{1})/2\) and \((x_{2}+x_{3})/2\) are taken to their average \((x_{0}+x_{1}+x_{2}+x_{3})/4\).
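The pooling pattern is a balanced binary tree of \((2,1)\) averaging gadgets over \(2^{n}\) leaves; the purely classical sketch below (ours, ignoring the interspersed correction costs) just walks the tree to confirm that \(2^{n}-1\) instances suffice.

```python
# Classical sketch (ours) of the pairwise-pooling pattern in Ex. IV.10.
def pooled_mean(xs):
    assert (len(xs) & (len(xs) - 1)) == 0              # power-of-two leaf count
    level, used = list(xs), 0
    while len(level) > 1:
        level = [(level[i] + level[i + 1]) / 2 for i in range(0, len(level), 2)]
        used += len(level)
    return level[0], used                              # (mean, number of (2,1) gadgets used)

print(pooled_mean([0.1, -0.3, 0.5, 0.7]))              # -> (approximately 0.25, 3)
```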
The cost of implementing a gadget will depend on the range of input parameters for which it must output an \(\varepsilon\)-approximation of the desired function. Moreover, the cost will depend on whether the ancilla or ancilla-free gadget correction protocol is used, which will vary on a case-by-case basis. As an illustrative example, we provide a detailed discussion of the complexity of implementing the \(2^{n}\) mean gadget, and discuss how its cost differs from the cost of LCU, in Appx. H. In addition to this example, we make note of some other gadgets which are, in principle, possible to achieve. The complexity analysis of each, and how it compares to other techniques such as LCU, carries forward in the same manner of casework as is presented in the example of Appx. H.
**Example IV.11** (Arbitrary one-dimensional bandpass functions).: Consider \(\mathfrak{G},\mathfrak{G}^{\prime}\), two \((1,1)\) gadgets approximately achieving sign functions \(\Theta(x_{0}-a_{0}),\Theta(x_{0}-a_{1})\) (Ex. IV.8), where \(\Theta(x_{0})\) takes the value \(-1\) for \(x_{0}<0\) and \(1\) for \(x_{0}>0\) on the interval \([-1,1]\). Without loss of generality assume \(a_{0}<a_{1}\); then the averaging gadget applied to \(\mathfrak{G}\) and \(\mathfrak{G}^{\prime\prime}\) (the modified gadget achieving \(-\Theta(x_{0}-a_{1})\), enabled by Ex. IV.1) achieves an approximation to the following ideal bandpass function: \(\Theta^{\prime}(x_{0})\equiv[\Theta(x_{0}-a_{0})-\Theta(x_{0}-a_{1})]/2,\) which for \(x_{0}<a_{0}\) achieves \(0\), for \(a_{0}<x_{0}<a_{1}\) achieves \(1\), and for \(x_{0}>a_{1}\) again achieves zero.
**Example IV.12** (Majority vote).: Majority vote among \(2^{n}\) elements follows directly from the previous examples; let \(\mathfrak{G}\) be the \((2^{n},1)\) gadget of Ex. IV.10 and \(\mathfrak{G}^{\prime}\) be the \((1,1)\) gadget thresholding at \(x_{0}=1/2\) (Ex. IV.11). Then the simple composition \(\mathfrak{G}^{\prime}\circ\mathfrak{G}\) is a \((2^{n},1)\) gadget satisfying the desired properties.
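The corresponding ideal function is again easy to state classically; the sketch below composes a threshold after a mean, mirroring \(\mathfrak{G}^{\prime}\circ\mathfrak{G}\) (the \(1/2\) threshold follows the text above, and the encoding of the individual votes is left implicit):

```python
# Classical sketch of the majority-vote composition in Ex. IV.12.

def theta(x: float) -> float:           # ideal threshold/sign function
    return -1.0 if x < 0 else 1.0

def majority(xs: list[float]) -> float:
    mean = sum(xs) / len(xs)            # stand-in for the (2^n,1) mean gadget
    return theta(mean - 0.5)            # stand-in for G' thresholding at 1/2
```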
**Example IV.13** (Subnormalized functional interpolation).: There exists a \((3,1)\) gadget that approximately achieves the function \(f(x_{0},x_{1},x_{2})=[x_{0}x_{2}+x_{1}\sqrt{1-x_{2}^{2}}]/2\). In other words, depending on \(x_{2}\), the gadget smoothly interpolates between the sub-normalized variables \(x_{0}\) and \(x_{1}\). If \(x_{0},x_{1}\) result from previous gadgets, this realizes a (sub-normalized) interpolation between two functions. Construction follows from previous gadgets. Namely, the products \(x_{0}x_{2}\) and \(x_{1}\sqrt{1-x_{2}^{2}}\) are realized by Ex. IV.3, with \(\sqrt{1-x_{2}^{2}}\) achieved from \(x_{2}\) by Ex. IV.1. Finally, both results are summed by Ex. IV.6.
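The ideal target function itself (with the constituent gadgets replaced by classical arithmetic, and not representing the gadget protocol) is simply:

```python
import math

# Ideal target of the (3,1) interpolation gadget in Ex. IV.13: x2 trades off
# smoothly between the sub-normalized variables x0 and x1.

def interpolate(x0: float, x1: float, x2: float) -> float:
    return (x0 * x2 + x1 * math.sqrt(1.0 - x2 ** 2)) / 2

# Limiting cases: x2 = 1 returns x0 / 2, x2 = 0 returns x1 / 2.
assert math.isclose(interpolate(0.6, -0.2, 1.0), 0.3)
assert math.isclose(interpolate(0.6, -0.2, 0.0), -0.1)
```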
The cost analysis of these gadgets carries forward similarly to the example highlighted in Appx. H, depending on which corrective protocol is chosen.
## V Discussion and conclusion
At the most basic level, this work gives a modular construction, rooted in QSP and QSVT, for achieving highly expressive multivariable block encodings for commuting operators. This construction relies on special _corrective_ subroutines which restore the _embeddability_ (Def. II.2) of QSP- and QSVT-like protocols in a black-box way. Recovering embeddability allows assemblages of such protocols, which can take many unitary oracles as input and produce many unitary outputs, to be simply wired together _at the level of the functions they apply_ to matrix-invariant quantities; that this is a true modular process is demonstrated in the statements of the work's main theorems: Thms. III.1 and III.2, which indicate how to intersperse independently defined correction protocols (Thm. II.1) between modules. Such modules are named _gadgets_ (Def. II.4), and their valid combinations are shown to follow a simple grammar and to generate both the natural commutative ring and composition monoid over multivariable polynomials. In the terminology of programming language theory, the modularity of gadgets is formalized in identifying them as _monadic types_, with the correction protocol furnishing a previously missing component of the monadic function _bind_ (Appx. I).
We can thus phrase our results in three styles, in order of increasing burden of proof and abstraction. **(a)** We provide improved, constructive upper bounds on the resource requirements (space, gate, and query complexity) of multivariable polynomial block encodings, and make comparisons to previously known methods (see Appx. B), with inspiration from older results in modular classical filter design [27; 28] and composite pulse sequences [22; 23; 26]. **(b)** We provide methods for interpretable constructions of coherent (quantum) control flow, with no obvious incoherent counterpart. Specifically, we work in a coherent-access model, analogous to that of recent work on the benefit of measurement-free oracular access [50; 51], and show that the enaction of complex QSVT protocols can, within each singular-vector subspace, be coherently conditioned on, or more generally coupled to, another set of singular values. **(c)** We apply methods in functional programming, type theory, and category theory to re-situate QSP and QSVT as quantum _types_, in relation to which our results instantiate a _monadic type_ over real-valued, real-argument multivariable polynomials (see Appx. I). Our methods thus offer aid in quantum algorithm design, pragmatically generalizing the work of [19] and providing a simpler path for applying M-QSP [29] to achieve useful multivariable functional transforms.
Our construction of gadgets couples to the design of quantum algorithms on two levels, inspired by a natural division in formal and natural languages: semantics and syntax [16; 17; 52]. On the first, gadgets specify which polynomial transforms can be treated functionally, serving as basic units of computation; formally this is summarized in the instantiation of a monad, and characterizes the basic _semantics_ of the resulting system. At the second level, we define a formal grammar enumerating valid ways to connect gadgets, which characterizes the _syntax_ of our system. While the ultimate relation between our construction's syntax and semantics is more complicated than this simple division implies, depending on various hybrid structures such as attribute grammars [52], these two levels help to situate this work in classical programming language theory, where division into semantics and syntax is considered a first step along the path of parsing, optimizing over, and deriving expression meaning from structural information and basic semantic assignments.
To restate a crucial point, this work considers gadgets comprising QSP/QSVT protocols that can be linked, following intermediary processing, to other gadgets _such that_ their embedded functional transforms are combined in a _semantically clear way_. This intermediary processing is generally necessary, non-trivial, and strictly subsumes the single-variable case [19], which itself subsumed more limited but intriguing families of techniques in the recursive use of quantum algorithms [22; 23; 26]. The key benefit of our method is that each gadget inherits the approximative efficiency and ancilla-efficient character of QSP, meaning that the complexity of achieving a desired polynomial transform is no longer coupled directly to term number, polynomial degree, or polynomial norm, as in LCU or standard QSVT [53], but instead to sophisticated results in the theory of multivariable polynomial decomposition [38; 39; 41; 45] and to multivariable quantum signal processing. In turn, this opens intriguing questions in the algorithmic complexity (or Solomonoff-Kolmogorov-Chaitin complexity) [47; 48; 49] of certain approximate functional transforms under the composition of gadgets, for which our results provide an upper bound. The benefit of this approach to quantum algorithm design is multiform: numerical (given a new utility for low-order gadgets), experimental (given the reduced number of distinct QSP phases to compute), conceptual (given the fungibility/reusability of precomputed gadgets), and finally pedagogical (given the expanded role offered to QSP in quantum information processing).
We stress, however, that there remain significant open questions on the optimality of the method offered here, especially in light of the dependence of a gadget's complexity on sophisticated results in the theory of polynomial decompositions [38; 39; 41; 45]. It is thus the authors' belief that existing methods for multivariable block encoding, such as LCU, continue to offer diverse, context-specific utility, and deserve continued attention in future related work on generating novel block encodings.
Whether viewed in terms of highly efficient block encodings, or as the instantiation of a monadic type [54; 55; 56] over higher-order quantum processes composable in accordance with a simple grammar, this work instantiates new and expressive QSP/QSVT-based quantum algorithms, with function-first interpretability and a declarative style. To support this claim, we provide multiple examples, constructing block encodings of algorithmically useful multivariable polynomials with improved query and space complexity over alternative methods. Finally, we remark that the ability to semantically combine QSP/QSVT as modules in a black-box way relies strongly on their restricted circuit form, and we believe the balance between this structure and the known algorithmic expressivity of QSP/QSVT points toward a larger family of parameterized quantum circuits to which functional programming abstractions can be fruitfully applied.
## VI Acknowledgements
Z.R. was supported in part by the NSF EPiQC program. J.C. thanks Prof. Nathan Wiebe for valuable conversations, and for catalyzing and funding an internship during the summer of 2023 at MIT, where the majority of the work on this project was conducted. J.C. also thanks the MIT Research Laboratory of
Electronics (RLE) for their hospitality during his time at MIT. I.C. was supported in part by the U.S. DoE, Office of Science, National Quantum Information Science Research Centers, and Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704.
|
2305.19683 | The Quantum Frontier of Software Engineering: A Systematic Mapping Study | Context. Quantum computing is becoming a reality, and quantum software
engineering (QSE) is emerging as a new discipline to enable developers to
design and develop quantum programs.
Objective. This paper presents a systematic mapping study of the current
state of QSE research, aiming to identify the most investigated topics, the
types and number of studies, the main reported results, and the most studied
quantum computing tools/frameworks. Additionally, the study aims to explore the
research community's interest in QSE, how it has evolved, and any prior
contributions to the discipline before its formal introduction through the
Talavera Manifesto.
Method. We searched for relevant articles in several databases and applied
inclusion and exclusion criteria to select the most relevant studies. After
evaluating the quality of the selected resources, we extracted relevant data
from the primary studies and analyzed them.
Results. We found that QSE research has primarily focused on software
testing, with little attention given to other topics, such as software
engineering management. The most commonly studied technology for techniques and
tools is Qiskit, although, in most studies, either multiple or none specific
technologies were employed. The researchers most interested in QSE are
interconnected through direct collaborations, and several strong collaboration
clusters have been identified. Most articles in QSE have been published in
non-thematic venues, with a preference for conferences.
Conclusions. The study's implications are providing a centralized source of
information for researchers and practitioners in the field, facilitating
knowledge transfer, and contributing to the advancement and growth of QSE. | Manuel De Stefano, Fabiano Pecorelli, Dario Di Nucci, Fabio Palomba, Andrea De Lucia | 2023-05-31T09:26:10Z | http://arxiv.org/abs/2305.19683v2 | # The Quantum Frontier of Software Engineering: A Systematic Mapping Study
###### Abstract
**Context.** Quantum computing is becoming a reality, and quantum software engineering (QSE) is emerging as a new discipline to enable developers to design and develop quantum programs. **Objective.** This paper presents a systematic mapping study of the current state of QSE research, aiming to identify the most investigated topics, the types and number of studies, the main reported results, and the most studied quantum computing tools/frameworks. Additionally, the study aims to explore the research community's interest in QSE, how it has evolved, and any prior contributions to the discipline before its formal introduction through the Talavera Manifesto. **Method.** We searched for relevant articles in several databases and applied inclusion and exclusion criteria to select the most relevant studies. After evaluating the quality of the selected resources, we extracted relevant data from the primary studies and analyzed them. **Results.** We found that QSE research has primarily focused on software testing, with little attention given to other topics, such as software engineering management. The most commonly studied technology for techniques and tools is Qiskit, although, in most studies, either multiple or none specific technologies were employed. The researchers most interested in QSE are interconnected through direct collaborations, and several strong collaboration clusters have been identified. Most articles in QSE have been published in non-thematic venues, with a preference for conferences. **Conclusions.** The study's implications are providing a centralized source of information for researchers and practitioners in the field, facilitating knowledge transfer, and contributing to the advancement and growth of QSE.
keywords: Quantum Computing; Quantum Software Engineering; Software Engineering for Quantum Programming; Empirical Software Engineering; Systematic Mapping Study.
## 1 Introduction
Quantum computing will likely become the next concrete asset for researchers and practitioners. Every developer can now access a quantum computer and use quantum computation to solve arbitrary, computationally intensive problems [1; 2]. This result is possible because of a great effort made by major software companies, like IBM and Google, which are currently investing hundreds of millions of dollars every year to produce novel hardware and software technologies that can support the execution of quantum programs, taking a further step toward democratizing quantum computing [3; 4; 5; 6; 7; 8].
Quantum programming is promising for solving problems in various fields, such as machine learning, optimization, cryptography, and chemistry. However, the development of large-scale quantum software is still distant [9; 10; 11; 12]. To address this, researchers have proposed a new discipline, quantum software engineering (QSE), that extends classical software engineering into the quantum domain [13; 14; 15; 16].
This discipline aims to enable developers to design and develop quantum programs with the same confidence as classical programs, providing them with all the methods and tools necessary [17].
Since the publication of the Talavera Manifesto [14], which can be considered the pillar of quantum software engineering, many studies have been published, proposing novel approaches, tools, and techniques. Consequently, several secondary studies have been published exploring various aspects of QSE, such as optimization [18; 19], industrial adoption issues [20], testing [21], and architecture [22], to bring order to the published studies and steer further research. These previous studies, however, focused on specific aspects of the discipline, like testing [21], and have failed to provide a comprehensive overview of the field. As a result, significant research angles like the management and maintenance of quantum-based solutions have been left largely unexplored. To fully understand the potential of this field, and maximize the benefits obtainable from the current evidence [23], a comprehensive synthesis of the latest research is essential.
In this paper, we propose a systematic mapping study on the current status of QSE research to fill this gap. This systematic mapping study aims to provide a broad and holistic overview of QSE, determine its achievements, and identify current research gaps. To this end, the study addresses several research questions about QSE research aspects. In particular, our study aims to identify the most investigated topics in QSE, the types and the number of proposed studies, the main reported results, and the most studied quantum computing tools/frameworks. Additionally, the study explores the research community's interest in QSE, how it has evolved, and any prior contributions to the discipline before its formal introduction through the Talavera Manifesto [14]. Moreover, this study aims to understand the main researchers involved in the field, their research groups, their interactions, and their distribution concerning various SE topics. The study also intends to identify which venues will most likely publish QSE articles outside thematic workshops.
The results of our systematic mapping study can provide valuable insights into the development and evolution of the research community, the tools and frameworks being studied, and the distribution of research topics among different software engineering areas. Our final goal is also to facilitate knowledge transfer and provide a centralized source of information for researchers and practitioners in the field, contributing to the advancement and growth of quantum software engineering.
The remainder of the paper is structured as follows: in Section 2 related literature is discussed and analyzed; Section 3 and Section 4 present the research method employed to conduct our study and the achieved results, which are then discussed in Section 5; finally, Section 6 provides final remarks and future research directions.
## 2 Related Work
Quantum Software Engineering (QSE) has recently attracted increasing attention. Hence, several secondary studies were published, exploring different aspects of this discipline. This section compares this literature with our secondary study, highlighting the differences and complementary points.
Yarkoni _et al._[18] provided a literature review of quantum annealing (QA) technology and its applications. The study aims to provide a centralized source of information on the applications of QA and identify the advantages, limitations, and potential of the technology for both researchers and practitioners.
Shi _et al._[19] published a study that reviews quantum software optimization techniques, focusing on the efficiency of quantum computing (QC) systems. The study argues that greater efficiency of QC systems can be achieved by breaking the abstractions between layers in the quantum software stack.
Awan _et al._[20] published a study that aims to identify and prioritize the most critical challenges facing the software industry in adopting quantum computing technology. The study implements a fuzzy analytic hierarchy process (F-AHP) to evaluate and rank the identified challenges. The results show that the critical barriers to quantum computing adoption are the lack of technical expertise, information accuracy, organizational interest in adopting the new process, and the lack of standards of secure communication.
Garcia de la Barrera _et al._[21] conducted a systematic mapping study on quantum software testing. The research provides a thorough overview of the current state of the art in quantum software testing. It identifies two major trends in testing techniques: statistical approaches based on repeated measurements and the use of Hoare-like logic for reasoning about software correctness. Reversible circuit testing for quantum software unitary testing is also mentioned by the authors, which is partially applicable to this domain.
Ahmad _et al._[22] published a secondary study that addresses the challenges of architecture-centric implementation of quantum software systems. To achieve this, it explores the role of quantum software architectures (QSA) in supporting quantum software systems' design, development, and maintenance phases. The research focuses on two main aspects: (i) the architectural process with its architecting activities and (ii) the human roles that can leverage available tools to automate and customize the architecture-centric implementation of quantum software. By examining these aspects, the study aims to provide insights into the principles of quantum mechanics and software engineering practices and how they can be integrated into the design and development of quantum software systems.
Reasoning on the current state of the art, we point out that previous systematic literature reviews and mapping studies have focused on specific themes, e.g., testing or architectures, rather than providing a comprehensive view of the research conducted on quantum software engineering. As a consequence, there is still little knowledge on the set of topics most frequently investigated by quantum software engineering researchers, as well as on the main results achieved and how these can drive future research. The findings of our study can provide valuable insights into the development and evolution of the QSE research community, the tools and frameworks being studied, and the distribution of research topics among different software engineering areas. In addition, we also identify a knowledge gap with respect to the venues and research groups that are currently investigating quantum software engineering aspects: these pieces of information are crucial for newcomers, e.g., fresh Ph.D. students, who would like to embrace the quantum software engineering field, as well as for more senior researchers interested in understanding the dynamics behind the quantum software engineering research community.
## 3 Research Method
The _goal_ of our study was to produce an organic and holistic view of the scientific literature in the field of quantum software engineering, with the _purpose_ of understanding what researchers' efforts have focused on, what needs further investigation, what has been achieved so far, and what the research gaps are. The _perspective_ is of both researchers and practitioners. The former are interested in having a unique source of information providing a comprehensive view of the current research in quantum software engineering. The latter are interested in understanding the current trends and technologies researchers produce that might be potentially transferred in practice. Our systematic mapping study has been conducted following the guidelines in [24; 25; 26]. In terms of reporting, we followed the ACM/SIGSOFT Empirical Standards1
Footnote 1: Available at [https://github.com/acmsigsoft/EmpiricalStandards](https://github.com/acmsigsoft/EmpiricalStandards). Given the nature of our study and the currently available standards, we followed the “General Standard” and “SystematicReviews” definitions and guidelines.
### Research Questions
We aimed to answer various research questions to achieve a holistic view of the state of quantum software engineering. To understand and identify possible gaps in QSE research, especially in terms of coverage of various aspects of SE and the degree of maturity achieved in these areas, we asked:
**Q. RQ\({}_{1}\) Current research trends in QSE**
**RQ\({}_{1.1}\)**: _How many and what kind of studies have been proposed in QSE?_
**RQ\({}_{1.2}\)**: _Which areas of software engineering have received the most attention in QSE?_
**RQ\({}_{1.3}\)**: _Which types of studies are most commonly proposed in the different areas of software engineering within QSE?_
Answering RQ\({}_{1.1}\) will allow us to assess the degree of maturity of the research: a prevalence of philosophical papers depicts a different level of maturity than that of evaluation research papers. The answer to RQ\({}_{1.2}\) will allow an understanding of possible gray areas or unexplored areas that need further attention
by the research community. By combining the first two insights, answering RQ\({}_{1.3}\) will allow assessing the maturity of research production in each area and evaluating the degree of development of the specific area.
To understand the research progress in QSE and identify possible research opportunities, we asked:
**Q. RQ\({}_{2}\) Achieved results and studied technologies**
**RQ\({}_{2.1}\)** What are the main results reported?
**RQ\({}_{2.2}\)** Which quantum computing tools/frameworks are most being studied?
Answering the first question will allow us to understand the main results reported in the literature, supplying a picture of what is being studied, what has been discovered, and what is still unknown. Answering the second question will provide information on the types of quantum computing tools/frameworks being studied to better understand the maturity of different implementations. This information can help us to understand the current state of quantum computing and its research landscape.
The Talavera Manifesto was introduced in 2020, yet quantum software had been discussed before then. The following research questions are posed to analyze the interest in this new discipline:
**Q. RQ\({}_{3}\) Evolution of QSE**
**RQ\({}_{3.1}\)** Are there contributions to QSE before the discipline was even born?
**RQ\({}_{3.2}\)** How has the research community's interest in this new discipline evolved?
Answering these questions can provide valuable insights into the historical and evolutionary context of QSE, which can be beneficial in several ways. First, understanding the contributions to QSE before the discipline was even born can help researchers appreciate the foundational ideas and concepts that underlie the field. Second, examining how the research community's interest in QSE has evolved over time can provide insights into the current state of the field and its trajectory. By understanding the factors that have driven the growth and development of QSE, researchers can identify areas that require further investment and attention to continue advancing the field.
To understand who the leading researchers involved in this discipline are and how they and their research groups interact, the following research questions are posed:
**Q. RQ\({}_{4}\) Authors and collaborations in QSE**
**RQ\({}_{4.1}\)** Who are the researchers most interested in QSE? How are they interconnected?
**RQ\({}_{4.2}\)** Given the various SE topics, how are these researchers distributed?
Answering these questions can be valuable because it can provide a comprehensive understanding of the current landscape of QSE research and the key players in the field. Knowing who the leading researchers are can help identify the most influential and significant work in QSE. Understanding how these researchers and their groups are interconnected can provide insight into how ideas and research are shared and disseminated within the discipline. Understanding the relationships and interactions within the QSE research community can also help identify potential barriers or challenges to collaboration and ways to overcome them.
To better understand whether research in QSE is emerging from niche topics and to understand which venues are most attractive, the following research question is posed:
**Q. RQ\({}_{5}\) Publication trends in QSE**
**RQ\({}_{5.1}\)** Which venues will most likely publish QSE articles outside thematic venues?
Answering this question can be valuable because it can provide insight into the reach and impact of QSE research beyond specialized workshops and conferences. Knowing which venues are most likely to publish
QSE articles can help identify the most influential and visible outlets for this type of research and inform strategies for publishing and disseminating work in QSE. Understanding which venues are most attractive to QSE researchers can also help identify trends and patterns in the field and inform decisions about where to submit work for publication. In general, understanding the venues that are most likely to publish QSE articles can help to establish the reach and impact of this type of research and inform strategies for publishing and disseminating work in the field. It must be noted that by _thematic venues_ we mean venues explicitly focused on quantum software engineering or quantum software development.
### Search Criteria
Guidelines [24; 25; 26] suggest that only the variables of population and intervention are necessary to conduct our mapping study. Comparison and outcome are unnecessary for a systematic mapping study because it is not a meta-analysis, randomized controlled trial, or systematic review [25; 26]. Instead, it is a preliminary study that provides an overview of the current state of research in a particular area. A systematic mapping study aims to identify gaps in the existing literature and provide a general understanding of the topics, the methods, and the results in a specific field [25]. Therefore, the emphasis is on identifying and categorizing the existing literature rather than comparing or determining outcomes.
We developed the search strings utilizing these two variables (i.e., population and intervention) as the primary keywords to narrow the scope of our research. In software engineering, the term _population_ (P) may refer to various subgroups within the field, such as specific software engineering roles, categories of software engineers, application areas, or industry groups. These populations may have unique characteristics or challenges relevant to the research being conducted [24]. In particular, the population in our context refers to papers focusing on quantum software engineering. In the context of software engineering, the term _intervention_ (I) refers to any method, tool, technology, or procedure that is implemented or used in the development, maintenance, or optimization of software systems [24]. In our mapping study, we did not limit our research to a single or specific intervention within the field of quantum software engineering. Instead, we sought to broadly examine the various interventions implemented or proposed within this field. This decision allowed us to gain a more comprehensive understanding of the current state of research in quantum software engineering and identify potential trends or gaps in the existing literature. By aggregating the keywords from the P and I and the research questions, we formulated the following research string:
("quantum" AND "software" AND ("engineering" OR "development" OR "requirements" OR "quality" OR "design"' OR "test" OR "maintenance" OR "management"))
The guidelines [24; 25; 26] suggest that employing IEEE and ACM, in addition to two indexing databases, would be sufficient for our mapping study. Therefore, we selected ACM Digital Library, IEEE Xplore, SCOPUS, and Web of Science. We applied our search query to each database, examining all fields. Table 1 depicts the number of studies obtained by applying the query to each database.
### Study Selection and Quality Assessment
The relevant papers were selected considering the title, abstract, and full-text reading. The inclusion and exclusion criteria were specified and discussed among all the authors [26]. We applied a think-aloud
\begin{table}
\begin{tabular}{l r} \hline \hline Database & Search Results \\ \hline ACM & 1,376 \\ IEEE & 1,991 \\ SCOPUS & 998 \\ Web of Science & 1,002 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The number of studies per database.
practice, describing the process of inclusion/exclusion for one study to align understanding. Based on this discussion, the level of agreement was determined, and the criteria were updated accordingly. After this, the final criteria were applied to the studies by the first author. The third author then reviewed the whole selection process. Articles satisfying at least one of the exclusion criteria listed in Table 2 were excluded. Consequently, articles that were not already excluded by meeting an exclusion criterion and satisfied at least one of the inclusion criteria listed in Table 2 were included.
Applying the whole approach resulted in a total of 76 selected resources. As a complementary research approach [26; 27], we performed both forward and backward snowballing --i.e., the selection of publications that cite (_forward snowballing_), or are cited (_backward snowballing_) by each selected study-- until no other unseen publication could be added, to guarantee the maximum coverage possible. After filtering them with the same selection criteria (Table 2), a total of 13 resources were added by the end of this procedure.
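The snowballing step can be summarized by the following sketch; the citation-lookup functions and the selection predicate are placeholders rather than an actual API, and the loop stops exactly when no unseen publication can be added.

```python
# Iterative forward/backward snowballing over the selected studies.
def snowball(seed_studies, cited_by, references_of, passes_selection_criteria):
    selected = set(seed_studies)
    frontier = set(seed_studies)
    while frontier:
        candidates = set()
        for study in frontier:
            candidates |= set(cited_by(study))        # forward snowballing
            candidates |= set(references_of(study))   # backward snowballing
        new = {c for c in candidates
               if c not in selected and passes_selection_criteria(c)}
        selected |= new
        frontier = new
    return selected
```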
The final step of the selection phase consists of assessing the quality of the extracted resources to limit the risk of bias and incorrect results [24; 25; 26]. Although Kitchenham _et al._[24] do not require systematic mapping studies to conduct a critical appraisal procedure, Petersen _et al._[26] recommend conducting a quality assessment without posing overly high requirements, so as not to exclude resources that might be relevant in the scope of a systematic mapping study. Hence, we developed the following quality assessment strategy. We defined a checklist that comprises the following questions:
1. Is the motivation of the paper clearly and explicitly reported?
2. Is the main outcome of the paper clearly and explicitly reported?
The following scores are assigned to each of the above questions: 1 if the answer to the question is _Yes, explicitly_, 0.5 if the answer is _Yes, but not explicitly reported_, and 0 if the answer is _Not Reported_. The scores of both questions are then summed. A primary study that achieves a minimum score of 1 is considered acceptable. Applying this quality assessment procedure led to the exclusion of two articles. To conclude, applying the whole procedure resulted in the acceptance of 87 primary studies.
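Expressed as code, the scoring rule amounts to the following (the answer labels are illustrative):

```python
# Quality-assessment rule: each checklist question scores 1, 0.5, or 0,
# and a study is kept if the summed score reaches 1.
SCORES = {"yes_explicit": 1.0, "yes_not_explicit": 0.5, "not_reported": 0.0}

def passes_quality_assessment(motivation_answer: str, outcome_answer: str) -> bool:
    return SCORES[motivation_answer] + SCORES[outcome_answer] >= 1.0

# e.g. passes_quality_assessment("yes_not_explicit", "yes_not_explicit")  -> True
#      passes_quality_assessment("not_reported", "yes_not_explicit")      -> False
```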
Figure 1: Representation of the study selection process. From the initial set of 3,204 non-duplicated primary studies, 3,128 articles were removed by applying the criteria on title and abstract initially, and on the full text afterward. By applying the snowballing, 13 articles were added (already filtered with inclusion and exclusion criteria). Finally, two articles were removed by applying the quality assessment criteria. Hence, the final set of included primary studies counts 87 articles.
### Data Collection and Analysis
We created an extraction form, shown in Table 3, to gather information from the primary studies. This template includes various data items and their corresponding values. It is possible to note that the rationale behind the choice of this gathered information is straightforward: each of the extracted data items corresponds to the phenomenon each research question aims to investigate. Table 3 clearly explains this in the description column. The process of extracting this data involved the first author collecting the information and the third author reviewing it for accuracy by comparing it to the original statements in each paper; this was done to mitigate the risk of subjectivity, as recommended by the guidelines [24; 25; 26].
Petersen _et al._[26] recommend using topic-independent classifications as much as possible. This classification was conducted for the SWEBOK Knowledge Areas [28] (depicted in Table 4), the research type (depicted in Table 5) [29; 25; 26], and the venue [26]. Nonetheless, for the result type, a topic-specific classification was needed. Hence, as suggested by Petersen _et al._[26], we used open coding [30] to organize the articles into categories and then counted the number of articles in each category. We labeled or assigned keywords to concepts found in the text during the open coding process, resulting in several open codes, which were then organized into a larger structure. In this process, we merged or renamed some of the codes representing categories [30]. Then, we used these categories to classify the papers. We applied this process to the abstracts of the articles. However, if the abstracts were not clear enough, we considered other parts of the paper (e.g., introduction and conclusion). The resulting classification is depicted in Table 6.
We employed descriptive statistics tools to analyze the gathered data, as suggested by the guidelines [24; 25; 26]. In particular, we relied on tables (with bibliographical references) to report absolute frequencies of the categorical variables (e.g., the SWEBOK Knowledge Area of interest of each resource) or bar charts (e.g., absolute number of papers per year). To represent co-occurrences of categorical variables, we employed heatmaps and pivot tables depicted as heatmaps (e.g., Knowledge Areas by Research Types). To show the evolution over time of the number of published resources, we employed line graphs (e.g., cumulative number of resources per publication year).
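For instance, the knowledge-area-by-research-type view of Figure 2 can be reproduced from the extraction form with a few lines of pandas and matplotlib; the column names below are illustrative and mirror the data items of Table 3.

```python
import pandas as pd
import matplotlib.pyplot as plt

# One row per primary study, with its knowledge area and research type.
studies = pd.DataFrame([
    {"swebok_ka": "Software Testing", "research_type": "Solution Proposal"},
    {"swebok_ka": "Software Design", "research_type": "Philosophical Paper"},
    # ... remaining primary studies
])

# Co-occurrence counts of knowledge area vs. research type.
pivot = pd.crosstab(studies["swebok_ka"], studies["research_type"])

plt.imshow(pivot.values, cmap="Blues")  # darker cells = more papers
plt.xticks(range(pivot.shape[1]), pivot.columns, rotation=45, ha="right")
plt.yticks(range(pivot.shape[0]), pivot.index)
plt.tight_layout()
plt.show()
```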
\begin{table}
\begin{tabular}{c c} \hline \hline
**Exclusion Criteria** & **Inclusion Criteria** \\ \hline Article not peer-reviewed. & Peer-reviewed article in English published not earlier than 2018. \\ Article not in English. & Article discussing or proposing SE techniques or practices applied to quantum software development or the quantum software lifecycle. \\ Article published before 2018. & Article discussing an empirical investigation on some socio-technical aspects of software engineering applied to quantum software development. \\ Conference articles extended in journals. & Article discussing an empirical investigation on some socio-technical aspects of software engineering applied to quantum software development. \\ Conference summary. & Secondary or tertiary study on the matter. \\ Article not already selected. & Article not already selected. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Study selection criteria applied in the study selection phase.
### Threats to Validity
The following discusses potential threats and limitations to the study's validity.
Descriptive Validity. The extent to which observations are accurately and objectively described is descriptive validity. Threats to descriptive validity are prevalent in qualitative and quantitative studies. As reported by the guidelines [24; 25; 26], a data collection form has been designed to support data recording to reduce this threat. As with the primary studies, the form objectified the data extraction process and allowed it to be revisited regularly. Moreover, as depicted by the guidelines [26], not only did we extract the research type and method (which were also fundamental for answering our research questions), but we also mainly applied topic-independent classifications. For topic-specific classification, i.e., for the main results of the primary studies, we applied open coding [30], as the guidelines recommend [26]. As a result, this threat is considered to be under control.
Theoretical Validity. The ability to capture what we intend to capture determines theoretical validity. Furthermore, confounding factors such as biases and subject selection play a significant role. In our case, this refers to the study identification (or sampling) and the data extraction and classification.
As Wohlin _et al._[27] pointed out, the selection of the studies could have threatened the validity of the study. To mitigate this risk, we supplemented the search with backward and forward snowball sampling [31] of all studies included by full-text reading. In addition, to increase the reliability of the inclusion and exclusion criteria, we applied the think-aloud protocol to the criteria defined and discussed among all the authors. The first author then conducted the selection process, which was afterward reviewed by the third author.
\begin{table}
\begin{tabular}{l l l} \hline \hline Data Item & Description & RQ \\ \hline Study ID & Integer & \\ Article Title & Name of the article. & \\ Research Type & Type of research as reported in the guidelines [25; 26]. A detailed & RQ\({}_{1.2}\), RQ\({}_{1.3}\) \\ & description is given in Table 5 & \\ SWEBOK KA & SWEBOK Knowledge Area [28] as representative of a SE topic. & RQ\({}_{1.1}\), RQ\({}_{1.3}\) \\ & A detailed description is given in Table 4 & \\ Result Type & Type of result achieved by the article. Classification depicted in & RQ\({}_{2.1}\), RQ\({}_{2.2}\) \\ & Table 6. & \\ Tool/Framework & Main quantum tool or framework that was the object of study in & RQ\({}_{2.2}\) \\ & the article. & \\ Year & Publication year of the article. & RQ\({}_{3.1}\), RQ\({}_{3.2}\) \\ Authors & List of authors of the article. & RQ\({}_{4.1}\), RQ\({}_{4.2}\) \\ Venue & Venue of publication of the article. & RQ\({}_{5.1}\) \\ Venue Type & Type of the publication venue, as depicted in the guidelines (Journal, Conference, Book, Magazine) [26]. & RQ\({}_{5.1}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Essential data elements collected from the resources, including study ID, article title, research type, research method, SWEBOK focus area, result type, main tool/framework, publication year, authors, and venue/venue type.
\begin{table}
\begin{tabular}{p{142.3pt} p{284.5pt}} \hline \hline Knowledge Area & Description \\ \hline Software Requirements & The Software Requirements Knowledge Area deals with gathering, analyzing, specifying, and validating the requirements of a software product. These activities are crucial for the success of software engineering projects and ensure that the product meets the needs and constraints of the real-world problems it is intended to solve. \\ Software Design & The Software Design KA involves defining a software system’s architecture, components, and behavior and creating a detailed plan for its construction based on software requirements analysis. The resulting software design provides a comprehensive blueprint for the software’s construction. \\ Software Construction & The Software Construction Knowledge Area involves detailed software creation through design, coding, testing, debugging, and verification to satisfy requirements and design constraints. It covers software construction fundamentals, management, technologies, practical considerations, and tools. \\ Software Testing & The Software Testing Knowledge Area involves evaluating and improving software quality by identifying defects through dynamic verification against expected behavior on a finite set of test cases. It covers software testing fundamentals, techniques, user interface testing and evaluation, test-related measures, and practical considerations. \\ Software Maintenance & The Software Maintenance Knowledge Area covers enhancing, adapting, and correcting existing software through predictive, adaptive, and corrective maintenance. It includes maintenance fundamentals such as categories and costs, key technical and management issues, cost estimation, and measurement. It also covers the maintenance process of software maintenance techniques such as program comprehension, re-engineering, reverse engineering, refactoring, software retirement, disaster recovery techniques, and maintenance tools. \\ Software Configuration Management & The Software Configuration Management Knowledge Area deals with identifying and controlling the configuration of software systems at different points in time and maintaining the integrity and traceability of the configuration throughout the software life cycle. It includes managing the SCM process, software configuration identification, control, status accounting and auditing, software release management and delivery, and software configuration management tools. \\ Software Engineering Management & The Software Engineering Process \\ Software Engineering Process & The Software Engineering Process \\ Software Quality & The Software Quality K1 covers software quality fundamentals, quality management processes, and practical considerations such as defect characterization and measurement, as well as software quality tools. \\ Software Engineering Professional Practice & Software engineering professional practice involves the professional and ethical practice of software engineering, including professionalism, codes of ethics, group dynamics, and communication skills. The Software Engineering Professional Practice KA covers topics such as professional conduct, software engineering standards, legal issues, working in teams, interacting with stakeholders, and dealing with multicultural environments. \\ \hline \hline \end{tabular}
\end{table}
Table 4: SWEBOK Knowledge Areas as described in literature [28]
Researcher bias is a threat regarding data extraction and classification. Following the guidelines [24; 25; 26], the third author assessed all extractions made by the first author to mitigate this threat. Nonetheless, since this step involves human judgment, the threat cannot be entirely excluded.
\begin{table}
\begin{tabular}{p{113.8pt} p{284.5pt}} \hline \hline Result Type & Description \\ \hline Empirical & These results generally, but not exclusively, come out from an empirical study. The result is not directly exploitable but sheds light on a phenomenon. This kind of result might come from both Validation and Evaluation research and from Experience, Opinion, and Philosophical papers. \\ Technique & This result comes from a technique to solve a particular issue. The technique is validated through an empirical study. \\ Tool & This result has the same characteristics of a technique but is released as an available tool. \\ Dataset & This result is a collection of useful data or similar. It can be exploited as a reference or for empirical studies. \\ Guidelines & This type of result is a collection of suggestions for a particular scenario, e.g., for developing a cloud-based quantum hybrid application. \\ Catalog & Similar to a dataset, but more abstract, it is a collection of useful references (e.g., data encoding patterns). \\ Position & This result concerns publications that provide no explicit result but formulate a position or opinion, such as philosophical papers. \\ \hline \hline \end{tabular}
\end{table}
Table 6: Classification of the result types.
\begin{table}
\begin{tabular}{p{113.8pt} p{284.5pt}} \hline \hline Category & Description \\ \hline Validation Research & Methods examined are unique and have not yet been used in practice. Experiments, or laboratory work, are an example. \\ Evaluation Research & Techniques are put into practice, and their effectiveness is assessed. That is, it is demonstrated how the technique is applied in practice (solution implementation) and what the consequences of the application are in terms of benefits and drawbacks (implementation evaluation). This also includes identifying industry problems. \\ Solution Proposal & A problem solution is proposed; the solution can be either novel or a significant extension of an existing technique. A small example or a good line of argumentation demonstrates the potential benefits and applicability of the solution. \\ Philosophical Paper & This kind of paper proposes a new way of viewing existing things by organizing the field into a taxonomy or conceptual framework. \\ Opinion Paper & This kind of paper expresses someone’s personal opinion on whether a particular technique is good or bad, or how things should be done. They make no use of related work or research methodologies. \\ Experience Paper & This kind of paper is written to describe the practical details of what was done and how it was accomplished, based on the personal experience of the author. \\ \hline \hline \end{tabular}
\end{table}
Table 5: Research type facet as depicted in literature [25; 26; 29]
\begin{table}
\begin{tabular}{l c l} \hline \hline Swebok Knowledge Area & Resources & References \\ \hline Software Testing & 22 & [R1, R2, R3, R4, R5, R6, R7, R8, \\ & & R9, R10, R11, R12, R13, R14, \\ & & R15, R16, R17, R18, R19, R20, \\ & & R21, R22] \\ Software Construction & 17 & [R23, R24, R25, R26, R27, R28, \\ & & R29, R30, R31, R32, R33, R34, \\ & & R35, R36, R37, R38, R39] \\ Software Engineering Models and Methods & 15 & [R40, R41, R42, R43, R44, R45, \\ & & R46, R47, R48, R49, R50, R51, \\ & & R52, R53, R54] \\ Software Design & 10 & [R55, R56, R57, R58, R59, R60, \\ & & R61, R62, R63, R64] \\ Software Quality & 9 & [R65, R66, R67, R68, R69, R70, \\ & & R71, R72, R73] \\ Software Maintenance & 7 & [R74, R75, R76, R77, R78, R79, \\ & & R80] \\ Software Engineering Professional Practice & 4 & [R81, R82, R83, R84] \\ Software Engineering Process & 3 & [R85, R86, R87] \\ \hline \hline \end{tabular}
\end{table}
Table 7: Number of resources per Swebok Knowledge Area. The references are listed in the Bibliography.
\begin{table}
\begin{tabular}{l c l} \hline \hline Research Type & Resources & References \\ \hline Solution Proposal & 32 & [R40, R1, R75, R65, R2, R4, R6, R25, R41, R67, R7, \\ & & R77, R33, R10, R11, R36, R13, R79, R14, R15, R17, \\ & & R70, R18, R37, R19, R20, R21, R22, R50, R51, R53, \\ & & R54] \\ Philosophical Papers & 22 & [R26, R42, R43, R29, R44, R45, R31, R46, R32, R12, \\ & & R58, R85, R86, R87, R49, R60, R73, R61, R52, R62, \\ & & R63, R64] \\ Validation Research & 15 & [R74, R81, R55, R76, R3, R5, R56, R57, R66, R28, \\ & & R83, R9, R30, R72, R39] \\ Experience Papers & 11 & [R23, R24, R8, R69, R78, R84, R34, R35, R71, R59, \\ & & R38] \\ Opinion Papers & 5 & [R27, R68, R47, R16, R48] \\ Evaluation Research & 1 & [R82] \\ \hline \hline \end{tabular}
\end{table}
Table 8: Resources by research type. The references are listed in the bibliography.
## 4 Analysis of the Results
The following section presents the results of our mapping study. Each of the following paragraphs aims to answer the main topics posed by our research questions.
### RQ1: Main Topics and Studies in QSE
Table 8 depicts the distribution of the analyzed resources by the employed research type. It is evident that Solution Proposal papers are the most common type of research, with a total of 32 papers across all focus areas. Philosophical papers are the second most common type of research, with a total of 22 resources. On the other hand, the Evaluation Research and Validation Research categories (which denote industrial and laboratory empirical studies, respectively) have fewer resources (1 and 15, respectively). Experience papers are also quite numerous (11), whilst Opinion papers appear in a small number (5).
Regarding the knowledge areas, whose distribution can be seen in Table 7, it can be noted that some focus areas have a higher number of publications than others. For instance, the focus area of Software Testing has the highest number of publications, with 22 papers across all research types. On the other hand, the software engineering process is the least investigated knowledge area, with just three resources. The other two most investigated knowledge areas are Software Construction and Software Engineering Models and Methods, with 17 and 15 resources, respectively. The other knowledge areas have 10 resources or fewer. It is worth mentioning that several knowledge areas, such as software requirements, configuration management, and software engineering management, have not been the subject of any published papers.
Figure 2 gives insights into the variation in the distribution of research types across the different focus areas. In the figure, the number of papers is represented by color darkness, with darker colors indicating a higher number. It is possible to see that Philosophical papers focus on Software Design, Software Engineering Models and Methods, and Software Construction, with six, seven, and four resources, respectively, but they are almost nonexistent in the other knowledge areas. On the other hand, validation research resources are more evenly distributed over all the knowledge areas, with two to three resources each, except for Software Engineering Models and Methods and Software Engineering Process, which have none. This phenomenon
Figure 2: Distribution of published papers by knowledge area and research type. The number of papers is represented by color darkness, with darker colors indicating a higher number of papers. Most papers focus on software testing, followed by software engineering models, methods, and construction. Solution proposals are the most common type of published paper, followed by philosophical papers. Empirical studies, represented by validation and evaluation research categories, are relatively rare.
of some focus areas having a relatively even distribution of papers across the different research types while others have a higher concentration of specific papers can also be observed in other cases. For example, the focus area of Software Quality has a relatively even distribution of papers across the different research types, with one to three resources for each of the research types considered (except for evaluation research). In contrast, the focus area of Software Testing has a higher concentration of Solution Proposal papers (16) and very few resources in the other research categories.
Table 9 depicts the result types achieved by the analyzed resources. The most frequent result is _Technique_, with 22 resources reporting it. _Empirical_ results follow closely, with 21 resources reporting them as the main achievement. These results come not only from purely empirical studies (i.e., Validation or Evaluation Research) but also from Experience, Opinion, and Philosophical papers. The same number of resources reports a _Position_ as the main result achieved. Mature tools are not frequent, with only 13 resources reporting a tool as the main result achieved. _Catalog_, _Dataset_, and _Guidelines_ are the least commonly achieved results, with seven, two, and one resource, respectively.
Concerning the quantum technologies on which the analyzed resources focus, details can be observed in Table 10. The results show that 40 resources did not focus on any specific technology, indicating that many were not interested in exploring a particular quantum technology in depth. On the other hand, 20 resources
\begin{table}
\begin{tabular}{l c l} \hline \hline Result Type & Resources & References \\ \hline Technique & 22 & [R4, R41, R43, R9, R11, R51, R52, R53, R40, R10, R13, R21, R22, R77, R15, R18, R19, R20, R80, R54, R79, R37] \\ Empirical & 21 & [R74, R81, R76, R3, R24, R56, R66, R28, R83, R72, R55, R82, R57, R42, R29, R69, R45, R58, R86, R39, R5] \\ Position & 21 & [R27, R68, R44, R31, R46, R32, R47, R84, R12, R85, R16, R48, R87, R63, R34, R35, R49, R30, R78, R8, R26] \\ Tool & 13 & [R1, R2, R6, R33, R14, R17, R23, R65, R36, R25, R7, R75, R50] \\ Catalog & 7 & [R67, R59, R73, R61, R62, R64, R60] \\ Dataset & 2 & [R70, R71] \\ Guidelines & 1 & [R38] \\ \hline \hline \end{tabular}
\end{table}
Table 9: Resources per result type.
focused on multiple quantum technologies, indicating that they were interested in exploring more than one technology. Of the specific technologies focused on by the resources, Qiskit was the most popular, with 14 resources focusing on it. Q# was the second most popular, with five resources focusing on it. Among the remaining technologies, QASM and Dwave were the only objects of study by more than one resource (three and two, respectively). Amazon Braket and QuantME, as well as a custom technology, were the least popular technologies, with just a single resource focusing on them.
Figure 3 correlates the information from Table 9 and Table 10 to gather further insights. It is also possible to observe that the Empirical, Technique, Tool, and Position result types are associated with a variety of quantum technologies. Specifically, empirical results mainly focused either on _Multiple_ or on _None_ specific technologies, with the sole exception of a single resource focusing on Q#. _Position_ results, on the other hand, are mainly technology independent, despite focusing on _Multiple_, Q#, QASM, and Dwave technologies as well. _Tool_ and _Technique_ result types show the greatest variety of quantum technologies involved: most of the resources focusing on specific technologies fall into these categories. _Guidelines_ results were only reported for QASM, while _Dataset_ results only for Qiskit. Finally, _Catalog_ results were reported mainly in a technology-agnostic manner, except for a single resource, which focused on QASM.
Most reported result types are techniques, empirical results, and positions, with the other types of results reported far less frequently (\(\mathbf{RQ_{2.1}}\)). Most resources focus either on multiple technologies or on no specific one. Qiskit, however, is the most commonly studied technology for techniques and tools. The other technologies were rarely studied on their own (\(\mathbf{RQ_{2.2}}\)).
### RQ\({}_{3}\): Evolution of the discipline over time
Figure 4(a) depicts the number of publications per year, while Figure 4(b) depicts the cumulative number over the years. These figures tell us that the number of papers published in this discipline has increased, with the first publications (2) dating back to 2018. Then we have six papers in 2019, 15 in 2020, and 34 in 2021.
\begin{table}
\begin{tabular}{l c l} \hline \hline Quantum Technology & Resources & References \\ \hline None & 40 & [R27, R68, R44, R31, R46, R32, R47, R84, R12, R85, R16, R48, R87, R63, R55, R82, R57, R42, R29, R69, R45, R58, R86, R39, R4, R41, R43, R9, R11, R51, R52, R53, R67, R59, R73, R61, R62, R64, R25, R7] \\ Multiple & 20 & [R74, R81, R76, R3, R24, R56, R66, R28, R83, R72, R77, R15, R18, R19, R34, R35, R49, R23, R65, R36] \\ Qiskit & 14 & [R1, R2, R6, R33, R14, R17, R40, R10, R13, R21, R22, R70, R71, R26] \\ Q\# & 5 & [R20, R80, R75, R8, R5] \\ QASM & 3 & [R38, R60, R78] \\ Dwave & 2 & [R30, R79] \\ Amazon Braket & 1 & [R37] \\ Custom & 1 & [R50] \\ QuantME & 1 & [R54] \\ \hline \hline \end{tabular}
\end{table}
Table 10: Resources per quantum technology.
The cumulative number of papers published in this field has also been increasing, with two papers published by the end of 2018, eight papers by the end of 2019, 23 papers by the end of 2020, and 57 papers by the end of 2021, reaching a total of 87 by November 2022.
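As a quick consistency check, the per-year counts quoted above reproduce these cumulative totals (the 2022 count of 30 is inferred from the quoted totals, since only papers up to November 2022 were considered):

```python
# Cumulative publication counts derived from the per-year counts in the text.
per_year = {2018: 2, 2019: 6, 2020: 15, 2021: 34, 2022: 30}  # 2022 inferred, partial
cumulative, total = {}, 0
for year in sorted(per_year):
    total += per_year[year]
    cumulative[year] = total
assert cumulative == {2018: 2, 2019: 8, 2020: 23, 2021: 57, 2022: 87}
```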
It is worth noting that the Talavera Manifesto, which established the foundations of this discipline, was published in 2020, roughly indicating that the field of quantum software engineering was born that
Figure 4: In both graphs, the dashed line indicates the year of publication of the Talavera Manifesto, which is considered to give birth to QSE. Both graphs show a history of contributions to the quantum software engineering field before the discipline’s formal establishment in 2020 and that the interest in this area among researchers has grown significantly, particularly between 2020 and 2021.
Figure 3: Distribution of the main reported result types alongside the targeted quantum technology. The most frequently reported result type is "Technique." Most resources target either multiple technologies or none in particular; among the specific technologies, Qiskit is the most commonly mentioned.
year. However, it is interesting to observe that some papers were published in this field before the official establishment of the discipline. In the years following 2020, there has been a rapid increase in published papers, more than doubling from 15 in 2020 to 34 in 2021. It is also important to note that we considered papers published up to November 2022 (when the research started); hence, data about the year 2022 might not include all the papers published in that year.
### RQ\({}_{4}\): Authors
Table 11 lists the authors who published more than three papers in QSE. Figure 5 depicts a social network among the most productive authors in the analyzed literature. The network nodes represent authors: the larger the node, the more papers the author has published. The network edges represent collaborations among authors, with edge thickness indicating the number of collaborations between two authors.
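To make the construction of this collaboration network concrete, the following is a minimal, illustrative sketch (not the study's actual analysis pipeline) using the networkx library; the paper records shown are placeholders, not data from our mapping.

```python
# Illustrative sketch of building the co-authorship network: node attribute
# "papers" (node size) counts publications per author, edge attribute "weight"
# (edge thickness) counts collaborations. The records below are placeholders.
from itertools import combinations
import networkx as nx

papers = [
    {"id": "R1", "authors": ["Author A", "Author B"]},
    {"id": "R2", "authors": ["Author A", "Author C"]},
    {"id": "R3", "authors": ["Author A", "Author B", "Author C"]},
]

G = nx.Graph()
for paper in papers:
    for author in paper["authors"]:
        if author not in G:
            G.add_node(author, papers=0)
        G.nodes[author]["papers"] += 1
    for a, b in combinations(sorted(set(paper["authors"])), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Collaboration clusters correspond to connected components of the graph.
clusters = list(nx.connected_components(G))
print(clusters)
```

Community-detection algorithms could be substituted for connected components to obtain finer-grained clusters.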
Through the network and the table, it is possible to pinpoint the authors that can be considered the most productive in the field. Mario Piattini has published 13 papers, followed by Ricardo Perez-Castillo with ten papers, and Shaukat Ali, Frank Leymann, and Tao Yue with nine papers each.
The most substantial collaboration cluster has been identified as the one formed by Mario Piattini and his collaborators Ricardo Perez-Castillo, Luis Jimenez-Navajas, Guido Peterssen, and Manuel Serrano. These authors have a strong collaboration history and the highest number of published papers.
\begin{table}
\begin{tabular}{l c l} \hline \hline Author & Resources & References \\ \hline Piattini Mario & 13 & [R23, R75, R41, R68, R44, R77, R46, R79, R39, R80, \\ & & R51, R73, R53] \\ Perez-Castillo Ricardo & 10 & [R75, R41, R44, R77, R46, R79, R39, R80, R51, R53] \\ Leymann Frank & 9 & [R29, R86, R59, R87, R60, R61, R62, R54, R64] \\ Ali Shaukat & 9 & [R6, R13, R58, R14, R15, R17, R21, R22, R49] \\ Yue Tao & 9 & [R6, R13, R58, R14, R15, R17, R21, R22, R49] \\ Barzen Johanna & 8 & [R29, R86, R59, R87, R61, R62, R54, R64] \\ Arcaini Paolo & 7 & [R6, R13, R14, R15, R17, R21, R22] \\ Wang Xinyi & 6 & [R6, R13, R15, R17, R21, R22] \\ Jimenez-Navajas Luis & 5 & [R41, R46, R79, R80, R51] \\ Salm Marie & 5 & [R59, R87, R62, R54, R64] \\ Abreu Rui & 5 & [R1, R2, R4, R5, R32] \\ Zhao Jianjun & 4 & [R76, R67, R70, R71] \\ Weder Benjamin & 4 & [R29, R86, R87, R54] \\ Weigold Manuela & 4 & [R59, R61, R62, R64] \\ \hline \hline \end{tabular}
\end{table}
Table 11: Authors by the number of published papers. Mario Piattini stands out as the most productive author, followed by Ricardo Perez-Castillo, Frank Leymann, Shaukat Ali, and Tao Yue.
The second strongest collaboration cluster comprises Shaukat Ali, Paolo Arcaini, Tao Yue, and Xinyi Wang. Although this cluster is smaller than the others, the strength of its collaborations is the highest among the identified clusters.
Frank Leymann also stands out for his many collaborations, co-authoring papers with Daniel Vietz, Johanna Barzen, Benjamin Weder, Manuela Weigold, and Marie Salm. These authors form a solid cluster both in terms of published papers and the strength of their collaborations.
Another collaboration cluster comprises Enrique Moguel, Javier Rojo, David Valencia, Jose Garcia-Alonso, Javier Berrocal, and Juan Manuel Murillo. However, this cluster shows weaker relationships among its authors and fewer papers. Another smaller but noticeable cluster comprises Jianjun Zhao, Zhongtao Miao, Pengzhan Zhao, and Shuhan Lan.
In addition to the strong collaboration clusters identified, it is worth noting that the data contains many smaller groups and one-off collaborations among the authors, which may not be visible in
Figure 5: Social network among the most productive authors in the field of Quantum Software Engineering, showing the collaborations among the authors and the number of papers they have published. The nodes’ size and the edges’ thickness represent the number of papers published by each author and the strength of the collaborations, respectively.
the provided figures but occur between authors who have published together occasionally.
Figure 6 presents a ranking of top authors in the field of quantum software engineering based on the number of publications related to the considered knowledge areas of the SWEBOK.
The top author, with the highest number of publications overall, is Mario Piattini, who, as previously shown, has published a total of 13 papers in the field. Piattini has contributed to a wide range of knowledge areas, with the most publications in Software Engineering Models and Methods (five papers) and Software Quality (two papers). Other notable authors include Xinyi Wang, with six publications primarily in the Software Engineering Process knowledge area, and Rui Abreu, who has published four papers in the Software Testing knowledge area.
Interestingly, almost all the authors have publications focused on a restricted number of knowledge areas. For example, Frank Leymann has published five papers in the Software Design knowledge area, suggesting a strong focus on this area of quantum software engineering, although he contributes to the Software Engineering Process area as well. In contrast, authors such as Shaukat Ali and Jianjun Zhao have not published in any knowledge areas except for Software Testing and Software Quality, respectively. Mario Piattini is the sole author who has published in more than two focus areas: software construction, software design, software engineering models and methods, and software quality. Besides him, of the 22 authors listed, only four published papers in two focus areas. Specifically, Johanna Barzen published in Software Design and Software Engineering Process, Luis Jimenez-Navajas in Software Engineering Models and Methods and Software Maintenance, Frank Leymann in Software Design and Software Engineering Process, and Ricardo Perez-Castillo in Software Design and Software Maintenance.
Figure 6: Top authors for each considered Knowledge Area. Mario Piattini is the leading author with the highest number of publications (13), covering many knowledge areas. Four authors published in multiple areas, while most focused on one.
### RQ\({}_{5}\): Venues
Figure 7 depicts the number of papers published in thematic and non-thematic venues over the years, while Figure 8 depicts the total number of publications by venue type. Table 12 shows the top venues by number of papers. Several insights can be drawn. First, the field is active and diverse, with research published in various venues. The data shows that papers have been published in conferences (41 papers), journals (23 papers), magazines (two papers), and books (one paper). This result suggests that the field is well-established and has a strong presence in multiple publication outlets.
Conferences are the preferred venue type and the only type that includes thematic venues. Among non-thematic conference venues, ICSEW and ASE have published the most papers in this field, with four each. In general, papers about quantum software engineering have appeared across many venues, indicating that the field is active and diverse, with research published in many different outlets.
Journals have also published many papers in this field, with 23 papers. Quantum Journal, Journal of Systems and Software, Advances in Engineering Software, APEQS, Software Quality Journal, and SummerSOC are among the venues that have published two papers in this field. Several venues have published only one paper in this field, including top-tier journals and conferences, such as IEEE Transactions on Software Engineering, ICSME, and SANER.
Figure 7: Number of papers published in both thematic and non-thematic venues over the years.
The remaining non-thematic venue types are magazines and books, which have published a
Figure 8: Papers by venue type.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Venue & Thematic & Articles & References \\ \hline International Workshop on Quantum Software Engineering (Q-SE) & Yes & 9 & [R3, R4, R57, R41, R66, R67, R42, R58, R71] \\ Quantum Software Engineering and Technology & Yes & 7 & [R43, R8, R68, R31, R46, R32, R47] \\ ICSE Workshops (ICSEW) & No & 4 & [R45, R30, R20, R48] \\ Automated Software Engineering (ASE) & No & 4 & [R14, R17, R70, R18] \\ Quantum & No & 3 & [R65, R34, R36] \\ The International Conference on the Quality of Information and Communications Technology (QUATIC) & Yes & 3 & [R80, R51, R73] \\ International Workshop on the Quantum Software Engineering and Programming (QANSWER) & Yes & 1 & [R84] \\ International Workshop on Quantum Software (QSW) & Yes & 1 & [R79] \\ \hline \hline \end{tabular}
\end{table}
Table 12: Top venues by the number of papers. The venues are divided into thematic (i.e., specialized in QSE) and non-thematic venues. All thematic venues are reported, while only non-thematic venues with more than two papers are listed.
relatively small number of papers in this field, i.e., two and one respectively.
Thematic workshops are essential for publishing new ideas in the field. Among all papers, 21 were published in thematic workshop conferences, representing a significant portion of the published papers. Among these, Q-SE and QSET have published the most papers, with nine and seven papers, respectively.
Although thematic venues represent a significant proportion of the publication outlets, articles in QSE have also been published in many non-thematic venues, with a preference for conferences over other venue types. The preferred non-thematic publication venues are ICSE Workshops and ASE, both having four publications each (**RQ\({}_{5.1}\)**).
## 5 Discussion and Insights for Future Research
### On the current research trends in QSE
The resource distribution analysis reveals several key insights. First, the high prevalence of Solution Proposal papers (33 resources across all focus areas) suggests a strong demand for practical solutions. This result implies that researchers are actively working to address the challenges practitioners face and develop innovative approaches that can help improve software engineering practices. The most challenging aspect is indeed testing and debugging quantum programs, owing to the intrinsic difficulties of the practice [21].
Second, the small number of resources focusing on software engineering processes (only three) highlights the need for further research. This suggests that there is still much to be discovered regarding best practices, processes, and methodologies for software engineering. More research is needed to advance our understanding of this area. Moreover, we saw that some knowledge areas in software engineering, such as software requirements, configuration management, and software engineering management, have been neglected and have not been the subject of any published papers.
In particular, the lack of software project management can have significant implications for the socio-technical aspects of quantum software development, such as communication and collaboration among developers, which we can assume come from different backgrounds [17; 13; 32]. Effective management is critical to ensure that resources are effectively utilized, goals are aligned, and stakeholders are effectively engaged.
Such areas could have been neglected because they are perceived as less technically challenging or trending. However, the issue could also be a prioritization choice of the researchers, who focus on solving more tangible and concrete problems, such as testing and debugging [21]. Nevertheless, more research is necessary to address the issues and ensure all aspects of software engineering are adequately covered.
Third, Validation research resources are distributed relatively evenly across focus areas, with two to three resources each, highlighting the significance of validation across software engineering topics. This type of research is important for providing evidence and supporting effective practices and solutions. At the time of writing, however, empirical research in QSE is limited compared to other activities like solution proposals. Additionally, the discipline almost completely lacks industrial empirical studies.
Finally, the unequal distribution of research types in different focus areas (e.g., a higher concentration of Solution Proposals in Software Testing) underscores the importance of considering each area's unique characteristics and needs when planning research activities to help ensure that research efforts are directed toward the areas that will have the most significant impact.
**Take Away Message.** Several areas require further research. These include software engineering processes, software project management, neglected knowledge areas such as software requirements and configuration management, and the need for more academic and industrial empirical studies in quantum software engineering. Additionally, it is important to consider the unique characteristics and needs of each area when planning research activities to ensure that efforts are directed toward those that will have the most significant impact.
### On the achieved results and studied technologies
In Section 4, we presented the main results achieved by the analyzed resources and the main quantum technologies they target. The results provide insights into the state of research in quantum software engineering and highlight the diversity of research in this field.
Technique was the most commonly reported result type, with 22 resources. As a newly born discipline, the field of quantum software engineering is still in its early stages of development. Thus, it is unsurprising that techniques are the most prominent result type in the resources analyzed. In the early stages of a field's development, researchers tend to focus on developing methods and frameworks to tackle specific challenges: it is essential to establish a solid foundation of techniques and tools that can be used to advance the field further. In quantum software engineering, the development of new techniques has been driven by the need to address various challenges in quantum software development [17; 13; 15].
The finding that a considerable number of resources focus on multiple or no specific quantum technologies can be attributed to the interdisciplinary nature of the field and the need to consider multiple technologies and perspectives. As noted, QSE is a relatively new field, and researchers and practitioners are still exploring the potential of different quantum technologies and their integration with classical computing systems [14; 17]. Focusing on multiple or no specific technologies may reflect an aim to develop technology-agnostic methods that can be used with various quantum technologies, as advocated by the Talavera Manifesto [14].
The resources that propose tools as the primary result in quantum software engineering are more focused on specific technologies because there is a practical need for concrete implementations in this field. As a newly-born discipline, there is still much room for exploration and experimentation. Tools provide a tangible means for researchers and practitioners to test and validate their ideas and for users to put the theories into practice. The concentration of resources focused on specific technologies like Qiskit indicates that this technology has become popular among the community, possibly due to its ease of use, user-friendly interface, and compatibility with various quantum computing hardware. This popularity will likely drive the development of many tools and techniques, further increasing its popularity and usability. In addition to tools addressing developers' needs, tools represent a fundamental requirement for conducting experiments on codebases, which explains the significant effort devoted to developing tools early in a newly-born discipline. In conclusion, the prominence of the tool result type and the concentration of resources focused on specific technologies highlights the practical and tangible nature of quantum software engineering and the importance of concrete implementations for advancing the field.
The low representation of result types such as Datasets, Guidelines, and Catalogs in the analyzed resources could be due to several reasons. Firstly, the field of quantum software engineering is relatively new and still in its early stages of development. As a result, there might not be a strong emphasis on building extensive datasets, guidelines, or catalogs to support quantum software development. Instead, the focus is on developing techniques and tools necessary for software engineering practices in the quantum domain. Additionally, collecting and preparing large datasets, guidelines, or catalogs in the quantum software engineering domain could be challenging because quantum technologies are still being developed and standardized. The associated data and information are constantly evolving, making it challenging to build robust and comprehensive datasets, guidelines, or catalogs that can be widely adopted and used by the research community. Finally, the low representation of Datasets, Guidelines, and Catalogs could also be attributed to the current state of research in quantum software engineering. Many researchers in this field might focus more on exploring new techniques and tools rather than investing time and resources into building datasets, guidelines, and catalogs. As the field of quantum software development grows, it's crucial to have comprehensive and current resources to support it. However, the lack of such resources shows a significant research gap in quantum software engineering. As the discipline matures, the need for these resources is likely to become more evident, and more researchers may push in this direction.
**Take Away Message.** Further research in quantum software engineering should focus on filling the research gap highlighted by the low representation of result types such as Datasets, Guidelines, and Catalogs in the analyzed resources. Building comprehensive and up-to-date resources to support quantum software development could be essential in advancing the field further. Additionally, exploring new techniques and tools to tackle specific challenges and investigating the potential of different quantum technologies and their integration with classical computing systems should remain a priority for researchers and practitioners.
### On the evolution of QSE
The field of quantum software engineering has experienced rapid growth in recent years, as evidenced by the significant increase in publications in the discipline. The establishment of the discipline, through the publication of the Talavera Manifesto in 2020, has provided a clear definition and set of guidelines for future research and development in this field. This milestone has been crucial in recognizing quantum software engineering as a distinct discipline and will continue to play an essential role in shaping its future.
Early publications in quantum software engineering indicate that researchers and practitioners have already explored and developed the concept of quantum software engineering, even before its formal recognition. This result highlights the growing importance and recognition of the field and its potential to impact the future significantly. The rapid increase in the number of publications, doubling from 2020 to 2021, suggests that there has been a significant increase in research and development in this field, and it is likely to continue to proliferate. This increase in publications also suggests that quantum software engineering attracts more attention from researchers, practitioners, and investors, who recognize its potential to drive innovation and impact in various industries and domains. While the data for 2022 only covers papers published until November of that year and may not be comprehensive, the significant increase in the number of publications offers valuable insights into the progress of quantum software engineering.
Based on the rapid growth and increasing interest in the field of quantum software engineering, it is likely that the discipline will continue to experience significant development in the near future. The growing recognition and importance of the field, the support and demand from various industries and domains, and the potential of quantum computing and quantum software to revolutionize the way we process and store data, optimize complex systems, and solve problems in various fields, will drive further investment and research in quantum software engineering. This discipline is expected to have significant growth and development in the near future and will play a crucial role in shaping technology and innovation.
**Take Away Message.** The rapid growth and increasing interest in the field of quantum software engineering suggest that it will continue to experience significant development in the near future. To further advance the discipline, future research could explore the development of best practices, standards, and tools for quantum software engineering and investigate the potential applications of quantum software in various industries and domains.
### On authors and collaborations in QSE
In Section 4, we identified the most active authors in QSE within the software engineering community, how they are related, and how they are distributed among the SWEBOK knowledge areas.
The social network analysis conducted on QSE authors has revealed several collaborative clusters with varying degrees of strength and cohesion. These clusters suggest that QSE researchers tend to form closely-knit communities, which could hinder the exchange of knowledge and ideas between different groups. To address this issue, initiatives to promote interdisciplinary collaboration among researchers in different collaboration clusters may be advantageous. These initiatives may include hosting joint conferences or workshops to facilitate the exchange of knowledge and ideas, potentially leading to novel insights and discoveries in the field of QSE. Such efforts may also aid in building more robust and diverse research communities, enabling the development of innovative approaches to QSE problems.
The analysis of the knowledge areas of QSE authors reveals that most researchers focus on specific areas rather than the entire spectrum of QSE knowledge. While some authors have published many papers, they
mostly focus on a few knowledge areas, such as software design and engineering processes. In contrast, other areas, such as software testing and maintenance, are less well-represented, which suggests that there is a need for more research on these underrepresented areas to create a more holistic understanding of quantum software engineering. In addition, very few authors publish in multiple areas suggesting that there may be a lack of interdisciplinary collaboration between quantum computing and software engineering experts. Encouraging more collaboration between these groups could help to create a more balanced research focus in quantum software engineering.
**Take Away Message.** The social network analysis of QSE authors highlights the need for initiatives to promote interdisciplinary collaboration between closely-knit research communities, which could lead to novel insights and discoveries in QSE and help build more robust and diverse research communities. Additionally, the analysis of QSE authors' coverage of SWEBOK knowledge areas reveals a need for more research on underrepresented areas such as software testing and maintenance. Encouraging collaboration between quantum computing and software engineering experts could help create a more balanced research focus in quantum software engineering.
### On the publication trends in QSE
The field of quantum software engineering is a growing and dynamic field that is seeing an increase in the number of publications in various venues. Despite the significant share represented by thematic venues, more recently, non-thematic venues were preferred. The preference for non-thematic venues over thematic ones may suggest the field is leaving its niche discipline status and permeating into well-established software engineering publication outlets. This result can be seen as a positive trend, indicating that quantum software engineering is gaining recognition and attention from the broader software engineering community. Reaching a broader audience through non-thematic venues may help to promote the field and increase its visibility, leading to more collaboration and exchange of ideas with other software engineering sub-fields. Furthermore, publishing in non-thematic venues may also increase the impact of the research, as it can reach a more significant number of readers who may not be familiar with the specific field. Moreover, publishing in non-thematic venues also allows QSE researchers to showcase their work to a wider audience and demonstrate their research's relevance and potential impact in the broader context of software engineering. It may also help attract new researchers and provide a platform for interdisciplinary collaboration. In conclusion, such evolution could give birth to new research directions and the advancement of existing ones and could further support the establishment of the field.
It is important to note that the data presented covers publications only up to November 2022 and may not accurately represent the trends in the field over the entire year. It could be beneficial to gather data on other factors that might influence the publication trends in this field, such as funding or resource availability.
**Take Away Message.** The increasing preference for non-thematic publication venues in quantum software engineering suggests that the field is gaining recognition and attention from the broader software engineering community, which could lead to new research directions and interdisciplinary collaborations. However, it would be beneficial to investigate other factors influencing publication trends, such as funding or resource availability. Furthermore, more research is needed to understand how the integration of quantum software engineering with the broader software engineering community can be leveraged further to support the growth and recognition of the field.
## 6 Conclusions
In this systematic mapping study, we aimed to provide a comprehensive overview of the current state of research in Quantum Software Engineering (QSE). Through an extensive analysis of 87 studies, our main contributions are: (i) a comprehensive synthesis and analysis of the research conducted in the field of quantum software engineering, which might be useful to researchers and practitioners to learn how software engineering and quantum computing have been combined so far; (ii) a systematic mapping of the venues
and research groups that are currently focusing on quantum software engineering, which may be useful to newcomers and researchers interested in starting their path in this research field to discover relevant venues and potential collaborators; (iii) a research roadmap that highlights the main directions that further research should pursue; (iv) an online appendix reporting all the material used to conduct our systematic mapping study, which researchers might use to build on top of our research and extend our findings2.
Footnote 2: Appendix available at [https://figshare.com/articles/online_resource/The_Quantum_Frontier_of_Software_Engineering_A_Systematic_Mapping_Study/22263448](https://figshare.com/articles/online_resource/The_Quantum_Frontier_of_Software_Engineering_A_Systematic_Mapping_Study/22263448)
Our results indicate that QSE research is still in its early stages, focusing on software testing and neglecting some knowledge areas, such as software engineering management. The most reported results types are techniques, empirical, and positions, with Qiskit being the most commonly studied technology. We also observed a growing interest in QSE within the research community, with a speedy increase in published papers between 2020 and 2021. Regarding researchers, we identified the most productive authors, the main collaboration clusters, and the distribution of researchers across different SE topics. This information can help to identify potential collaborators and promote further research in QSE. Finally, our study highlights the need for more empirical studies and a better distribution of research efforts across different SE topics. We also encourage more non-thematic publication venues to consider QSE papers to broaden the research community's knowledge and reach. Our study provides valuable insights into the development and evolution of the research community, contributing to the advancement and growth of QSE.
This systematic mapping study highlights potential future directions for research in Quantum Software Engineering (QSE), especially in the neglected areas of software engineering management practices and quantum software maintenance. Future research could focus on developing effective strategies and tools for managing the software development process and maintaining reliable and performant quantum software over time. Additionally, there is a need for more empirical studies and the development of appropriate metrics to provide a more rigorous empirical basis for future research in QSE. To address the neglect of software engineering management practices, future research could investigate effective strategies for managing the development process in the context of quantum computing. Such efforts could involve exploring the unique challenges and opportunities of quantum software engineering, identifying effective strategies for managing the development process, and evaluating the effectiveness of different software engineering practices and tools. Research on quantum software maintenance could also provide new insights and discoveries. Future studies could explore the challenges and opportunities of maintaining quantum software over time, including issues such as version control, bug fixes, and updates. Developing appropriate metrics could also help provide a more rigorous empirical basis for future research in QSE, allowing for the evaluation of code quality, performance, and reliability.
## Appendix
Our online appendix is available at the following link [https://figshare.com/articles/online_resource/The_Quantum_Frontier_of_Software_Engineering_A_Systematic_Mapping_Study/22263448](https://figshare.com/articles/online_resource/The_Quantum_Frontier_of_Software_Engineering_A_Systematic_Mapping_Study/22263448).
## Acknowledgement
This work has been partially supported by the EMELIOT national research project, funded by the MUR under the PRIN 2020 program (Contract 2020W3A5FY). Fabio gratefully acknowledges the support of the Swiss National Science Foundation through SNF Projects No. PZ00P2_186090.
|
2310.00164 | PRIME: Prioritizing Interpretability in Failure Mode Extraction | In this work, we study the challenge of providing human-understandable
descriptions for failure modes in trained image classification models. Existing
works address this problem by first identifying clusters (or directions) of
incorrectly classified samples in a latent space and then aiming to provide
human-understandable text descriptions for them. We observe that in some cases,
describing text does not match well with identified failure modes, partially
owing to the fact that shared interpretable attributes of failure modes may not
be captured using clustering in the feature space. To improve on these
shortcomings, we propose a novel approach that prioritizes interpretability in
this problem: we start by obtaining human-understandable concepts (tags) of
images in the dataset and then analyze the model's behavior based on the
presence or absence of combinations of these tags. Our method also ensures that
the tags describing a failure mode form a minimal set, avoiding redundant and
noisy descriptions. Through several experiments on different datasets, we show
that our method successfully identifies failure modes and generates
high-quality text descriptions associated with them. These results highlight
the importance of prioritizing interpretability in understanding model
failures. | Keivan Rezaei, Mehrdad Saberi, Mazda Moayeri, Soheil Feizi | 2023-09-29T22:00:12Z | http://arxiv.org/abs/2310.00164v2 | # PRIME: Prioritizing Interpretability in Failure Mode Extraction
###### Abstract
In this work, we study the challenge of providing human-understandable descriptions for failure modes in trained image classification models. Existing works address this problem by first identifying clusters (or directions) of incorrectly classified samples in a latent space and then aiming to provide human-understandable text descriptions for them. We observe that in some cases, describing text does not match well with identified failure modes, partially owing to the fact that shared interpretable attributes of failure modes may not be captured using clustering in the feature space. To improve on these shortcomings, we propose a novel approach that prioritizes interpretability in this problem: we start by obtaining human-understandable concepts (tags) of images in the dataset and then analyze the model's behavior based on the presence or absence of combinations of these tags. Our method also ensures that the tags describing a failure mode form a minimal set, avoiding redundant and noisy descriptions. Through several experiments on different datasets, we show that our method successfully identifies failure modes and generates high-quality text descriptions associated with them. These results highlight the importance of prioritizing interpretability in understanding model failures.
## 1 Introduction
A plethora of reasons (spurious correlations, imbalanced data, corrupted inputs, etc.) may lead a model to underperform on a specific subpopulation; we term this a _failure mode_. Failure modes are challenging to identify due to the black-box nature of deep models, and further, they are often obfuscated by common metrics like overall accuracy, leading to a false sense of security. However, these failures can have significant real-world consequences, such as perpetuating algorithmic bias (Buolamwini and Gebru, 2018) or unexpected catastrophic failure under distribution shift. Thus, the discovery and description of failure modes is crucial in building reliable AI, as we cannot fix a problem without first diagnosing it.
Detection of failure modes or biases within trained models has been studied in the literature. Prior work (Tsipras et al., 2020; Vasudevan et al., 2022) requires humans in the loop to get a sense of biases or subpopulations on which a model underperforms. Other methods (Sohoni et al., 2020; Nam et al., 2020; Kim et al., 2019; Liu et al., 2021) capture and intervene on hard inputs without providing _human-understandable_ descriptions of the challenging subpopulations. Providing human-understandable and _interpretable_ descriptions of failure modes not only enables humans to easily understand hard subpopulations, but also enables the use of text-to-image methods (Ramesh et al., 2022; Rombach et al., 2022; Saharia et al., 2022; Kattakinda et al., 2022) to generate relevant images corresponding to failure modes and thereby improve the model's accuracy on them.
Recent work (Eyuboglu et al., 2022; Jain et al., 2022; Kim et al., 2023; d'Eon et al., 2021) takes an important step in improving failure mode diagnosis by additionally finding natural language descriptions of detected failure modes, namely via leveraging modern vision-language models. These methodologies leverage the shared vision-language latent space, discerning intricate clusters or directions within this space, and subsequently attributing human-comprehensible descriptions to them.
However, questions have been raised regarding _the quality of the generated descriptions_, i.e., there is a need to ascertain whether the captions produced genuinely correspond to the images within the identified subpopulation. Additionally, it is essential to determine whether these images convey shared semantic attributes that can be effectively articulated through textual descriptions.
In this work, we investigate whether the latent representation space is a good proxy for semantic space. We consider two attributed datasets, CelebA (Liu et al., 2015) and CUB-200 (Wah et al., 2011), and observe that two samples sharing many semantic attributes may lie far apart in latent space, while nearby instances may not share any semantics (see Section 5.2). Hence, existing methods may suffer from relying on the representation space, as clusters and directions found in this space may contain images with different semantic attributes, leading to less coherent descriptions.
Inspired by this observation and the significance of faithful descriptions, we propose PRIME, in which we reverse the prevailing paradigm in failure mode diagnosis. That is, we put _interpretability first_: we start by obtaining human-understandable concepts (tags) for images in the dataset and examine the model's behavior conditioned on the presence or absence of combinations of those tags. In particular, we consider different groups of tags and check whether (1) there is a significant drop in the model's accuracy over images that contain all tags in the group and (2) the group is minimal, i.e., images having only some of those tags are easier for the model. When a group of tags satisfies both conditions, we identify it as a failure mode that can be effectively described by these tags. Figure 2 shows the overview of our approach and compares it with existing methods.
As an example, by running PRIME on a model trained on Living17, we find that images in which a **black** ape is **hanging** from a **tree branch** constitute a hard subpopulation on which the model's accuracy drops from \(86.23\%\) to \(41.88\%\). Crucially, the presence of all three of these tags is necessary, i.e., when we consider images that have only one or two of these three tags, the model's accuracy is higher. Figure 3 illustrates this failure mode. We further study the effect of the number of tags in Section 5.1.
To further validate our method, we examine data unseen during the computation of our failure mode descriptions. We observe that images matching a group of tags identified as a failure mode similarly cause the model to struggle. That is, we demonstrate the _generalizability_ of our failure modes, crucially, directly from the succinct text descriptions. Besides reflecting the quality of our descriptions, this allows us to bring in generative models. We validate this claim by generating hard images using some of the failure mode descriptions and comparing the model's accuracy on them with that on generated images corresponding to easier subpopulations.
Figure 1: Visualization of two detected failure modes of class “fox” on a model trained on Living17. Overall accuracy for images of class “fox” is \(81.96\%\). However, we identify two coherent subsets of images with significant accuracy drops: foxes standing in dry grass fields (\(47.83\%\) accuracy) and white foxes in a zoo (\(35.29\%\) accuracy). See Appendix A.3 for more examples.
Figure 2: PRIME illustration.
Finally, we show that PRIME produces better descriptions for detected failure modes in terms of _similarity_, _coherency_, and _specificity_ of descriptions, compared to prior work that does not prioritize interpretability. Evaluating description quality is challenging and typically requires human assessment, which can be impractical for extensive studies. To mitigate that, inspired by CLIPScore (Hessel et al., 2021), we present a suite of three automated metrics that harness vision-language models to evaluate the quality. These metrics quantify both the intra-group image-description similarity and coherency, while also assessing the specificity of descriptions to ensure they are confined to the designated image groups. We find that, by putting interpretability first and considering different combinations of tags (concepts), PRIME improves the quality of the generated descriptions.
**Summary of Contribution.**
1. We propose PRIME to extract and explain failure modes of a model in human-understandable terms by prioritizing interpretability.
2. Using a suite of three automated metrics to evaluate the quality of generated descriptions, we observe improvements in our method compared to strong baselines such as Eyuboglu et al. (2022) and Jain et al. (2022) on various datasets.
3. We advocate for the concept of putting interpretability first by providing empirical evidence derived from latent space analysis, suggesting that distance in latent space may at times be a misleading measure of semantic similarity for explaining model failure modes.
## 2 Literature Review
**Failure mode discovery.** The exploration of biases or challenging subpopulations within datasets, where a model's performance significantly declines, has been the subject of research in the field. Some recent methods for detecting such biases rely on human intervention, which can be time-consuming and impractical for routine usage. For instance, recent works (Tsipras et al., 2020; Vasudevan et al., 2022) depend on manual data exploration to identify failure modes in widely used datasets like ImageNet. Another line of work uses crowdsourcing (Nushi et al., 2018; Idrissi et al., 2022; Plumb et al., 2021) or simulators (Lecler et al., 2022) to label visual features, but these methods are expensive and not universally applicable. Some researchers utilize feature visualization (Engstrom et al., 2019; Olah et al., 2017) or saliency maps (Selvaraju et al., 2017; Adebayo et al., 2018) to gain insights into the model's failure, but these techniques provide information specific to
Figure 3: Although the appearance of tags "hang", "black", and "branch" individually lowers the model's accuracy, when all of them appear together in the images, the model's accuracy drops from \(86.23\%\) to \(41.88\%\).
individual samples and lack aggregated knowledge across the entire dataset. Some other approaches (Sohoni et al., 2020; Nam et al., 2020; Liu et al., 2021; Hashimoto et al., 2018) automatically identify failure modes of a model but do not provide human-understandable descriptions for them.
Recent efforts have been made to identify difficult subpopulations and assign human-understandable descriptions to them (Eyuboglu et al., 2022; Jain et al., 2022; Kim et al., 2023). DOMINO (Eyuboglu et al., 2022) uses the latent representation of images in a vision-language model to cluster difficult images and then assigns human-understandable descriptions to these clusters. (Jain et al., 2022) identifies a failure direction in the latent space and assigns description to images aligned with that direction. In (Kim et al., 2023), concepts are identified whose presence in images leads to a substantial decrease in the model's accuracy.
**Vision-Language and Tagging models.** Vision-language models have achieved remarkable success through pre-training on large-scale image-text pairs (Radford et al., 2021). These models can be used to access a shared vision-language space and to evaluate captions generated to describe images. Recently, Moayeri et al. (2023) and Li et al. (2023) bridged the modality gap, enabling off-the-shelf vision encoders to access the shared vision-language space. Furthermore, in our method, we utilize models capable of generating tags for input images (Huang et al., 2023; Zhang et al., 2023).
## 3 Extracting Failure Modes by Conditioning on Human-understandable Tags
Undesirable patterns or spurious correlations within the training dataset can lead to performance discrepancies in the learned models. For instance, in the Waterbirds dataset (Sagawa et al., 2019), images of landbirds are predominantly captured in terrestrial environments such as forests or grasslands. Consequently, a model can heavily rely on the background and make a prediction based on that. Conversely, the model may also rely on cues such as the presence of the ocean, sea, or boats to identify the input as waterbirds. This can result in performance drops for images where a waterbird is photographed on land or a landbird is photographed at sea. Detecting failure modes involves identifying groups of inputs where the model's performance significantly declines. While locating failure inputs is straightforward, _categorizing_ them into distinct groups characterized by _human-understandable concepts_ is a challenging task. To explain failure modes, we propose PRIME. Our method consists of two steps: (I) obtaining relevant tags for the images, and (II) identifying failure modes based on extracted tags.
### Obtaining Relevant Tags
We start our method by collecting concepts (tags) over the images in the dataset. For example, for a photo of a fox sampled from ImageNet (Deng et al., 2009), we may collect tags "orange", "grass", "trees", "walking", "zoo", and others. To generate these tags for each image in our dataset, we employ the state-of-the-art _Recognize Anything Model (RAM)_(Zhang et al., 2023; Huang et al., 2023), which is a model trained on image-caption pairs to generate tags for the input images. RAM makes a substantial step for large models in computer vision, demonstrating the zero-shot ability to recognize any common category with high accuracy.
Let \(\mathcal{D}\) be the set of all images. We obtain tags over all images of \(\mathcal{D}\). Then, we analyze the effect of tags on prediction in a class-wise manner. In fact, the effect of tags and patterns on the model's prediction depends on the main object in the images, e.g., the presence of water in the background improves performance on images labeled as waterbird while degrading performance on landbird images. For each class \(c\) in the dataset, we take the union of tags generated by the model over images of class \(c\). Subsequently, we eliminate tags that occur less frequently than a predetermined threshold. This threshold varies depending on the dataset size, and is specifically set at \(50\), \(100\), and \(200\) in our experimental scenarios. In this way, we remove rare (irrelevant) tags and obtain a set of tags \(T_{c}\) for each class \(c\) in the dataset, e.g., \(T_{c}=\{\text{``red''},\text{``orange''},\text{``snow''},\text{``grass''},...\}\).
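To make this step concrete, below is a minimal sketch of the per-class tag collection, assuming a wrapper function `get_tags(image)` around an off-the-shelf tagger such as RAM; the wrapper name, data structures, and threshold value are illustrative assumptions rather than the exact implementation.

```python
# Sketch of per-class tag collection: take the union of tags over images of a
# class, drop tags occurring fewer than `min_count` times, and keep the
# per-image tag sets restricted to the retained vocabulary T_c.
from collections import Counter

def collect_class_tags(images_of_class, get_tags, min_count=50):
    counts = Counter()
    image_tags = {}
    for img_id, image in images_of_class.items():
        tags = set(get_tags(image))   # tagger output, e.g. {"orange", "grass", ...}
        image_tags[img_id] = tags
        counts.update(tags)
    T_c = {t for t, n in counts.items() if n >= min_count}
    image_tags = {i: tags & T_c for i, tags in image_tags.items()}
    return T_c, image_tags
```

The retained vocabulary \(T_{c}\) and the per-image tag sets feed directly into the failure mode search described next.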
### Detecting Failure Modes
After obtaining tags, we mainly focus on tags whose presence in the image leads to a performance drop in the model. Indeed, for each class \(c\), we pick a subset \(S_{c}\subseteq T_{c}\) of tags and evaluate the
model's performance on the images of class \(c\) that include all tags in \(S_{c}\). We denote this set of images by \(I_{S_{c}}\). \(I_{S_{c}}\) is a coherent image set in the sense that its images share at least the tags in \(S_{c}\).
For \(I_{S_{c}}\) to be a _failure mode_, we require that the model's accuracy over images of \(I_{S_{c}}\) drops significantly, i.e., denoting the model's accuracy over images of \(I_{S_{c}}\) by \(A_{S_{c}}\) and the model's overall accuracy over images of class \(c\) by \(A_{c}\), we require \(A_{S_{c}}\leq A_{c}-a\). The parameter \(a\) plays a pivotal role in determining the severity of the failure modes we aim to detect. Importantly, we want the set of tags \(S_{c}\) to be minimal, i.e., none of its tags should be redundant. To ensure this, we require that removing any tag from \(S_{c}\) yields a relatively easier subpopulation. In essence, the presence of all tags in \(S_{c}\) is essential to obtain the hard subpopulation.
More precisely, let \(n\) be the cardinality of \(S_{c}\), i.e., \(n=|S_{c}|\). We require every tag \(t\in S_{c}\) to be necessary, i.e., if we remove a tag \(t\) from \(S_{c}\), the resulting group of images should become an easier subpopulation. More formally, for all \(t\in S_{c}\), \(A_{S_{c}\setminus t}\geq A_{S_{c}}+b_{n}\), where \(b_{2},b_{3},b_{4},...\) are hyperparameters that determine the degree of necessity of each tag in a group. We generally pick \(b_{2}=10\%\), \(b_{3}=5\%\), and \(b_{4}=2.5\%\) in our experiments. These values help us fine-tune the sensitivity to tag necessity and identify meaningful failure modes. Furthermore, we require a minimum of \(s\) samples in \(I_{S_{c}}\) for reliability and generalization. This ensures a sufficient number of instances where the model's performance drops, allowing us to confidently identify failure modes.
**How to obtain failure modes.** We generally use _Exhaustive Search_ to obtain failure modes. In exhaustive search, we systematically evaluate various combinations of tags to identify failure modes, employing a brute-force approach that covers all possible combinations of up to \(l\) tags. More precisely, we consider all subsets \(S_{c}\subseteq T_{c}\) such that \(|S_{c}|\leq l\) and evaluate the model's performance on \(I_{S_{c}}\). As mentioned above, we detect \(S_{c}\) as a failure mode if (1) \(|I_{S_{c}}|\geq s\), (2) the model's accuracy over \(I_{S_{c}}\) is at most \(A_{c}-a\), and (3) \(S_{c}\) is minimal, i.e., for all \(t\in S_{c}\), \(A_{S_{c}\setminus t}\geq A_{S_{c}}+b_{|S_{c}|}\). The final output of the method is all sets \(I_{S_{c}}\) that satisfy these conditions, and the **description** of each group consists of the **class name (\(c\))** and **all tags in \(S_{c}\)**.
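The following sketch spells out this search for groups of two to \(l\) tags (singleton groups can be handled analogously, since their minimality check reduces to the accuracy-drop condition); the data structures are the assumed outputs of the tagging step, not released code.

```python
# Sketch of the exhaustive search for one class c. `image_tags` maps image id
# -> set of tags (restricted to T_c); `correct` maps image id -> bool, whether
# the model classified that image correctly.
from itertools import combinations

def accuracy(ids, correct):
    return sum(correct[i] for i in ids) / len(ids)

def find_failure_modes(T_c, image_tags, correct, l=3, s=30, a=0.30,
                       b={2: 0.10, 3: 0.05, 4: 0.025}):
    A_c = accuracy(list(image_tags), correct)            # class-level accuracy
    modes = []
    for n in range(2, l + 1):
        for S in map(set, combinations(sorted(T_c), n)):
            I_S = [i for i, tags in image_tags.items() if S <= tags]
            if len(I_S) < s:                             # condition (1): group size
                continue
            A_S = accuracy(I_S, correct)
            if A_S > A_c - a:                            # condition (2): accuracy drop
                continue
            minimal = True                               # condition (3): minimality
            for t in S:
                I_minus = [i for i, tags in image_tags.items() if (S - {t}) <= tags]
                if accuracy(I_minus, correct) < A_S + b[n]:
                    minimal = False
                    break
            if minimal:
                modes.append((S, A_S, len(I_S)))         # tags, accuracy, group size
    return modes
```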
We note that the aforementioned method runs with a complexity of \(O\left(|T_{c}|^{l}|\mathcal{D}|\right)\). However, \(l\) is generally small; for a failure mode to be generalizable, we mainly consider cases where \(l\leq 4\). Furthermore, in our experiments over different datasets, \(|T_{c}|\approx 100\); thus, the exhaustive search is relatively efficient. For instance, running the exhaustive search (\(l=4,s=30,a=30\%\)) on the Living17 dataset, which has \(17\) classes and \(88400\) images, yields \(132\) failure modes in under \(5\) minutes. We refer to Appendix A.6 for more efficient algorithms.
**Experiments and Comparison to Existing Work.** We run experiments on models trained on Living17, NonLiving26, Entity13 (Santurkar et al., 2020), Waterbirds (Sagawa et al., 2019), and CelebA
Figure 4: Evaluating detected failure modes on unseen data. **(Left)**: we extract failure modes on the Living17 dataset using \(s=30\) and \(a=30\%\). \(132\) failure groups (over \(17\) classes) are detected, and around \(86.01\%\) of them exhibit at least a \(25\%\) drop in accuracy over unseen data, showing a significant degree of generalization. **(Right)**: the same results for the CelebA dataset, where the failure mode detection parameters are \(s=50\) and \(a=30\%\). Around \(79.31\%\) of failure modes show a drop of at least \(20\%\). The trend of \(y=x\) is seen in these plots.
(Liu et al., 2015) (for age classification). We refer to Appendix A.1 for model training details and the hyperparameters we used for failure mode detection, and to Appendix A.2 for the full results of our method on different datasets. We use two of the most recent failure mode detection approaches, DOMINO (Eyuboglu et al., 2022) and Distilling Failure Directions (Jain et al., 2022), as strong baselines and compare our approach with them.
## 4 Evaluation
Let \(\mathcal{D}\) be the dataset on which we detect failure modes of a trained model. The result of a human-understandable failure mode extractor on this dataset consists of sets of images, denoted as \(I_{1},I_{2},...,I_{m}\), along with corresponding descriptions, labeled as \(T_{1},T_{2},...,T_{m}\). Each set \(I_{j}\) comprises images that share similar attributes, leading to a noticeable drop in model accuracy. The number of detected failure modes, \(m\), is influenced by various hyperparameters; in our method, these are the minimum accuracy drop (\(a\)), the values \(b_{2},b_{3},...\), and the minimum group size (\(s\)).
One of the main goals of detecting failure modes in human-understandable terms is to generate high-quality captions for hard subpopulations. We note that these methods should also be evaluated in terms of coverage, i.e., what portion of failure inputs are covered, as well as the difficulty of the detected failure modes. All these methods extract hard subpopulations on which the model's accuracy significantly drops, and coverage depends on the dataset, the trained model, and the hyperparameters of the method; thus, we mainly focus on the generalizability of our approach and the quality of its descriptions.
### Generalization on Unseen Data
To evaluate the generalizability of the resulting descriptions, we take a dataset \(\mathcal{D}^{\prime}\) of unseen images and, for each of the captions \(T_{1},T_{2},...,T_{m}\), recover the relevant images in \(\mathcal{D}^{\prime}\), thus obtaining \(I^{\prime}_{1},I^{\prime}_{2},...,I^{\prime}_{m}\). That is, \(I^{\prime}_{j}\) includes the images in \(\mathcal{D}^{\prime}\) that are relevant to \(T_{j}\). If the captions indeed describe hard subpopulations, then we expect \(I^{\prime}_{1},I^{\prime}_{2},...,I^{\prime}_{m}\) to be hard as well. Additionally, since \(\mathcal{D}\) and \(\mathcal{D}^{\prime}\) share the same distribution, we anticipate the accuracy drop on \(I_{j}\) to closely resemble that on \(I^{\prime}_{j}\).
In our method, for a detected failure mode \(I_{S_{c}}\), we obtain \(I^{\prime}_{S_{c}}\) by collecting the images of \(\mathcal{D}^{\prime}\) that have all tags in \(S_{c}\). For example, if the appearance of tags "black", "snowing", and "forest" is detected as a failure mode for class "bear", we evaluate the model's performance on images of "bear" in \(\mathcal{D}^{\prime}\) that include those three tags, expecting a significant accuracy drop on those images. As seen in Figure 4, PRIME shows a good level of generalizability. We refer to Appendix A.5 for generalization on other datasets with respect to different hyperparameters (\(s\) and \(a\)). While all our detected failure modes generalize well, we observe stronger generalization when using more stringent hyperparameter values (high \(s\) and \(a\)), though this comes at the cost of detecting fewer modes.
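A minimal sketch of this check, assuming the same tag and correctness data structures as above, computed on the held-out set \(\mathcal{D}^{\prime}\):

```python
# Sketch of the generalization check for a detected failure mode S_c of class c:
# collect unseen images of that class containing all tags in S_c and compare the
# model's accuracy on them with its overall class accuracy on unseen data.
def generalization_gap(S_c, unseen_image_tags, unseen_correct):
    all_ids = list(unseen_image_tags)
    hard_ids = [i for i, tags in unseen_image_tags.items() if S_c <= tags]
    if not hard_ids:
        return None  # no unseen image matches this tag combination
    acc_class = sum(unseen_correct[i] for i in all_ids) / len(all_ids)
    acc_mode = sum(unseen_correct[i] for i in hard_ids) / len(hard_ids)
    return acc_class - acc_mode  # a large gap indicates the mode generalizes
```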
In contrast, existing methods (Eyuboglu et al., 2022; Jain et al., 2022) do not provide a direct way to assess generalization from text descriptions alone. See Appendix A.9 for more details.
### Generalization on Generated Data
We utilize language models to create descriptive captions for objects and tags associated with failure modes in images. These captions serve as prompts for text-to-image generative models, enabling the creation of artificial images that correspond to the identified failure modes. To achieve this, we adopt the methodology outlined in Vendrow et al. (2023), which leverages a denoising diffusion model (Ho et al., 2020; Rombach et al., 2022). We fine-tune the generative model on the Living17 dataset to generate images that match the distribution of our training data.
For each class in Living17 dataset, we employ our approach to identify two failure modes (hard subpopulations) and two success modes (easy subpopulations). We then employ ChatGPT1 to generate descriptive captions for these groups. Subsequently, we generate \(50\) images for each caption and assess the model's accuracy on these newly generated images. We refer to Appendix A.10 for more details on this experiment and average discrepancy in accuracy between the success modes and failure modes which further validates PRIME. Figure 5 provides both accuracy metrics and sample images for three hard and three easy subpopulations.
Footnote 1: ChatGPT 3.5, August 3 version
### Quality of Descriptions
In order to evaluate the quality of descriptions, we propose a suite of three complementary automated metrics that utilize vision-language models (such as CLIP) as a proxy to obtain image-text similarity (Hessel et al., 2021). Let \(t\) be the failure mode's description, \(f_{\text{text}}(t)\) denote the normalized embedding of text prompt \(t\) and \(f_{\text{vision}}(x)\) denote the normalized embedding of an image \(x\). The similarity of image \(x\) to this failure mode's description \(t\) is the dot product of image and text representation in shared vision-language space. More precisely, \(\text{sim}(x,t):=\langle f_{\text{vision}}(x),f_{\text{text}}(t)\rangle\).
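As an illustration, a minimal sketch of this similarity using a CLIP model from HuggingFace `transformers` could look as follows; the checkpoint name (`openai/clip-vit-base-patch16`) and the helper function are assumptions made for the example.

```python
# Sketch of sim(x, t): dot product of L2-normalized CLIP image and text embeddings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

def clip_similarity(image: Image.Image, text: str) -> float:
    """sim(x, t) in the shared vision-language space."""
    with torch.no_grad():
        img_inputs = processor(images=image, return_tensors="pt")
        txt_inputs = processor(text=[text], return_tensors="pt", padding=True)
        img_emb = model.get_image_features(**img_inputs)
        txt_emb = model.get_text_features(**txt_inputs)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)   # normalize image embedding
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)   # normalize text embedding
    return float((img_emb @ txt_emb.T).item())
```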
For a high-quality failure mode \(I_{j}\) and its description \(T_{j}\), we wish \(T_{j}\) to be similar to the images in \(I_{j}\); thus, we consider the average _similarity_ between the images in \(I_{j}\) and \(T_{j}\). We further expect a high level of _coherency_ among all images in \(I_{j}\), i.e., these images should all share multiple semantic attributes described by the text; thus, we wish the standard deviation of the similarity scores between the images in \(I_{j}\) and \(T_{j}\) to be low. Lastly, we expect the generated captions to be _specific_, capturing the essence of the failure mode without including distracting irrelevant information. That is, caption \(T_{j}\) should only describe images in \(I_{j}\) and not images outside of it. As a result, we consider the AUROC between the similarity scores of images inside the failure mode \((I_{j})\) and those of some randomly sampled images outside of it. We note that in existing methods as well as in our method, all images in a failure mode have the same label, so we sample from images outside of the group but with the same label.
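A sketch of how these three metrics could be computed, assuming the `sim` function from above and hypothetical lists `inside` (images in \(I_{j}\)) and `outside` (same-label images not in \(I_{j}\)):

```python
# Sketch of the three description-quality metrics for one failure mode.
import numpy as np
from sklearn.metrics import roc_auc_score

def description_quality(inside, outside, caption, sim):
    s_in = np.array([sim(x, caption) for x in inside])
    s_out = np.array([sim(x, caption) for x in outside])
    avg_similarity = s_in.mean()                 # similarity: higher is better
    coherency = s_in.std()                       # coherency: lower std is better
    # specificity: how well the score separates in-group from out-group images
    labels = np.concatenate([np.ones_like(s_in), np.zeros_like(s_out)])
    scores = np.concatenate([s_in, s_out])
    specificity = roc_auc_score(labels, scores)  # higher AUROC is better
    return avg_similarity, coherency, specificity
```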
In Figure 6, we show (1) the average similarity score, i.e., for all \(I_{j}\) and \(x\in I_{j}\), the mean of \(\text{sim}(x,T_{j})\), (2) the standard deviation of the similarity score, i.e., the standard deviation of \(\text{sim}(x,T_{j})\) for all \(I_{j}\) and \(x\in I_{j}\), and (3) the AUROC measuring how well the similarity score to \(T_{j}\) separates images inside a failure mode from randomly sampled images outside of it. As shown in Figure 6, PRIME improves over DOMINO (Eyuboglu et al., 2022) in terms of AUROC, average similarity, and standard deviation on different datasets. It is worth noting that this improvement comes even though DOMINO chooses the text caption for a failure mode _to maximize the similarity score in latent space_. We choose DOMINO's hyperparameters so that it detects roughly the same number of failure modes as PRIME. The results in Figure 6 show that PRIME provides better descriptions for the detected failure modes than DOMINO. In Appendix A.8 we provide more details on these experiments. Due to the limitations of Jain et al. (2022) for automatically generating captions, we cannot conduct extensive experiments on various datasets. More details and results on that can be found in Appendix A.11.
Figure 5: Accuracy of the model over \(50\) generated images corresponding to one of the success modes and failure modes for the classes “bear”, “parrot”, and “fox” from Living17. The accuracy gap shows that our method can identify hard and easy subpopulations. The images show that the extracted tags are capable of describing detailed images.
## 5 On Complexity of Failure Mode Explanations
We note that the main advantage of our method is its more faithful interpretation of failure modes. This is due to (1) putting interpretability first, i.e., we start by assigning interpretable tags to images and only then recognize hard subpopulations, and (2) considering combinations of several tags, which leads to a higher number of attributes (tags) in the description of each group.
### Do We Need to Consider Combination of Tags?
We shed light on the number of tags in the failure modes detected by our approach. Unlike Bias2Text (Kim et al., 2023), which finds biased concepts on which the model's behavior changes, we observe that sometimes only the joint appearance of several tags (concepts) leads to a severe failure mode. As an example, Table 1 shows that the appearance of all \(3\) tags together leads to a significant accuracy drop, while single tags and pairs of them show relatively better performance.
In PRIME, we emphasize the necessity of tags. Specifically, for any detected failure mode, the removal of any tag would result in an easier subpopulation. Consequently, failure modes with more tags not only provide more detailed description of their images but also characterize more challenging subpopulations. Table 2 presents the average accuracy drop on unseen images for groups identified by three tags, compared to the average accuracy drop on groups identified by subsets
| \(\#\) of Tags | Tags | Accuracy |
|---|---|---|
| 3 | hang; branch; black | \(41.18\%\) |
| 2 | hang; branch | \(56.33\%\) |
| 2 | hang; black | \(56.25\%\) |
| 2 | branch; black | \(54.67\%\) |
| 1 | hang | \(70.09\%\) |
| 1 | branch | \(69.23\%\) |
| 1 | black | \(73.21\%\) |

Table 1: Accuracy on unseen images (\(\mathcal{D}^{\prime}\)) for class “ape” when given tags appear in the inputs (see Table 4 for visualization).
| Dataset | 3 Tags | 2 Tags | 1 Tag |
|---|---|---|---|
| Entity13 | \(34.75\%\) | \(25.29\%\) | \(14.86\%\) |
| Living17 | \(26.82\%\) | \(17.13\%\) | \(8.18\%\) |
| Waterbirds | \(23.35\%\) | \(14.43\%\) | \(7.19\%\) |
| CelebA | \(23.25\%\) | \(16.84\%\) | \(9.02\%\) |

Table 2: Average accuracy drop on unseen images (\(\mathcal{D}^{\prime}\)) for failure modes identified by \(3\) tags, compared to groups of images having at least \(2\) of those tags or at least one of them.
Figure 6: The mean and standard deviation of similarity scores between images in failure modes and their respective descriptions, along with the AUROC measuring the similarity score between descriptions and images inside and outside of failure modes, demonstrate that our method outperforms DOMINO in descriptions it generates for detected failure modes across various datasets.
of two tags or even a single tag from those failure modes. These results clearly demonstrate that involving more tags leads to the detection of more challenging subpopulations.
### Clustering-Based Methods may Struggle in Generating Coherent Output
We empirically analyze the reverse direction of human-understandable failure mode detection. In recent work aiming at interpretable failure modes, the groups are found by clustering images in the latent space. Once a group of images or a direction in the latent space is found, these methods leverage the shared vision-language space to find the text that best describes the images inside the group.
We argue that these approaches, based on distance-based clusters in the representation space, may produce less detailed descriptions. This is because the representation space doesn't always align perfectly with the semantic space.
Even points close to each other in the feature space may differ in certain attributes, and conversely, points sharing human-understandable attributes may not be proximate in the feature space. Hence, these approaches cannot generate high-quality descriptions as their detected clusters in the representation space may contain images with other semantic attributes.
To empirically test this idea, we use two attribute-rich datasets: CelebA (Liu et al., 2015) and CUB-200 (Wah et al., 2011). CelebA features \(40\) human-understandable tags per image, while CUB-200, a dataset of birds, includes \(312\) tags per image, all referring to semantic attributes. We use CLIP ViT-B/16 (Radford et al., 2021) and examine its representation space in terms of the datasets' tags. Table 3 shows the statistics of the distance between points conditioned on the number of shared tags. As seen in Table 3, although the average distance between points with more common tags decreases slightly, the standard deviation of the distances remains high. In fact, points with many common tags can still be far away from each other. The last column in Table 3 shows the probability that the distance between two points with at least \(d\) shared tags is larger than the distance between two randomly sampled points. Even when two points share at least \(5\) tags, their distance is larger than that of two random points with probability \(0.34\). Thus, if we plant a failure mode on a group of images sharing a subset of tags, these clustering-based methods cannot find a group consisting of _only_ those images; they will inevitably include other irrelevant images, leading to an incoherent failure mode set and, consequently, a low-quality description. This can be observed in Appendix A.7 where we include DOMINO's output.
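The statistic in Table 3 can be reproduced, in spirit, with a short script. The following sketch assumes an \((n,d)\) matrix `feats` of CLIP image embeddings and an \((n,k)\) binary tag matrix `tags`, and samples random pairs rather than enumerating all of them; the sampling sizes are illustrative.

```python
# Sketch: distance statistics conditioned on the number of shared tags.
import numpy as np

def distance_stats(feats, tags, d_min, n_pairs=100_000, seed=0):
    """Mean/std of pairwise distances for pairs sharing >= d_min tags, and the
    probability that such a pair is farther apart than a random pair."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(feats), size=n_pairs)
    j = rng.integers(0, len(feats), size=n_pairs)
    keep = i != j
    i, j = i[keep], j[keep]
    shared = np.logical_and(tags[i], tags[j]).sum(axis=1)   # shared tags per pair
    dist = np.linalg.norm(feats[i] - feats[j], axis=1)
    sel = dist[shared >= d_min]                              # pairs with >= d_min shared tags
    rnd = rng.choice(dist, size=len(sel), replace=True)      # independently drawn random pairs
    return sel.mean(), sel.std(), (sel > rnd).mean()
```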
We also run another experiment to support our hypothesis that distance-based clustering methods cannot fully capture semantic similarities. We randomly pick an image \(x\) and find the \(N\) closest images to \(x\) in the feature space. Let \(C\) be the set of these images. We inspect this set in terms of the number of tags that commonly appear in its images, since recent methods (Eyuboglu et al., 2022; d'Eon et al., 2021; Jain et al., 2022) take the average embedding of the images in \(C\) and then assign a text to describe the images of \(C\). Table 5 shows the average number of tags that appear in at least \(\alpha N\) images of the set \(C\) (we sample many different points \(x\)). If the representation space were a good proxy for the semantic space, we would expect a large number of shared tags in the close proximity of point \(x\). At the same time, for the point \(x\), we find the maximum number of tags that appear in \(x\) and in at least \(N\) other images; this is the number of shared tags in the proximity of point \(x\), but measured in the semantic space. As shown in Table 5, the average number of shared tags in the semantic space is significantly larger than the average number of shared tags in the representation space.
## 6 Conclusions
In this study, drawing from the observation that current techniques in human-comprehensible failure mode detection sometimes produce incoherent descriptions, along with empirical findings related
| \# of shared tags \(\geq d\) | mean | standard deviation | Probability |
|---|---|---|---|
| \(d=0\) | 9.49 | 0.98 | 0.50 |
| \(d=1\) | 9.47 | 1.00 | 0.49 |
| \(d=3\) | 9.23 | 1.00 | 0.42 |
| \(d=5\) | 8.89 | 1.21 | 0.34 |
| \(d=7\) | 8.32 | 1.80 | 0.25 |

Table 3: Statistics of the distance between two points in CelebA conditioned on the number of shared tags. Distances are reported in the CLIP ViT-B/16 representation space. The last column shows the probability that the distance between two sampled images with at least \(d\) common tags is larger than that of two randomly sampled images.
to the latent space of vision-language models, we introduced PRIME, a novel approach that prioritizes interpretability in failure mode detection. Our results demonstrate that our method generates descriptions that are more similar, coherent, and specific in comparison to existing methods for the detected failure modes.
## Acknowledgments
This project was supported in part by a grant from an NSF CAREER AWARD 1942230, ONR YIP award N00014-22-1-2271, ARO's Early Career Program Award 310902-00001, Meta grant 23010098, HR00112090132 (DARPA/RED), HR001119S0026 (DARPA/GARD), Army Grant No. W911NF2120076, NIST 60NANB20D134, the NSF award CCF2212458, an Amazon Research Award and an award from Capital One.
|
2309.11122 | Hyperspectral Benchmark: Bridging the Gap between HSI Applications
through Comprehensive Dataset and Pretraining | Hyperspectral Imaging (HSI) serves as a non-destructive spatial spectroscopy
technique with a multitude of potential applications. However, a recurring
challenge lies in the limited size of the target datasets, impeding exhaustive
architecture search. Consequently, when venturing into novel applications,
reliance on established methodologies becomes commonplace, in the hope that
they exhibit favorable generalization characteristics. Regrettably, this
optimism is often unfounded due to the fine-tuned nature of models tailored to
specific HSI contexts.
To address this predicament, this study introduces an innovative benchmark
dataset encompassing three markedly distinct HSI applications: food inspection,
remote sensing, and recycling. This comprehensive dataset affords a finer
assessment of hyperspectral model capabilities. Moreover, this benchmark
facilitates an incisive examination of prevailing state-of-the-art techniques,
consequently fostering the evolution of superior methodologies.
Furthermore, the enhanced diversity inherent in the benchmark dataset
underpins the establishment of a pretraining pipeline for HSI. This pretraining
regimen serves to enhance the stability of training processes for larger
models. Additionally, a procedural framework is delineated, offering insights
into the handling of applications afflicted by limited target dataset sizes. | Hannah Frank, Leon Amadeus Varga, Andreas Zell | 2023-09-20T08:08:34Z | http://arxiv.org/abs/2309.11122v1 | Hyperspectral Benchmark: Bridging the Gap between HSI Applications through Comprehensive Dataset and Pretraining
###### Abstract
Hyperspectral Imaging (HSI) serves as a non-destructive spatial spectroscopy technique with a multitude of potential applications. However, a recurring challenge lies in the limited size of the target datasets, impeding exhaustive architecture search. Consequently, when venturing into novel applications, reliance on established methodologies becomes commonplace, in the hope that they exhibit favorable generalization characteristics. Regrettably, this optimism is often unfounded due to the fine-tuned nature of models tailored to specific HSI contexts.
To address this predicament, this study introduces an innovative benchmark dataset encompassing three markedly distinct HSI applications: food inspection, remote sensing, and recycling. This comprehensive dataset affords a finer assessment of hyperspectral model capabilities. Moreover, this benchmark facilitates an incisive examination of prevailing state-of-the-art techniques, consequently fostering the evolution of superior methodologies.
Furthermore, the enhanced diversity inherent in the benchmark dataset underpins the establishment of a pretraining pipeline for HSI. This pretraining regimen serves to enhance the stability of training processes for larger models. Additionally, a procedural framework is delineated, offering insights into the handling of applications afflicted by limited target dataset sizes.
**Keywords:** Hyperspectral Imaging, Data Set, Pretraining, Classification, Benchmark
## 1 Introduction and Related Work
Hyperspectral imaging (HSI) is a non-destructive measurement technique, which can be seen as spatial spectroscopy. The sensors cover a wide spectrum beyond the visible light and usually provide hundreds of narrow bands. The recording of a hyperspectral camera is called a hyperspectral cube, with two spatial dimensions (x and y) and a channel dimension (\(\lambda\)). Each pixel is a high-dimensional vector whose channel entries correspond to the intensity at a specific wavelength.
With the advantage of capturing additional spectral information, hyperspectral imaging technology became increasingly popular and has been applied in many fields: in remote sensing, where HSI has its origin (e.g., Chen et al (2014)), but also in medical applications (Lu and Fei, 2014), in agriculture ("precision farming") (Lu et al, 2020), in the recycling industry (e.g., Bonifazi et al (2019)), and in the food industry, where the HSI technique is used, for example, to measure the quality or ripeness of fruit (e.g., Girod et al (2008); Varga et al (2021)). The recordings and the important features of these applications differ widely.
The interpretation of a hyperspectral cube poses challenges for human experts, which lead to the development of various processing methods. However, these solutions are often tailored to specific applications, making adaptation to new scenarios difficult. Ideally, a method that demonstrates strong generalization capabilities and can be readily applied to diverse data sets and tasks without significant modifications would be desired. To validate existing approved approaches and novel models based on these criteria, a comprehensive benchmark testing different application scenarios is required.
In this work, we focus on the classification and segmentation tasks for hyperspectral recordings, as these are still the most common applications of HSI. More complex computer vision tasks, like object detection or object tracking, are not yet practicable with most current hyperspectral cameras. For classification, the task is to determine the class of a sample in a recording. In contrast, segmentation produces a segmentation mask for a recording, which indicates the class of each pixel.
In the early stages, classical machine learning (ML) approaches, like support vector machines (SVM) described by Cristianini and Shawe-Taylor (2010) were used for HSI classification. Other methods focused on feature extraction or dimensionality reduction, e.g., via principal component analysis (PCA) proposed by Pearson (1901) as a preprocessing step. These methods operate pixel-wise and therefore on the spectral dimension only. However, it has been shown that HSI classification performance is highly dependent on both, spatial and spectral information (e.g., Yan et al (2010)).
Nowadays, hyperspectral image data is mostly evaluated using deep learning. Deep learning models, like autoencoders (AE) (LeCun, 1987), recurrent neural networks (RNN) (Rumelhart et al, 1986), or convolutional neural networks (CNN) (Fukushima, 1980), have been applied for HSI classification (e.g., Lin et al (2013); Mou et al (2017); Yang et al (2018)).
Conventional CNNs use 2D convolutions that are applied over the spatial dimensions, but are not able to handle the spectral information. 3D convolutions, on the other hand, can extract the spectral and spatial information simultaneously, but at the cost of increased computational complexity. Recently, there have been attempts to combine both, 2D and 3D convolutions, in order to benefit from the spatial and spectral feature learning capability, by simultaneously overcoming this drawback. Such networks are then referred to as hybrid or 3D-2D CNNs (e.g., Roy et al (2020)).
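A minimal PyTorch sketch of this hybrid (3D-2D) idea is shown below; it is not a reimplementation of any specific published architecture, and the layer sizes and kernel shapes are illustrative assumptions.

```python
# Minimal sketch of a hybrid 3D-2D CNN for a hyperspectral cube (batch, 1, bands, H, W).
import torch
import torch.nn as nn

class Hybrid3D2D(nn.Module):
    def __init__(self, bands: int, n_classes: int):
        super().__init__()
        self.conv3d = nn.Sequential(                      # joint spectral-spatial features
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
        )
        self.conv2d = nn.Sequential(                      # purely spatial features
            nn.Conv2d(16 * bands, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                                 # x: (B, 1, bands, H, W)
        z = self.conv3d(x)                                # (B, 16, bands, H, W)
        z = z.flatten(1, 2)                               # fold the spectral dim into channels
        z = self.conv2d(z).flatten(1)                     # (B, 64)
        return self.head(z)

# e.g. Hybrid3D2D(bands=224, n_classes=16)(torch.randn(2, 1, 224, 63, 63))
```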
An alternative method for simultaneously leveraging spatial and spectral information is to apply 2D convolutions on transformed data. For instance, Chakraborty and Trehan (2021) employed wavelet transformations to incorporate both aspects.
Although CNNs have proven to be powerful in extracting spatial and locally contextual information, they fail to capture globally sequential information, especially long-term dependencies in the spectral dimension of the hyperspectral data. Vision transformers (ViTs), as described by Dosovitskiy et al (2020), are specifically designed for and very efficient at analyzing sequential data, mainly because of the self-attention mechanism. Thus, ViTs have also been successfully applied for HSI classification (e.g., He et al (2020); Hong et al (2022); Yang et al (2022)).
A couple of publications reviewed the current development of HSI models (Paoletti et al, 2019; Li et al, 2019; Ahmad et al, 2022). However, these reviews often lack the most recent developments, including vision transformers, and focus on remote sensing scenes only.
For the assessment of hyperspectral models, the established hyperspectral remote sensing scenes (HRSS) dataset published by Grana et al (2011) is the primary choice, despite its limitations (e.g., lack of a defined training-test split), which will be discussed in Section 2. This dataset comprises satellite recordings with the objective of ground classification as a segmentation task. Although there are different recorded scenarios, the application is very specific and differs widely from the much more common inline inspection.
Consequently, recent model developments' conclusions are not directly transferable to other hyperspectral applications.
To address this, we introduce a comprehensive framework to evaluate and compare various approaches and models for different hyperspectral imaging (HSI) classification tasks. The benchmark (see Section 2) consists of three different data sets, provides implementations of 23 models, and allows a simple integration of additional ones. Through the establishment of consistent training and evaluation protocols, we facilitate equitable model comparisons and enhance result reproducibility in the field of HSI.
Furthermore, leveraging the proposed framework, we conduct an in-depth analysis of current state-of-the-art models for HSI, yielding valuable insights, as presented in Section 3.3.
Finally, we explore the technique of pretraining, where models are initially trained on large datasets with disparate applications, and subsequently fine-tuned on the target dataset, which is typically smaller. This pretraining process stabilizes training and leads to notably improved results for color images (see, e.g., Krizhevsky et al. (2012)). Similar results were produced for the HRSS data set with different scenes (Lee et al., 2022; Windrim et al., 2018). Still, this is very specific for the remote sensing use case. In Section 4, we leverage the proposed benchmark datasets to evaluate pretraining's efficacy for various hyperspectral applications and highlight essential considerations for its successful implementation.
## 2 Proposed Benchmark
As discussed in the introduction, the current data sets serve well for developing models tailored to specific applications. However, this approach becomes limiting when attempting to address entirely new applications, as the fine-tuned models struggle to generalize effectively to unseen scenarios.
The primary objective of the proposed benchmark is to overcome this limitation by amalgamating data sets from multiple applications. By doing so, we aim to evaluate the performance of models across diverse hyperspectral imaging (HSI) applications, providing a comprehensive measure of their overall ability to handle hyperspectral recordings.
This section presents a detailed description of the published benchmark. It comprises three carefully chosen data sets, selected based on their widespread usage within the community, substantial size, and diverse application requirements.
The entire benchmark data set has a size of approximately 250 Gigabytes, and we have made it easily accessible for download through the provided download script at [https://github.com/cogsys-tuebingen/hsi_benchmark](https://github.com/cogsys-tuebingen/hsi_benchmark). Further, a PyTorch framework is provided, which allows reproducing our results and straightforward implementation of additional models.
### Hyperspectral Remote Sensing Scenes
M. Grana et al. collected satellite recordings and provide these as the hyperspectral remote sensing scenes (HRSS) data set (Grana et al, 2011). This
Figure 1: Indian Pines hyperspectral recording from the Hyperspectral Remote Sensing Scenes (HRSS) data set. (a) depicts the average of the 200 bands. (b) illustrates the corresponding ground truth segmentation mask.
data set is currently the most established data set for the evaluation of hyperspectral models (Paoletti et al, 2019; Li et al, 2019; Ahmad et al, 2022; Roy et al, 2020; Chakraborty and Trehan, 2021; He et al, 2020; Hong et al, 2022; Yang et al, 2022).
Within the HRSS data set, several hyperspectral recordings are available for conducting segmentation tasks, with each recording being evaluated independently. Each of these recordings comes accompanied by a ground truth segmentation mask, facilitating precise evaluation. However, it is worth noting that the different recordings within the data set do not share the same classes, so it is not straightforward to combine all recordings in a single evaluation.
For the benchmark, we select the three most frequently used recordings.
* _Indian Pines_: The Indian Pines scene (shown in Fig. 1) contains two-thirds agriculture, and one-third forest or other natural perennial vegetation. The task is to distinguish crop types. The corrected version without the noisy water absorption bands is used. The recording contains 16 classes. (\(x=145;y=145;\lambda=220\), AVIRIS sensor)
* _Salinas_: The Salinas scene shows the Salinas Valley in California. Again, different crop types should be detected. The corrected version of the Salinas data set is used. It contains 16 classes. (\(x=512;y=217;\lambda=224\), AVIRIS sensor)
* _Pavia University_: The recording Pavia University shows the university of Pavia, Italy. It covers nine classes, which represent the nature of the ground (e.g. asphalt, bitumen, soil). (\(x=610;y=340;\lambda=102\), ROSIS sensor)
For detailed specifications of the sensors, refer to Table 1.
Despite the widespread usage of the HRSS data set, it lacks an established training-validation-test split. Oftentimes, only the training-test ratio is reported, leading to complexities in reproducing experiments and making fair model comparisons. The absence of a standardized partitioning scheme hinders the ability to assess model performance consistently and hampers the reproducibility of results. As a consequence, researchers encounter challenges in accurately replicating experiments and drawing meaningful comparisons between different models. Establishing a well-defined training-validation-test split is crucial for enhancing the rigor and reliability of research outcomes and fostering advancements in this area.
To solve this issue, we define fixed training-validation-test splits independent of the train-test ratios and with balanced classes (see Appendix A.2). The train-test ratio defines the fraction of the image used for training. We define two train-test ratios (10 % and 30 % of the pixels for training, and the remaining pixels for testing), as these are the most commonly used. Although evaluations involving smaller train-test ratios are conducted, they are considered supplementary and are not part of the main evaluation. By employing fixed splits and accounting for class balance, we ensure consistent reproducibility and foster robust comparisons between different models.
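The following sketch illustrates how such a fixed, class-balanced pixel split can be drawn with a fixed random seed. It is only meant to convey the idea; the actual split files are shipped with the benchmark repository.

```python
# Illustrative sketch of a fixed, class-balanced pixel split; `gt` is the 2-D ground-truth
# mask of a scene, with 0 marking unlabeled pixels.
import numpy as np

def fixed_split(gt: np.ndarray, train_ratio: float = 0.1, seed: int = 0):
    rng = np.random.default_rng(seed)          # fixed seed -> reproducible split
    train_idx, test_idx = [], []
    for c in np.unique(gt):
        if c == 0:
            continue                           # skip unlabeled pixels
        idx = np.flatnonzero(gt.ravel() == c)
        rng.shuffle(idx)
        n_train = max(1, int(round(train_ratio * len(idx))))
        train_idx.extend(idx[:n_train])        # per-class share -> balanced classes
        test_idx.extend(idx[n_train:])
    return np.array(train_idx), np.array(test_idx)
```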
Figure 2: An avocado recording of the DeepHS Fruit data set. (a) depicts the average of the bands. (b) shows the corresponding ground truth labels.
### DeepHS Fruit
DeepHS Fruit version 2 (Varga et al, 2021, 2023a) presents a classification data set tailored for fruit ripeness prediction through hyperspectral imaging. The data set comprises approximately 5000 recordings of individual fruit specimens belonging to five distinct fruit types, namely avocado, kaki, kiwi, mango, and papaya. To ensure high accuracy and precision, the recordings were conducted under controlled laboratory conditions using three advanced hyperspectral cameras.
Specifically, two cameras, namely the Specim FX10 and Corning microHSI 410 Vis-NIR Hyperspectral Sensor, were utilized to record high-resolution images, capturing spectral information within the visible range and the lower near-infrared range. In addition, a near-infrared camera, the Innospec RedEye, was employed to acquire complementary data, further enhancing the data set's richness and reliability. For detailed specifications of the cameras, refer to Table 1.
The inclusion of hyperspectral cubes, along with elaborately measured ground truth labels, offers valuable insights into fruit ripeness. The data set's labels encompass three key attributes per recording, namely:
* **Ripeness**, encompassing the distinct stages of unripe, ripe, and overripe fruit.
* **Sweetness**, characterized by discerning between not sweet, sweet, and overly sweet fruit.
* **Firmness**, judiciously classified as too firm, perfect, or too soft.
This data set provides a robust foundation for objectwise classification tasks with three distinct categories, thereby serving as a benchmark for research and application endeavors within the domain of fruit ripeness prediction using hyperspectral imaging. The original publication's definition of training-validation-test sets has been adopted for consistency and comparability.
### DeepHS Debris
The DeepHS Debris data set is a contribution within this publication. Similar to the DeepHS Fruit data set, it was recorded under laboratory conditions. The data set encompasses both an objectwise classification task and a segmentation task, with the primary objective of distinguishing components of construction waste. Over a hundred samples of five common debris types (asphalt, brick, ceramic, concrete and tile) were recorded.
The acquisition was performed with two hyperspectral cameras (Specim FX10 and Corning microHSI 410 Vis-NIR Hyperspectral Sensor). Both record the visible range and the lower near-infrared (up to 1000 nm) and produced high-resolution images of the debris under consideration. Again, refer to Table 1 for detailed specifications of the cameras.
In contrast to the previously mentioned data sets, the DeepHS Debris data set uniquely offers both an objectwise classification track and a segmentation track. This allows the evaluation of models for both use-cases. Both tracks are evaluated separately. To ensure consistency and comparability in research endeavors, a fixed training-validation-test split has been defined for this data set.
### DeepHS Benchmark
The proposed hyperspectral benchmark was created by amalgamating the abovementioned three distinct data sets, each representing significant and emerging applications of hyperspectral imaging (HSI): remote sensing, food inspection, and
Figure 3: A brick recording of the DeepHS Debris data set. (a) depicts the average of the bands. (b) shows the corresponding segmentation mask. Additionally, the ground-truth label is ‘brick’ as only a single object is shown in the recording.
recycling. The inclusion of these diverse applications in the benchmark ensures its relevance and importance in the field.
Each selected data set possesses unique characteristics and demands specific requirements from models. The HRSS data set focuses on segmentation, emphasizing the significance of texture features. In contrast, DeepHS Fruit revolves around objectwise classification, where spectral identification takes precedence. Lastly, DeepHS Debris allows both objectwise classification and pixelwise segmentation, encompassing the tasks of the other data sets. Despite their dissimilarities, all three data sets carry equal weight in our evaluation.
Through the integration of these data sets, the benchmark offers a comprehensive assessment of models' performance in hyperspectral applications. This evaluation enables us to make meaningful observations about the models' generalizability and provides valuable insights for future advancements in model development.
In Appendix A.1, a comprehensive outline of the dataset configurations employed for the benchmark is furnished. The table therein not only enumerates the specific configurations but also furnishes the dimensions of the training, validation, and test sets corresponding to each configuration.
By introducing this benchmark, we facilitate the pretraining of hyperspectral models, a practice widely employed in neural networks for color image applications (see, e.g. Krizhevsky et al. (2012)). Traditionally, large data sets are utilized for pretraining, allowing models to assimilate crucial features beforehand. Consequently, fine-tuning on a smaller target data set, specific to the actual application, becomes more straightforward. While some publications have utilized the HRSS data set for pretraining and fine-tuning exclusively within the realm of remote sensing recordings (Lee et al., 2022; Windrim et al., 2018), this approach lacks versatility for other applications. This limitation and the usage of the proposed benchmark for pretraining is explored further in Section 4.
## 3 Experiments
In this section, the baseline experiments for the proposed data sets are described. After introducing the selected models with their specific training parameters, the experiment setups with the training procedures are presented. Finally, the results of the experiments are discussed and conclusions helping future model developments are provided.
### Models
A selection of diverse models was chosen to encompass both state-of-the-art techniques for hyperspectral image classification and contemporary approaches for handling the hyperspectral cube. The complete list of models can be found in Table 2 and will be elaborated upon in the subsequent paragraphs. Unless otherwise specified, the default configuration suggested in the original publications of each model was employed.
As visualized in Tab. 2, the 23 selected models can be assigned to eight different groups, which are based on fundamentally different techniques to handle the hyperspectral data.
The first category comprises classical machine learning techniques, namely support vector machine (SVM) (Cristianini and Shawe-Taylor, 2010) and partial least-squares discriminant analysis (PLS-DA) (Barker and Rayens, 2003). The SVM separates input data using hyperplanes and can handle non-linearly separable spaces by
| Camera | Wavelength range | Application | Data set |
|---|---|---|---|
| AVIRIS sensor | \(400-2500\,nm\) (224 bands) | Satellite | HRSS - Indian Pines and Salinas |
| ROSIS sensor | \(430-860\,nm\) (115 bands) | Satellite | HRSS - University of Pavia |
| Corning microHSI 410 | \(408-901\,nm\) (249 bands) | Inline | DeepHS Fruit and Debris |
| Innospec RedEye | \(920-1730\,nm\) (252 bands) | Inline | DeepHS Fruit |
| Specim FX10 | \(400-1000\,nm\) (224 bands) | Inline | DeepHS Fruit and Debris |

Table 1: List of all used hyperspectral cameras and their specifications.
employing the kernel trick. In our case, we utilize the radial-basis function kernel for its robustness. PLS-DA, similar to principal component regression (PCR) proposed by Kendall (1957), aims to establish a relationship between the input and the ground truth by identifying the multidimensional direction in the input space that best aligns with the maximum multidimensional variance direction in the output space.
The second category comprises two straightforward neural network architectures that make pixel-based decisions. The first is the multilayer perceptron (MLP) (Paoletti et al, 2019), which is a simple feed-forward neural network with two layers. The second is the recurrent neural network (RNN) (Rumelhart et al, 1986), which utilizes recurrent connections to incorporate spectral information.
| Model name | Type | Spatial | Input | Trainable parameters |
|---|---|---|---|---|
| SVM (Cristianini and Shawe-Taylor, 2010) | Classical ML | ✗ | PCA(10) | - |
| PLS-DA (Barker and Rayens, 2003) | Classical ML | ✗ | Raw | - |
| MLP (Paoletti et al, 2019) | Basic neural networks | ✗ | Raw | 29,000 |
| RNN (Rumelhart et al, 1986) | Basic neural networks | ✗ | Raw | 27,000 |
| 1D CNN (Paoletti et al, 2019) | Convolutional neural networks | ✗ | Raw | 73,000 |
| 2D CNN (Paoletti et al, 2019) | Convolutional neural networks | ✓ | PCA(40) | 7,500,000 |
| 2D CNN (spatial) (Paoletti et al, 2019) | Convolutional neural networks | ✓ | Mean | 7,500,000 |
| 2D CNN (spectral) | Convolutional neural networks | ✗ | Raw | 7,500,000 |
| 3D CNN (Paoletti et al, 2019) | Convolutional neural networks | ✓ | PCA(40) | 29,000,000 |
| Gabor CNN (Ghamisi et al, 2018) | CNNs + Filter | ✓ | PCA(3) | 7,400,000 |
| EMP CNN (Ghamisi et al, 2018) | CNNs + Filter | ✓ | PCA(3) | 7,500,000 |
| ResNet-18 (He et al, 2016) | ResNet networks | ✓ | Raw | 12,000,000 |
| ResNet-152 (He et al, 2016) | ResNet networks | ✓ | Raw | 59,000,000 |
| ResNet-18+HyveConv | ResNet networks | ✓ | Raw | 11,000,000 |
| ResNet-152+HyveConv | ResNet networks | ✓ | Raw | 58,000,000 |
| DeepHS-Net (Varga et al, 2021) | DeepHS networks | ✓ | Raw | 31,000 |
| DeepHS-Net+HyveConv (Varga et al, 2023b) | DeepHS networks | ✓ | Raw | 17,000 |
| DeepHS-Hybrid-Net (Varga et al, 2023a) | DeepHS networks | ✓ | Raw | 1,300,000 |
| SpectralNET (Chakraborty and Trehan, 2021) | State-of-the-art for HRSS | ✓ | Raw | 8,300,000 |
| HybridSN (Roy et al, 2020) | State-of-the-art for HRSS | ✓ | PCA(30) | 50,000,000 |
| Attention-based CNN (Lorenzo et al, 2020) | Attention-based approaches | ✗ | Raw | 2,000,000 |
| SpectralFormer (Hong et al, 2022) | Attention-based approaches | ✓ | Raw | 1,000,000 |
| HiT (Yang et al, 2022) | Attention-based approaches | ✓ | Raw | 59,000,000 |

Table 2: Model overview: these models are used for the analysis and provide baseline results for the proposed benchmark.
The following category combines basic convolutional neural networks with 1D, 2D and 3D kernels, allowing for the integration of spatial and spectral information. The 1D convolutional neural network (1D CNN) by Paoletti et al (2019) employs 1D convolution layers only along the spectral dimension of each pixel. The 2D convolutional neural network (2D CNN) by Paoletti et al (2019) convolves the input in the spatial dimension using 2D convolutional layers, while combining the spectral data in the fully connected head. And finally, the 3D convolutional neural network (3D CNN) by Paoletti et al (2019) employs 3D convolutional layers that operate on all three dimensions of the hyperspectral cube. The difference in size of the models for the different approaches is obvious.
Furthermore, two additional configurations are evaluated using the 2D CNN architecture. The "spatial" configuration solely focuses on spatial features by reducing the \(\lambda\) dimension to a single component, such as taking the mean of all channels. In contrast, the "spectral" configuration exclusively utilizes spectral features from the center pixels without considering spatial context. This configuration utilizes the same information as the 1D CNN model.
The fourth category includes models that combine preprocessing filters with convolutional neural networks. One such model is the Gabor CNN (Ghamisi et al, 2018), which applies a spatial Gabor filter to preprocess the hyperspectral input. The Gabor filter enhances textural information by exploring a higher-dimensional Gabor space. Another model in this category is the extended morphological profiles convolutional neural network (EMP-CNN) (Ghamisi et al, 2018), which utilizes mathematical morphology operators such as opening and closing to standardize the input. Both of these methods are beneficial when textural information is crucial and have demonstrated successful performance on HRSS data sets. Due to the complexity of the filter calculation, these models are applied to a PCA-reduced input.
ResNet proposed by He et al (2016) is a widely used architecture commonly employed for color image data. Its key feature is the inclusion of skip connections, also known as shortcut connections, which enhance interconnectivity between layers at varying depths. This architecture is available in different layer configurations, and for our experiments, we utilize ResNet-18 with 18 layers and ResNet-152 with 152 layers. The latter represents a larger-scale CNN architecture.
In addition to these, modified versions of the ResNet networks are evaluated in our study. We replace the initial convolutional layers of the architectures with HyveConv layers, as proposed by Varga et al (2023). This replacement not only reduces the number of trainable parameters to some extent, but more importantly, it ensures camera-agnostic models. This camera-agnostic capability is essential for training on recordings from different cameras, which will also be utilized for pretraining (Section 4).
The next category encompasses the DeepHS-Net architectures, which have been specifically optimized for ripeness classification of fruit using hyperspectral imaging (HSI). DeepHS-Net (Varga et al, 2021) is a 2D convolutional neural network designed for efficient performance on small hyperspectral data sets. DeepHS-Hybrid-Net (Varga et al, 2023), on the other hand, combines 3D and 2D convolutions, making it a hybrid model. By leveraging both 3D and 2D convolutions, the network can reduce the number of parameters compared to a fully 3D CNN while still retaining the ability to convolve along the \(\lambda\) dimension. Further, this category contains a DeepHS-Net with a HyveConv layer in the first layer, as proposed by Varga et al (2023).
The next category includes two state-of-the-art methods specifically developed for the HRSS data set. One of these methods is SpectralNET Chakraborty and Trehan (2021), which utilizes wavelet transformations to conduct convolutions in both the spatial and spectral dimensions. The second method, HybridSN (Roy et al, 2020), is a hybrid architecture that combines 3D convolutions for spatial-spectral operations with 2D convolutions for purely spatial operations. It is worth noting that HybridSN has a significantly larger number of trainable parameters compared to DeepHS-Hybrid-Net. As a remark, the noted trainable parameters for HybridSN in Table 2 are larger than in the original publication, as we increase the input patch size for all models (described in Sec. 3.2).
The final category represents the latest trend in computer vision, which is attention-based methods, particularly in the form of vision transformers. Three models were selected in this category, optimized specifically for the HRSS data set.
The first is an attention-based CNN proposed by Lorenzo et al (2020), which employs an attention mechanism solely for the spectral dimension without utilizing spatial context. SpectralFormer, introduced by Hong et al (2022), is an early adopter of the vision transformer approach for hyperspectral recordings. It utilizes an attention-based method with spatial context, incorporating skip connections for a more flexible backbone. HiT, proposed by Yang et al (2022), is a vision transformer model that includes two key components: 3-D convolution projection modules and convolution permutators to capture subtle spatial-spectral discrepancies. It is important to note that the vision transformer methods have higher complexity compared to other approaches, and they are still in their early stages of development.
In summary, a diverse range of models has been presented as baselines for the proposed benchmark. The models can be categorized based
| Model | Type | HRSS | Fruit | Debris | Overall | Rank. |
|---|---|---|---|---|---|---|
| SVM | Classical ML | 78.85 % | 47.10 % | 52.57 % | 59.50 % | 16 |
| PLS-DA | Classical ML | 66.61 % | 51.22 % | 38.56 % | 52.13 % | 21 |
| MLP | Basic neural networks | 71.18 % | 44.54 % | 50.72 % | 55.48 % | 22 |
| RNN | Basic neural networks | 69.61 % | 41.72 % | 52.50 % | 54.61 % | 23 |
| 1D CNN | Convolutional neural networks | 91.04 % | 51.30 % | 63.13 % | 68.49 % | 11 |
| 2D CNN | Convolutional neural networks | 99.71 % | 54.42 % | 77.39 % | 77.17 % | 2 |
| 2D CNN (spatial) | Convolutional neural networks | 99.69 % | 44.85 % | 54.23 % | 66.26 % | 13 |
| 2D CNN (spectral) | Convolutional neural networks | 86.73 % | 49.27 % | 50.47 % | 62.15 % | 15 |
| 3D CNN | Convolutional neural networks | _99.73 %_ | 56.06 % | **87.56 %** | **81.12 %** | **1** |
| Gabor CNN | CNNs + Filter | **99.75 %** | 52.57 % | 66.43 % | 72.92 % | 5 |
| EMP CNN | CNNs + Filter | 99.54 % | 52.76 % | 61.87 % | 71.39 % | 8 |
| ResNet-18 | ResNet networks | 99.52 % | 49.05 % | 59.56 % | 69.38 % | 9 |
| ResNet-152 | ResNet networks | 96.27 % | 47.00 % | 29.30 % | 57.52 % | 19 |
| ResNet-18+HyveConv | ResNet networks | 99.67 % | 51.43 % | 67.12 % | 72.74 % | 6 |
| ResNet-152+HyveConv | ResNet networks | 97.22 % | 42.66 % | 46.91 % | 62.26 % | 18 |
| DeepHS-Net | DeepHS networks | 98.33 % | **58.28 %** | 75.32 % | 77.31 % | 4 |
| DeepHS-Net+HyveConv | DeepHS networks | 98.53 % | _57.57 %_ | 75.22 % | 77.11 % | 3 |
| DeepHS-Hybrid-Net | DeepHS networks | 95.36 % | 55.01 % | _82.14 %_ | _77.50 %_ | 7 |
| SpectralNET | State-of-the-art for HRSS | 98.38 % | 49.25 % | 46.33 % | 64.65 % | 14 |
| HybridSN | State-of-the-art for HRSS | 97.54 % | 48.74 % | 73.85 % | 73.38 % | 10 |
| Attention-based CNN | Attention-based approaches | 89.93 % | 44.88 % | 50.18 % | 61.66 % | 20 |
| SpectralFormer | Attention-based approaches | 96.25 % | 41.71 % | 53.24 % | 63.74 % | 17 |
| HiT | Attention-based approaches | 98.47 % | 48.16 % | 59.23 % | 68.62 % | 12 |

Table 3: Overall classification accuracy for the individual models and the three different data sets. In addition, the ranking based on the average ranking on the individual data sets is given. **Bold** highlights the best and _italics_ the second-best accuracy per data set as well as overall.
on their handling of the input data. The majority of models utilize the normalized hyperspectral cube as the input data. In some cases, a dimension reduction technique is applied to reduce the \(\lambda\) dimension. Principal component analysis (PCA), which reduces dimensions based on the variance, is a commonly used method for this purpose and is recommended for several selected models (refer to Table 2).
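For reference, reducing the spectral dimension of a cube with PCA, e.g., PCA(30) as used for HybridSN, can be sketched as follows; the helper name and the per-scene fitting are illustrative assumptions.

```python
# Sketch of PCA along the spectral dimension of a cube of shape (x, y, bands).
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)                   # one spectrum per row
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    return reduced.reshape(h, w, n_components)         # reduced cube, e.g. PCA(30)
```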
### Training Procedure
In order to enable a fair comparison between all of these models, we homogenized the training procedure as far as possible.
For each of the three benchmark data sets, a fixed train-val-test split was used (see Sec. 2), and the class sizes within the categories were balanced. Also, we used a standardized input image size across data sets and models. For objectwise classification, the whole image was resized to \(128\times 128\) pixels, while for patchwise classification, we used patches of size 63 pixels, in combination with a dilation of one for the HRSS data sets and 30 for the DeepHS Debris data set. For testing, all available pixels were used (dilation 1).
Using three different seeds each, we trained every classifier model on a total of 36 combinations of data set, classification task, and sensor.
In all cases, the model parameters were optimized with Adam (Kingma and Ba, 2015), using a learning rate of 0.01, which was stepwise decreased during training. Cross-entropy loss (George Cybenko, 1999) was used as the loss function. We trained for 50 epochs and used a checkpoint callback and early stopping based on the validation loss (Prechelt, 1998). A batch size of 32 was chosen. The training data was augmented using random flipping, random rotation and random cut, each with a probability of 50%, and random cropping with 10% probability. For individual model-specific exceptions, see Appendix, Tab. B2.
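A sketch of this shared training setup in PyTorch is given below. The exact scheduler milestones and early-stopping patience are assumptions; the remaining values (Adam with learning rate 0.01, cross-entropy loss, 50 epochs, batch size 32, best-validation checkpointing) follow the description above.

```python
# Sketch of the homogenized training loop (augmentations omitted for brevity).
import torch
from torch import nn, optim

def evaluate(model, loader, criterion, device="cpu"):
    """Mean validation loss of `model` over `loader`."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += criterion(model(x.to(device)), y.to(device)).item() * len(y)
            n += len(y)
    return total / max(n, 1)

def train(model, train_loader, val_loader, epochs=50, patience=10, device="cpu"):
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.01)
    # "stepwise decreased" learning rate; the milestone epochs are an assumption
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 35], gamma=0.1)
    best_val, best_state, bad_epochs = float("inf"), None, 0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:            # data loaders built with batch size 32
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()
        val_loss = evaluate(model, val_loader, criterion, device)
        if val_loss < best_val:              # checkpoint on best validation loss
            best_val, bad_epochs = val_loss, 0
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
        else:
            bad_epochs += 1
            if bad_epochs >= patience:       # early stopping
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```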
### Results
In this section, we present a comprehensive evaluation of the selected models, as outlined in Section 3.1, in the context of the proposed benchmark described in Section 2. Moreover, we analyze several aspects of hyperspectral image classification and highlight notable insights.
Tab. 3 shows the overall classification accuracy for the individual models and the three different data sets. Highlighted are the top two accuracies for each of the data sets and overall, respectively.
Classical ML methods, such as the SVM, as well as simple neural networks, like the MLP and RNN, perform rather poorly across all data sets.
Figure 4: Overall classification accuracy versus size (number of parameters) of the individual models. The gray area contains all models which are not considering the spatial information.
The convolutional neural networks are optimized for the structure of image data. Among these, the 3D CNN emerges as the foremost performer, showcasing remarkable accuracy, particularly within the debris dataset. Notably, even the 2D CNN exhibits commendable performance. By exclusively feeding spatial or spectral data into the 2D CNN architecture, it becomes feasible to discern the significance of each feature in the decision-making process. For instance, in the context of the HRSS dataset, the spatial context emerges as a crucial determinant. A more intricate exploration of this phenomenon is undertaken in subsequent sections for an in-depth understanding.
Convolutional neural networks enhanced by specialized filters such as Gabor CNN or EMP CNN exhibit enhanced performance when applied to the HRSS dataset. However, despite this advantage, they fall short in effectively processing spectral features and consequently do not secure the top-performer position. Furthermore, their inference process tends to be computationally expensive.
The performance of deep convolutional neural networks, exemplified by the ResNet family, is relatively modest. Notably, the ResNet-152 network, while powerful, encounters challenges when dealing with diminutive hyperspectral datasets. Addressing this concern is a focal point of the subsequent section (Section 4), where pretraining methods come to the fore as a solution.
While the DeepHS-Net family was meticulously crafted to excel in the fruit dataset domain, delivering remarkable performance, its prowess on the HRSS dataset is more reserved. Nevertheless, it's important to emphasize that the overall performance remains consistently stable across various datasets.
The state-of-the-art models employed in our experiment exhibit commendable performance on the HRSS dataset. However, their performance falls short when applied to the other datasets. It is crucial to underscore that these models are evaluated with the same, fixed training-validation-test split for the first time.
The attention-based approaches did not show a significant improvement in comparison to the other models. Our experiments show that transformers are not necessarily better than the (state-of-the-art) CNNs for hyperspectral image classification. It seems that the global spectral information, which can only be processed by the transformer models, is not as important as the local context information captured by the convolutions.
To address the issue of varying accuracy variances among different datasets, we additionally introduce a model ranking in Table 3. This ranking is established by computing the average placement of each model across individual datasets. Notably, a strong correlation between overall accuracy and the mean ranking emerges, albeit with a few outliers. This underscores that a mere comparison of overall accuracy is informative, but not
Figure 5: Overall classification accuracy for each model. Bars for models considering the spatial dimension in red, spectral in gray, both in gold.
sufficient. Consequently, an evaluation of performance across the individual datasets is addressed later in this section.
In Fig. 4, we augment the accuracy results with supplementary data concerning the model size. The 3D CNN remains the foremost performer, albeit noticeably larger in scale by comparison. The DeepHS-Net family continues to exhibit commendable performance, and notably, in contrast to the 3D CNN, commands significantly smaller dimensions (bearing in mind that the x-axis is logarithmically scaled). Additionally, it's evident that the inclusion of HyveConv contributes to an enhanced performance of the larger ResNet models, such as ResNet-18 and ResNet-152.
Differences in model performance can further be attributed to their respective feature extraction approach. As hyperspectral data contains two spatial dimensions as well as the spectral dimension for the model to consider, we distinguish between models with purely spatial, spectral or both, spatial-spectral feature extraction.
The shaded region in Fig. 4 accentuates all models that operate without incorporating spatial context. These models are devoid of the supplementary insights offered by the hyperspectral cube and consequently demonstrate inferior performance.
Fig. 5 again shows the overall classification accuracy for all models. Bars corresponding to
Figure 6: Classification accuracy for the (a) HRSS data sets, (b) the DeepHS Fruit data set, and (c) the DeepHS Debris data set, for each model. Bars for models considering the spatial dimension in red, spectral in gray, both in gold.
models considering the spatial dimension only are marked in red, spectral dimension only in gray, and both, spectral and spatial dimension, in gold. Pixel-based models that only have access to the spectral information of a single pixel at a time achieve only low classification accuracies, while usage of the spatial information seems to be more relevant. Models like the DeepHS-Net variants or 3D CNN, operating on both, the spectral and spatial dimension, perform best overall.
Considering the classification performance on the remote sensing data exclusively (Fig. 6(a)), we obtain an even more extreme distribution. All models that incorporate only the spectral dimension (e.g., MLP, RNN, 1D CNN) perform significantly worse than those including the spatial component. We even find that, in this case, the purely spatial information is already enough to achieve \(>95\%\) mean accuracy. The 2D CNN (spatial), that operates on a single aggregated channel dimension, as well as the Gabor CNN are among the top five models, leading to the conclusion that the hyperspectral information contained in the channels is not needed here.
This should be alarming to any researcher working in the field, as it would indicate that the most popular and almost exclusively used hyperspectral data set is not so well suited for evaluating spectrum-based methods after all. With our benchmark collection, we aim to provide alternative data sets and classification tasks, for which we show that, as expected, models considering both the spectral and spatial information can actually outperform purely spatial models for hyperspectral image classification (see Fig. 6(b) and 6(c)).
Further, we find that for at least some of the models, performance differs significantly depending on whether the task is objectwise or patchwise classification (see Fig. 7). Especially large and complex models, like the attention-based models and the larger ResNet (i.e., ResNet-152), but also the models specifically designed for HSI classification, perform much worse for objectwise than for patchwise classification.
On the data side, the biggest difference is the amount of training samples available; every recording yields only a single sample with the objectwise approach, but on the other hand, can be divided into many patches, yielding many samples per recording for patchwise classification. For the DeepHS Debris data set, for example, we obtain \(85\) versus \(11,635\) training samples, respectively. The extremely low number of training data samples might not be sufficient to optimize the large number of parameters of some of the models, and therefore explain their rather bad performance on the objectwise classification task.
Another explanation might be that the state-of-the-art HSI classifiers (e.g., HybridSN, SpectralNET, SpectralFormer) are optimized for the almost exclusively used remote sensing application and the corresponding segmentation task, and therefore for patchwise image processing.
In turn, the rather small models of the DeepHS-Net family were explicitly designed for handling small hyperspectral data sets and optimized for predicting fruit ripeness, which is an objectwise classification task; this could explain their outstanding performance on objectwise classification, also for the Debris data set.
Motivated by the abovementioned observations, we further investigated the dependence of the model performance on the number of labeled training samples available. For the three HRSS data sets, Fig. 8 shows the average classification accuracy as a function of the ratio of the labeled training data, which was stepwise reduced from \(30\%\) to \(5\%\).
Figure 7: Classification accuracy for objectwise classification (gold) and patchwise classification (red) for the debris data set and for each model, respectively.
As to be expected, the general trend is that better performance is achieved with more training data. However, on the basis of the models' concrete behavior, different groupings emerge:
The simple methods (e.g., SVM, PLS-DA, as well as MLP and RNN) have low accuracies and also do not get much better, even when provided with more training samples. In the plot, we included only the SVM as a representative example, but disregarded the other three, as - due to their below-average performance - they are considered irrelevant in this context and will not be discussed further.
Further, we find a number of models in the middle field, achieving average accuracies while still following the trend of improving for larger portions of training data.
The smaller DeepHS-Net model variants, as well as the 3D CNN and 2D CNN are already quite good for only 5% of the data, improving little when more training data is added, while still remaining among the best performing models (also compare to Tab. 3 and Fig. 4). For these models, the training data ratio seems not important - they can also handle small data sets rather well.
But then, there is also a group of "data-hungry" models that show poor performance on small fractions of the training data, but whose classification accuracy rises significantly as more samples are provided. An outstanding example is ResNet-152, for which the accuracy increases almost linearly with the portion of labeled training samples. Similar behavior can be observed for more complex hyperspectral classifiers (e.g., HybridSN) and some transformer models. The most probable explanation for this is, once again, based on the complexity and size of the models. The latter need more training data to optimize their large number of parameters, and therefore can only show satisfying performance when enough training samples are available.
This immediately brings up the question of whether the larger models, like the ResNet-152, could become much better - and even as good as, e.g., the DeepHS-Net (+ HyveConv) model - if only they were to see more training data.
This is addressed in Section 4 where we provide and evaluate a strategy for pretraining the model on other, yet similar hyperspectral image data.
## 4 Pretraining and Transfer Learning
As we have found, some of the classifier models highly depend on the amount of training data available (see Fig. 8). However, labeled hyperspectral data is usually rather scarce, and even for the simplest networks, training solely on one small data set can lead to unstable training and overfitting.
Figure 8: Classification accuracy versus portion of training samples (5 %, 10 %, 15 %, 20 %, 25 % and 30 %) for the HRSS data set and each of the models, respectively.
As an effective workaround, we propose pretraining the model on other, potentially multiple hyperspectral data sets and tasks collected in this benchmark beforehand, and then just fine-tuning it using the data and configuration for the actual classification task - to improve classification accuracy, avoid overfitting and consequently allow using larger (deeper) models in general.
This idea was successfully applied for large RGB data sets (e.g., Krizhevsky et al (2012)). Also, pretraining for hyperspectral image classification has already been explored to some extent, e.g., by Lee et al (2019, 2022); Windrim et al (2018). Lee et al (2022) showed that it also works in this special case, and that it is possible to pretrain a shared backbone using a so-called multi-domain approach. However, they only considered different remote sensing scenes, with both, data and task, still being very similar across those domains.
In contrast, utilizing our benchmark data sets, we look at entirely different application scenarios where data, recorded wavelengths and classification task vary considerably. We address the following questions: Is it still reasonable and helpful to combine the different data sets and train a shared backbone on multiple tasks? Can we even benefit from this large variety to find a common structure in HSI data and extract the most general hyperspectral features - independent of application, task, and sensor? And finally, can we transfer these learned features to an unseen target data set and task in the fine-tuning process?
Figure 9: Scheme of the proposed pretraining and fine-tuning procedure. First, we pretrained the model on (potentially) multiple data set configurations (i.e., data sets, cameras and classification tasks), using a shared backbone with an initial HyveConv layer and multiple task-specific heads. We kept the general pretrained backbone weights, while re-initializing the task-specific fully-connected part for fine-tuning on a single specific data set and task.
### Pretraining Strategy
To answer these questions, in this section, we present a pretraining and fine-tuning strategy to pretrain any model on potentially multiple different hyperspectral data sets and tasks, and subsequently fine-tune and evaluate it on a specific target configuration.
We assume a HSI classifier model of most general structure, consisting of a backbone for feature extraction and a task-specific head. The backbone is usually built out of multiple convolutional layers, including a first layer which depends on the data set (wavelengths; number of channels) and the remaining part of the backbone, which is more abstract and independent of the specific data set and task. The head, one or few fully-connected linear layer(s), is again specific to the task at hand and carries out the actual classification, and therefore depends on the number of output classes.
Two major adjustments were made for our purposes:
* To allow training on multiple data sets as well as to adapt to a differing target data set easily, the first convolutional layer was replaced by a hyperspectral visual embedding convolution (HyveConv) layer (Varga et al, 2023b). It operates on a wavelength-based feature learning paradigm, rather than the conventional channel-based approach and therefore ensures camera-agnostic models, applicable across different camera setups. The HyveConv was used in combination with an extended wavelength range, covering the lowest and highest wavelength of all data under consideration, avoiding the need to employ a separate first layer for different sensors.
* Further, we introduced a multi-head approach similar to Lee et al (2018) to pretrain on multiple tasks, with (potentially) different class outputs, simultaneously. For each data set and classification task, another fully-connected head was used. We balanced the data sets and used mixed batches (equal ratio per configuration and batch). We switched between heads, depending on the current task. As suggested by Lee et al (2019, 2022), the learning rate was multiplied by a factor \(\frac{1}{N}\), when \(N\) different configurations were considered simultaneously.
As visualized in Fig. 9, we then employed the following general pretraining and fine-tuning procedure.
1. **Pretraining.** For pretraining on (potentially) multiple data sets, cameras and / or classification tasks, as described above, we used the HyveConv layer, and switched between multiple task-specific heads, depending on the current data set configuration. The training was conducted on mixed batches and with a reduced learning rate (see above).
2. **Fine-tuning.** To specialize on a specific data set, camera and classification task, first, we reinitialized the fully-connected task-specific head (except for the BN layer) to adapt the class outputs, while keeping the pretrained weights of the remaining intermediate layers, the "shared backbone". Then, the model was again optimized in an end-to-end fashion, effectively training the last layers from scratch, while only fine-tuning the general backbone part. A minimal code sketch of both stages is given below.
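The following is a minimal PyTorch sketch of this two-stage procedure, included for illustration only: the backbone (HyveConv stem plus convolutional blocks), the data loaders, and all names are placeholders, and checkpointing, early stopping, the BN-layer exception, and the per-batch mixing of configurations are omitted for brevity.

```python
import torch
import torch.nn as nn

class MultiHeadHSIClassifier(nn.Module):
    """Shared backbone with one classification head per data set / task configuration."""
    def __init__(self, backbone, feature_dim, num_classes_per_task):
        super().__init__()
        self.backbone = backbone  # e.g., HyveConv stem + convolutional blocks
        self.heads = nn.ModuleDict({
            task: nn.Linear(feature_dim, n_cls)
            for task, n_cls in num_classes_per_task.items()
        })

    def forward(self, x, task):
        features = self.backbone(x)          # shape (batch, feature_dim)
        return self.heads[task](features)    # switch to the head of the current task

def pretrain(model, loaders, epochs=50, base_lr=1e-2):
    """Stage 1: train on N configurations; the learning rate is scaled by 1/N as in the text.
    (Here the tasks are simply alternated; the paper uses mixed batches per epoch.)"""
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr / len(loaders))
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for task, loader in loaders.items():
            for x, y in loader:
                optimizer.zero_grad()
                loss = criterion(model(x, task), y)
                loss.backward()
                optimizer.step()
    return model

def finetune(model, task, n_classes, feature_dim, loader, epochs=50, lr=1e-2):
    """Stage 2: re-initialize the task-specific head, keep the pretrained backbone weights."""
    model.heads[task] = nn.Linear(feature_dim, n_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x, task), y)
            loss.backward()
            optimizer.step()
    return model
```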
### Experiments
For the pretraining experiments, we constrained ourselves to two kinds of models, variants of the DeepHS-Net Varga et al (2021, 2023b) and the ResNet He et al (2016). By modifying them as described in Sec. 4.1, we obtain a multi-head DeepHS-Net+HyveConv, as well as a multi-head ResNet-18+HyveConv, and ResNet-152+HyveConv. In addition, we extended the multi-head DeepHS-Net+HyveConv to obtain a larger variant of this model (5 instead of 3 hidden layers of increased size; parameters), mainly for comparison.
We pretrained the respective multi-head model on a subset of our benchmark data, using an initial learning rate of \(\frac{0.01}{N}\) for \(N\) different data set configurations, which was stepwise decreased during training. Cross-entropy loss (George Cybenko, 1999) was used as the loss function, optimized using Adam (Kingma and Ba, 2015). We trained for 50 epochs and used a checkpoint callback and early stopping based on the validation loss (Prechelt, 1998). In each epoch, mixed batches of size 32 were considered. As for regular classification (see Sec. 3.2), the pretraining data was augmented
using random flipping, random rotation and random cut, each with a probability of 50%, and random cropping with 10% probability.
To fine-tune on the given data set, camera and task, after (re-)initializing the fully-connected task-specific head - except for the BN layer -, the model was again trained as described in Sec. 3.2, for 50 epochs with checkpoint callback and early stopping, using an initial learning rate of 0.01 with a stepwise-decrease, the cross-entropy loss, Adam optimizer, and a batch size of 32. The same set of data augmentations was applied.
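For concreteness, the following is a rough sketch of such an augmentation pipeline acting on raw hyperspectral cubes; the use of 90-degree rotations, the cutout-style interpretation of "random cut", and the crop size are assumptions, as the exact transforms are defined in the benchmark code.

```python
import numpy as np

def augment(cube, rng):
    """Augment one hyperspectral cube of shape (H, W, C).
    Flip / rotation / cut each with p = 0.5, crop with p = 0.1 (see text);
    the concrete form of 'random cut' and the 90-degree rotations are assumptions."""
    if rng.random() < 0.5:                                  # random flipping (H or W axis)
        cube = np.flip(cube, axis=rng.integers(0, 2))
    if rng.random() < 0.5:                                  # random rotation by a multiple of 90 deg
        cube = np.rot90(cube, k=rng.integers(1, 4), axes=(0, 1))
    if rng.random() < 0.5:                                  # random cut: zero out a random patch
        h, w, _ = cube.shape
        ch, cw = h // 4, w // 4
        y0, x0 = rng.integers(0, h - ch), rng.integers(0, w - cw)
        cube = cube.copy()
        cube[y0:y0 + ch, x0:x0 + cw, :] = 0.0
    if rng.random() < 0.1:                                  # random cropping to a sub-window
        h, w, _ = cube.shape
        ch, cw = int(0.8 * h), int(0.8 * w)
        y0, x0 = rng.integers(0, h - ch + 1), rng.integers(0, w - cw + 1)
        cube = cube[y0:y0 + ch, x0:x0 + cw, :]
    return np.ascontiguousarray(cube)

# Example: rng = np.random.default_rng(0); patch = augment(np.random.rand(63, 63, 224), rng)
```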
Again, we conducted the experiments for 3 different seeds each and report the average classification accuracy as well as their standard deviation.
We tested the proposed pretraining strategy on two hyperspectral remote sensing scene data sets and the corresponding patchwise classification task. Each of the abovementioned models was pretrained on HRSS/Salinas with a train ratio of 30%. For fine-tuning and evaluation on HRSS/Indian Pines, we deliberately chose to consider only 5 % of the pixels to obtain more meaningful results. Otherwise, both accuracy values - without and with pretraining - were near 100 % already, and therefore hard to compare. Tab. 4 lists the classification accuracies obtained with and without pretraining, respectively.
In all cases, the pretrained model performs better than its counterpart without pretraining. Pretraining on another remote sensing scene boosts classification accuracy by a significant amount, from 6% up to over 60%. We observe a strong correlation between the model size and this performance gain; the larger the model, the more of an impact pretraining has and the more of an improvement it brings. Already when comparing the DeepHS-Net+HyveConv versus its larger version, the gain in accuracy is almost doubled. Further, the improvement is larger for the ResNet-18+HyveConv, and even larger (60.43%) for the ResNet-152+HyveConv model. All three larger models that initially performed worse achieve an even higher classification accuracy than the DeepHS-Net+HyveConv model when pretrained.
As the advantage of pretrained models usually comes most into effect when very few labeled training samples are available and / or subsequent fine-tuning is limited (e.g., by time or resources), we also examine the dependence of the classification performance on the portion of training samples (1% up to 30%, see Fig. 10(b)) as well as on the number of epochs for fine-tuning (Fig. 10(a)).
As expected, in both cases - with and without pretraining - the classification accuracy increases with an increasing ratio of training samples used. However, with pretraining, the accuracy is already quite high for only a small fraction of the training data, and the corresponding curve remains above the curve without pretraining for all the remaining ratios measured, indicating a higher overall accuracy with pretraining.
Analogously, with pretraining, we observe an already high accuracy for a low number of training epochs, and again, a higher accuracy overall. Without pretraining, the classification performance even decreases again after a certain number of epochs, possibly due to overfitting when no pretraining is employed beforehand.
Figure 10: Classification accuracy (mean and standard deviation) versus (a) the number of fine-tuning epochs and (b) the portion of training samples, for the ResNet-18+HyveConv model. The curve for the pretrained model is marked in red and for the model without pretraining in gold, respectively.
We conclude that, with pretraining, in general, fewer epochs and less labeled training data samples were needed for the subsequent training on the target data set and task.
For all (other) combinations of pretraining and fine-tuning on all data sets and tasks included in our proposed benchmark, Tab. 5 shows the improvement in accuracy with pretraining relative to pure classification, using one representative example configuration, but covering the whole spectrum of applications, sensors, and tasks, respectively: The Salinas remote sensing data set (with a train ratio of 30%), the avocado fruit measurements recorded by the Specim FX10 camera w.r.t. firmness, and the debris data recorded by the Corning HSI - patchwise and objectwise, respectively - were used for pretraining, while for fine-tuning and evaluation, we considered the following configurations: HRSS Indian Pines (with only 5% train ratio), the avocado data recorded by the Specim FX10 - this time w.r.t. ripeness, and the debris patchwise / objectwise data, recorded also by the Specim FX10.
Foremost, in all cases except for one, the classification performance could be improved when pretraining on either one of the four example configurations.
It is worth emphasizing that pretraining on the patchwise classification task and the debris data set could increase the accuracy by over 10% for all the three data sets, different sensors and classification types.
Analyzing the results in more detail, we also find that there is a significant improvement when pretraining and fine-tuning on (other) remote sensing scenes, probably due to the fact that those three data sets are very similar in terms of sensors and classes and share the same patchwise classification task, which makes feature transfer especially easy. Similarly, for correlated features, like firmness and ripeness of a fruit, the transfer is possible. However, considering the improvement for the fruit data set as well as the objectwise configuration for the debris data set, it seems that one cannot learn much from the few training samples provided for objectwise classification in general, while patchwise pretraining and fine-tuning on the objectwise classification task works surprisingly well. The outstanding example is again patchwise pretraining and objectwise fine-tuning for the debris data set, where we observe an improvement of 33% in classification accuracy, although different sensors were used. On the other hand, it also seems to help if the data was recorded by the same sensor; feature transfer is possible for the same sensor but differing tasks, for example.
These are promising results, since they show that it is indeed possible to transfer hyperspectral features between different data sets, tasks, and even classification types. Pretraining on hyperspectral data of all kinds helps to increase classification performance.
In contrast, we show that pretraining on regular image data (i.e., RGB data) does not work. Tab. 6 lists the classification accuracy for two ResNets of different size - without pretraining, when pretrained on the RGB ImageNet data set (Russakovsky et al, 2014), and with pretraining on hyperspectral data of two different kinds.
Pretraining on ImageNet even decreases the accuracy relative to no pretraining, while pretraining on the hyperspectral data increases the accuracy, even for an entirely different data set, leading to the conclusion that pretraining and feature transfer only work for (other) hyperspectral data. This in turn strengthens the assumption that specialized methods are needed for this kind of learning on HSI data.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & DeepHS-Net +HyveConv & Larger DeepHS-Net +HyveConv & ResNet-18 +HyveConv & ResNet-152 +HyveConv \\ \hline No pretraining & 81.42 \% & 79.19 \% & 77.64 \% & 30.48 \% \\ Pretraining & **88.19 \%** & **89.19 \%** & **92.76 \%** & **90.91 \%** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification accuracy (mean) with pretraining and without pretraining, for different models (DeepHS-Net+HyveConv, the larger DeepHS-Net+HyveConv, ResNet-18+HyveConv, and the ResNet-152+HyveConv). The higher accuracy is marked in **bold**, respectively.
Apparently, pretraining on HSI data is more complex and not as easy and straightforward as for regular RGB image data (see, e.g., Lee et al (2019)).
We were able to show that our specialized approach to pretraining on hyperspectral data works, not only for a single pretraining configuration, but also when combining multiple hyperspectral data sets, sensors and tasks for so-called multi-task pretraining. In this sense, we could make the best possible use of the variety of data and classification tasks in the proposed benchmark, with the goal of learning more general features for HS image classification, independent of the concrete application.
We pretrained the abovementioned models on a selection of different configurations, covering almost all data sets, sensors, and classification tasks, and evaluated them again on all possible configurations in the benchmark (see Appendix, Tab. A1), reporting the overall classification accuracy without and with pretraining, respectively.
Fig. 11, similar to Fig. 4, shows those accuracy values in relation to the model's size, with the pre-trained versions of the models of interest added to the plot.
Again, we observe that, in all cases, the pre-trained model achieves better accuracy, and further, as a general trend, the larger the model the more of an impact pretraining has and the greater the improvement in classification performance.
With our experiments, we show that pretraining allows training larger (deeper) models that then perform almost equally well overall and can even outperform the initially favored smaller classifier networks in specific cases.
We provide pretrained weights for the DeepHS-Net+HyveConv as well as the ResNet-18+HyveConv model at [https://github.com/cogsys-tuebingen/hsi_benchmark](https://github.com/cogsys-tuebingen/hsi_benchmark).
## 5 Limitations and Future Work
One obvious limitation of the proposed benchmark is the restriction to only three hyperspectral applications. It is a significant improvement over the evaluation on the HRSS data set only; still, the benchmark could be much more diverse. The intention is to enhance forthcoming iterations of this benchmark by encompassing a wider array of hyperspectral applications. Should you possess a suitable dataset for potential inclusion in the upcoming version, we encourage you to establish contact with the authors.
Besides that, the analysis of the results could be even more sophisticated. A more in-depth investigation seems possible and would also be helpful for the design of better hyperspectral models. Future research will focus on this.
\begin{table}
\begin{tabular}{l c c} \hline \hline & ResNet-18 & ResNet-152 \\ & +HyveConv & +HyveConv \\ \hline No pretraining & 77.64 \% & 30.48 \% \\ \hline Pretraining on ImageNet & 50.83 \% & 25.81 \% \\ \hline Pretraining on HRSS & 92.76 \% & 90.91 \% \\ Pretraining on Debris & 91.22 \% & 85.21 \% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean classification accuracy for the ResNet-18+HyveConv and ResNet-152+HyveConv on HRSS Indian Pines (train ratio 5 %) without pretraining and with pretraining on ImageNet and on two different other hyperspectral data sets (HRSS Salinas (train ratio 30 %) and debris (Corning HSI, patchwise classification)).
\begin{table}
\begin{tabular}{l l|c c c c} \hline \hline & & \multicolumn{4}{c}{**Fine-tuning and evaluation**} \\ & & HRSS & Fruit & Debris (patchw.) & Debris (objectw.) \\ \hline \multirow{4}{*}{**Pretrain.**} & HRSS & **+ 15.12 \%** & + 2.78 \% & + 4.54 \% & + 10.00 \% \\ & Fruit & + 3.44 \% & + 9.73 \% & + 9.36 \% & \(\pm\) 0.00 \% \\ & Debris (patchw.) & + 13.58 \% & **+ 18.06 \%** & **+ 15.17 \%** & **+ 33.33 \%** \\ & Debris (objectw.) & + 7.00 \% & + 5.56 \% & + 6.79 \% & + 3.33 \% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average improvement in classification accuracy with pretraining relative to pure classification without pretraining, exemplary for all combinations of data set and task, for pretraining and fine-tuning, and the ResNet-18+HyveConv model.
## 6 Conclusion
We have established a comprehensive framework to assess models across various hyperspectral imaging (HSI) classification tasks. In pursuit of a universally applicable HSI classification model, we amalgamated three diverse datasets, each catering to distinct use cases, into a unified benchmark. This initiative aids in advancing the quest for a generalizing HSI classification model. Central to this benchmark is the establishment of fixed training and evaluation pipelines, thereby enabling impartial comparisons for future research.
In the context of the benchmark, 23 models were implemented and evaluated. The performance of these significantly different models is analyzed and discussed. As a result, we conclude that the requirements on the model depend on the hyperspectral use case. Still, models considering both the spatial and spectral dimensions performed better overall. Moreover, a significant number of state-of-the-art models designed for HSI classification have been fine-tuned exclusively on the HRSS dataset. This dataset predominantly stresses the models' spatial feature extraction capabilities, with less emphasis on the spectral aspect. This observation underscores the necessity for a more comprehensive evaluation benchmark that encapsulates diverse scenarios. Additionally, our study highlights the substantial disparity in performance between patchwise and objectwise classification approaches employed by these models. Furthermore, we meticulously assessed the impact of the limited size of hyperspectral datasets, revealing distinct variations in how different models respond to this constraint.
To tackle the issue of the small data sets, we propose a pretraining strategy for hyperspectral image classifier models and show that pretraining utilizing our hyperspectral benchmark can help to improve classification performance. We claim that it is indeed helpful to combine the different HS data sets and train a shared backbone on multiple tasks, even if data sets and tasks differ considerably. It seems we can even benefit from this large variety to extract the most general hyperspectral features. A model backbone pretrained in this way stabilizes the subsequent training and reduces overfitting, which, in turn, enables the use of larger (deeper) networks that are probably better suited to capture the complex spectral characteristics of hyperspectral image data.
We hope that this work makes future research easier and allows better comparison of different approaches. Further, we believe the insights reported here, and the described pretraining, will help to design better HSI models and allow the use of hyperspectral imaging in new areas.
**Data availability and Code availability.** The benchmark data and code are released at [https://github.com/cogsys-tuebingen/hsi_benchmark](https://github.com/cogsys-tuebingen/hsi_benchmark). In addition, we provide pretrained model weights.
Figure 11: Increase in overall classification accuracy without and with pretraining in relation to the model size, for the DeepHS-Net+HyveConv, larger DeepHS-Net+HyveConv, ResNet-18+HyveConv, and ResNet-152+HyveConv, respectively.
Acknowledgments. Thanks to Manuel Grana for the permission to use the collection of the HRSS data set (Grana et al, 2011) for this benchmark. This work has been supported by the German Ministry of Economy, Labour and Tourism, Project KI-PRO-BAU, FKZ: BW1_2108/02.
## Appendix A Benchmark statistics
### Training-validation-test set sizes
Tab. A1 enumerates all configurations within the proposed benchmark. For each configuration, the size of the training, validation and the test set is presented. The difference between objectwise (O) and patchwise (P) configurations is to be expected, as a recording of a single object can be used to generate multiple patches.
### HRSS: Deterministic train-validation-test split
As mentioned, the HRSS data set lacks an established definition of the training-validation-test sets. This hinders a fair comparison of results. We provide a solution with a variable ratio between the training set (containing pure training and validation set) and the test set.
To establish deterministic sets for specific training-test ratios, we employ a pseudo-random pixel arrangement within each category. This approach guarantees the reproducibility of results, an essential prerequisite for enhancing research quality within this domain.
For a practical illustration of how to implement these splits, a comprehensive example is available at the following link: [https://github.com/cogsys-tuebingen/hsi_benchmark](https://github.com/cogsys-tuebingen/hsi_benchmark).
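The idea of the split can be sketched as follows; the seed value and the rounding convention are illustrative assumptions, and the linked repository remains the authoritative implementation.

```python
import numpy as np

def deterministic_split(labels, train_ratio, seed=42):
    """Split labeled pixels into train and test indices, per class, using a fixed
    pseudo-random permutation so that the split is reproducible for a given seed.
    `labels` is a 1D array of class ids for all labeled pixels (flattened scene)."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for cls in np.unique(labels):
        cls_idx = np.flatnonzero(labels == cls)
        cls_idx = rng.permutation(cls_idx)               # deterministic for a fixed seed
        n_train = int(round(train_ratio * len(cls_idx)))
        train_idx.append(cls_idx[:n_train])
        test_idx.append(cls_idx[n_train:])
    return np.concatenate(train_idx), np.concatenate(test_idx)

# Example with a 30 % training ratio:
# train_idx, test_idx = deterministic_split(ground_truth.ravel(), train_ratio=0.30)
```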
## Appendix B Hyperparameters
Table B2 presents a comprehensive overview of the distinct hyperparameters associated with each model. Throughout the experimentation, the maximum feasible batch size, up to 32, was employed for all models. In the majority of cases, a uniform learning rate of \(1\times 10^{-3}\) was adopted, though deviations were permitted when the source literature indicated an alternative default learning rate. The identical principle governed the determination of training epochs and optimizer.
|
2302.14328 | Emergent glassiness in disorder-free Kitaev model: Density matrix
renormalization group study on a one-dimensional ladder setting | The complete phase diagram of the Kitaev model with a magnetic field remains
elusive, as do the experimental results in the candidate material
{\alpha}-RuCl3. Here, we study the Kitaev model on a one-dimensional ladder
setting within the density-matrix renormalization group method in the presence
of a magnetic field at zero temperature. We find five distinct phases with
increasing magnetic field, which are characterized by a homogeneous flux phase,
the Z2 vortex gas, solid and emergent glass phase, and finally, a
spin-polarized phase. The emergent glassiness is confirmed by calculating
correlation functions showing quasi-long-range behavior and ground state
fidelity, showing a plethora of energetically accessible orthogonal saddle
points corresponding to different flux configurations. This glassy behavior
seems to arise from the slow dynamics of the Z2 fluxes, which is a consequence
of the local constraints present in the underlying Hilbert space. This
phenomenon can also be explored in other spin-liquid systems where the
corresponding low-energy excitations are similarly retarded due to constraints. | K. B. Yogendra, Tanmoy Das, G. Baskaran | 2023-02-28T05:42:45Z | http://arxiv.org/abs/2302.14328v2 | # Emergent glassiness in disorder-free Kitaev model
###### Abstract
A recent experiment shows surprising glass-like features [npj Quantum Materials 6, 1 (2021)] in the nearly disorder-free \(\alpha\)-RuCl\({}_{3}\), a Kitaev spin liquid candidate, at low temperatures in the intermediate magnetic field region. Inspired by this experiment, we study the Kitaev model within the density-matrix renormalization group (DMRG) method in the presence of a magnetic field at zero temperature. We find five distinct phases with increasing magnetic field, which are characterized by a homogeneous flux phase, the \(Z_{2}\) vortex gas, solid and emergent _glass_ phase in the so-called \(U(1)\) spin liquid region, and finally a spin-polarized phase. The emergent glassiness is confirmed by calculating correlation functions as well as ground state fidelity, showing a plethora of energetically accessible orthogonal saddle points corresponding to different flux configurations. Taking our result together with previous theories of emergent glassiness in disorder-free quantum many-body systems, we propose that _glassiness is intrinsic to disorder-free generic quantum spin liquids_.
## I Introduction
Quantum magnetism in crystalline solids and the study of spin liquids is experiencing a resurgence. It is partly due to a remarkable exactly solvable quantum spin model on a honeycomb lattice by Kitaev [1], followed by an exciting proposal by Jackeli and Khaliullin [2] of experimental realization of Kitaev spin liquid in certain real materials. Several potential Kitaev proximity materials are appearing on the scene [3; 4; 5; 6; 7; 8; 9; 10; 11]. New experimental results in possible Kitaev systems, such as \(\alpha\)-RuCl\({}_{3}\)[12; 13; 14; 15; 16; 17; 18; 19], continue to surprise us. Beyond basic sciences, developments in quantum spin liquids give hope and pave the way for novel qubits, topological quantum computation, and quantum information science and technology.
Spin glasses, both classical and quantum, are often found in systems with spatial disorder. In an exciting recent experiment, glassiness is found in a nearly disorder-free \(\alpha\)-RuCl\({}_{3}\) at low temperatures and intermediate magnetic fields [20], as seen in anomalous non-linear susceptibilities. _Is \(\alpha\)-RuCl\({}_{3}\), a Kitaev proximity system, showing us a way to intrinsic glassiness in disorder-free and generic quantum spin liquid systems?_ At the end of this article, we suggest an affirmative answer to this question, using our present work and the above developments, and make connections to earlier proposals on glass physics in a variety of quantum systems.
We begin by viewing the glass phase in a generic sense as a ground state exhibiting slow dynamics. Emergent glassiness in disorder-free many-body systems is seen, sporadically or otherwise, in many earlier works, although the observed phase was not often associated with glassiness. Intuitively, if the ground state is in proximity to a wealth of local minima due to (say, frustration-induced or topological) degeneracy [21], 'emergent disorder' arising from an excessive number of conserved quantities [22; 23; 24], an orthogonality catastrophe near a critical point [25; 26], or local constraints or a local bath [27; 28; 21], its dynamics are impeded. In modern calculations, it is also shown that if the Hilbert space is partitioned [29; 30] and/or disentangled [31] into (local) Hilbert spaces, then ergodicity is hampered.
The Kitaev model has been studied extensively in the presence of a magnetic field on the 2D honeycomb lattice [32; 33; 34; 35; 36; 37], in ladder setups [38; 39], and combined with other interactions [40; 41; 42]. There have been a variety of results and proposals, some of which are ubiquitous while others remain active research topics, yet the physics of glassiness was not reported earlier. There is theoretical evidence of a \(U(1)\) quantum spin liquid (QSL) in the intermediate field regime, with gapless excitations whose nature is still debated (for reviews see [43; 44; 45; 46]). Our understanding of the constituent gauge and matter excitations in the Kitaev model with other interactions [47; 48; 49; 50; 51; 52; 53; 54] and external perturbations is gradually evolving [55; 56; 57; 58; 59; 60]. In particular, the behavior of the gauge fluxes was not explicitly investigated in previous numerical studies at finite magnetic fields, and hence their role in the corresponding phases remained unknown.
The experimental situation has similarly remained inconclusive. Experiments have observed half quantization in the thermal Hall effect [61], and quantum oscillations in the in-plane longitudinal thermal conductivity without any observed quantization in the corresponding transverse conductivity [62], in \(\alpha\)-RuCl\({}_{3}\) in the intermediate magnetic field region. Another experiment has indicated multiple phase transitions in the same field region based on anomalies in the thermal (both longitudinal and Hall) conductivity [63]. Evidence of magnetic excitations [64] and phonon anomalies [65] is also presented in experiments in the same field region (before the polarized phase appears). More interestingly, this is roughly the same magnetic field region where a recent experiment finds a signature of glassiness [20].
A magnetic field modifies the dynamics of spins. But why are the dynamics slow in the intermediate field
region without any external inhomogeneity causing glassiness? Here we carry out a DMRG study of the Kitaev model on the 1D ladder at a finite magnetic field and zero temperature. Such a model was studied earlier with ED (also DMRG) [38] and with iDMRG [66], but those studies missed some of our phases, presumably due to numerical limitations and because the role of the flux operators responsible for the phase transitions was overlooked. We find a set of interesting phases with increasing magnetic field. At low fields, the \(Z_{2}\) gauge flux stabilizes in a spatially homogeneous phase, before it tends to crystallize. In the intermediate field region, we spot a robust glass phase characterized by random spatial distributions of the \(Z_{2}\) gauge fluxes, with possible gapless excitations. The emergence of glass physics is corroborated by the signature behavior of the correlation functions and by quantum Fidelity calculations of the ground state. The glass phase intervenes between the homogeneous flux phase on one side and a homogeneous polarised phase at high field on the other. On the basis of these observations, we provide a possible mechanism for intrinsic glass physics and make connections to other glass phases to outline an organizing principle for the many-body glass state.
The remainder of this article is organized as follows. We present our DMRG method and results for the Kitaev ladder at \(T=0\) as a function of the magnetic field and discuss the emergence of various phases with emphasis on the intrinsic glass phase. Before concluding, we attempt to obtain a unifying understanding of emergent glassiness in disorder-free many-body systems by connecting various other proposals, dating back to the RVB theory of the QSL state, to modern languages where the slow dynamics of a ground state are promoted by temperature, magnetic field, or other external baths.
## II Method
We consider the Kitaev model with the magnetic field (**h**) along the [111]-direction as
\[H=\sum_{\langle ij\rangle_{\alpha}}J_{\alpha}S_{i}^{\alpha}S_{j}^{\alpha}- \sum_{i,\alpha}h_{\alpha}S_{i}^{\alpha}. \tag{1}\]
Here \(J_{\alpha}>0\) are bond-dependent exchange couplings, \(\alpha=x,y,z\). This model is set on the 1D ladder as shown in Fig. 1. Each site has three nearest-neighbor interactions, hence mimicking the setup proposed by Kitaev on a honeycomb lattice. The coupling along the \(z\)-bond (between the chains) is taken to be staggered, in general, as \(J_{z}=J_{3}\) or \(J_{4}\) on alternate rungs, see Fig. 1.
The spin operator \(S_{i}^{\alpha}\) at each site \(i\) can be factorized into matter Majorana fermion (\(c_{i}\)) and gauge Majorana fermion (\(b_{i}^{\alpha}\)) operators. The gauge Majorana operators on the two ends of a nearest-neighbor bond \(\langle ij\rangle_{\alpha}\) can then be combined into a bilinear operator \(u_{ij}^{\alpha}=ib_{i}^{\alpha}b_{j}^{\alpha}\), which serves as a \(Z_{2}\) gauge field. With this, we can define a flux operator at a six-bond plaquette \(p\) as
\[W_{p}=S_{i}^{y}S_{j}^{z}S_{k}^{x}S_{l}^{y}S_{m}^{z}S_{n}^{x}=\prod_{l_{p}}u_{l_ {p}}^{\alpha}, \tag{2}\]
where \(l_{p}=ij,jk,kl,lm,mn\), and \(ni\) are nearest-neighbor bonds. The chosen spin component at a given site is the one associated with the outward bond (normal to the plaquette). It turns out that \(W_{p}\) at each plaquette commutes with the Hamiltonian at \(h=0\), giving \(N\) conserved quantities in the 2D honeycomb lattice as well as in the 1D ladder. In addition, in the present 1D ladder setting, there are two additional local conserved quantities, which are the four-bond plaquette operators defined by
\[T_{1p}=S_{i}^{y}S_{j}^{y}S_{k}^{x}S_{l}^{x}=-\prod_{l_{p}}u_{l_{p }}^{\alpha},\] \[T_{2p}=S_{j}^{x}S_{m}^{x}S_{n}^{y}S_{k}^{y}=-\prod_{l_{p}}u_{l_{p }}^{\alpha}, \tag{3}\]
where \(l_{p}=ij,jk,kl,li\) are the bonds of the \(1p\)-plaquette, and so on. These operators are shown in Fig. 1. Consequently, \(W_{p}=T_{1p}T_{2p}\) and \([T_{1p},T_{2p}]=0\)[67]. In the ground state, all these conserved quantities assume \(W_{p}=+1\) and \(T_{1p/2p}=+1\) (uniform flux-free phase) [68], giving us an extensive number of conserved quantities. Hence the many-body Hilbert space is made of 'trivial' product states of the gauge and matter sectors [69]. This is a \(Z_{2}\)-QSL state [70]. The energy dispersion of the matter (Majorana) fermions at \(h=0\) is gapless with a quadratic dispersion for the couplings \(J_{x}=J_{y}=J_{3}=1,J_{4}=0\); for \(J_{y}=J_{3}=J_{4}=1\), it is gapped at \(J_{x}=1\) and gapless with a linear dispersion at \(J_{x}=2\).
We study Eq. 1 at \(h\neq 0\) using the DMRG method for \(N=200,300,400\) with a cylindrical boundary condition between the chains and open boundary conditions at the edges. The randomly initialized matrix product state (MPS) is variationally tuned to the ground state by minimising the expectation value of the matrix product operator (MPO) of \(H\) in Eq. 1 (the energy), with bond dimension \(D\leq 2500\) and truncation error \(\epsilon\sim 10^{-10}\). The DMRG algorithm is implemented using the ITensors Library [71]. The expectation values of any gauge-invariant operators are calculated by contracting the corresponding MPO with the DMRG-predicted ground-state MPS.
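The DMRG setup itself is not reproduced here, but the model of Eq. 1 can be cross-checked by brute-force exact diagonalization on a very small ladder. The sketch below is such a stand-in: the site labeling and bond list are an assumed reading of Fig. 1 (open ends, \(J_{3}=J_{4}=1\)), and taking each field component as \(h/\sqrt{3}\) assumes that \(h\) denotes the total field strength along [111].

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron
from scipy.sparse.linalg import eigsh

# Spin-1/2 operators, S = sigma / 2
sx = csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex)) / 2
sy = csr_matrix(np.array([[0, -1j], [1j, 0]], dtype=complex)) / 2
sz = csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex)) / 2
S = {"x": sx, "y": sy, "z": sz}

def site_op(op, i, N):
    """Embed a single-site operator acting on site i into the N-site Hilbert space."""
    out = identity(1, format="csr", dtype=complex)
    for j in range(N):
        out = kron(out, op if j == i else identity(2, format="csr", dtype=complex), format="csr")
    return out

def kitaev_hamiltonian(N, bonds, h):
    """Eq. 1: bond terms J * S_i^a S_j^a plus a Zeeman term for a field of strength h along [111]."""
    H = csr_matrix((2**N, 2**N), dtype=complex)
    for i, j, a, J in bonds:
        H = H + J * (site_op(S[a], i, N) @ site_op(S[a], j, N))
    for i in range(N):
        for a in ("x", "y", "z"):
            H = H - (h / np.sqrt(3)) * site_op(S[a], i, N)
    return H

# Assumed labeling of a short 8-site two-leg ladder: sites 0-3 on the lower leg, 4-7 on the
# upper leg; x and y bonds alternate along the legs, z bonds sit on the rungs (J3 = J4 = 1).
N = 8
bonds = ([(i, i + 1, "x" if i % 2 == 0 else "y", 1.0) for i in range(3)]
         + [(i + 4, i + 5, "x" if i % 2 == 0 else "y", 1.0) for i in range(3)]
         + [(i, i + 4, "z", 1.0) for i in range(4)])

for h in (0.0, 0.2, 0.5):
    H = kitaev_hamiltonian(N, bonds, h)
    e0, psi0 = eigsh(H, k=1, which="SA")
    psi = psi0[:, 0]
    # magnetization per site along the [111] field direction
    m = np.mean([np.real(psi.conj() @ (site_op(S[a], i, N) @ psi))
                 for i in range(N) for a in ("x", "y", "z")]) * np.sqrt(3)
    print(f"h = {h:.2f}: E0 = {e0[0]:.4f}, M = {m:.4f}")
```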
We repeat some of the calculations on a four-leg 1D lattice with cylindrical boundary conditions along the armchair direction and open boundary conditions along the zig-zag direction. This geometry is closer to the 2D honeycomb lattice, see Appendix B. The salient properties presented in the main text for the two-leg ladder are reproduced in the four-leg setting.
Figure 1: A Kitaev ladder setup that we study here. At each site, we have three nearest neighbor bonds with exchange interactions, \(J_{x,y,z}\) between \(S^{x,y,z}\), respectively, as in a honeycomb analog. The \(J_{z}\) interactions (\(J_{3}\), \(J_{4}\)) are kept to be the same as well as different, for comparison. **a** denotes the lattice constant, while \(W\), \(T_{i}\) are flux operators defined in the text.
## III Results
In gauge theories of the present kind, it is often difficult to find the right order parameter (s), especially when there are multiple phases that compete and/or coexist. As \(h\to 0\), we have a non-local multi-linear operator, \(W_{p}\), which acquires a fixed eigenvalue at each site as discussed above. At \(h\to\infty\), the local linear (magnetization) operator \(S_{i}^{\hat{h}}\) has a uniform average value in the polarised phase with the easy axis oriented along the field direction \(\hat{h}\). There is no obvious way to smoothly interpolate between these two (quasi-) local operators, and a phase transition between them, if exists, evades the Landau theory and occasionally can be classified within the deconfined quantum critical paradigm. Non-local string operators arise as dynamics are introduced in the intermediate magnetic field strength. These string operators bind flux-flux, matter-matter, and/or flux-matter excitations. It is numerically expensive to evaluate their expectation values within DMRG. We will, however, occasionally comment on the possible role of such non-local string operators for the slow dynamics of the glassy phase we obtain here.
We present the spatial average values of the ground-state expectation value \(\langle\mathcal{O}\rangle=\frac{1}{N}\sum_{l}\langle\mathcal{O}_{l}\rangle\), where \(\mathcal{O}_{l}=S_{i}^{\hat{h}}\) with \(l=i\) the site index, as shown in Fig. 2, and \(\mathcal{O}_{l}=W_{p}\), \(T_{1p}\), \(T_{2p}\) with \(l=p\) the plaquette index, as shown in Fig. 3. In both \(M=\langle S\rangle\) and \(\langle W\rangle\), we observe the occurrence of kinks or jumps with increasing magnetic field strength \(h\). We denote these finite-field phases by I, II, III, IV, and V. In Phase I we see a uniform flux value at all plaquettes, with the average value decreasing with \(h\), and hence we dub it the uniform-flux phase, see Fig. 4. In Phase II, the local flux (we will call them \(Z_{2}\) vortices) values begin to fluctuate around their finite mean value. Phase III appears in the region where the number of vortices is nearly half of the number of lattice sites (half-filling), and the \(Z_{2}\) vortices tend to crystallize. Phase IV corresponds to the glass phase with random fluctuations of the \(Z_{2}\) vortices around a zero mean value. Finally, Phase V corresponds to the uniform polarised phase.
The magnetization grows near-linearly at all field strengths except in the intermediate region. The uniform spin susceptibility, defined as \(\chi=\frac{\partial M}{\partial h}\), shows divergence features at all phase boundaries. The divergence in \(\chi\) is most sharp at \(h=0.43J\), at the phase boundary between the glass and the polarized phases, possibly indicating a phase transition caused by the long-wavelength collective excitations (magnons).
Figure 2: (a) The spatial average value of the magnetization along the magnetic field direction is plotted as a function of field strength. (b) Corresponding values of the uniform spin susceptibility (\(\chi\)) are plotted here. Three different colors denote the same calculated values but for three different system sizes \(N=400,300,200\). The vertical dashed lines mark the phase boundaries which are located at \(h\simeq 0.24,0.28,0.3\), and \(0.43\). (The plots are magnified between \(h=0.2\) and \(0.5\) for visualization.)
Figure 3: Computed values of the spatial average of the three flux operators \(W\), \(T_{i}\) are plotted as a function of field strength. The results are shown for a DMRG run on a 400-site lattice. The vertical dashed lines indicate the same phase boundaries as in Fig. 2. The horizontal dashed line marks the \(\langle W\rangle=0\) line.
### Uniform and Crystalline phases of fluxes
The expectation values of the flux operators show an intriguing behavior, as shown in Fig. 3. Up to \(h\approx 0.24J\), we observe a uniform value of \(\langle W_{p}\rangle\), but \(\langle T_{ip}\rangle\) obtain staggered mean values between the alternating four-bond plaquettes, as shown in Figs. 4(a) and 4(b), respectively. (Whether \(T_{1p}>T_{2p}\) or \(T_{1p}<T_{2p}\) at a given plaquette depends on the open boundary condition.) Moreover, the uniform value of \(\langle W_{p}\rangle<1\) at all plaquettes suggests that the gauge sector of the ground state can still be approximated as a product state in a local basis, but now the local states have changed from \(|+\rangle_{p}\) at \(h=0\) to \(\alpha_{p}|+\rangle_{p}+\beta_{p}|-\rangle_{p}\) for \(h>0\), where \(W_{p}|\pm\rangle_{p}=\pm|\pm\rangle_{p}\), and \(\alpha_{p}^{2}-\beta_{p}^{2}=\langle W_{p}\rangle\), \(\forall p\). The normalization condition dictates \(\alpha_{p}^{2}=(1+\langle W_{p}\rangle)/2\).
When the Kitaev model is perturbed, in general, one gets complicated multi-body interactions among the Majorana fermions and the \(Z_{2}\) gauge fluxes. The \(Z_{2}\) gauge fluxes become dynamic and acquire finite effective masses [57]. Further, open string operators carrying Majorana fermion modes (both \(b_{i}^{x,y,z}\) and \(c_{i}\)) at their ends also have expectation values in the ground state. The study of open strings using DMRG at finite fields is cumbersome. An elaborate discussion of these string objects at finite fields and their role in the dynamics is presented in Appendix D. There are virtual excitations due to \(T_{ip}\) fluxes whose energy scale is \(<10^{-3}J\), but in the uniform \(\langle W_{p}\rangle\) phase, they make no contribution. There are long-wavelength collective excitations, in which \(\alpha_{p}\) (i.e. \(\langle W_{p}\rangle\)) varies slowly across the lattice but with a gap which scales with the system size. Finally, single matter Majorana excitations appear at higher energy.
A single \(Z_{2}\) vortex creation in the uniform flux case at a six-bond plaquette, i.e. changing \(W_{p}\) from \(+1\) to \(-1\) costs energy \(E\sim 0.24J\). Therefore, for \(h>0.24J\), \(W_{p}\) vortex creation is energetically feasible. In the dilute limit, the vortices start to proliferate in the lattice like a vortex gas or liquid phase, which is Phase II in our phase diagram.
With further increase of the field strength, by \(h\geq 0.28J\) there is a tendency for the vortices to crystallize, as shown in Fig. 5(a). This is Phase III. Here \(W_{p}=\pm 1\) plaquettes are nearly equal in number, giving \(\langle W\rangle\to 0\), which is close to half-filling. In this case, the vortices are 'frozen' to the lattice sites, with alternating plaquettes having opposite \(W_{p}\) values, see Fig. 5(a). This phase is analogous to a density wave order in a correlated fermionic insulator or hard-core bosonic insulator at half-filling. The vortex lattice formation is evident in the dominant value of the Fourier component of the flux operators at a single wavevector, as shown in Fig. 5(c). Slightly away from half-filling on both sides, we observe a few wavevectors and quasi-long-range correlation functions, which suggests an amorphous behavior.
### Emergent Glassiness
An amorphous crystal is a precursor to glassiness and may be at play in the present case as well. The energy to create a single \(W_{p}\) flux in the crystalline phase is \(\sim 0.05J\) (assuming a uniform crystal for this estimate; see Appendix C for more details). Therefore, for \(h>0.3J\), we enter into the dense vortex region (Phase IV). The high \(Z_{2}\) vortex density is evident in the \(\langle W\rangle\leq 0\) values shown in Fig. 3. Because of this high density, any small local fluctuation tends to impede the ordering of the entire lattice and hence a _glassiness_ arises.
Figure 5: Similar to Fig. 4, but here the results are shown at two representative fields of Phase II (\(h=0.2975\)) and Phase III (\(h=0.365\)). In the middle panel ((c),(d)), we plot the real part of the Fourier transformation of \(\langle W_{p}\rangle\) with wave vector \(k\).
Figure 4: The computed values of \(\langle W_{p}\rangle\) are shown for each plaquette \(p\) for two different fields (a) \(h=0.2\) (b) \(h=0.275\), which correspond to Phase I and Phase II. ((c),(d)) The values of \(\langle T_{ip}\rangle\) are shown in the corresponding bottom panel. \(T_{2p}>T_{1p}\) at a given \(p\) corresponds to the \(T_{1p}\) flux sitting at the boundaries, and vice versa.
We calculate the correlations of \(W_{p}\), quantifying the fluctuations from its mean, as \(\Delta W=\langle W_{p}W_{q}\rangle-\langle W_{p}\rangle\langle W_{q}\rangle\), where the expectation value is calculated with respect to the MPS ground state. The correlations of the fluxes are small but finite with several wavevectors, as shown in Fig. 6(d). Interestingly, the correlation survives to a larger length in this glass phase than in the previous crystalline phase. This is in contrast to a solid-to-liquid phase transition, where the correlation length decreases in the liquid phase. This is one aspect of the glassiness that distinguishes Phase IV from being a liquid phase.
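Given an exact ground-state vector (for instance from the small-ladder sketch in Sec. II), \(\Delta W\) can be evaluated as sketched below; the list of (site, spin-component) pairs defining each plaquette operator must be supplied following Eq. 2 and Fig. 1, and the \(2^{6}\) prefactor assumes the convention \(S=\sigma/2\), so that \(W_{p}\) has eigenvalues \(\pm 1\).

```python
import numpy as np
from scipy.sparse import identity

def plaquette_op(spec, N):
    """Build a plaquette operator from (site, component) pairs, e.g. for Eq. 2:
    [(i, 'y'), (j, 'z'), (k, 'x'), (l, 'y'), (m, 'z'), (n, 'x')].
    Reuses site_op() and S from the exact-diagonalization sketch in Sec. II.
    The 2**len(spec) prefactor assumes S = sigma/2, so that eigenvalues are +-1."""
    W = identity(2**N, format="csr", dtype=complex)
    for site, comp in spec:
        W = W @ site_op(S[comp], site, N)
    return (2.0 ** len(spec)) * W

def flux_correlation(psi, spec_p, spec_q, N):
    """Connected correlator <W_p W_q> - <W_p><W_q> in a normalized state vector psi."""
    Wp, Wq = plaquette_op(spec_p, N), plaquette_op(spec_q, N)
    expval = lambda O: np.real(psi.conj() @ (O @ psi))
    return expval(Wp @ Wq) - expval(Wp) * expval(Wq)
```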
Furthermore, we have also checked that the phase has a non-zero central charge, signaling gapless excitations. Note that this is the approximate range of fields where a gapless \(U(1)\) QSL state is proposed in the 2D honeycomb Kitaev model [32; 33; 34; 35; 36; 37]. The convergence of the DMRG minimization in this range of fields is slow compared to that for the other phases.
Referring to the definition of \(W_{p}\) in Eq. 2, it is easy to associate the fluctuation of \(\langle W_{p}\rangle\) with the quantum fluctuation of the spins. This sets the present glass physics apart from a classical glassy phase of frozen spin configurations. Note that apart from single flux productions, there are also non-local flux pairs which are connected by the Wilson operator \((W_{p})^{n}\), which in the spin-operator form becomes a string operator. This automatically generates \(n\)-point spin-spin correlations in this system. Defining an \(n^{\text{th}}\)-order uniform susceptibility \(\chi_{n}\sim\partial^{n}M/\partial h^{n}\), we have checked that the second- and third-order susceptibilities in this region are large and more chaotic as a function of the magnetic field. Note that in a Gaussian fluctuation theory, the third- and higher-order susceptibilities vanish, as we also find in the other phases. But in Phase IV, we find a significant enhancement of the mean square values of the second- and third-order susceptibilities, in the range of \(\mathcal{O}\left(10^{2}\right)\) to \(\mathcal{O}\left(10^{3}\right)\).
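Numerically, once a magnetization curve \(M(h)\) is available on a uniform field grid (from DMRG, or from the small-ladder stand-in above), the higher-order susceptibilities can be estimated by repeated finite differences, for example:

```python
import numpy as np

def susceptibilities(h, M, order=3):
    """chi_n ~ d^n M / d h^n estimated by repeated finite differences on a uniform
    grid of field values h (a simple stand-in for the derivatives quoted in the text;
    no smoothing is applied)."""
    d = np.asarray(M, dtype=float)
    chis = []
    for _ in range(order):
        d = np.gradient(d, h)
        chis.append(d.copy())
    return chis  # [chi_1, chi_2, chi_3]
```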
At high magnetic fields, Phase V is trivially polarised along the \([111]\) direction. On average, half of the plaquettes carry \(\pi\)-fluxes, resulting in \(\langle W_{p}\rangle=0\) and \(\langle T_{1p/2p}\rangle=0\) uniformly in every plaquette.
### Robustness of results with other Lattice Settings
We repeat the DMRG calculation on a 2D lattice strip via the four-leg honeycomb lattice with cylindrical boundary conditions; see Appendix B for more details. We find four phases (Fig. 10), where Phase II and Phase III are not distinguishable within the finite-system-size calculation. More importantly, the glass phase is reproduced here.
We also repeat the DMRG calculation in the 1D ladder for \(J_{4}=0\) with the other parameters fixed at 1. This creates open boundary conditions between the chains. The result is presented in Appendix A. We find two phases: at \(h>0\) we immediately find a crystalline phase (Phase III), followed by the uniform polarized phase (Phase V). The glass phase is absent here.
## IV Phase transitions and fidelity
In the absence of a well-defined local order parameter, the phase boundaries and phase transitions are difficult to characterize. In such a scenario, we can study how 'orthogonal' different variational ground states are as a function of the control parameter. This information is obtained with the quantum Fidelity analysis.
The quantum Fidelity is defined as \(F(h)=|\langle\psi_{0}\left(h\right)|\psi_{0}\left(h+\delta h\right)\rangle|\), where \(|\psi_{0}\left(h\right)\rangle\) is the ground state vector obtained from the DMRG calculation at \(h\)[72; 73; 74; 75; 76]. It is now evident that if the states \(\left|\psi(h)\right\rangle\) and \(\left|\psi(h+\delta h)\right\rangle\) are linearly dependent we have \(F\to 1\), and if they are completely orthogonal we get \(F\to 0\); any value in between measures the overlap between the two wavefunctions. At \(F=0\), with an infinitesimal change in \(h\), the system picks up a different local-minimum configuration whose state is orthogonal to the other local-minima states [77]. Earlier, the measure of the Fidelity was captured via the concept of the 'Orthogonality Catastrophe' a la Anderson in free-fermion systems. This is an infrared catastrophe arising from gapless excitations. Fidelity can vanish for different reasons, such as emerging glassiness.
Figure 6: We plot the correlation function of the flux operator \(\Delta W_{p}\) with \(p=50\) for (a) \(h=0.295\), and (b) \(h=0.365\).
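As a stand-in for the DMRG ground states used here, the Fidelity scan can be illustrated with the exact-diagonalization helpers from the sketch in Sec. II; the absolute value removes the arbitrary global phase of each eigenvector, and near-degeneracies (as in Phases III and IV) would require tracking more than one low-lying state.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def fidelity_scan(bonds, N, h_values):
    """F(h) = |<psi0(h)|psi0(h + dh)>| from successive ground states on a grid of fields.
    Reuses kitaev_hamiltonian() from the exact-diagonalization sketch above."""
    states = []
    for h in h_values:
        _, psi = eigsh(kitaev_hamiltonian(N, bonds, h), k=1, which="SA")
        states.append(psi[:, 0])
    return [abs(np.vdot(states[i], states[i + 1])) for i in range(len(states) - 1)]

# Example: F = fidelity_scan(bonds, N, np.linspace(0.0, 0.6, 61))
```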
As shown in Fig. 7, we see that \(F\to 1\) in both the uniform phases of flux (Phase I) and of spin (Phase V), suggesting a unique ground state in these phases. \(F\) sharply decreases at the phase boundary between Phase I and II, implying that the vortex gas phase is separated from the uniform phase by a phase transition. Within the Phase II region, the Fidelity does not completely reach 1, suggesting the presence of configurations that partially overlap with the chosen ground state.
The most exciting feature is obtained in Phase III (amorphous vortex crystal) and Phase IV (vortex glass), where \(F=0\). This clearly indicates the presence of a plethora of local minima whose wavefunctions are orthogonal to the chosen ground states. These local minima are not necessarily degenerate with each other but lie within the energy fluctuation scale provided by the magnetic field. Such an abundance of local minima hinders dynamics and ergodicity and is responsible for glassiness. For a given value of \(h\) in Phases III and IV, we have repeated our DMRG runs many times, and each time, the DMRG iterations yield a different flux configuration that is orthogonal to the others. In Phase IV, this behavior hints at a gapless feature of the ground state, as reaffirmed by the calculation of the central charge (not shown).
## V Discussion: Intrinsic glassiness of generic quantum spin liquids
Building on the previous works on quantum glass physics and the above detailed demonstration of it in a Kitaev model, it is important to ask whether glass physics lies in proximity to any generic QSL state when a perturbation is introduced to cause dynamics.
Various numerical studies have suspected intrinsic glassiness as a function of temperature and anomalous thermalisation in the pure Kitaev model. Localisation features are observed for a range of different couplings in the Kitaev ladder at low fields [78], indicating that the localisation behaviour goes beyond low-field uniform-flux approximations. A phase with non-ergodic dynamics is also observed in the 2D Kitaev model under a quench with a skew magnetic field [79] and without the field for anisotropic couplings [80]. The exact solubility of the Kitaev model is a result of the \(N\) local conserved quantities (flux operators) on a honeycomb lattice of \(N\) plaquettes. As claimed in Refs. [29] and [30], the local conservation constraint leads to the shattering of the full Hilbert space of dimension \(2^{2N}\) into \(2^{N}\) sectors of equal dimension. Each sector defines a \(2^{N}\)-dimensional Hilbert space of free neutral fermions. This perfect partitioning of the Hilbert space and the consequent superselection challenge ergodicity and lead to many-body localization, etc.
A complementary view can be given from a pure gauge theory (the 3D toric code model) in Ref. [21]. The spins are _locally_ coupled to a dissipating bosonic bath (due to, say, local displacement fields or a local temperature gradient) to induce dynamics. Yet, there is a tendency for slow relaxation of the ground state due to topological (over-)protection of the emergent low-energy excitations (such as the matter field being confined to the gauge flux(es)). Such a state is associated with emergent glassiness (see Appendix D for a discussion in the Kitaev model). Caution is to be taken that the aforementioned glass behaviors are not always characterized by all the signatures of glass physics.
In an RVB state, a spin operator \(\mathrm{S}_{i}^{\alpha}\), acting on a given site, produces two spinons, which separate away during time evolution. However, spinons, as sources of emergent gauge fields, carry gauge fluxes [81]; sometimes both electric and magnetic charges, called dyons [82; 83]. In the context of \(\mathrm{Z}_{2}\) spin liquids, \(\mathrm{Z}_{2}\) flux excitations are also present [84]. Flux attachment endows spinons with fractional exchange statistics in 2 dimensions [85]. This is also transparent in the Kalmeyer-Laughlin chiral spin liquid state [86; 87] and later works, where a low-energy spinon carries a vison or a meron (half-Skyrmion) [83] or SU(2) gauge fluxes [88].
Any external degree of freedom which directly couples to the constituent local spin degrees of freedom can generate (extended) topologically-protected excitations that do not disappear easily [89]. Consequently, we expect glassiness in spin liquids in general. Anomalous behaviour may generally arise in the non-linear susceptibilities, as magnetic fields couple directly to the spin operator via the Zeeman coupling. Glassiness arising from emergent gauge fluxes and neutral fermions will be visible in experiments involving field quenching and other non-equilibrium analyses as well. Femtosecond laser pulses can be used to probe glassiness in Kitaev spin liquid materials.
Figure 7: We plot the quantum Fidelity (defined in the text) with magnetic field for \(N=400\) lattice sites. The vertical dashed lines indicate all five phase transition points, which coincide with Figs. 2 and 3.
## VI Conclusions
Our detailed DMRG study of the 1D Kitaev model with a magnetic field reveals an intriguing phase diagram with five phases, and among them, we discover a glass phase. The model was also studied earlier with exact diagonalization (ED) [38] as well as with the iDMRG method [66]. But numerical studies of the 2D Kitaev model have the disadvantage of an even smaller number of plaquettes than the 1D ladder counterpart, and hence the phases distinguished by the behavior of the flux operators may not be discernible due to boundary effects. The earlier DMRG study found three phases, which are Phase I; Phases II, III, and IV combined; and Phase V. We are able to distinguish the vortex gas, crystal, and glass phases within the otherwise known \(U(1)\) QSL phase, owing to the detailed analysis of the vortex operators as well as the Fidelity calculation. We find evidence of gapless excitations in the vortex glass phase but not in the gas and crystal phases.
How robust is our phase diagram beyond a two-leg ladder geometry and beyond the limitations of the DMRG studies? A complete answer to this question is not known in the community. We have however repeated the DMRG calculation on a four-leg ladder geometry as given in the Appendix. Here we find four Phases: Phase I, Phase III, Phase IV, and Phase V. This means the boundary between Phase II (vortex gas) and Phase III (vortex crystal) is not discernible. However, the vortex glass of present interest is well reproduced.
There are now numerical softwares available for finite temperature calculation within DMRG and Tensor network formalism. Future extension of our calculation to finite temperature will shed light on the possibility of a BKT-like physics for \(Z_{2}\) vortex as well as the stability of glass phase to thermal broadening.
_Note:_ As we were finishing this manuscript, we came across an interesting paper from Zheng Yan et al. [90], where they report numerical finding of emergent glassiness in disorder free Rydberg atom arrays in 2D.
## Acknowledgements
GB thanks B. Shivaram for the discussion on experimental results from his group on anomalous non-linear susceptibility in \(\alpha\)-RuCl\({}_{3}\); Tarun Grover and Mathew Fisher for discussions. We thank Vijay Shenoy for suggesting the Fidelity calculation. GB acknowledges continuing support from the Institute of Mathematical Sciences, the Indian Institute of Technology in Chennai, India, and the Perimeter Institute for Theoretical Physics at Waterloo, ON, Canada. GB's research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. TD acknowledges research funding from S.E.R.B. Department of Science and Technology, India, under I.R.H.P.A Grant No. IPA/2020/000034 and acknowledges the computational facility at S.E.R.C. Param Pravega under NSM grant No. DST/NSM/R&D HPC Applications/2021/39. KBY thank ICTS for the accommodation during the program "Frustrated Metals and Insulators" (code: ICTS/frumi2022/9).
## Appendix A Phases for Different Couplings
In the main text, we presented results for \(J_{x}=J_{y}=J_{3}=J_{4}=1\). \(J_{4}=1\) imposed a cylindrical boundary
Figure 8: (a) Average of \(\langle W_{p}\rangle\) for \(N=400\) as a function of magnetic field for the coupling \(J_{4}=0\) with \(J_{x}=J_{y}=J_{3}=1\). The phase boundary is indicated with a vertical dashed line. (b)\(\langle W_{p}\rangle\) as function of plaquette number, \(p\) at field \(h=0.165\).
condition perpendicular to the ladder, leaving no bond indices to be open. Now we set \(J_{4}=0\), which gives alternating sites to have open bonds in the matrix product state. The energy dispersion of matter fermions in the ground state with \(h=0\) is gapless and quadratic. For \(h>0\), only two phases are present with a phase boundary at \(h\approx 0.215\), which is distinguished by a cusp in the magnetization plot (not shown). The average value of the \(W_{p}\) operator as a function of the magnetic field is shown in Fig. 8(a). The low field phase is a crystalline phase of \(\langle W_{p}\rangle\), as shown in Fig. 8(b). This phase is same as Phase-III for \(J_{4}=1\) given in main text. The high-field phase is a polarized phase with \(\langle W_{p}\rangle=0\) at all the plaquettes. There is no glass phase observed here.
Two phases with phase boundary around \(h\approx 0.25\) as a function of magnetic field are found with couplings \(J_{x}=2,J_{y}=J_{3}=J_{4}=1\), where the ground state dispersion of matter fermions at \(h=0\) is gap-less and linear. No structural difference is observed in flux configurations in both phases with \(\langle W\rangle\) decreasing smoothly with increasing \(h\).
## Appendix B Towards 2D: Results on 4 Leg model
To find out the robustness of the phase diagram presented in the main text on a two-leg DMRG calculation, we repeat the calculation on a four-leg ladder with cylindrical boundary conditions of system size, \(N=52\) sites and with bond dimension \(D\leq 1000\), truncation error, \(\epsilon\approx 10^{-10}\). We reproduce four phases as presented in Fig. 9. Those four Phases are Phase I, Phase III, Phase IV, and Phase V of the 1D ladder results presented in the main text.
The different phase boundaries are identified from the magnetization values with smaller steps of the magnetic field than seen in previous studies, see for example, Ref. [32]. The average expectation of the plaquette operators as a function of the magnetic field is shown in Fig. 9 agreeing with the previous findings [33].
Even though the system size is small and prone to boundary effects along the legs, we reproduce four phases as seen in the 1D ladder. Here the boundary between Phase- II to Phase- III is not explicit within the finite size calculations. The low field phase, Phase-I, for fields up to \(0.23J\), has uniform flux configurations, as shown in Fig. 10(a). Then, in Phase III, for a range of fields \(0.23<h<0.29\), the proliferation of the dynamically generated fluxes into the ordered configurations is observed, see Fig. 10(b). For fields above \(0.29\), the randomly distributed fluxes without any order are observed up to \(0.36\) in the proposed U(1) spin liquid region. The flux configurations in the polarised phase are with \(\langle W_{p}\rangle=0\) in all the plaquettes uniformly.
## Appendix C Estimation of Gaps
The gap to the excited state with \(\pi-\) fluxes from uniform flux phase at \(h=0\) is estimated by Exact Diagonalisation. The ED calculations are done with matter Majorana fermions by fixing gauge in accordance with uniform flux configuration. The \(Z_{2}\) vortex gap for creating a single \(W\) or \(T\) is in principle calculated by keeping two \(W\) or \(T\)\(\pi-\) fluxes infinitely far apart. In finite size calculations, that is approximately the y-axis intercept of the plot: gap versus \(1/d\), where d is the distances between the two \(\pi-\) fluxes with systems sizes, \((2d)\). The \(\pi-\) flux pair can be created by changing the bond operator from \(u^{z}_{(ij)}=+1\) to \(-1\) on \(z-\) bond common to the two adjacent plaquettes. Further creating a series of adjacent \(\pi-\)fluxes either to the right or left of already created flux-pair for separating those initially created \(\pi-\)fluxes accordingly. The gap to single \(W\) vortex is \(\approx 0.24\) and for \(W\) flux-pair, \(\leq 10^{-3}\). In case of \(T\) plaquettes, it is \(\leq 10^{-3}\) for single vortex and \(\leq 10^{-5}\) for flux-pair.
The ordered superlattice flux configuration at the finite field strength is approximated to uniform crystal for estimation of the gap. Further, it is approximated as fol
Figure 10: \(\langle W_{p}\rangle\) at (a) \(h=0.18\) and (b) \(h=0.26\), the emergence of periodic ordering in flux configurations is observed at this field. Colour bar indicate the strength of the flux in a given plaquette. The shaded plaquettes are connected for the periodic boundary in the cylinder geometry.
Figure 9: The spatial average values of \(\langle W_{p}\rangle\) as a function of the magnetic field for the four-leg Honeycomb lattice with system size \(N=52\) sites. The vertical dashed lines pointing the fields of phase transitions identified from the magnetic susceptibility.
lows: the high \(\langle W_{p}\rangle\) value in the plaquette p to \(+1\), the lower one to \(-1\). With this approximated flux configuration, the estimated gap to the single vortex is calculated following the same approach as for the uniform flux case. The gap to single \(W\) vortex is \(\approx 0.05\) in this approximated uniform crystallised flux configuration. All the energy values mentioned in this article are per unit cell.
Appendix D Topological Overprotection, Non-local String Operators and Emergent glassiness at finite temperature in Disorder free Kitaev model
In this section, we elaborate the discussion on the non-local string excitations in the Kitaev Honeycomb Lattice model and their constrained dynamics. The ground state of the uniform Kitaev model, a zero Fermion number sector, lies in the zero flux sector, as dictated by Lieb's theorem (2D analog of discussions in Sec. II of main text). This sector has full translational invariance and Graphene like Dirac cone spectrum of positive energy Fermion excitations (Majorana fermion) in Bloch states in the Brillouin zone of the honeycomb lattice. Other flux sectors bring in new physics. The spatial distribution of conserved and static \(\pi\)-fluxes, selected randomly from among the \(2^{N}\) sectors, is typically random. Consequently, one particle wave function of the positive energy Fermions will be non-Bloch-like and generically Anderson localized.
We demonstrate that the Kitaev model has features that encourage glassiness at finite temperatures in the absence of disorder. The notion of Topological overprotection [21] induced glassiness was introduced by Chamon using 3D toric code quantum spin models. At the heart of Chamon's work is the observation that coupling of the constituent spin degree of freedom at lattice cites to a dissipative Bosonic (model thermal) bath results in the creation of defect (anyons) clusters. Because of topological protection, defect annihilation and propagation are severely constrained. It results in anomalous and slow relaxation - this is the beginning of glassiness.
To induce dynamics, following Chamon, we couple the constituent spin degrees of freedom at lattice sites to Bose oscillators (of some external heat bath) at every site: \(a_{i}^{\alpha},a_{i}^{\alpha\dagger}\) (this is analogous to applying site-dependent magnetic fields in Eq. 1. And this discussion also applies in case of homogeneous fields mentioned in the main text).
\[H_{spin/bath}=\sum_{i,\alpha}g_{\alpha}S_{i}^{\alpha}\left(a_{i}^{\alpha}+a_{ i}^{\alpha^{+}}\right)\]
Where, \(g_{\alpha}\) is the coupling strength. It was shown [69; 91] that a spin operator at site \(i\), when acting on the ground state, creates a pair of static \(\pi\)-flux excitations in two plaquettes that share a single bond (in \(\alpha\)-direction) and a dynamical Majorana Fermion. During time evolution, the Majorana Fermion propagates away from site \(i\), while the two \(\pi\)-fluxes remain immobile. Dynamics of the two fluxes is restricted (topologically protected) in the following sense: They disappear only when a specific process takes place - when the nearest neighbour spin at a specific site creates/annihilates a (bath) boson and adds two more \(\pi\)-fluxes (thereby annihilating the two \(\pi\)-fluxes that are already present). If a different spin component at the same site \(i\) creates/annihilates a boson, then the \(\pi\)-flux pair do not get annihilated but reorient. If a wrong nearest neighbour spin creates/annihilates a bath Boson, two fluxes split and separate into two next nearest neighbour \(\pi\)-fluxes.
Another extended operator arises from the liberated Majorana fermion. In terms of constituent spin operators, the Majorana fermion operator is a product of string of spin operators. One end of the string is attached to the plaquette pair, and the other end carries the majorana fermion. In other words, the Majorana fermion that has been created by coupling to bath Boson degree of freedom is an extended object (strings). Strings of two Majorana Fermions can cross and get reconnected but never disappear. This feature of topological protection of strings is absent in models discussed in Ref. [21].
The above two types of non-local string operators from \(\pi\)-flux pairs and spin strings attached to Majorana fermions, limit disappearances of fluxes and discourage the proliferation of strings. Equilibration processes get slowed down, and glassiness may emerge. Thus at any finite temperature, because of the production of \(\pi\)-flux excitations and strings, glassiness is induced via coupling to the bath.
From another point of view, the Quantum disentangled liquid [31] character at any finite temperature is manifest and exact in the Kitaev spin liquid. We have thermally produced infinitely massive Z\({}_{2}\) fluxes, in the background of which light Majorana Fermions hop and attempt to localize. In the thermal ensemble, various superselected sectors with static fluxes appear and typically support Anderson localized positive energy neutral Fermions [92]. Thus we have overwhelming members of the thermal ensemble that form a quantum disentangled liquid with a high susceptibility for glassiness and non-thermalization.
|
2305.19873 | Multi-qubit State Tomography with Few Pauli Measurements | In quantum information transformation and quantum computation, the most
critical issues are security and accuracy. These features, therefore, stimulate
research on quantum state characterization. A characterization tool, Quantum
state tomography, reconstructs the density matrix of an unknown quantum state.
Theoretically, reconstructing an unknown state using this method can be
arbitrarily accurate. However, this is less practical owing to the huge burden
of measurements and data processing for large numbers of qubits. Even
comprising an efficient estimator and a precise algorithm, an optimal
tomographic framework can also be overburdened owing to the exponential growth
of the measurements. Moreover, the consequential postprocessing of huge amounts
of data challenges the capacity of computers. Thus, it is crucial to build an
efficient framework that requires fewer measurements but yields an expected
accuracy. To this end, we built a tomography schema by which only a few Pauli
measurements enable an accurate tomographic reconstruction. Subsequently, this
schema was verified as efficient and accurate through numerical simulations on
the tomography of multi-qubit quantum states. Furthermore, this schema was
proven to be robust through numerical simulations on a noisy superconducting
qubit system. Therefore, the tomography schema paves an alternatively effective
way to reconstruct the density matrix of a quantum state owing to its
efficiency and accuracy, which are essential for quantum state tomography. | Xudan Chai, Teng Ma, Qihao Guo, Zhangqi Yin, Hao Wu, Qing Zhao | 2023-05-31T14:10:26Z | http://arxiv.org/abs/2305.19873v1 | # Multi-qubit State Tomography with Few Pauli Measurements
###### Abstract
In quantum information transformation and quantum computation, the most critical issues are security and accuracy. These features, therefore, stimulate research on quantum state characterization. A characterization tool, Quantum state tomography, reconstructs the density matrix of an unknown quantum state. Theoretically, reconstructing an unknown state using this method can be arbitrarily accurate. However, this is less practical owing to the huge burden of measurements and data processing for large numbers of qubits. Even comprising an efficient estimator and a precise algorithm, an optimal tomographic framework can also be overburdened owing to the exponential growth of the measurements. Moreover, the consequential postprocessing of huge amounts of data challenges the capacity of computers. Thus, it is crucial to build an efficient framework that requires fewer measurements but yields an expected accuracy. To this end, we built a tomography schema by which only a few Pauli measurements enable an accurate tomographic reconstruction. Subsequently, this schema was verified as efficient and accurate through numerical simulations on the tomography of multi-qubit quantum states. Furthermore, this schema was proven to be robust through numerical simulations on a noisy superconducting qubit system. Therefore, the tomography schema paves an alternatively effective way to reconstruct the density matrix of a quantum state owing to its efficiency and accuracy, which are essential for quantum state tomography.
pacs: 03.65.-w,32.60.+i, 31.15.-p
## I Introduction
Recently, explosive advancements in the fundamental research and technological field of quantum physics are attributed to a surge in theoretical studies on entanglement [1; 2; 3], superposition [4], and interference [5] as well as technical improvements in precise quantum manipulations [6] and characterization of quantum circuits [7]. These ongoing studies also stimulate the rapid development of quantum information transformation [8; 9], quantum computing [10; 11; 12; 13], quantum cryptography [14; 15], and quantum simulation [16; 17]. Accuracy and security of quantum states are the most concerning issues in the case of quantum information processing. Therefore, considerable attention is focused on the whole quantum information transformation and quantum computing circuits. Their unit operational blocks are the quantum states on which quantum measurement [18; 19] acts and quantum gates where quantum operations can be performed at specific quantum states. Theoretically, a quantitatively certified quantum state could ensure the accuracy and security of quantum information transformation and quantum computing.
Previous studies have discovered a tool to certify the quantum state and reconstruct the density matrix [20; 21] of a quantum state with measurements, namely, quantum state tomography [21; 22; 23; 24; 25; 26; 27; 28]. These studies involved physical processes with an estimation. Physically, extracting information from quantum states results in a large number of measurements. However, mathematically, a valid estimator combined with a recovery algorithm is necessary to post-process the obtained data. This framework suffices to characterize the quantum state in theory precisely. However, tomography reaches its bottleneck owing to the exponentially increasing measurement requirements for qubit multiplications. Accordingly, the huge amount of obtained data might overburden the computer capacity. Moreover, accomplishing a full tomographic scan for qubits larger than 10 will be technologically difficult. Therefore, a more efficient method is urgently needed, which requires fewer measurements but still yields high accuracy. Linear inversion [29], maximum likelihood estimation (MLE)[30], Bayesian mean estimation (BME)[31], compressed sensing (CS) [32; 33], Fisher information [34] and the self-guided method [35; 36] are common estimators. For example, the CS can reduce the measurement requirements owing to its compressed sampling. Recovery algorithms also include CVX [34], a two-step descent method, and Nesterov [37], a one-step descent method. Additionally, the Powel [38], APG [39], CG [40], and APG-CG [41] are used to promote accuracy and speed.
The aforementioned studies [42; 43] attempted to construct an efficient and accurate tomographic framework. These studies are becoming more important in the case of a large number of qubits which is required in practical quantum computation and quantum information processing. Here, we construct a tomographic schema that requires fewer Pauli measurements [44] but still yields high accuracy. Theoretically, this work is rooted in the previous work [37], in terms of picking Phaselift as the es
timator and choosing Nesterov as the recovery algorithm. However, the goal of this work differs from that of [37]. The schema here has been proven to be valid and accurate through numerical simulations in multi-qubit state tomography. In particular, a state without a phase factor [45] can be precisely reconstructed by measuring only two Pauli bases.
## II Method
### Theoretical analysis
In general, a density matrix for an \(n\)-qubit system can be represented by the Stokes parameters Generally, a density matrix for a n-qubit system can be represented by the Stokes parameters:
\[\rho_{n}=\frac{1}{2^{n}}\left(\sum_{\mathbf{u}=0}^{3}c_{\mathbf{u}}\;\sigma_{ \mathbf{u}}\right), \tag{1}\]
where and \(c_{\mathbf{u}}\in\mathbb{R}\), with \(\mathbf{u}=i,j,\cdots,k\), denotes the Stokes parameter, and \(\sigma_{\mathbf{u}}:=\sigma_{i}\otimes\sigma_{j}\otimes\cdots\otimes\sigma_{k}\) in which \(\sigma_{0,1,2,3}\) corresponds to the identity matrix and the Pauli matrices:
\[\sigma_{0} :=I:=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right), \sigma_{1} :=\sigma_{x} \equiv\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\] \[\sigma_{2} :=\sigma_{y} \equiv\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right), \sigma_{3} :=\sigma_{z} \equiv\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right),\]
respectively. A Stokes parameter \(c_{\mathbf{u}}\) can be obtained using Pauli measurement setting \(\mathbf{u}\), in which we mean measuring the observable \(\sigma_{\mathbf{u}}\) was measured under the tomographic state.
\[c_{\mathbf{u}}=\mathrm{tr}(\sigma_{\mathbf{u}}\rho_{n})=p_{\mathbf{u}}^{+}-p_ {\mathbf{u}}^{-}\,, \tag{2}\]
where the probabilities
\[p_{\mathbf{u}}^{\pm}=\sum_{\mathcal{M}_{\mathbf{u},\mathbf{b}}\in\mathcal{S} _{\mathbf{u}}^{\pm}}\mathrm{tr}(\mathcal{M}_{\mathbf{u},\mathbf{b}}\,\rho_{n}), \tag{3}\]
with \(\mathcal{S}_{\mathbf{u}}^{\pm}\)\(\pm\) denotes the eigen subspace of \(\sigma_{\mathbf{u}}\) with the eigenvalues \(\pm 1\), and \(\mathcal{M}_{\mathbf{u},\mathbf{b}}\) denotes a projector for an eigen product basis of \(\sigma_{\mathbf{u}}\). Here note the subscripts \(0,1,2,3\) in Eq. (1) correspond to \(I,X,Y,Z\) measurements, respectively, moreover, we say \(X,Y,Z\), instead of \(1,2\) and \(3\), measurement setting hereinafter.
All the diagonal elements of the density matrix can be determined via the \(Z\) direction measurement, which used \(\sigma_{z}\) combined with \(\sigma_{0}\). All the measurement settings \(U\cdots V\) with \(U,\cdots,V\in\{X,I\}\) can share a common set of measurement product basis, from which the Stokes parameters \(c_{U\cdots V}\) can be obtained from a common set of probabilities. Moreover, for the real part of the density matrix, because all the Stokes parameters with the odd number of \(Y\) are zero, only the measurements with an even number of \(Y\) are needed. Similarly, for the imaginary part of the density matrix, only the measurements with the odd number of \(Y\) are needed.
Based on the aforementioned analysis, we propose the measurement shown in Table 1 for state tomography for a class of entangled pure states including GHZ, W, and cluster state. For the real part (without a phase factor), a \(Z\) direction measurement is necessary to determine the diagonal elements, and an even number of \(Y\) together with \(X(Z)\) direction measurements are used to reconstruct the coherent (off-diagonal) elements. For the imaginary part, odd numbers (smaller than the number of qubits) of \(Y\) together with \(Z\) or \(X\) are necessary to reconstruct the off-diagonal elements. The specific forms of the measurements shown in Table 1 are verified through our numerical simulation in later sections.
We also propose more general law in Table table1 to reconstruct the pure or nearly pure state, by considering an \(n\)-qubit (with the number of qubits \(n>6\)) state, where the situations are different for \(n\) is odd or even.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline qubit & Real part (P) & Imaginary part (P) & Bases (P) & Bases (F) \\ \hline
1 & X,Z & Y & 3 & 3 \\ \hline
2 & XX,ZZ,YY & YZ,ZY & 5 & 9 \\ \hline
3 & XXX,ZZZZ,YYZ & YZZ,ZYZ,ZZY & 6 & 27 \\ \hline
4 & XXX,ZZZZ,YYYY & YZZ,ZYZZ,...,YYYYZ & 11 & 81 \\ \hline
5 & XXXX,ZZZZZZ,YYYYZ & YZZZZ,ZYZZ,...YYYZZ,ZZ & 28 & 243 \\ \hline
6 & XXXXXX,ZZZZZZ,YYYYYY & YZZZZ,ZZZZZZ,XXXXYY,YXXXXX,... & 135 & 729 \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline \(2n-1\) & \(X^{\otimes 2n},Z^{\otimes 2n-1}Z\) & \(YZ^{\otimes 2n-2},\cdots,YX^{\otimes 2n-2},\cdots\) & \(\begin{array}{c}3+A_{2n-1}^{2}+A_{2n-1}^{2}\\ +\cdots+A_{2n-1}^{2}+A_{2n-1}^{2}\end{array}\) & \(3^{2n-1}\) \\ \hline \(2n\) & \(X^{\otimes 2n},Z^{\otimes 2n},Y^{\otimes 2n}\) & \(YZ^{\otimes 2n-2},\cdots,YX^{\otimes 2n-2},...\) & \(\begin{array}{c}3+A_{2n}^{2}+A_{2n}^{2}\\ +\cdots+A_{2n}^{2}+A_{2n}^{1}\end{array}\) & \(3^{2n}\) \\ \hline \end{tabular}
\end{table}
Table 1: Measurements for reconstructing the density matrices of quantum states of qubits system. The basis measurement law differs between the quantum states with odd \((2n-1)\) and even \((2n)\) numbers.
MSE \(\sim 10^{-2}\) is the stopping condition. The rightmost column is the number of bases for full tomography [Base (F)], compared with that for our method [Base (P)].
The case is simpler when n is even. Here, reconstructing the real parts requires the measurements \(X^{\otimes n}\), \(Y^{\otimes n}\), and \(Z^{\otimes n}\), while \(Y^{\otimes n}\) measurements become unnecessary for states without phase factors, i.e., all the elements of the density matrix are real. When \(n\) is odd, the measurement \(Y^{\otimes n-1}\otimes Z\) or \(Y^{\otimes n-1}\otimes X\) is required.
As for the limitation of our method, it should be pointed out that the measurements presented in Table 1 are not sufficient to uniquely determine an arbitrary pure state among all states. Since it has been proven in [46] that 11 and 31 Pauli measurements are needed to uniquely determine an arbitrary 2- and 3-qubit pure state among all states, respectively. Moreover, note that the efficiency of the Nesterov recovering algorithm in \(\mathbf{B}\) of this Section requires the state should be of low rank. Whilst, we find that, through numerical simulation in Section III, the Pauli measurements in Table 1 can determine the specific states including W, GHZ, and cluster state, with high precision and efficiency.
### Nesterov Recovering algorithm
After framing a schema of measurements using which the density matrices can be reconstructed, an efficient estimator combined with a precise recovering algorithm is mathematically required to solve this problem numerically. Herein, we introduce the PhaseLift method [23] as the estimator and the Nesterov algorithm[47] as the recovery core, the validity of which has been demonstrated previously [37]. Integrating this estimator and recovery core allows us to efficiently recover the elements of the density matrix. Based on the method presented in [37], the reconstruction problem can be formulated into a convex optimization problem:
\[\begin{split}\text{Minimize}\Bigg{\{}&\sum_{ \mathbf{u},\mathbf{b}}\frac{1}{2}\big{[}\text{Tr}\left(\mathcal{M}_{\mathbf{u },\mathbf{b}}\,\rho\right)-f_{\mathbf{u},\mathbf{b}}\big{]}^{2}+c_{1}||\rho|| _{F}^{2}\\ &+c_{2}\text{Tr}(\rho)\Bigg{\}},\quad\text{subject to}\quad\rho \geq 0,\end{split} \tag{4}\]
where the relative frequency \(f_{\mathbf{u},\mathbf{b}}\) denotes the actual probability of the measurement result of \(\mathcal{M}_{\mathbf{u},\mathbf{b}}\), and \(c_{1,2}\) represent optimization parameters in the APGL algorithm [48]. In particular, \(f_{\mathbf{u},\mathbf{b}}\) is approximately equal to the expected value of \(\mathcal{M}_{\mathbf{u},\mathbf{b}}\) when the number of measurements is sufficiently large, \(c_{1}\) is related to the upper bound of the Lipschitz constant and \(c_{2}\) is related to the constrain of the semi-definite property of the density matrix.
The compressed sensing[33] method has been widely used in quantum state tomography, which only requires spending much fewer Pauli measurement settings than that of the full tomographic method. However, the compressed sensing[33] method does not specify which Pauli measurement set can be used.
The previous theoretical research [37] on comparing the performances between other methods including CVX, APG, and CG methods has verified the high accuracy and high speed for large numbers of qubits state tomography, especially for specific entangled states, such as W state, cluster state, and GHZ state.
Recently, IBM has become a very important quantum computation platform in the cloud. We use this platform to simulate and execute real tomography experiments. The tomography methods in qiskit.ignis. verification of IBM can be categorized into two parts, one part is designed for reconstructing the density matrix of the quantum state, and the other part is for characterizing the performance of a quantum circuit by estimating the average gate fidelity or average error rate. Here we focus on the state tomography in qiskit and we find it includes MLE and CVX method. The density matrix reconstruction can be treated to solve the linear problem, we can turn it into an optimization problem by attempting to minimize the problem while subjecting it to additional constraints to ensure it is indeed a density matrix. This is done by state\({}_{\text{cvx fit}}\). Another approach is to solve this optimization problem with no further constraints. The result might not be a density operator, i.e. positive semi-definite with trace 1; in this case, the algorithm first rescales in order to obtain a density operator. This is done using state\({}_{\text{ale fit}}\).
Therefore, on this platform, we numerically simulated multi-qubit graph state tomography on a noisy superconducting qubit platform in the IBM cloud [49] to explore the robustness of this schema. Investigations showed that two Pauli measurements enabled us to reconstruct multi-qubit states [50].
Figure 1: Schematic for the reconstruction of 1- to 6-qubit pure states using our efficient tomography schema. The dashed line (MSE_P)and solid line (MSE_NP) represent the states with and without phase factors, respectively.
### The procedure of the tomographic schema
In order to make it clear, we will explain this method in detail. The procedure of the simulation method (as shown in Fig. 2) is composed of three steps:
Part 1: Get data through numerical simulation.
Part 2: Process the obtained data. Here we feed the data into the estimator, i.e., Eq. (4). And, if we want to feed the relative frequency into the estimator, we need to multiply by the corresponding eigenvalues of the state.
Part 3: Execute the Nesterov algorithm, then we can obtain the estimated density matrix.
Due to the similarity between this work and the previous one[50], it is necessary to demonstrate the difference between them. First, the numerical simulations in [37] just prove that it is possible to do the multi-qubit state tomography accurately and efficiently in theory, and the Gaussian white noise in those simulations is less practical. Here, we prove the practical possibility of our method by feeding the real experimental data, which is obtained from the IBM cloud, into the tomographic schema.
Second, we have verified the accuracy and efficiency of the method in [37], which shows that only partially randomly selected Pauli bases produce a highly precise multi-qubit tomography. Nonetheless, for a specific quantum state, the bases are not specified. Hence in this work, we naturally consider that the number of measurement basis might be cut down, especially for some special quantum states, as recent studies have suggested both in theory and experiment [51].
In summary, while the previous works have paved the theoretical path, this work uses this method in practice and finds that fewer measurements can produce a precise multi-qubit tomography.
## III Simulations on tomography of multi-qubit quantum states
We have theoretically constructed an efficient schema to reconstruct the density matrix of an unknown quantum state by only measuring only a few Pauli bases. Its validity and efficiency turned out to be true through the following simulations on tomography of multi-qubit pure states.
Fig. 1 shows the simulation of the multi-qubit GHZ state reconstructions with an efficient tomography schema. Hereinafter, we used the mean square error (MSE), \(\mathrm{Tr}\left[(\rho_{\mathrm{E}}-\rho_{\mathrm{T}})^{\prime}(\rho_{\mathrm{ E}}-\rho_{\mathrm{T}})\right]\), and fidelity, \(\mathrm{Tr}(\rho_{\mathrm{E}}\rho_{\mathrm{T}})\), to evaluate the accuracy, where \(\rho_{\mathrm{T}}\) and \(\rho_{\mathrm{E}}\) denote the target state and the estimated state, respectively. This schema shows higher accuracy and efficiency for the states without phase factors than the states with phase factors compared with the states with phase factors. Therefore, this method is best suited for the tomography of stats that are expected to have no imaginary parts.
Figure 2: Schematic diagram for the procedure of the tomographic schema.
According to the above simulations, it is necessary to test this efficient tomography schema for some specific entangled states. Therefore, we use the IBM quantum cloud [49] to simulate the tomography process of quantum states in a superconducting qubit system [52]. Three typical graph states are selected, namely, the W, cluster, and GHZ states, owing to their intensive applications in quantum information and quantum computation. The n-qubit W state and n-qubit GHZ state are expressed as
Figure 4: Populations of density matrices of 5-qubit W, cluster, and GHZ states respectively using our efficient tomography schema.
Figure 3: Tomography simulations for the W, cluster, and GHZ states. The solid lines (MSE/Fidelity_(X, Z)) represent our efficient tomography schema, which only measures two directions, \(X\) and \(Z\). The dashed lines (MSE/Fidelity IBM) represent the results of the IBM cloud platform. The accuracy and fidelity of the reconstructions are estimated using MSE (shown in the upper and lower panels, respectively).
follows:
\[|\mathrm{W}\rangle_{n}=\frac{1}{\sqrt{n}}\big{(}|0_{1}\cdots 0_{n-1}1_{n} \rangle+|0_{1}\cdots 0_{n-2}1_{n-1}0_{n}\rangle\] \[\qquad\qquad+\cdots+|1_{1}0_{2}\cdots 0_{n}\rangle\big{)}, \tag{5}\] \[|\mathrm{GHZ}\rangle_{n}=\frac{1}{\sqrt{2}}\left(|0\rangle^{ \otimes n}+|1\rangle^{\otimes n}\right).\]
We consider the linear type cluster state, which has the exact form,
\[|\mathrm{C}\rangle_{n}=\frac{1}{2^{n/2}}\bigotimes_{i=1}^{n}\left[|0\rangle_{i }+|1\rangle_{i}\otimes\sigma_{z}^{(i+1)}\right], \tag{6}\]
with the convention \(\sigma_{z}^{(n+1)}\equiv 1\). From Eq. (6), one can have the exact forms of the 2-, 3-, and 4-qubit cluster states: \(|\mathrm{C}\rangle_{2}=(|0+\rangle+|1-\rangle)/\sqrt{2}\), \(|\mathrm{C}\rangle_{3}=(|0+0\rangle+|-1-\rangle)/\sqrt{2}\), and \(|\mathrm{C}\rangle_{4}=(|+0+0\rangle+|+0-1\rangle+|-1-0\rangle+|-1+1\rangle)/ \sqrt{2}\). Notably, the 2- and 3-qubit cluster states are local unitary equivalence to the GHZ state, whereas any \(n>3\) multiqubit cluster state is not [53]. In a quantum circuit, the cluster state is generated in accordance with its corresponding graph, in which vertices represent qubits with the initial states \(|\pm\rangle\equiv(|0\rangle\pm|1\rangle)/\sqrt{2}\), and the edges represent the controlled-phase gates acting on the qubits afterward.
In simulating the real qubits system in practice, we introduce noise models in a quantum circuit. The errors on each quantum gate and the process of reading out are also considered. In particular, we apply a depolarizing channel model to simulate the errors of quantum gates (where the single- and two-qubit gates error rates are set to be \(a=0.002\) and \(b=0.005\), respectively). Moreover, statistical noises are very important in quantum computation, and in our simulation, this kind of noise has been modeled with a bit flip error with a specific probability when the qubits are measured.
Fig. 3 shows 1- to 6-qubit W, cluster, and GHZ state tomography evaluated by EMS and fidelity. Herein, we reconstructed density matrices using only two measurement settings, \(X^{\otimes n}\) and \(Z^{\otimes n}\) with \(n\) being the number of qubits. Consequently, our tomography schema, with a smaller MSE and higher fidelity, outperformed the IBM cloud with regard to accuracy. Therefore, this schema might be applied to state certifications for large numbers of qubits. Moreover, the schema showed higher efficiency than the IBM cloud platform. The reconstructed density matrices are shown in Fig. 4.
## IV Discussion on the robustness of the efficient tomography schema
A comparison between the full tomography and efficient tomography schema is shown in Fig.5(a). We can observe that the number of measurements for the efficient tomography schema increases polynomially, whereas it increases exponentially for the full tomography schema. In particular, in the absence of a phase factor, two measurements can be used for the reconstruction. Evidently, this method outperforms the full tomography schema in terms of efficiency. A reliable quantum computing and information process requires a precisely controlled qubit system. The qubit system that interacts with its environment at all times becomes easily incoherent, and the noise from the environment technologically challenges the multiplication of the quantum system. Additionally, noises are generated during state preparation and measurement. Therefore, investigating the robustness of the tomography schema under noise for the multiqubit system is essential. Fig.5(b) shows the numerical simulation of multiqubit state tomography for a noisy superconducting qubit system. We investigate the robustness of our tomography schema by varying the two-qubit error rate b from 0 to 0.2 and fixing the single-qubit error rate at a = 0.002 as well as the state-of-the-art noise level of the superconducting qubit system. The simulation shows that the noise has a marginal effect on the state tomography as
Figure 5: (a). Comparison between the required measurements of the full tomography and efficient tomography schema. The dashed-dotted line (Num_FT), solid line (Num_NT_NP), and dashed line (Num_NT_P) denote the number of measurements with full tomography, efficient tomography schema without a phase, and efficient tomography with a phase, respectively. (b). Robustness of our efficient tomography schema. The horizontal axes represent the noise level of the two-qubit gate.
long as the error rate is small enough, where b is smaller than \(~{}0.08\). This type of robustness remains when the number of qubits multiplies. Therefore, this schema has promising robustness, even if the scale of qubits grows.
It is worthy to introduce new researches that are demonstrated both in theory and experiment[51]. The tomographic schema demonstrated by them enables us to accurately determine an unknown quantum state with at most \(e^{O(k)}\log^{2}(n)\) (where \(n\) is the number of qubits and \(k\) is...), which is realized by concrete measurement protocols. The protocol originates from a theory in the quantum computation field which is called perfect hash functions. Using this theory we can partition the qubits into disjoint subsets, in which we perform all parallel measurements over single-qubit measurement bases (\(X\), \(Y\), or \(Z\)), such that all qubits in the same subset have identical measurement settings. This procedure allows us to determine all the reduced density matrices using only \(NM3^{k}\) measurements altogether, where \(N\), \(M\), and \(k\) are the number of the partitions, the measurements in an individual base, and the disjoint subsets, respectively. With this theoretic base, the experiment later realizes this theoretic schema in a phonic quantum state reconstruction, using MLE and BME.
In summary, the efficiency and accuracy of the tomography methods in the above studies are mainly guaranteed by efficient measurement protocol instead of an estimator or recovery algorithm. Hence the tomographic schema we put forward here utilizes a different estimator, called Phaselift, and implements a Nesterov algorithm, which realizes the accuracy and efficiency of tomography.
## V Conclusion
We have theoretically framed a tomography schema using which multiqubit states can be reconstructed efficiently and accurately. Physically, the efficiency was ensured using a fewer number of Pauli measurements compared with those required for full tomography. Mathematically, the accuracy of this tomography schema was achieved using a combination of the PhaseLift method (as the estimator) and the Nesterov method (as the recovery algorithm). Our tomography schema outperformed the full tomography in terms of efficiency and accuracy, particularly in the case of a larger number of qubits, as justified by numerical simulations on the tomography of three entangled states, i.e., W, cluster, and GHZ states. Considering a quantum state certification that requires fewer measurements but yields a precise estimation, this efficient tomography schema might be useful in practical quantum computing and information transformation.
## Acknowledgement
We would like to thank Bo Gao for his careful revision of the manuscript. This work is supported by the NSFC Grant Nos. 11905100 and 11675014. Additional support is provided by the Ministry of Science and Technology of China (2013YQ030595-3).
|
2308.16564 | High-T$_C$ superconductivity in $\mathrm{La_3Ni_2O_7}$ based on the
bilayer two-orbital t-J model | The recently discovered high-T$_C$ superconductor La$_3$Ni$_2$O$_7$ has
sparked renewed interest in the unconventional superconductivity. Here we study
superconductivity in pressurized La$_3$Ni$_2$O$_7$ based on a bilayer
two-orbital $t-J$ model, using the renormalized mean-field theory. Our results
reveal a robust $s^\pm-$wave pairing driven by the inter-layer $d_{z^2}$
magnetic coupling, which exhibits a transition temperature within the same
order of magnitude as the experimentally observed $T_c \sim 80$ K. We establish
a comprehensive superconducting phase diagram in the doping plane. Notably, the
La$_3$Ni$_2$O$_7$ under pressure is found situated roughly in the optimal
doping regime of the phase diagram. When the $d_{x^2-y^2}$ orbital becomes
close to half-filling, $d-$wave and $d+is$ pairing can emerge from the system.
We discuss the interplay between Fermi surface topology and different pairing
symmetries. The stability of the $s^\pm-$wave pairing against Hund's coupling
and other magnetic exchange couplings is discussed. | Zhihui Luo, Biao Lv, Meng Wang, Wéi Wú, Dao-Xin Yao | 2023-08-31T08:57:38Z | http://arxiv.org/abs/2308.16564v4 | High-T\({}_{C}\) superconductivity in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) based on the bilayer two-orbital t-J model
###### Abstract
The recently discovered high-T\({}_{C}\) superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) has sparked renewed interest in the unconventional superconductivity. Here we study the unconventional superconductivity in pressurized La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) based on a bilayer two-orbital \(t-J\) model, using the renormalized mean-field theory. Our results reveal a robust \(s^{\pm}-\)wave pairing driven by the inter-layer \(d_{z^{2}}\) magnetic coupling, which exhibits a transition temperature within the same order of magnitude as the experimentally observed \(T_{c}\sim 80\) K. We obtain a comprehensive superconducting phase diagram in the doping plane. Notably, the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure is found situated roughly in the optimal doping regime of the phase diagram. When the \(d_{x^{2}-y^{2}}\) orbital becomes close to half-filling, \(d-\)wave and \(d+is\) pairing can emerge from the system. We discuss the interplay between the Fermi surface topology and different pairing symmetries. The stability of the \(s^{\pm}-\)wave pairing against Hund's coupling and other magnetic exchange couplings is examined.
+
Footnote †: These authors contributed equally to this work
+
Footnote †: These authors contributed equally to this work
+
Footnote †: These authors contributed equally to this work
## I Introduction
Understanding high transition temperature (\(T_{c}\)) superconductivities remains one of the greatest challenges in the condensed matter physics. For cuprates [1; 2; 3], the fundamental mechanism of the \(d-\)wave pairing is believed to essentially lies in the \(d_{x^{2}-y^{2}}\) orbital in the presence of strong Coulomb repulsion [2]. It is usually referred as unconventional superconductivity distinguished from the more conventional Bardeen-Cooper-Schrieffer (BCS) type of superconductivity. Another prominent example of the unconventional superconductivity is found in the iron-based superconductors [4; 5; 6; 7; 8], where multiple \(d-\)orbitals are often involved in the pairing. Most recently, a new Ruddlesden-Popper nickelate superconductor La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) is found with a \(T_{c}\approx 80\) K [9; 10; 11] under a moderate pressures. Importantly, it represents one of the rare examples of superconductor that hosts \(T_{c}\) higher than the liquid nitrogen boiling temperature. On one hand, La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) is similar to cuprates, as they both have the NiO\({}_{2}\)/CuO\({}_{2}\) plane hosting the crucial \(d_{x^{2}-y^{2}}\) orbital at Fermi level. On the other hand, La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) differs from the cuprates, as its apical oxygens and \(d_{z^{2}}\) orbitals come to play a role in low-energy physics [9; 12]. Given this context, we would like to ask that whether the underlying pairing mechanism in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) resemble cuprates? Or it belongs to a novel scenario that distinct from the extensively investigated cuprates? From theoretical perspective, to address these questions a first step is to map out a superconducting phase diagram of the relevant prototypical physical models.
Electronic structure studies [13; 14; 15; 16; 17] as well as optical experimental probe [18] show that in pressurized La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\), Ni-\(d_{z^{2}}\) orbital is involved in Fermi energy due to the strong inter-layer coupling via apical oxygen in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). Via hybridization with the in-plane oxygen \(p-\)electrons, the \(d_{z^{2}}\) orbital also interacts with \(d_{x^{2}-y^{2}}\) orbital, eventually giving rise to the three-pocket Fermi surface structure. Such Fermi surface is distinctly different from the cuprates so as to drifts its pairing ground state far away from the \(d-\)wave upon doping a single \(d_{x^{2}-y^{2}}\) orbital. Another issue to be addressed is electronic occupancy, which is important to reflect charge/spin correlation and effective superexchanges. La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) has a nominal configuration \(d^{7.5}\)[9; 19] that is significantly smaller than \(d^{9}\) in cuprates. This indicates an average 2.5 holes for each Ni-\(3d\) shell, which reside in two active \(e_{g}\) orbitals. The computed values of electrons density for each orbital varies considerably in different research works [20; 21; 22; 23; 24]. Nevertheless, it in general can be regarded as a heavily hole-doped multi-orbital system with strong electron correlations. Regarding the superexchange couplings, studies based on atomic limit analysis and cluster dynamic mean field theory [25; 20] pointed out that the \(J_{\perp}\) connecting two apical \(d_{z^{2}}\) orbitals may acquire a magnitude being 1.75\(\sim\)2 times larger than the intra-layer \(d_{x^{2}-y^{2}}\) exchange coupling, with the latter estimated to be close to the cuprate counterpart [20]. Such large \(J_{\perp}\) is very likely to be responsible for the high \(T_{c}\) in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\)[20; 21; 26; 27; 28; 29].
In this paper, we systematically investigate the superconductivity in the bilayer two orbital \(t-J\) model that prototypes the low-energy physics of pressurized La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\), using the renormalized mean-field theory (RMFT) [30; 31; 32]. RMFT approach can be seen as a concrete implementation of Anderson's resonating valence bond (RVB) concept of the unconventional superconductivity[33]. In line with Gutzwiller scheme, the renormalization effects from strong electron correlations are considered on different levels by introducing the doping-dependent renormalization factors \(g_{t},g_{J}\)[30] in RMFT. Meanwhile, the mean-field decomposition of
the magnetic exchange couplings \(J\) allows searching for the BCS pairing instabilities of the system. Despite its simple formulations, RMFT has shown to be able to describe various aspects of the cuprates, such as superfluid density, the dome-shaped doping-dependence of \(T_{c}\), as well as pseudogap [31]. In RMFT, the pairing order parameter \(g_{t}\Delta\) involves two competing energy scales with increasing doping [30]. In our study, we find that the outcome of this competition leaves La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) residing in the optimal doping regime in the superconducting phase diagram. Our calculation suggests a \(T_{c}\) comparable to that of the experiments, indicating that our considerations are in general relevant to the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) superconductor. We also elucidate different possible pairing symmetries in a broad doping range.
The paper is organized as follows. In Sec. II, we introduce the model and the RMFT method. In Sec. III, we present our results including \(T_{c}\) for parent compound, doping phase diagram and Fermi surface. The influence of the strength of superexchanges and Hund's coupling are also discussed at the end of the section. In Sec. IV we discuss the stability of the \(s^{\pm}-\)pairing. Section V provides a conclusion. Finally, details about the employed method and supporting materials can be found in the appendixes.
## II Model and Methods
In the strong-coupling limit, the bilayer two orbital Hubbard model [13] can be mapped into a \(t-J\) model as done in Wu, \(et\)\(al.\)\(\cdot\)'s work [20],
\[\mathcal{H} =\mathcal{H}_{t}+\mathcal{H}_{J} \tag{1}\] \[\mathcal{H}_{t} =\sum_{ijst\sigma}(t^{st}_{ij}-\mu\delta_{ij}\delta_{s^{\star}}) c^{\dagger}_{is\sigma}c_{jt\sigma}\] \[\mathcal{H}_{J} =J_{\perp}\sum_{i}\mathbf{S}_{iz_{1}}\cdot\mathbf{S}_{jz_{2}}+J_{//} \sum_{<ij>}^{s=x_{1},x_{2}}\mathbf{S}_{is}\cdot\mathbf{S}_{js}\] \[+J_{xz}\sum_{<i,j>}^{st=x_{1}z_{1},x_{2}z_{2}}\mathbf{S}_{is}\cdot\bm {S}_{jt}+J_{H}\sum_{i}^{s>t}\mathbf{S}_{is}\cdot\mathbf{S}_{it},\]
where \(\mathcal{H}_{t}\) is the tight-binding Hamiltonian taken from downfolding the DFT band structure [13], which is defined in a basis of \(\Psi_{\sigma}=(c_{x_{1}\sigma},c_{z_{1}\sigma},c_{x_{2}\sigma},c_{z_{2}\sigma })^{T}\), with \(c_{s\sigma}\) representing annihilation of an electron on \(s=x_{1},z_{1},x_{2},z_{2}\) orbital with spin \(\sigma\). \(\mu\) is the chemical potential. \(\mathcal{H}_{J}\) is the Heisenberg exchange couplings, and the spin operator \(\mathbf{S}_{is}=\frac{1}{2}\sum_{\alpha\beta}c^{\dagger}_{is\alpha}\mathbf{\sigma}_{ \alpha\beta}c_{is\beta}\). According to the estimated antiferromagnetic correlations in La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\)[20], there can be three major magnetic exchange couplings \(J_{\perp},J_{//},J_{xz}\), which respectively represents nearest-neighbor inter-layer exchange of \(d_{z^{2}}\) orbital, intra-layer exchange of \(d_{x^{2}-y^{2}}\) and intra-layer exchange between \(d_{x^{2}-y^{2}}\) and \(d_{z^{2}}\). The Hund's coupling \(J_{H}\) and \(J_{xz}\), as we investigate in section III.5, exhibits no significant effect on the superconductivity here, hence will be neglect in the following studies unless otherwise stated.
To proceed with RMFT [30], we first define the following mean-field parameters,
\[\chi^{st}_{ij} =\frac{1}{2}\langle c^{\dagger}_{is\uparrow}c_{jt\uparrow}+c^{ \dagger}_{is\downarrow}c_{jt\downarrow}\rangle=\langle c^{\dagger}_{is\uparrow} c_{jt\uparrow}\rangle \tag{2}\] \[\Delta^{st}_{ij} =\frac{1}{2}\langle c^{\dagger}_{is\uparrow}c^{\dagger}_{jt \downarrow}-c^{\dagger}_{is\downarrow}c^{\dagger}_{jt\uparrow}\rangle=\langle c ^{\dagger}_{is\uparrow}c^{\dagger}_{jt\downarrow}\rangle, \tag{3}\]
where \(\chi^{st}_{ij}\) and \(\Delta^{st}_{ij}\) are particle-hole and particle-particle pairs relating \(is\) and \(jt\). Here we assume no magnetic ordering. For each type of \(J_{r}\), the mean-field decomposition of \(\mathcal{H}_{J}\) generates condensations of \(\chi\) and \(\Delta\) in the corresponding \(r-\)channel,
\[H^{X}_{J_{r}} =-\frac{3}{4}J_{r}\sum_{<ij>\sigma}(\chi^{st}_{\delta}c^{\dagger}_{i \sigma}c_{j\sigma}+h.c.)+\frac{3}{2}J_{r}N|\chi^{st}_{\delta}|^{2} \tag{4}\] \[H^{\Delta}_{J_{r}} =-\frac{3}{4}J_{r}\sum_{<ij>\sigma}(\sigma\Delta^{st}_{\delta}c^{ \dagger}_{i\sigma}c^{\dagger}_{j\bar{\sigma}}+h.c.)+\frac{3}{2}J_{r}N|\Delta^{ st}_{\delta}|^{2}\]
Assuming transnational symmetry, \(\delta=R_{j}-R_{i}\) denotes different bonds in real-space associated with \(J_{r}\), and \(N\) is the total size of the lattice.
We now introduce two renormalization factors [34],
\[G^{s}_{t}=\sqrt{\frac{1-n_{s}}{1-n_{s}/2}},\qquad G^{s}_{J}=\frac{1}{(1-n_{s}/ 2)}. \tag{5}\]
These two quantities essentially reflects the renormalization effects by the electrons repulsions on top of the single-particle Hamiltonian in the Gutzwiller approximation [30; 32; 35; 36], with \(n_{s}=\sum_{\sigma}\langle c^{\dagger}_{s\sigma}c_{s\sigma}\rangle\) representing density of orbital \(s\). This eventually leads to the renormalized mean-field Hamiltonian,
\[H^{MF}_{t} =\sum_{ijst\sigma}G^{s}_{t}G^{st}_{t}t^{st}_{ij}c^{\dagger}_{is \sigma}c_{jt\sigma}, \tag{6}\] \[H^{MF}_{J} =\sum_{r}G^{s}_{J_{r}}G^{t}_{J_{r}}(H^{X}_{J_{r}}+H^{\Delta}_{J_{r }}).\]
where one sees that, when \(t^{st}_{ij}=t_{ij}\delta_{s,t}\), the above Hamiltonian reduces to the classical formulas for the single-band \(t-J\) model of cuprate superconductors [30], which shows \(g_{t}=G^{2}_{t}=\frac{2p}{1+p}\), \(g_{J}=G^{2}_{J}=\frac{4}{(1+p)^{2}}\), with doping \(p=1-n\). Now the physical pairing order parameter can be defined as \(g^{s}_{t}|\Delta^{\alpha}_{s}|\)[30] for \(s-\)orbital component and \(\alpha-\)pairing symmetry. It is worth noting that at zero temperature \(T=0\), the approximate correspondence between the RMFT and \(U(1)\) slave boson mean-field theory (SBMFT) [2; 37] self-consistent equations can be established if one assumes \(g^{s}_{t}\) is related to the Bose condensation of holons, and \(\Delta^{st}_{ij}\) is linked to the spinon pairing in SBMFT.
Applying Fourier transforms to Eq. (4-6), we obtain the mean-field Hamiltonian in momentum space,
\[H^{MF}=\sum_{\rm k}\Phi^{\dagger}_{\rm k}\left(\begin{array}{cc}H^{MF}_{t,{\rm k }}+H^{MF,X}_{J,{\rm k}}&H^{MF,\Delta}_{J,{\rm k}}\\ [H^{MF,\Delta}_{J,{\rm k}}]^{\dagger}&-[H^{MF}_{t,{\rm k}}+H^{MF,X}_{J,{\rm k}}] ^{*}\end{array}\right)\Phi_{\rm k}. \tag{7}\]
where \(\Phi_{\rm k}=(\Psi^{T}_{\rm k\uparrow},\Psi^{\dagger}_{-{\rm k}\downarrow})^{T}\) is the corresponding Nambu basis set. This equation can be solved self-consistently, combining Eq. (2-3) to determine the final mean-field parameters.
## III Results
Now we present the RMFT result on the superconducting instabilities of the bilayer two-orbital \(t-J\) model. In particular, we provide detailed investigations in the parameter regime that most relevant to the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) system. The impacts of several key factors including temperature \(T\), doping levels of \(d_{x^{2}-y^{2}}\), and \(d_{z^{2}}\) orbitals: \(p_{x},p_{z}\), as well as the geometry of Fermi surface are analyzed while eyeing on the evolution of the pairing order parameter \(g^{s}_{t}|\Delta^{\alpha}_{s}|\). To be clear, in our convention, \(\alpha=d,s^{\pm}\) are pairing symmetries, while \(s=\left//x,//z,\bot z\right.\) denote orbitals that Cooper pairs reside on, representing respectively intra-layer \(d_{x^{2}-y^{2}}\), intra-layer \(d_{z^{2}}\) and inter-layer \(d_{z^{2}}\) pairing. Without loss of generality, we adopt typical values of \(J_{\bot}=2J_{//}=0.18\) eV throughout the paper unless otherwise specified.
### \(T_{c}\) for parent compound
We first present the RMFT calculated superconducting transition temperature \(T_{c}\) at \(\mu=0\) that corresponds to La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure, which is to be dubbed as the parent compound (PC) case hereafter. For the PC case, we have \(\mu=0,n_{x}\approx 0.665,n_{z}\approx 0.835\), and \(n=n_{x}+n_{z}=1.5\)[13]. In Fig. 1, the superconducting order parameter \(g_{t}|\Delta^{\alpha}|\) is plotted as a function of \(T\), which clearly demonstrate two dominant branches of the pairing fields at small \(T\): the intra-layer \(d_{z^{2}}\) pairing (dashed line) and the inter-layer \(d_{z^{2}}\) pairing (solid line), forming the \(s^{\pm}-\)wave pairing of the system. This result is in agreement with several other theoretical studies [26, 28, 29, 38, 39, 40]. As increasing \(T\), the order parameters decrease in a mean-field manner and eventually drop to zero at around 80 K. The computed value of \(T_{c}\) is somehow coincides with the experiment [9], highlighting that the various energy scales under our consideration can effectively capture the major physics of the realistic compound. However, we would like to stress that the superconducting \(T_{c}\) from RMFT in fact dependent on the value of \(J\) essentially in a BCS manner. Hence it can be sensitive to the strength of the superexchange couplings. The coincidence between the experimental and RMFT value of \(T_{c}\) should not be taken as the outcome of RMFT capturing La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) superconductivity in a quantitative correct way. We note that the \(s^{\pm}-\)wave pairing has also finite \(d_{x^{2}-y^{2}}\) orbital component, as shown by the dotted lines in Fig. 1, despite that its order parameter are much smaller than that of the \(d_{z^{2}}\) orbitals. The \(d-\)wave order parameters (purple), on the other hand, are fully suppressed, suggesting that the \(d+is-\)wave pairing instability can be ruled out in our model for La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\). We note that RMFT is originally formulated at zero temperature [30]. In Fig. 1 we have directly extended it to the finite temperature study and have neglected the project entropy term [41].
Figure 1: Pairing order parameters \(g_{t}|\Delta^{\alpha}|\) as a function of temperature \(T\). Note that the two purple lines overlap with each other with zero magnitude over the whole temperature range. Since the doping is fixed at \(p=0\), \(g_{t}\) is constant according to Eq. (5).
Figure 2: Pairing order parameter \(g_{t}|\Delta^{\alpha}|\) as a function of doping \(p\), with \(p>0\) for hole doping and \(p<0\) for electron doping. The round symbol at \(p=0\) indicates the parent compound (PC). We vary \(p\) with a fixed ratio of \(p_{x}/p_{z}=2.048\) so as to reach half-filling (HF) of both \(e_{g}\) orbitals, as indicated by the diamond symbol at \(p=-0.5\).
### Doping evolution
Now we focus on the doping dependence of the pairing order parameter \(g_{t}|\Delta^{\alpha}|\). In the following, \(p<0\) means doping the parent compound with electrons, and \(p>0\) denotes hole doping. In varying the doping level \(p\) of the system, a fixed ratio of the doping levels of the two \(e_{g}\) orbitals is kept, namely \(p_{x}/p_{z}=2.048\), such that both \(e_{g}\) orbitals are half-filled, _i.e.,_ \(n_{x}=1,n_{z}=1\), at \(p=-0.5\). From Fig. 2, one learns that the \(s^{\pm}-\)wave pairings (green) are quite robust over a wide range of doping \(p\). The maximum of \(g_{t}|\Delta^{s\pm}|\) is located at \(p\approx-0.04\), which is very close to \(p=0\) for the PC. This indicates that, interestingly, La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure corresponds roughly to the optimal doping in our superconducting phase diagram. At extremely large electron dopings (\(p<-0.25\)), \(d-\)wave pairing can build up, which also exhibits a predominant superconducting dome (purple dotted line). In this doping regime, small \(s^{\pm}-\)wave components of the pairing order parameter are found to coexist with the \(d-\)wave components, indicating the emergence of \(d+is-\)wave pairing. At half-filling in Fig. 2 (\(p=-0.5\)), all pairing channels are fully suppressed due to the vanishing renormalization factors \(g_{t}\to 0\), reflecting the Mott insulating nature at half-filling [20].
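As a quick worked check of this doping trajectory (added here for clarity; we assume \(p=p_{x}+p_{z}\) and that the orbital occupations evolve as \(n_{\alpha}=n_{\alpha}^{\rm PC}-p_{\alpha}\)), the conditions \(p_{x}+p_{z}=-0.5\) and \(p_{x}/p_{z}=2.048\) give
\[p_{z}=\frac{-0.5}{1+2.048}\approx-0.164,\qquad p_{x}=2.048\,p_{z}\approx-0.336,\]
so that \(n_{x}\approx 0.665+0.336\approx 1\) and \(n_{z}\approx 0.835+0.164\approx 1\), i.e. both \(e_{g}\) orbitals indeed reach half-filling at \(p=-0.5\).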
It is worth noting that, as a general prescription of mean-field approaches, different \(J_{r}\) terms in Eq. (4) can be decomposed into different corresponding pairing bonds \(\Delta_{\delta}\), such as \(J_{\perp}\) (for \(\Delta_{\perp z}^{d/s\pm}\)) and \(J_{//}\) (for \(\Delta_{//x}^{d/s\pm}\)). Nevertheless, in our calculation, the pairing component \(\Delta_{//z}^{d/s\pm}\) (dashed line in Fig. 2), which represents the intra-layer pairing of the \(d_{z^{2}}\) orbital, does not have a corresponding \(J\) term in the Hamiltonian. Hence its value is not determined by the competition between the \(\Delta_{//z}^{d/s\pm}\) and \(|\Delta_{//z}^{d/s\pm}|^{2}\) terms in minimizing the free energy. Instead, it should be interpreted as a pairing instability driven by the pre-existing inter-layer \(d_{z^{2}}\) pairing. Indeed, as shown in Fig. 2, \(\Delta_{//z}^{s\pm}\) and \(\Delta_{\perp z}^{s\pm}\) display similar behavior as a function of doping \(p\). Finally, we notice that a small tip appears at hole doping \(p\sim 0.4\), which can be attributed to the van Hove singularity (VHS) associated with the \(\beta-\)sheet of the Fermi surface; see also Sec. III D.
### Doping phase diagram
To gain further insights into the RMFT results for the La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) system, we obtain a phase diagram in the \(p_{x}-p_{z}\) doping plane, where the two dopings are now independent variables. As shown in Fig. 3, RMFT reveals that \(d\), \(s^{\pm}\), and \(d+is\) pairing symmetries, as well as the normal state, occur in different doping regimes. Here the dashed white line indicates the \(p_{x}-p_{z}\) trajectory along which Fig. 2 is plotted. The black symbols label sets of \((p_{x},p_{z})\) parameters for which the results are further discussed in Fig. 5. The major feature of Fig. 3 is that the \(d-\)wave and \(s^{\pm}-\)wave pairings span roughly a vertical and a horizontal stripe, respectively, in the phase diagram. In other words, the \(s^{\pm}-\)wave pairing (green) dominates the regime where \(-0.1\lesssim p_{z}\lesssim 0.1\), and it is insensitive to the value of \(p_{x}\). Likewise, the \(d-\)wave pairing (purple) prevails in the doping range of \(-0.34\lesssim p_{x}\lesssim 0.2\), and it is in general independent of \(p_{z}\). As a result, the \(d+is-\)wave pairing (orange) naturally emerges where the two stripes overlap. In order to gain a better understanding of this phase diagram, we show the magnitudes of the four major pairing bonds \(g_{t}|\Delta^{\alpha}|\) in Fig. 4, from which one sees that for the \(s^{\pm}-\)wave pairing, the pairing tendencies of \(\Delta_{\perp z}^{s\pm}\) (Fig. 4a) and \(\Delta_{//z}^{s\pm}\) (Fig. 4b) show a similar pattern in the \(p_{x}-p_{z}\) plane, consistent with Fig. 2. For the \(d-\)wave pairing, the situation is however different. The intra-layer pairing \(g_{t}^{x}\Delta_{//x}^{d}\) is enhanced when \(d_{x^{2}-y^{2}}\) approaches half-filling (\(p_{x}\approx-0.25\)) and \(p_{z}\) becomes heavily electron doped (\(p_{z}\sim-0.2\)). This is because in such a situation the \(M-\)pocket of the \(d_{z^{2}}\) orbital descends into the Fermi sea, such that the system becomes effectively a single-band system of the active \(d_{x^{2}-y^{2}}\) orbital. Hence the dominant \(d-\)wave pairing of the single-band \(t-J\) model is recovered for the \(d_{x^{2}-y^{2}}\) orbital in this limit. On the other hand, the \(d_{z^{2}}\) component of the \(d-\)wave pairing, \(g_{t}^{z}\Delta_{//z}^{d}\), is enhanced when \(p_{z}>0.2\), as shown in Fig. 4d. This can be understood considering the fact that, since \(J_{//z}=0\) in our study, the \(d-\)wave instability driven by \(J_{//}\) of the \(d_{x^{2}-y^{2}}\) orbital should be less sensitive to the details of the \(d_{z^{2}}\) orbital. Hence, the dependence of the order parameter on \(p_{z}\) can be seen largely as a result of a growing \(g_{t}^{z}\) with decreasing \(n_{z}\), according to Eq. (5). Finally, it is interesting to note that although \(J_{\perp}=2J_{//}\), Fig. 4 shows that the maximal value of the \(d-\)wave pairing order parameter is roughly two times larger than that of the
Figure 3: Pairing phase diagram with varying dopings \(p_{x}\) and \(p_{z}\). The dashed white line indicates the \(p_{x}-p_{z}\) trajectory along which Fig. 2 is plotted. The black symbols label the sets of \((p_{x},p_{z})\) that are further discussed in Fig. 5. \(J_{\perp}=2J_{//}=0.18\) eV is applied.
\(s^{\pm}-\) wave pairing. This is expected since the vertical exchange coupling \(J_{\perp}\) has a smaller coordination number \(z=2\), compared to its in-plane counterpart \(J_{//}\), where \(z=4\).
### Fermi surfaces
In Fig. 5 we display the Fermi surfaces for four typical dopings, each characterizing one type of pairing symmetry (a-c) or the case with vanishing pairing order parameter (d). Fig. 5a shows the FS of the PC case with \(s^{\pm}\) pairing, which has three sheets of FS with one \(\Gamma-\)pocket and one \(M-\)pocket [13]. Decreasing \(p_{x}\) from the PC drives the \(d_{x^{2}-y^{2}}\) orbital closer to half-filling. As shown in Fig. 5b, the \(\alpha,\beta-\)sheets of the FS as a whole are also driven closer to the magnetic Brillouin zone (MBZ) edge (green dashed lines), which is accompanied by the evolution of the pairing symmetry from \(s^{\pm}\) to \(d+is-\)wave. The occurrence of the \(d-\)wave pairing at this doping level unambiguously signals the importance of the intra-orbital physics of the \(d_{x^{2}-y^{2}}\) orbital as it approaches half-filling. On the other hand, the \(\gamma\) pocket, which hosts the \(s^{\pm}-\)wave pairing, is less affected by the change of \(p_{x}\), as shown in Fig. 5b. Fig. 5c shows that, upon lowering \(p_{z}\) from the PC case, the \(M-\)pocket vanishes from the Brillouin zone. Consequently, the \(s^{\pm}-\)pairing order parameter vanishes at \(p_{z}\sim-0.15\). In this case, similar to the PC, no finite \(d-\)wave pairing order parameter is observed. Fig. 5d shows the last case, in which the \(M-\)pocket also vanishes from the Fermi level. As expected, in this case the \(d-\)wave pairing with only a finite \(g_{t}^{x}|\Delta_{//x}^{d}|\) is found. As discussed above, here the physics of the system can be essentially captured by the single-band \(t-J\) model with only the \(d_{x^{2}-y^{2}}\) orbital present.
### Superexchanges \(J\)
Finally, we investigate the influence of the magnitudes of the superexchanges for the parent compound. In Fig. 6, we present \(g_{t}|\Delta^{\alpha}|\) as a function of \(J_{//}/J_{\perp}\). The vertical dashed line indicates the value used for the aforementioned calculations, where the \(s^{\pm}-\)wave is found for the PC. As \(J_{//}/J_{\perp}\) decreases/increases, the \(s^{\pm}-\)wave order parameters (green lines) decrease/increase only slightly with \(J_{//}\). When \(J_{//}/J_{\perp}\sim 1.1\), the \(d-\)wave pairing (purple solid line) starts to build up on the \(d_{x^{2}-y^{2}}\) orbital. For La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure, such a large value of \(J_{//}/J_{\perp}\) is however not very realistic [20]. Hence, the \(d-\)wave pairing instability should be excluded for the realistic parent compound in our study. To check the stability of the pairings, the results for \(J_{xz}=0.03\) (dashed line) and \(J_{H}=-1\) (dotted line) are also shown in Fig. 6. As one can see, although both \(J_{xz}\) and \(J_{H}\) act as pair-breaking factors, they do not significantly modify the results obtained above. In particular, for the \(s^{\pm}-\)pairing, the changes of the order parameter \(g_{t}|\Delta^{\alpha}|\) introduced by \(J_{xz}\) (green dashed line) and \(J_{H}\) (not shown here) are negligible.
## IV Discussion
In our study, the RMFT equations are solved in such a way that the pairing fields on different bonds can be varied independently, namely, no specific pairing symmetry is presumed in the self-consistent process. The symmetries of the electron pairing emerge naturally as a result of the minimized energy in our calculations. This
Figure 4: Magnitudes of different pairing order parameters \(g_{t}|\Delta^{\alpha}|\) with varying doping \(p_{x}\) and \(p_{z}\). Symbols denote typical dopings to be analyzed in Fig. 5.
Figure 5: Fermi surfaces for a few sets of \((p_{x},p_{z})\) labeled by different symbols, as also indicated in Figs. 3 and 4. The green and purple dashed lines indicate the nodes of the \(s^{\pm}\) and \(d-\)wave pairings, respectively. The green lines also indicate the antiferromagnetic magnetic Brillouin zone boundary.
protocol prevents the potential overlooking of pairing symmetries. Concerning the dominant \(s^{\pm}\) pairing for the parent compound at \(p_{x},p_{z}=0\), we note that even when only \(J_{\perp}\) is considered (\(J_{//}=0,J_{xz}=0\)), \(\Delta_{//x}\) is finite, despite being much smaller than \(\Delta_{//z}\) and \(\Delta_{\perp z}\). Since the effective mass of the \(d_{x^{2}-y^{2}}\) orbital is small compared to that of the \(d_{z^{2}}\) orbital, it may still contribute a significant portion of the superfluid density in the superconducting state of the system. The twofold effects of the Hund's coupling \(J_{H}\), namely the alignment of the on-site spins of the \(d_{x^{2}-y^{2}}\) and \(d_{z^{2}}\) orbitals, as well as the enhanced \(J_{//}\) and \(J_{xz}\) couplings, exhibit no crucial impact on the dominant \(s^{\pm}\) pairing in our RMFT study, according to Fig. 6. We note that the superconducting \(T_{c}\) obtained by RMFT should in general be regarded as overestimated, since both temporal and spatial fluctuations are neglected. This is particularly true considering that the pairing fields here originate from the local inter-layer \(d_{z^{2}}\) magnetic couplings, where phase fluctuations can play a more important role in suppressing \(T_{c}\) compared to the single-band \(t-J\) model counterpart in cuprate superconductors. Finally, we note that a recent theoretical work [21] proposes that, in the framework of composite pairing, the phase fluctuations could be suppressed by the hybridization effects between the \(d_{x^{2}-y^{2}}\) and \(d_{z^{2}}\) orbitals. Verifying this conjecture is, however, beyond the scope of this work.
## V Summary
Employing the renormalized mean-field theory, we have established a comprehensive superconducting phase diagram for the bilayer two-orbital \(t-J\) model. A robust \(s^{\pm}-\)wave pairing is found to exist in the parameter regime relevant to La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure, which in general corresponds to the optimal doping of the superconducting phase diagram. The dependence of the pairing instabilities on the doping levels and the exchange couplings, as well as the Hund's coupling effects, is carefully investigated. Our study will have significant impact on the theoretical understanding of the superconductivity of La\({}_{3}\)Ni\({}_{2}\)O\({}_{7}\) under pressure.
## Acknowledgements
We thank the helpful discussions with Xunwu Hu, Zhong-Yi Xie, and Guang-Ming Zhang. This project was supported by the National Key Research and Development Program of China (Grants No. 2022YFA1402802, 2018YFA0306001), the National Natural Science Foundation of China (Grants No. 92165204, No.12174454, No. 11974432, No.12274472), the Guangdong Basic and Applied Basic Research Foundation (Grants No. 2022A1515011618, No. 2021B1515120015), Guangdong Provincial Key Laboratory of Magnetoelectric Physics and Devices (Grant No. 2022B1212010008), Shenzhen International Quantum Academy (Grant No. SIQA202102), and Leading Talent Program of Guangdong Special Projects (201626003).
## Appendix A Model Details
The tight-binding Hamiltonian \(\mathcal{H}_{t}\) in Eq. (1) is taken from our previous works [13; 20]. Here we rewrite it for the reader's convenience.
\[\mathcal{H}_{t}=\sum_{\mathrm{k}\sigma}\Psi^{\dagger}_{\mathrm{k}\sigma}H( \mathrm{k})\Psi_{\mathrm{k}\sigma}, \tag{8}\]
with the matrix
\[\begin{split} H(\mathrm{k})_{1,1}=H(\mathrm{k})_{3,3}&=2t_{1}^{x}(\cos k_{x}+\cos k_{y})+4t_{2}^{x}\cos k_{x}\cos k_{y}+\epsilon^{x}\\ H(\mathrm{k})_{2,2}=H(\mathrm{k})_{4,4}&=2t_{1}^{z}(\cos k_{x}+\cos k_{y})+4t_{2}^{z}\cos k_{x}\cos k_{y}+\epsilon^{z}\\ H(\mathrm{k})_{1,2}=H(\mathrm{k})_{3,4}&=2t_{3}^{xz}(\cos k_{x}-\cos k_{y})\\ H(\mathrm{k})_{1,4}=H(\mathrm{k})_{2,3}&=2t_{4}^{xz}(\cos k_{x}-\cos k_{y})\\ H(\mathrm{k})_{1,3}&=t_{\perp}^{x}\\ H(\mathrm{k})_{2,4}&=t_{\perp}^{z}.\end{split} \tag{9}\]
The basis is defined as \(\Psi_{\sigma}=(c_{x_{1}\sigma},c_{z_{1}\sigma},c_{x_{2}\sigma},c_{z_{2}\sigma})^ {T}\). The hopping parameters take the following values [13]:
Figure 6: Pairing order parameters \(g_{t}|\Delta^{\alpha}|\) as a function of \(J_{//}/J_{\perp}\) for parent compound. The solid and dotted lines denote \(J_{xz}=0\), dashed lines denote \(J_{xz}=0.03\), and dash-dotted line denotes \(J_{H}=-1\). \(J_{\perp}=0.18\) eV is fixed, hence the vertical dotted grey line at \(J_{//}/J_{\perp}=0.5\) indicates the default values for previous calculations.
\(t_{1}^{x}\)=-0.483, \(t_{2}^{x}\)=0.069, \(t_{1}^{z}\)=-0.110, \(t_{2}^{z}\)=-0.017, \(t_{3}^{xz}\)=0.239, \(t_{4}^{xz}\)=-0.034, \(t_{\perp}^{x}\)=0.005, \(t_{\perp}^{z}\)=-0.635, and the site energies: \(\epsilon^{x}\)=0.776, \(\epsilon^{z}\)=0.409.
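For concreteness, a minimal numerical sketch that assembles and diagonalizes this \(4\times 4\) matrix with the quoted parameters is given below (we assume the energies are in eV, and that the \(d_{z^{2}}\) in-plane second-neighbour and inter-layer hoppings enter as \(t_{2}^{z}\) and \(t_{\perp}^{z}\), respectively, as in Eq. (9)):

```python
import numpy as np

# Tight-binding matrix H(k) of Eqs. (8)-(9) in the basis (c_x1, c_z1, c_x2, c_z2).
t1x, t2x, t1z, t2z = -0.483, 0.069, -0.110, -0.017
t3xz, t4xz, tpx, tpz = 0.239, -0.034, 0.005, -0.635
ex, ez = 0.776, 0.409

def Hk(kx, ky):
    cplus = np.cos(kx) + np.cos(ky)
    cminus = np.cos(kx) - np.cos(ky)
    cc = np.cos(kx) * np.cos(ky)
    H = np.zeros((4, 4))
    H[0, 0] = H[2, 2] = 2*t1x*cplus + 4*t2x*cc + ex   # intra-layer d_{x^2-y^2}
    H[1, 1] = H[3, 3] = 2*t1z*cplus + 4*t2z*cc + ez   # intra-layer d_{z^2}
    H[0, 1] = H[2, 3] = 2*t3xz*cminus                 # in-plane x-z hybridization
    H[0, 3] = H[1, 2] = 2*t4xz*cminus                 # inter-layer x-z hybridization
    H[0, 2] = tpx                                     # inter-layer d_{x^2-y^2} hopping
    H[1, 3] = tpz                                     # inter-layer d_{z^2} hopping
    return H + H.T - np.diag(H.diagonal())            # symmetrize the real matrix

# Example: band energies at the Gamma and M points of the Brillouin zone
for name, (kx, ky) in {"Gamma": (0.0, 0.0), "M": (np.pi, np.pi)}.items():
    print(name, np.round(np.linalg.eigvalsh(Hk(kx, ky)), 3))
```

Scanning \(\mathrm{k}\) over the Brillouin zone and collecting the zero-energy contours of the eigenvalues (after subtracting the chemical potential) reproduces Fermi surfaces of the type shown in Fig. 5.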
## Appendix B Benchmark
To validate our calculations, we present a benchmark of our RMFT results for the one-band \(t-J\) model. In Fig. 7, the mean-field order parameters are shown as a function of \(p\). The lines are from our calculations and the markers are from Ref. [30], with \(J/t=0.2\), \(T=0\). As can be seen, they are in good agreement, and both capture the striking dome-shaped doping dependence of \(T_{c}\) featured by \(g_{t}\Delta^{d}\).
## Appendix C Doping Phase Diagram of \(\Delta\)
Figure 8 shows the magnitudes of the four major pairing order parameters \(|\Delta^{\alpha}|\) with varying \(p_{x},p_{z}\). Compared with Fig. 4, the renormalization factor \(g_{t}\) is removed from the plot. As can be seen, the intensity distributions basically coincide with the former. The major variation appears at the lower boundary in Fig. 8a and the left boundary in Fig. 8c, where the magnitudes are notably enhanced. This is expected since \(g_{t}\to 0\) when approaching half-filling: physically, the strong correlation effect suppresses the charge motion, and hence the formation of physical pairing bonds, even though the bare pairing amplitudes \(|\Delta^{\alpha}|\) remain large there. We also note that the stripe-like intensity distributions in Fig. 8(a-b) seem to move downward overall compared with Fig. 4(a-c), which means, in other words, that the optimal doping regime deviates further from the PC if \(g_{t}\) is ignored.
|
2309.16895 | Magnetically Induced Schrödinger Cat States: The Shadow of a Quantum
Space | Schr\"odinger cat states, which are superpositions of macroscopically
distinct states, are potentially critical resources for upcoming quantum
information technologies. In this paper, we introduce a scheme to generate
entangled Schr\"odinger cat states in a non-relativistic electric dipole system
situated on a two-dimensional plane, along with an external potential and a
uniform strong magnetic field perpendicular to the plane. Additionally, our
findings demonstrate that this setup can lead to the phenomenon of collapse and
revival of entanglement for a specific range of our model parameters | Partha Nandi, Nandita Debnath, Subhajit Kala, A. S. Majumdar | 2023-09-28T23:44:19Z | http://arxiv.org/abs/2309.16895v2 | # Magnetically Induced Schrodinger Cat States: The Shadow of a Quantum Space
###### Abstract
Schrodinger cat states, which are superpositions of macroscopically distinct states, are potentially critical resources for upcoming quantum information technologies. In this paper, we introduce a scheme to generate entangled Schrodinger cat states in a non-relativistic electric dipole system situated on a two-dimensional plane, along with an external potential and a uniform strong magnetic field perpendicular to the plane. Additionally, our findings demonstrate that this setup can lead to the phenomenon of collapse and revival of entanglement for a specific range of our model parameters.
## I Introduction
In quantum theory, the transition between the microscopic and macroscopic worlds is one of the less-understood features [1]. Such a transition plays a direct role in the realm of quantum measurements. In an ideal measurement paradigm, the interaction of macroscopic equipment and a microscopic system yields entanglement and a superposed quantum state with both macroscopic and microscopic components [2]. Schrodinger was the first to highlight the physical subtleties of this kind of superposition by replacing the macroscopic part of the system by a "cat", in order to illustrate a dramatic superposition of "states" of both alive and dead cats, that should, in practice, be distinguished macroscopically [3]. The superposition of macroscopically different quantum states, generically referred to as non-classical Schrodinger Cat State (SCS) [3; 4; 5], is crucial for understanding the conceptual underpinnings of quantum physics, especially with reference to wave function collapse models [6; 7; 8; 9]. In recent years, the advancement of quantum technologies has brought into sharp focus the utility of several quantum phenomena such as photon anti-bunching [10], sub-Poissonian statistics [11] and squeezing [12], along with the dynamics of SCS.
The success of quantum information theory and its potential applications [13; 14] that significantly outperform their classical equivalents have recently sparked a renewed interest in the generation of non-classical states such as SCS. Several applications of cat states have been suggested in the realm of quantum information [15], quantum metrology [16], quantum teleportation [17], and quantum error correction schemes [18; 19]. Besides, the concept of decoherence between two superposed quantum objects, or the quantum-to-classical transition, can be studied using the SCS as a platform. In quantum optics, a superposition of two diametrically opposite coherent states \(|\pm\alpha>\) with a large value of \(|\alpha|\) can be interpreted as a quantum superposition of two macroscopically distinct states, _i.e._, a Schrodinger cat-like state [20; 21]. However, due to the decay of their interference properties, it is extremely difficult to detect such states in practice [22]. Nonetheless, the universality of SCS enables it to be realized in a wide variety of physical arenas such as nonlinear quantum optics [23], quantum dot systems [24], superconducting cavities [25], Bose Einstein condensates (BEC) [26] and quantization of weak gravity [27; 28; 29]. A fascinating direction of research in recent years has been the mechanism for the natural generation of SCS in some specific condensed matter systems [30; 31].
Schrodinger cat states with entanglement based protocols provide a novel technique to explore short-distance quantum physics in a non-relativistic domain when there is a magnetic dipole interaction background [32]. At extremely short distances, the space-time structure needs to be "granular" in order to account for both gravity and quantum uncertainty [33]. A viable approach towards quantum gravity is through quantizing space-time itself [34], rather than the construction of an effective field theory of gravity. This approach is an active area of research on quantum gravity, commonly referred to as non-commutative geometry [35; 36]. The fundamental goal is to derive classical geometry from a suitable limit of a non-commutative algebra. Though such a proposal may appear as ad-hoc [37], the physical justification for such a non-commutative space-time is strong since it provides a solution to the geometric measurement problem near the Planck scale.
Non-commutative geometry appears naturally in various non-relativistic planar systems. For instance, it occurs using the lowest Landau-level (LLL) projection to study the behavior of charged particles in a strong magnetic field [38]. Further, the incompressibility of fractional quantum Hall fluids [39] has a strong connection to the emergence of a non-commutative geometry in which the fundamental Planck length is substituted by the magnetic length. Non-commutative space-time forms an alternative paradigm for studying the behavior of relativistic anyonic systems in interaction with
the ambient electromagnetic field [40; 41]. Additionally, non-commutative properties of real-space coordinates in the presence of the Berry curvature [42] produce skew scattering by a non-magnetic impurity without relativistic spin-orbit interactions in a condensed matter system. Non-commutative space provides a paradigm for describing the behavior of the quantum to classical transition under the influence of decoherence [43; 44], which is relevant for implementation of quantum information protocols. From an experimental standpoint, there have been efforts in search of evidence of possible non-commutative effect manifestations in cosmology and high-energy physics [45; 46; 47]. A testable framework has been suggested in low-energy experiments in the arena of quantum Hall effect [48; 49].
The motivation for the present study is to investigate whether multi-component entangled non-classical SCS could be produced in a deformed quantum space, where non-commutativity arises naturally in an easily accessible low-energy physical system. In this article, we investigate the phenomenology of a two-particle electric dipole model with an additional harmonic interaction and a strong background magnetic field, with motion constrained to the plane perpendicular to the field. Such a system may be considered as a toy version of a real excitonic dipole set-up [50]. By exploring the high magnetic field limit, we reveal the emergence of planar non-commutative space as a natural consequence. Furthermore, we establish the deformed Heisenberg algebra as the origin of multi-component entangled SCS in this system. Moreover, we quantify the degree of entanglement of our SCS, and show that the phenomenon of collapse and revival of entanglement [51; 52; 53] occurs in this system under the influence of the harmonic potential.
The organization of our paper is as follows. The interacting two particle electric dipole system is introduced in Section 2, showing how classical non-commutative space appears in the presence of a very strong, constant, uniform magnetic field. Then, in Section 3, we move on to the quantum picture, where intricacies of the system dynamics are revealed, in context of mapping between two reference frames. Section 4 discusses how our model with a harmonic oscillator potential that is dependent only on one spatial variable is able to generate entangled multi-component Schrodinger cat states. In Section 5, we compute the degree of entanglement in the generated SCS system and demonstrate that it exhibits the phenomenon of entanglement collapse and revival. Section 6 is reserved for concluding remarks and discussions.
## II Two-particle system: classical picture
We begin by considering a pair of non-relativistic, oppositely charged particles with equal mass \(m\) moving on a plane subjected to a constant magnetic field \(B\) along the \(z\) axis (ignoring Coulomb and radiation effects). In component form, \(x_{i}\) and \(y_{i}\) (\(i=1,2\)) represent the positive and negative charge coordinates, respectively. The \(z\) coordinate can be suppressed since the dynamics of the system is confined to a plane. The standard Lagrangian in C.G.S. units is used to define the system as follows [54; 55; 56]:
\[L = \frac{1}{2}m(\dot{x}_{i}^{2}+\dot{y}_{i}^{2})+\frac{eB}{2c} \epsilon_{ij}(x_{j}\dot{x}_{i}-y_{j}\dot{y}_{i}) \tag{1}\] \[-\frac{K_{0}}{2}(x_{i}-y_{i})^{2}-V(x_{1});\quad i,j=1,2\]
where \(c\) is the speed of light in vacuum and \(K_{0}\) is the spring constant corresponding to the harmonic interaction between the two oppositely charged particles. This model is constructed in the spirit of the "2D excitonic dipole model" [57; 58; 59], wherein \(m\) can be realized by the effective mass of the "electron-hole" pair in some specific cases where the magnitudes of the effective masses of electrons and holes can be considered approximately the same, and the Fermi velocity provides an upper bound for its characteristic velocity in a real physical solid state system. Note that the first term of the above Lagrangian (1) represents the kinetic term of the charges and the second term represents their interaction with the external magnetic field \(\vec{B}\). We use a rotationally symmetric gauge to define the vector potential \(\vec{A}\) satisfying the equation \(\vec{\nabla}\times\vec{A}=B\hat{z}\). The third term is the harmonic interaction between the two charges, and finally, the fourth term describes the additional interaction of the positive charge with an impurity in the \(x_{1}\) direction. The limit of a strong magnetic field \(B\) and small mass \(m\), such that \(\frac{m}{eB}\to 0\), is of interest here, in which the kinetic energy term becomes negligible in the Lagrangian (1) [60]. Thus, we may approximate the dynamics by the effective Lagrangian,
\[L_{0}=\frac{eB}{2c}\epsilon_{ij}(x_{j}\dot{x}_{i}-y_{j}\dot{y}_{i})-V_{0}(x_{i },y_{i}) \tag{2}\]
where \(V_{0}(x_{i},y_{i})=\frac{K_{0}}{2}(x_{i}-y_{i})^{2}+V(x_{1})\).
The Lagrangian equations of motion of the coordinates of the positive and negatively charged particles are given by,
\[\dot{x}_{i}=\frac{c}{eB}\epsilon_{ij}\frac{\partial V_{0}}{\partial x_{j}}, \ \ \dot{y}_{i}=-\frac{c}{eB}\epsilon_{ij}\frac{\partial V_{0}}{\partial y_{j}} \tag{3}\]
Since our effective Lagrangian (2) is in first-order form, the effective Hamiltonian of the model is given by
\[H=V_{0}(x_{i},y_{i}) \tag{4}\]
In order to show the equivalence between the Lagrangian and Hamiltonian formalism [61; 62], we consider Hamilton's equations of motion:
\[\dot{x}_{i}=\{x_{i},H\}=\{x_{i},V_{0}(x_{i},y_{i})\} \tag{5}\]
\[\dot{y}_{i}=\{y_{i},H\}=\{y_{i},V_{0}(x_{i},y_{i})\} \tag{6}\]
The nontrivial symplectic structure can readily be obtained now by comparing the Lagrangian equations of motion (3) with the form of Hamilton's equations of motion (5, 6) to yield the following brackets:
\[\{x_{i},x_{j}\}=\frac{c}{eB}\epsilon_{ij};\ \ \{y_{i},y_{j}\}=-\frac{c}{eB} \epsilon_{ij};\ \ \{y_{i},x_{j}\}=0 \tag{7}\]
The canonical spatial translation generators for individual charged particles are given by
\[P_{x_{i}}=\frac{eB}{c}\epsilon_{ij}x_{j};\ \ P_{y_{i}}=-\frac{eB}{c}\epsilon_{ ij}y_{j} \tag{8}\]
Using the above expressions and the nontrivial symplectic structures between the position co-ordinates (7), it can be checked that the momentum co-ordinates also satisfy a nontrivial symplectic bracket, given by
\[\{P_{x_{i}},P_{x_{j}}\}=\frac{eB}{c}\epsilon_{ij};\ \ \{P_{y_{i}},P_{y_{j}}\}=-\frac{eB}{c}\epsilon_{ij};\ \ \{P_{x_{i}},P_{y_{j}}\}=0 \tag{9}\]
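For example, taking \(i=1\), \(j=2\) and using Eqs. (7)-(8), the first of these brackets can be verified explicitly:
\[\{P_{x_{1}},P_{x_{2}}\}=\Bigl(\frac{eB}{c}\Bigr)^{2}\{x_{2},-x_{1}\}=-\Bigl(\frac{eB}{c}\Bigr)^{2}\frac{c}{eB}\,\epsilon_{21}=\frac{eB}{c}=\frac{eB}{c}\epsilon_{12},\]
with the \(P_{y}\) bracket following in the same way with the opposite sign.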
In the quantum theory, the non-commutative coordinates of the positively charged particle can be expressed in terms of the commutative phase space variables (canonical variables), i.e. the centre of mass coordinates, as
\[\hat{x}_{i}=\hat{R}_{i}-\frac{c}{2eB}\epsilon_{ij}\hat{P}_{j},\ \ \ \ i,j=1,2 \tag{22}\]
It may be noted that this transformation is not canonical because it changes the commutation brackets. This transformation has occasionally been called a Darboux map [64] or Bopp's shift [65] which is of relevance in the Bohmian interpretation of non-commutative quantum mechanics [66]. Furthermore, this transformation with an explicit dependence on the deformation parameter, allows us to convert the Hamiltonian in NC space into a modified Hamiltonian in commutative equivalent space. It follows that if we are able to solve the spectrum of the system Hamiltonian in commutative equivalent space, we can also obtain the spectrum of the system in primitive non-commutative space, though the states in both situations are not the same. We will discuss how the aforementioned maps aid in the extraction of non-classical cat states in the next section.
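As a short added consistency check of this map (assuming the centre of mass variables obey the canonical algebra \([\hat{R}_{i},\hat{P}_{j}]=i\hbar\delta_{ij}\) with \([\hat{R}_{i},\hat{R}_{j}]=[\hat{P}_{i},\hat{P}_{j}]=0\), as appropriate for the equal-mass case discussed in Appendix A), Eq. (22) indeed reproduces the non-commutative coordinate algebra:
\[[\hat{x}_{1},\hat{x}_{2}]=\Bigl[\hat{R}_{1}-\frac{c}{2eB}\hat{P}_{2},\;\hat{R}_{2}+\frac{c}{2eB}\hat{P}_{1}\Bigr]=\frac{c}{2eB}[\hat{R}_{1},\hat{P}_{1}]-\frac{c}{2eB}[\hat{P}_{2},\hat{R}_{2}]=i\frac{\hbar c}{eB}=il_{B}^{2},\]
i.e. \([\hat{x}_{i},\hat{x}_{j}]=il_{B}^{2}\epsilon_{ij}\), the quantum version of the classical bracket in Eq. (7).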
## IV Preparation of Schrodinger cat states
Using the formalism presented in the previous section, we are now in a position to investigate the main goal of this work, _viz._, how we might naturally prepare Schrodinger's Cat states. To do so, we first consider a particular Hamiltonian with a harmonic oscillator potential in the \(\hat{x}_{1}\) direction, given by
\[\hat{H}\rightarrow\hat{H}_{NC}=\frac{\hat{P}_{1}^{2}}{2m_{B}}+\frac{\hat{P}_{ 2}^{2}}{2m_{B}}+V(\hat{x}_{1}), \tag{23}\]
where \(V(\hat{x}_{1})=\frac{1}{2}K\hat{x}_{1}^{2}\) and \(m_{B}=\frac{e^{2}B^{2}}{c^{2}K_{0}}\). The corresponding time dependent Schrodinger equation is:
\[i\hbar\frac{\partial}{\partial t}|\psi(t)>_{NC}=\hat{H}_{NC}|\psi(t)>_{NC} \tag{24}\]
Note that, because of the non-commutativity of this theory, it is impossible to construct simultaneous eigenstates with noncommutative coordinates, which makes it difficult to define a local probability density for the wavefunction that corresponds to a particular state \(|\psi(t)>_{NC}\)[67]. However, this issue can be bypassed by using the interpretation mentioned in [67], or by using the coherent states formulation of noncommutative quantum mechanics with the help of the Voros product [68].
In our present case, it can be easily observed that the system Hamiltonian mentioned above can be rewritten as,
\[\hat{H}_{NC}=\hat{U}\hat{H}_{CM}\hat{U}^{\dagger}, \tag{25}\]
with
\[\hat{H}_{CM}=\frac{\hat{P}_{1}^{2}}{2m_{B}}+\frac{\hat{P}_{2}^{2}}{2m_{B}}+V( \hat{R}_{1}), \tag{26}\]
where we have used the fact that \(V(\hat{x}_{1})=V(\hat{U}\hat{R}_{1}\hat{U}^{\dagger})=\hat{U}V(\hat{R}_{1}) \hat{U}^{\dagger}\). Here \(\hat{H}_{CM}\) is the unitarily equivalent form of the system Hamiltonian expressed in terms of the Center of Mass coordinates, whereas the \(\hat{H}_{NC}\) represents the system Hamiltonian written in terms of the positively charged particle coordinates. We can readily recognize that \(V(\hat{R}_{1})=\frac{1}{2}K\hat{R}_{1}^{2}\), where \(K\) is the spring constant of the impurity interaction faced by the positive charge in the \(\hat{x}_{1}\) direction only. Accordingly, the Schrodinger equation (24) transforms as follows:
\[i\hbar\frac{\partial}{\partial t}|\psi(t)>_{CM}=\hat{H}_{CM}|\psi(t)>_{CM} \tag{27}\]
where \(|\psi(t)>_{CM}=\hat{U}^{\dagger}|\psi(t)>_{NC}.\) The ground state of the unitarily equivalent Hamiltonian (\(\hat{H}_{CM}\)) is now represented as
\[|\psi_{0}>_{CM}=|0>\otimes[d_{+}|+k_{2}>+d_{-}|-k_{2}>], \tag{28}\]
where \(|d_{+}|^{2}\) and \(|d_{-}|^{2}\) denote the probability of finding the free particle in \(|+k_{2}>\) and \(|-k_{2}>\) states respectively, \(|0>\) represents the ground state of the 1D harmonic oscillator system with \(\hat{a}_{1}\) and \(\hat{a}_{1}^{\dagger}\) representing the corresponding annihilation and creation operators respectively, satisfying the following algebra:
\[[\hat{a}_{1},\hat{a}_{1}^{\dagger}]=\mathbb{I};\ \ \ \hat{a}_{1}=\frac{m_{B}\omega_{B}\hat{R}_{1}+i\hat{P}_{1}}{ \sqrt{2m_{B}\omega_{B}\hbar}};\ \ \ \ \hat{a}_{1}|0>=0, \tag{29}\]
with \(\omega_{B}=\sqrt{\frac{K}{m_{B}}}\), and \(|\pm k_{2}>\) corresponds to the right and left moving free particle's momentum state respectively, which satisfies:
\[\hat{P}_{2}|\pm k_{2}>=\pm P_{2}|\pm k_{2}>;\ \ \ \ P_{2}=\hbar k_{2} \tag{30}\]
The state vector corresponding to the non-commutative phase space (or in terms of the positively charged particle coordinates) is given by
\[|\psi_{0}>_{NC}=\hat{U}|\psi_{0}>_{CM}, \tag{31}\]
\(|\psi_{0}>_{NC}\) can be expressed as,
\[|\psi_{0}>_{NC}=(\exp\biggl{[}(-\frac{il_{B}^{2}}{2\hbar^{2}})\hat{P}_{1} \otimes\hat{P}_{2}\biggr{]})\] \[[|0>\otimes(d_{+}|+k_{2}>+d_{-}|-k_{2}>)] \tag{32}\]
which leads to
\[|\psi_{0}>_{NC}= d_{+}([\exp\biggl{(}-\frac{il_{B}^{2}k_{2}}{2\hbar}\hat{P}_{1} \biggr{)}]|0>)\otimes|+k_{2}> \tag{33}\] \[+d_{-}([\exp\biggl{(}\frac{il_{B}^{2}k_{2}}{2\hbar}\hat{P}_{1} \biggr{)}]|0>)\otimes|-k_{2}>\]
On substituting \(l_{B}^{2}=\frac{\hbar c}{eB}\) in the above equation, we arrive at-
\[|\psi_{0}>_{NC}= d_{+}([\exp\biggl{[}(-i\frac{ck_{2}}{2eB})\hat{P}_{1}\biggr{]}]|0> )\otimes|+k_{2}> \tag{34}\] \[+d_{-}([\exp\biggl{[}(i\frac{ck_{2}}{2eB})\hat{P}_{1}\biggr{]}]|0> )\otimes|-k_{2}>\]
Now, for a harmonic oscillator potential, the momentum operator \(\hat{P}_{1}\) can be written as-
\[\hat{P}_{1}=i\sqrt{\frac{m_{B}\omega_{B}\hbar}{2}}(\hat{a}_{1}^{\dagger}-\hat{a} _{1}) \tag{35}\]
Putting the above expression in equation (33), we obtain,
\[|\psi_{0}>_{NC}=d_{+}\Bigl(\exp\Bigl[\Bigl(\frac{ck_{2}}{2eB}\Bigr)\sqrt{\frac{m_{B}\omega_{B}\hbar}{2}}(\hat{a}_{1}^{\dagger}-\hat{a}_{1})\Bigr]|0>\Bigr)\otimes|+k_{2}>+d_{-}\Bigl(\exp\Bigl[\Bigl(-\frac{ck_{2}}{2eB}\Bigr)\sqrt{\frac{m_{B}\omega_{B}\hbar}{2}}(\hat{a}_{1}^{\dagger}-\hat{a}_{1})\Bigr]|0>\Bigr)\otimes|-k_{2}> \tag{36}\]
It follows that the above state vector (36) may also be written in the form of a superposition of single-component coherent states as
\[|\psi_{0}>_{NC}=d_{+}|+\alpha>\otimes|+k_{2}>+d_{-}|-\alpha>\otimes|-k_{2}>, \tag{37}\]
wherein \(|\pm\alpha>=e^{\pm\alpha(\hat{a}_{1}^{\dagger}-\hat{a}_{1})}|0>\) with \(\alpha=\frac{ck_{2}}{2eB}\sqrt{\frac{m_{B}\omega_{B}\hbar}{2}}\) are real-valued coherent states (or displacements of the vacuum) that belong to the subset of the overcomplete space of usual complex-parameter coherent states [69].
Here it may be worthwhile to mention a property of the coherent state \(|\pm\alpha>\): the dimensionless parameter \(\alpha\) may be rewritten as
\[\alpha=\frac{1}{2}P_{2}(\frac{K}{K_{0}})^{1/4}\sqrt{\frac{c}{2eB\hbar}}=\xi k _{2}l_{B} \tag{38}\]
with \(\xi=\frac{1}{2}(\frac{K}{4K_{0}})^{\frac{1}{4}}\). A coherent state \(|\alpha>\) can have an arbitrarily large amplitude, and hence the energy of a macroscopic harmonic oscillator [70] can be approximated by the energy of a one-dimensional quantum mechanical HO by suitably choosing \(|\alpha|\) to be arbitrarily large. For large enough \(|\alpha|\), \(|+\alpha>\) and \(|-\alpha>\) correspond to macroscopically distinguishable states and may be labelled as '\(+\)' (alive) and '\(-\)' (dead) [71; 72]. In this sense, we can regard the above state (37) as an entangled SCS, holding \(|\alpha|\sqrt{\hbar}\) fixed at a finite value in the classical limit [73; 74]. Accordingly, one may consider \(|\pm\alpha>\) to be "classical-like" states, but their coherent superposition is endowed with non-classical properties. In fact, this type of Schrodinger cat state has been generated by pulsed stimulation of atomic Rydberg wave packets [75].
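For completeness, a short check of Eq. (38) is added here; it only uses \(m_{B}=e^{2}B^{2}/c^{2}K_{0}\), \(\omega_{B}=\sqrt{K/m_{B}}\), \(P_{2}=\hbar k_{2}\) and \(l_{B}^{2}=\hbar c/eB\) as defined above:
\[\alpha=\frac{ck_{2}}{2eB}\sqrt{\frac{m_{B}\omega_{B}\hbar}{2}}=\frac{ck_{2}}{2eB}\sqrt{\frac{\hbar}{2}}\,(m_{B}K)^{1/4}=\frac{k_{2}}{2}\sqrt{\frac{\hbar c}{2eB}}\Bigl(\frac{K}{K_{0}}\Bigr)^{1/4}=\frac{1}{2}\Bigl(\frac{K}{4K_{0}}\Bigr)^{1/4}k_{2}\,l_{B}=\xi k_{2}l_{B},\]
consistent with the expression for \(\xi\) quoted above.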
In the primitive non-commutative phase space, we may rewrite the state vectors (36) in the following concise way:
\[|\psi_{0}>_{NC}=\mathcal{N}[| +\alpha;+k_{2}>+e^{i\phi}|-\alpha;-k_{2}>];\] \[|\pm\alpha;\pm k_{2}>=|\pm\alpha>\otimes|\pm k_{2}>, \tag{39}\]
with an arbitrary phase factor (\(\phi\)) and normalization constant \(\mathcal{N}\). For the aforementioned reason, the states \(|\pm\alpha>\) may be considered to be "macroscopic" like states with the same amplitude but opposite in phase. (in the present case, the \(|\ \alpha\ |\) parameter is not arbitrary, but is defined in terms of the spring constants, magnetic field and electric charge). However, their superposition (39) has several non-classical characteristics [76]. Particularly, for the relative phase factor \(e^{i\phi}=\pm 1\), we get even and odd cat states that have been well-studied in the literature [4; 5]. Moreover, it is evident from (39) that the coherent states and the free particle states are entangled: when the coherent state parameter has a positive sign, the free particle state is right-moving. On the other hand, the free particle state is left-moving when the coherent state parameter has a negative sign. Therefore, \(|\psi_{0}>_{NC}\) is an entangled Schrodinger cat state containing the coherent superposition [77; 78] of two states that are diametrically opposite to one another.
Since a momentum eigenstate is an idealization [79], we consider a more realistic scenario in which the system's motion in the commutative phase space is localized within a specific length scale \(\sigma\) along the \(\hat{R}_{2}\) direction. In this case, we generalize the notion of free particle states to a propagating Gaussian state given by
\[|\psi_{G}>=\sqrt{\frac{\sigma}{\sqrt{\pi}}}\int_{-\infty}^{+\infty}e^{-\frac{ \sigma^{2}}{2}(k_{2}-k_{0})^{2}}|k_{2}>dk_{2} \tag{40}\]
where \(\sigma\) is the width and \(k_{0}\) is the peak momentum of the wave packet. Now, following the prescription of (28), we can write the composite state of the particle, when the dynamics of the system are realized in terms of the centre of mass coordinates, as
\[|\psi_{0}>_{CM}=|0>\otimes|\psi_{G}> \tag{41}\]
Accordingly, we can generalize the notion of a two-component cat state (39) to
\[|\psi_{0}>_{NC}=\hat{U}|\psi>_{CM}=\sqrt{\frac{\sigma}{\sqrt{\pi}}}\int_{- \infty}^{+\infty}|\alpha(k_{2})>\otimes|k_{2}>e^{-\frac{\sigma^{2}}{2}(k_{2}- k_{0})^{2}}dk_{2} \tag{42}\]
which describes a multi-component entangled Schrodinger cat state [80] where each component is specified through the momentum eigenvalues. Such a state is highly non-classical, which can be verified through the corresponding Wigner function [80]. Thus, in the presence of a strong magnetic field background, one may successfully prepare a Schrodinger Cat State utilizing a non-relativistic electric dipole model, where non-commutativity plays an important role. It may be reiterated here that we explore the system in terms of the positively charged particle coordinates.
## V Collapse and revival of entanglement of SCS
In this section, we will begin by investigating the degree of entanglement of the SCS state \(|\psi>_{NC}\). In order to do so, we first write down the corresponding density
matrix given by
\[\hat{\rho}_{NC}=\frac{\sigma}{\sqrt{\pi}}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}[|\alpha(k_{2})>_{A}<\alpha(k_{2}^{\prime})|]\otimes[|k_{2}>_{B}<k_{2}^{\prime}|]\,e^{\frac{-\sigma^{2}}{2}(k_{2}-k_{0})^{2}}e^{\frac{-\sigma^{2}}{2}(k_{2}^{\prime}-k_{0})^{2}}dk_{2}dk_{2}^{\prime} \tag{43}\]
where the subscripts \(A\) and \(B\) denote two distinct sub-sections of our bipartite system, one of which is associated with coherent states and the other with momentum eigenstates, each of which corresponds to two distinct degrees of freedom in the non-commutative plane. Since \(|\psi>_{NC}\) is a composite pure state, the entanglement between the coherent states and free particle states can be quantified in terms of the von-Neumann entropy given by
\[S=-Tr_{A}[\hat{\rho}_{red}\;ln(\hat{\rho}_{red})] \tag{44}\]
where the reduced density matrix is defined as
\[\hat{\rho}_{red}=\mathrm{Tr}_{B}[\hat{\rho}_{NC}]=\frac{\sigma}{\sqrt{\pi}}\int_{-\infty}^{+\infty}|\alpha(k_{2})>_{A}<\alpha(k_{2})|\,e^{-\sigma^{2}(k_{2}-k_{0})^{2}}dk_{2} \tag{45}\]
with
\[\mathrm{Tr}(\hat{\rho}_{red})=\frac{\sigma}{\sqrt{\pi}}\int_{-\infty}^{+ \infty}e^{-\sigma^{2}(k_{2}-k_{0})^{2}}dk_{2}=1 \tag{46}\]
For the present purpose, it suffices to compute the purity function [81], given by
\[\mathrm{P}(\alpha)=\mathrm{Tr}(\hat{\rho}_{red}^{2})=\sum_{n}<n| \hat{\rho}_{red}^{2}|n>\] \[=\sum_{m}\sum_{n}<n|\hat{\rho}_{red}|m><m|\hat{\rho}_{red}|n> \tag{47}\]
After a little algebra, one obtains
\[<n|\hat{\rho}_{red}|m>=\frac{\sigma}{\sqrt{\xi^{2}l_{B}^{2}+ \sigma^{2}}}\frac{1}{\sqrt{n!}\sqrt{m!}}e^{(-\sigma^{2}k_{0}^{2})}\] \[(\frac{\xi l_{B}}{2\sigma^{2}})^{n+m}\frac{\partial^{n+m}}{ \partial k_{0}^{n+m}}(e^{\frac{\sigma^{4}+\frac{\theta}{2}}{(2\sigma^{2}l_{B} ^{2}+\sigma^{2}})}) \tag{48}\]
By inserting equation (48) into (47) it follows that
\[\mathrm{P}(\xi_{0};l_{B})=\Bigl(\frac{1}{1+\xi_{0}^{2}}\Bigr)e^{-2\sigma^{2}k_{0}^{2}}\Bigl[e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Bigl(e^{\frac{\xi_{0}^{2}}{2\sigma^{2}}\overleftarrow{\partial}_{k_{0}}\overrightarrow{\partial}_{k_{0}}}\Bigr)e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Bigr]; \tag{49}\]
where \(\xi_{0}=\frac{\xi l_{B}}{\sigma}\). The above expression can be rewritten (see Appendix B) as
\[\mathrm{P}(\xi_{0};l_{B})=\frac{1}{\sqrt{1+2\xi_{0}^{2}}} \tag{50}\]
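As an independent sanity check of this closed form, a small numerical sketch is added here (not part of the original derivation); it assumes the reduced density matrix of Eq. (45) with real \(\alpha(k_{2})=\xi l_{B}k_{2}\) and simply truncates the Fock space, comparing a brute-force evaluation of \(\mathrm{Tr}(\hat{\rho}_{red}^{2})\) with Eq. (50):

```python
import numpy as np
from math import factorial

def purity_numeric(xi0, sigma=1.0, k0=0.0, nmax=30, npts=4001):
    """Tr(rho_red^2) from a truncated Fock-basis representation of Eq. (45)."""
    beta = xi0 * sigma                               # beta = xi*l_B, so xi0 = beta/sigma
    k = np.linspace(k0 - 12.0/sigma, k0 + 12.0/sigma, npts)
    dk = k[1] - k[0]
    # Gaussian wave-packet weight times the |<n|alpha(k)>| normalization e^{-alpha^2}
    weight = np.exp(-sigma**2*(k - k0)**2 - beta**2*k**2)
    rho = np.zeros((nmax, nmax))
    for n in range(nmax):
        for m in range(nmax):
            norm = np.sqrt(float(factorial(n)) * float(factorial(m)))
            rho[n, m] = sigma/np.sqrt(np.pi) * np.sum(weight*(beta*k)**(n + m)/norm) * dk
    return np.trace(rho @ rho)

for xi0 in (0.1, 0.5, 1.0, 2.0):
    print(xi0, purity_numeric(xi0), 1.0/np.sqrt(1.0 + 2.0*xi0**2))
```

The truncated numerical trace approaches the closed-form value \(1/\sqrt{1+2\xi_{0}^{2}}\) as the Fock-space cutoff is increased.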
In Figure 1, we plot the Purity function versus the parameter \(\xi_{0}\). It can be observed that the purity function decreases from unity (separable or disentangled state) as the parameter \(\xi_{0}\) increases, indicating an increase of entanglement in the system for higher values of \(\xi_{0}\) (or lower values of the width of the wave packet \(\sigma\)). We consider the quantum length scale \(l_{B}=1.483\times 10^{-8}\) m, and vary the width of the wave-packet in the range from \(10^{-11}\) m to \(10^{-6}\) m. The different \(l_{B}\) values displayed in the figure may originate from the variation of the magnetic length scale with the different magnetic fields accessible in the laboratory.
It may be noted that if we assume \(\xi_{0}\ll 1\) with \(\xi\sim 1\), which implies \(l_{B}\ll\sigma\), i.e., the width of the Gaussian packet (\(\sigma\)) is large enough compared to the magnetic quantum length scale that we can ignore \(\xi_{0}\), then the purity function takes the unit value, or in other words, the entanglement in the state collapses. On the other hand, we can make the states entangled by choosing \(\sigma\) comparable to the magnetic length scale \(l_{B}\), where \(\mathrm{P}(\xi_{0};B)\) becomes less than unity. More interestingly, a revival of the entangled state can occur if one considers a time-dependent regime. Let us recall from the definition of \(\xi\) that it basically depends on the coupling strength \(K\) of the "impurity" interaction.
The dynamic behaviour of impurities in materials is known to lead to time-varying spring interaction [82; 83]. Such dynamical nature of the coupling has been studied in the literature in the context of several physical systems such as in optical lattices [84], and extensively in the domain of quantum electronic transport [85; 86; 87]. Let us now, consider that the spring "constant" \(K\) is a slowly varying periodic function of time due to some external effects, with the time-variation given by
\[K(t)=K\mathrm{cos}^{4}\omega_{d}t=K\mathrm{cos}^{4}\theta(t) \tag{51}\]
which clearly indicates \(\xi(t)=\frac{1}{2}(\frac{K\mathrm{cos}^{4}\omega_{d}t}{4K_{0}})^{(1/4)}\) and \(\xi_{0}(t)=\frac{\xi(t)l_{B}}{\sigma}\). Hence, the purity function gets modified
Figure 1: The Purity function is plotted against the dimensionless factor \(\xi_{0}\) which varies inversely with the width of the wave-packet \(\sigma\). Plots for several choices of the quantum length scale are displayed.
to,
\[\mathrm{P}(\xi_{0};l_{B})=\frac{1}{\sqrt{1+2\xi_{0}^{2}(t)}} \tag{52}\]
From the above equation, it follows that the Purity function is periodic. It may be noted that even if \(\sigma\) is comparable to the magnetic length scale \(l_{B}\), disentanglement occurs at \(t_{d}=\frac{\pi}{2\omega_{d}},\frac{3\pi}{2\omega_{d}},\frac{5\pi}{2\omega_{d}},\frac{7\pi}{2\omega_{d}},\ldots\), with a separation of \(\frac{\pi}{\omega_{d}}\) between two successive collapses. For the rest of the time interval, the states are entangled. This distinguishing feature is known in the literature as the collapse and revival of entanglement [88]. In Figure 2, we plot the Purity function versus the periodic parameter \(\theta(t)\) for several different values of the wavepacket width \(\sigma\). It is clearly seen that the magnitude of the entanglement revival increases for narrower wavepackets.
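A minimal sketch of the resulting time dependence is given below (it assumes the modulation of Eq. (51), so that \(\xi_{0}(t)=\xi_{0}|\cos\omega_{d}t|\); the value of \(\xi_{0}\) used is hypothetical):

```python
import numpy as np

# Collapse-and-revival pattern of Eq. (52): P(t) = 1/sqrt(1 + 2*xi_0^2 cos^2(w_d t))
# returns to 1 (disentanglement) at w_d t = pi/2, 3pi/2, ... and dips in between.
xi0 = 2.0                                # hypothetical value, i.e. sigma comparable to l_B
theta = np.linspace(0.0, 4*np.pi, 9)     # theta = w_d t
P = 1.0/np.sqrt(1.0 + 2.0*xi0**2*np.cos(theta)**2)
for th, p in zip(theta, P):
    print(f"w_d t = {th:5.2f}  ->  P = {p:.3f}")
```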
Here it needs to be mentioned that in order to observe the entanglement revival of the states, it is required to choose \(\sigma\) of an order comparable to that of the magnetic length scale \(l_{B}\) or less, as \(-1\leq\mathrm{cos}\,\omega_{d}t\leq+1\). On the other hand, if we choose \(\sigma\) to be much larger than \(l_{B}\), then the additional term in the denominator of Eq. (52) becomes completely negligible, which takes us back again to the situation of entanglement collapse, _viz._ \(\mathrm{P}(\xi_{0};B)\sim 1\). Instances of the phenomenon of entanglement collapse and revival have been pointed out earlier in the literature, predominantly in the context of the Jaynes-Cummings model for optical systems [53; 88]. Here we furnish a striking example of entanglement collapse and revival in the context of an excitonic dipole in a condensed matter system.
## VI Conclusions
To summarize, in this work, we have considered a composite two-particle planar dipole system in the presence of a strong constant and uniform magnetic field, in which two oppositely charged particles interact via harmonic interaction, in addition to an impurity interaction experienced by the positively charged particle. Our system may be regarded as a toy version of excitonic dipole models that can be realized in some specific direct band gap semiconductors [89; 90; 91] having the conduction band minimum for electrons and the valence band maximum for holes both located at the same point of the Brillouin zone, where the effective mass of electrons and holes can be quite similar in magnitude. This typically arises due to specific band structures and symmetries of materials. The additional interaction could arise from intrinsic features such as defects or impurities, as well as from external influences like an external electric field or strain in the material [92].
In our analysis, we have first addressed the classical picture in the context of our system's Lagrangian formulation which is the most natural in a strong magnetic field limit. Using symplectic analysis of this first-order Lagrangian, we have specified the canonical/Weyl-Moyal type deformed NC classical phase space to be an intrinsic part of our model. Next, we have explored the quantum mechanical description of our model by elevating all the phase space variables to the level of Hermitian operators. The spatial and momentum sectors of individual charged particles obey a non-commutative deformed algebra. Here, the non-commutativity emerges as a natural consequence of placing two oppositely charged particles in a strong constant background magnetic field. The square of the magnetic length scale acts as the effective non-commutative parameter.
We have presented a physical interpretation of the mapping from the deformed phase space to the usual commutative phase space. The non-commutative phase space represents the system Hamiltonian written in terms of the positively charged particle coordinates, while the standard quantum mechanical phase space is more suitable for describing our system in terms of the composite system's centre of mass coordinates. The dynamics can, therefore, be analyzed in terms of non-commuting variables or, alternatively, using phase space transformations, in terms of commuting variables. In literature, non-commutativity has been often introduced by hand for a single point particle, thus ruling out any physicality of commutative phase-space variables in such cases. However, in the present case, non-commutativity emerges naturally, thereby giving a physical meaning to the commutative phase-space variables. Determining the Hamiltonian's ground state in the commutative phase space allows us to express the quantum state in the non-commutative phase space as a superposition of two diametrically opposite coherent states, entangled with momentum eigenstates. This reveals the emergence of en
Figure 2: Time evolution of the Purity function is plotted against the parameter \(\theta(t)\), for various widths of the wavepackets \(\sigma\).
tangled and two-component as well as multi-component Schrodinger Cat States (SCS) in our system.
Furthermore, we have estimated the magnitude of entanglement in the system of multicomponent entangled cat states. By utilizing the purity function, we demonstrate that the effective non-commutative parameter (\(l_{B}^{2}\)) is responsible for the entanglement. We show that when the width of the Gaussian wave packet (\(\sigma\)) significantly exceeds the minimal length scale (\(l_{B}\)), the entangled cat states undergo collapse. Conversely, when \(\sigma\) is comparable to the nonzero magnetic length scale \(l_{B}\), the entanglement can be observed. Moreover, we show that if a time-dependent impurity potential is chosen, entanglement revival and collapse occur periodically. So notably, within the same formalism, we observe the phenomenon of collapse and revival of entanglement in the non-commutative plane in the time-dependent regime with a suitable choice of the \(\sigma\) parameter for the revival case, while the collapse is completely controlled by the nodes of the periodic function involved in the impurity interaction.
Before concluding, it may be noted that spin-orbit interactions in solid-state systems introduce electronic band curvature, leading to the emergence of Berry curvature in momentum space. Such Berry curvature modifies the usual phase space symplectic structure of Bloch electrons [93; 94]. In light of non-commutative quantum mechanics, our present analysis can be extended to include investigations on the possible emergence of Schrodinger cat states in solid state systems involving the 2D excitonic Coulomb problem with the Berry curvature of the electron's and the hole's Bloch states [95; 96; 97]. This may open up a new window to experimentally observe quantum superposition for "macroscopic" states.
## VII Acknowledgements
PN and ND acknowledge support from S.N. Bose National Centre for Basic Sciences where this work was initiated. PN would also like to thank the Institute of Theoretical Physics, Stellenbosch University for providing postdoctoral funds during the period when a major part of this work was completed. We thank Biswajit Chakraborty, Debasish Chatterjee, Ananda Dasgupta and Frederik G. Scholtz for some fruitful discussions. ASM acknowledges support from the Project No. DST/ICPS/QuEST/2018/98 from the Department of Science and Technology, Government of India.
## VIII Appendix A
Here we present a manifestation of the non-commutativity of the centre of mass coordinates arising in the case of two oppositely charged particles with different masses \(m_{+}\) and \(m_{-}\) representing the masses of positive and negatively charged particles respectively. The corresponding centre of mass (CM) coordinates of the above-discussed system is:
\[\hat{R}_{i}=\frac{m_{+}\hat{x}_{i}+m_{-}\hat{y}_{i}}{m_{+}+m_{-}};\] \[\hat{P}_{i}=\hat{P}_{x_{i}}+\hat{P}_{y_{i}}=\frac{eB}{c}\epsilon_{ ij}(\hat{x}_{j}-\hat{y}_{j});\ \ i,j=1,2 \tag{53}\]
Now, utilizing the results obtained from equation (10), the commutation brackets between the CM coordinates can be obtained in the following form-
\[[\hat{R}_{i},\hat{R}_{j}]=\frac{m_{+}^{2}-m_{-}^{2}}{(m_{+}+m_{-})^{2}}il_{B} ^{2}\epsilon_{ij};\ \ i,j=1,2 \tag{54}\]
clearly indicating the non-commutativity between the CM position coordinates, with \(\theta=\frac{m_{+}^{2}-m_{-}^{2}}{(m_{+}+m_{-})^{2}}l_{B}^{2}\) being the effective non-commutativity parameter. However, it is straightforward to check that the other two commutation brackets remain preserved.
\[[\hat{P}_{i},\hat{P}_{j}]=0;\ \ [\hat{R}_{i},\hat{P}_{j}]=i\hbar\delta_{ij} \tag{55}\]
It may be noted that the order of magnitude of the non-commutativity between the CM position coordinates is much lesser compared to that of the position coordinates of the individual constituent particles. This is simply because \(l_{B}^{2}\) itself is very small due to the strong magnetic field limit, the presence of the additional mass factor reduces the whole effective non-commutativity parameter \(\theta\) to a much smaller value.
Now, let us introduce the relative coordinate system:
\[\hat{r}_{i}=\hat{y}_{i}-\hat{x}_{i};\ \ \hat{\bar{P}}_{i}=\frac{m_{+}}{m_{+}+m_{-}} \hat{P}_{y_{i}}-\frac{m_{-}}{m_{+}+m_{-}}\hat{P}_{x_{i}};\ \ i=1,2 \tag{56}\]
The commutation relations satisfied by the relative coordinates are given by
\[[\hat{r}_{i},\hat{r}_{j}]=0;\ \ [\hat{\bar{P}}_{i},\hat{\bar{P}}_{j}]=\frac{m_{-}^{2}-m_{+}^{2}}{(m_{+}+m_{-})^{2}}i\frac{\hbar^{2}}{l_{B}^{2}}\epsilon_{ij};\ \ [\hat{r}_{i},\hat{\bar{P}}_{j}]=i\hbar\delta_{ij};\ \ i,j=1,2 \tag{57}\]
It is evident that the relative position coordinates commute as we have considered two oppositely charged particles on a non-commutative space (it has been shown earlier [98], that the non-commutativity of a charged particle differs from its antiparticle and also from any other particle of opposite charge by the sign). On the other hand, the coordinates of relative momenta give rise to a nontrivial commutation algebra with a reduced order of magnitude from that of the individual constituent particle's momentum coordinates.
It may be further noted that the position coordinates of the centre of mass and the position coordinates of the relative motion are not independent, rather they obey the relation given by
\[[\hat{R}_{i},\hat{r}_{j}]=-il_{B}^{2}\epsilon_{ij};\ \ i,j=1,2 \tag{58}\]
So, clearly, there is a connection between the motion of the centre of mass and the relative motion of the
composite system in the non-commutative space. This helps us to reduce the two-body problem completely to a one-body problem for the internal motion in non-commutative space, using the CM coordinates of the composite system, where the information about the negatively charged particle is solely hidden/encoded within the CM momenta, which obey a standard commutative algebra.
## IX Appendix B
Here we provide a derivation for the expression of the purity function. We begin with the expression of the reduced density matrix of the equation (45) and the expression of the coherent state \(|\alpha(k_{2})>\) and definition of the Purity function from the equation(47),
\[\mathrm{P}(\alpha)=\sum_{l}\sum_{s}<l|\hat{\rho}_{red}|s><s|\hat{\rho}_{red}|l>\]
\[<l|\hat{\rho}_{red}|s>=\frac{\sigma}{\sqrt{\pi}}\int_{-\infty}^{+\infty}<l| \alpha(k_{2})><\alpha(k_{2})|s>e^{-\sigma^{2}(k_{2}-k_{0})^{2}}dk_{2} \tag{59}\]
The coherent state can be expressed as
\[|\alpha(k_{2})>=e^{-\frac{\alpha^{2}}{2}}e^{\alpha\hat{a}_{1}^{\dagger}}e^{- \alpha\hat{a}_{1}}|0>=e^{-\frac{\alpha^{2}}{2}}e^{\alpha\hat{a}_{1}^{\dagger} }|0>\]
\[<l|\alpha(k_{2})>=<l|e^{-\frac{\alpha^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha ^{n}}{\sqrt{n!}}|n>=e^{-\frac{\alpha^{2}}{2}}\frac{\alpha^{l}}{\sqrt{l!}} \tag{60}\]
Similarly, \(<\alpha(k_{2})|s>=e^{-\frac{\alpha^{2}}{2}}\frac{\alpha^{s}}{\sqrt{s!}}\). Plugging this into the equation (59), one gets
\[<l|\hat{\rho}_{red}|s>=\frac{\sigma}{\sqrt{\pi}}\int_{-\infty}^{+\infty}e^{- \alpha^{2}}\frac{(\alpha)^{l+s}}{\sqrt{l!}\sqrt{s!}}e^{-\sigma^{2}(k_{2}-k_{0 })^{2}}dk_{2} \tag{61}\]
Now substituting \(\alpha(k_{2})=\beta k_{2}\), where \(\beta=\xi l_{B}\), we get
\[<l|\hat{\rho}_{red}|s>=\frac{\sigma}{\sqrt{\pi}}\frac{\beta^{l+s}}{\sqrt{l!}\sqrt{s!}}e^{-\sigma^{2}k_{0}^{2}}\int_{-\infty}^{+\infty}e^{-(\beta^{2}+\sigma^{2})k_{2}^{2}+2\sigma^{2}k_{0}k_{2}}k_{2}^{l+s}dk_{2} \tag{62}\]
\[=\frac{\sigma}{\sqrt{\pi}}\frac{\beta^{l+s}}{\sqrt{l!}\sqrt{s!}}e^{-\sigma^{2}k_{0}^{2}}\frac{1}{(2\sigma^{2})^{l+s}}\frac{\partial^{l+s}}{\partial k_{0}^{l+s}}\Big\{\sqrt{\frac{\pi}{\beta^{2}+\sigma^{2}}}\,e^{\frac{\sigma^{4}k_{0}^{2}}{\beta^{2}+\sigma^{2}}}\Big\} \tag{63}\]
Using the above expressions in the purity function, we get,
\[\mathrm{P}(\alpha)=\frac{\sigma^{2}}{\beta^{2}+\sigma^{2}}e^{-2\sigma^{2}k_{0}^{2}}\sum_{l}\sum_{s}\frac{1}{l!\,s!}\Big(\frac{\beta^{2}}{4\sigma^{4}}\Big)^{l+s}\Big[e^{\frac{\sigma^{4}k_{0}^{2}}{\beta^{2}+\sigma^{2}}}\frac{\overleftarrow{\partial}^{\,l+s}}{\partial k_{0}^{l+s}}\frac{\overrightarrow{\partial}^{\,l+s}}{\partial k_{0}^{l+s}}e^{\frac{\sigma^{4}k_{0}^{2}}{\beta^{2}+\sigma^{2}}}\Big] \tag{64}\]
Performing the sums over \(l\) and \(s\), we are led to
\[\mathrm{P}(\alpha(k_{2}))=\frac{\sigma^{2}}{\beta^{2}+\sigma^{2}}e^{-2\sigma^{2}k_{0}^{2}}\Big[e^{\frac{\sigma^{4}k_{0}^{2}}{\beta^{2}+\sigma^{2}}}\Big(e^{\frac{\beta^{2}}{2\sigma^{4}}\frac{\overleftarrow{\partial}}{\partial k_{0}}\frac{\overrightarrow{\partial}}{\partial k_{0}}}\Big)e^{\frac{\sigma^{4}k_{0}^{2}}{\beta^{2}+\sigma^{2}}}\Big] \tag{66}\]
Now, replacing \(\xi_{0}=\frac{\xi l_{B}}{\sigma}\), we arrive at
\[\mathrm{P}(\xi_{0};l_{B})=\Big(\frac{1}{1+\xi_{0}^{2}}\Big)e^{-2\sigma^{2}k_{0}^{2}}\Big[e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Big(e^{\frac{\xi_{0}^{2}}{2\sigma^{2}}\frac{\overleftarrow{\partial}}{\partial k_{0}}\frac{\overrightarrow{\partial}}{\partial k_{0}}}\Big)e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Big] \tag{67}\]
Next, we obtain a compactified form of \([e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}(e^{\frac{\xi_{0}^{2}}{2\sigma^{2}}\frac{\overleftarrow{\partial}}{\partial k_{0}}\frac{\overrightarrow{\partial}}{\partial k_{0}}})e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}]\). For that, let us consider the following integral:
\[\int_{-\infty}^{+\infty}e^{-bs^{2}+2sk_{0}}ds=e^{\frac{k_{0}^{2}}{b}}\int_{- \infty}^{+\infty}e^{-b(s+\frac{k_{0}}{b})^{2}}ds=e^{\frac{k_{0}^{2}}{b}} \sqrt{\frac{\pi}{b}} \tag{68}\]
From the expression of \(e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\), it follows that
\[e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}=\sqrt{\frac{1+\xi_{0}^{2}}{\sigma^ {2}\pi}}\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{2}+2sk _{0}}ds \tag{69}\]
Therefore,
\[\Big[e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Big(e^{\frac{\xi_{0}^{2}}{2\sigma^{2}}\frac{\overleftarrow{\partial}}{\partial k_{0}}\frac{\overrightarrow{\partial}}{\partial k_{0}}}\Big)e^{\frac{\sigma^{2}k_{0}^{2}}{1+\xi_{0}^{2}}}\Big]\]
\[=\frac{1+\xi_{0}^{2}}{\sigma^{2}\pi}\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{2}+2sk_{0}}ds\,\Big(e^{\frac{\xi_{0}^{2}}{2\sigma^{2}}\frac{\overleftarrow{\partial}}{\partial k_{0}}\frac{\overrightarrow{\partial}}{\partial k_{0}}}\Big)\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{\prime 2}+2s^{\prime}k_{0}}ds^{\prime}\]
[where we have used the relation \(e^{a\frac{\partial}{\partial k_{0}}}e^{bk_{0}}=e^{ab}e^{bk_{0}}\)]
\[=\frac{1+\xi_{0}^{2}}{\sigma^{2}\pi}\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{2}+2sk_{0}}ds\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{\prime 2}+2\big(k_{0}+\frac{\xi_{0}^{2}s}{\sigma^{2}}\big)s^{\prime}}ds^{\prime}\]
\[=\sqrt{\frac{1+\xi_{0}^{2}}{\sigma^{2}\pi}}\int_{-\infty}^{+\infty}e^{-\frac{(1+\xi_{0}^{2})}{\sigma^{2}}s^{2}+2sk_{0}}\,e^{\frac{(\sigma^{2}k_{0}+\xi_{0}^{2}s)^{2}}{\sigma^{2}(1+\xi_{0}^{2})}}\,ds\]
2309.07740 | Stochastic Phased Array Performance Indicators for
Quality-of-Service-Enhanced Massive MIMO | In this paper, we show that the signal-to-interference-plus-noise ratio
(SINR) at a base station (BS) equipped with an arbitrary physical array antenna
can be expressed as a function of two fundamental figures-of-merit (FoMs): (I)
the instantaneous effective gain (IEG), and (II) the beamforming-channel
correlation (BCC). These two FoMs are functions of the array antenna layout,
the antenna elements, the propagation channel and the applied signal processing
algorithms, and hence they are random variables (RVs) in general. We illustrate
that both FoMs provide essential insights for quality-of-service (QoS)-based
phased array design by investigating their statistics for BSs applying
full-digital (FD) zero forcing (ZF) beamforming. We evaluate various array
designs and show that arrays with higher IEGs and a reduced probability of low
BCCs can increase the ergodic sum rate and reduce the need for scheduling. | Noud Kanters, Andrés Alayón Glazunov | 2023-09-14T14:19:20Z | http://arxiv.org/abs/2309.07740v1 | # Stochastic Phased Array Performance Indicators for Quality-of-Service-Enhanced Massive MIMO
###### Abstract
In this paper, we show that the signal-to-interference-plus-noise ratio (SINR) at a base station (BS) equipped with an arbitrary physical array antenna can be expressed as a function of two fundamental figures-of-merit (FoMs): (I) the instantaneous effective gain (IEG), and (II) the beamforming-channel correlation (BCC). These two FoMs are functions of the array antenna layout, the antenna elements, the propagation channel and the applied signal processing algorithms, and hence they are random variables (RVs) in general. We illustrate that both FoMs provide essential insights for quality-of-service (QoS)-based phased array design by investigating their statistics for BSs applying full-digital (FD) zero forcing (ZF) beamforming. We evaluate various array designs and show that arrays with higher IEGs and a reduced probability of low BCCs can increase the ergodic sum rate and reduce the need for scheduling.
_Introduction_: Phased array antennas are a key component of base stations (BSs) in multi-user wireless communication systems. Traditionally, they are configured along a uniform half-wavelength-spaced lattice to prevent grating lobes. Recently, however, communication-oriented array design [1] has shown that unconventional layouts can enhance the user equipment (UE) quality-of-service (QoS). Examples of considered QoS-based key performance indicators (KPIs) are ergodic channel capacity [2], signal-to-interference-plus-noise ratio (SINR) [3, 4], or SINR-dependent metrics like bit error rate [5] and ergodic sum rate [5, 6, 7, 8]. However, understanding the physical phenomena behind array-layout-induced QoS improvements is complicated, as is illustrated by, for instance, conflicting statements on whether mutual coupling (MC) enhances or deteriorates ergodic channel capacity; see, e.g., [9] and references therein. Typically, different assumptions are made regarding the number and type of antenna elements, the number of served UEs, and the propagation channel model. Moreover, conventional channel normalization may partially hide the impact of the element type, its element pattern, and impedance matching. This hinders the straightforward comparison of the proposed array designs. Conventional phased array KPIs such as sidelobe level and beamwidth provide limited insight into how an array will perform in a multi-user system, especially in channels with a non-line-of-sight (NLoS) component. Hence, generalized KPIs incorporating the effects of the array, the channel and signal processing are needed. In this work, we derive such KPIs. Specifically, we show that the SINR in single-cell systems solely depends on the transmit powers and two random variables (RVs): (I) the instantaneous effective gain (IEG) and (II) the beamforming-channel correlation (BCC). This result is obtained by normalizing the BS-UE channels based on a BS reference array. Subsequently, we illustrate how the array design can affect the statistics of these two RVs, and with that also of QoS-based array design objectives. To this end, we consider BSs equipped with various linear arrays applying full-digital (FD) zero forcing (ZF) beamforming, both with and without user scheduling, and we analyse how the statistics of the two RVs affect the achieved SINR and ergodic sum rate.
_Massive MIMO System Model_: Let's consider a single-cell massive multiple-input-multiple-output (MIMO) system comprising a BS serving \(K\) UEs. Each UE has a single antenna element, whereas the BS has an \(N\)-element phased array antenna. The narrowband uplink received signal \(\mathbf{y}^{\text{UL}}\in\mathbb{C}^{N}\) is defined as in, e.g., [10] and reads
\[\mathbf{y}^{\text{UL}}=\sum_{k=1}^{K}\sqrt{p_{k}}\,\mathbf{h}_{k}x_{k}+\sigma_{\text{UL}}\,\mathbf{n}, \tag{1}\]
where \(\mathbf{h}_{k}\in\mathbb{C}^{N}\), \(p_{k}\) and \(x_{k}\sim\mathcal{N}_{\text{C}}(0,1)\) represent the BS-UE channel vector, the transmit power and the data signal for the \(k^{\text{th}}\) UE, respectively. Moreover, \(\mathbf{n}\sim\mathcal{N}_{\text{C}}(\mathbf{0},\mathbf{I}_{N})\) is the receiver noise vector and \(\sigma_{\text{UL}}^{2}\) the noise power. Assuming the BS applies linear receive combining using combining matrix \(\mathbf{W}\in\mathbb{C}^{N\times K}=\begin{bmatrix}\mathbf{w}_{1}&\dots&\mathbf{w}_{K}\end{bmatrix}\), it follows that the instantaneous uplink SINR for UE \(k\) equals
\[\text{SINR}_{k}^{\text{UL}}=\frac{p_{k}|\mathbf{w}_{k}^{H}\mathbf{h}_{k}|^{2}}{\underbrace{\sum_{i=1,i\neq k}^{K}p_{i}|\mathbf{w}_{k}^{H}\mathbf{h}_{i}|^{2}}_{\text{Intra-Cell Interference}}+\underbrace{\sigma_{\text{UL}}^{2}\|\mathbf{w}_{k}\|^{2}}_{\text{Noise}}}. \tag{2}\]
_SINR-Dependent QoS-based Array Design in Single-Cell Systems_: In this section, we present the channel normalization technique assumed in the remainder of this paper. We use this to derive a novel expression for the SINR in single-cell systems. The result applies to arbitrary array layouts, arbitrary linear combining algorithms, and arbitrary channel models.
Before computing the SINR in (2), some form of norm-based normalization is generally applied to \(\mathbf{h}_{1},\dots,\mathbf{h}_{K}\) to have control over the signal-to-noise ratio (SNR). Conventional normalization approaches are \(\|\mathbf{h}_{k}\|=\sqrt{N}\;\forall\;k=1,\dots,K\) or \(\|\mathbf{H}\|_{F}=\sqrt{NK}\), where \(\mathbf{H}=[\mathbf{h}_{1}\cdots\mathbf{h}_{K}]\). However, when designing physical array antennas based on a QoS-metric like the SINR, the impact of the antennas is typically embedded in the channel vectors \(\mathbf{h}_{1},\dots,\mathbf{h}_{K}\), see, e.g., [2, 5]. Applying these conventional normalization techniques cancels out essential information regarding the relation between, on the one hand, deterministic array aspects like MC and impedance matching and, on the other hand, the stochastic propagation environment in which the array is deployed. Consequently, assessing the performance of various array designs deployed within a certain channel, or of a specific array deployed in different channels, is not straightforward. To circumvent this problem, we propose normalizing the BS-UE channels relative to a reference array rather than in an absolute sense. Hence, the normalized channel between the BS and UE \(k\) is defined as
\[\mathbf{h}_{k}=\sqrt{N_{\text{ref}}}\,\frac{\hat{\mathbf{f}}_{k}}{\|\hat{\mathbf{f}}_{k}^{\text{ref}}\|}, \tag{3}\]
where we use \(\hat{\mathbf{f}}\) and \(\mathbf{h}\) to differentiate between non-normalized channels and their normalized counterparts as used in (2), respectively. \(\hat{\mathbf{f}}_{k}^{\text{ref}}\) represents the BS-UE channel that would be observed if the BS array of interest were replaced by the reference array while leaving the propagation channel (defined by parameters like, e.g., angles-of-arrival (AoAs), complex path gains and Rice factor) unchanged. \(N_{\text{ref}}\) is the number of elements in the reference array. The reference array does not need the same number of elements as the array of interest. Note that the normalized channel between a UE and the reference array by definition satisfies \(\|\mathbf{h}_{k}^{\text{ref}}\|=\sqrt{N_{\text{ref}}}\). Although not required, we consider reference arrays composed of isotropic elements in this work.
Assuming that \(p_{1}=\dots=p_{K}=P_{\text{UL}}/N_{\text{ref}}\) and applying (3), it follows that (2) can be written as
\[\text{SINR}_{k}^{\text{UL}}=\frac{P_{\text{UL}}\,G_{k}^{\text{UL}}\,|\omega_{kk}|^{2}}{\underbrace{P_{\text{UL}}\sum_{i=1,i\neq k}^{K}G_{i}^{\text{UL}}\,|\omega_{ki}|^{2}}_{\text{Intra-Cell Interference}}+\underbrace{1}_{\text{Noise}}}, \tag{4}\]
where we have assumed without loss of generality that \(\sigma_{\text{UL}}=1\), and where we have introduced the complex-valued BCC coefficient \(\omega_{ki}\) and the IEG \(G_{i}^{\text{UL}}\). Here, \(\omega_{ki}\) is defined as
\[\omega_{ki}=\frac{\mathbf{w}_{k}^{H}\mathbf{h}_{i}}{\|\mathbf{w}_{k}\|\|\mathbf{h}_{i}\|}, \tag{5}\]
which satisfies \(0\leq|\omega_{ki}|^{2}\leq 1\) for \(i\in\{1,\dots,K\}\). From (5), it follows that \(|\mathbf{w}_{k}^{H}\mathbf{h}_{i}|^{2}=|\omega_{ki}|^{2}\|\mathbf{w}_{k}\|^{2}\|\mathbf{h}_{i}\|^{2}\). This is substituted in (2), whereupon we use that \(p_{i}\|\mathbf{h}_{i}\|^{2}=\frac{P_{\text{UL}}}{N_{\text{ref}}}\,N_{\text{ref}}\frac{\|\hat{\mathbf{f}}_{i}\|^{2}}{\|\hat{\mathbf{f}}_{i}^{\text{ref}}\|^{2}}=P_{\text{UL}}\frac{\|\hat{\mathbf{f}}_{i}\|^{2}}{\|\hat{\mathbf{f}}_{i}^{\text{ref}}\|^{2}}=P_{\text{UL}}G_{i}^{\text{UL}}\), where the last steps follow from (3) and from the definition of the IEG, i.e.,
\[G_{i}^{\text{UL}}=\frac{\|\hat{\mathbf{f}}_{i}\|^{2}}{\|\hat{\mathbf{f}}_{i}^{\text{ref}}\|^{2}}. \tag{6}\]
Hence, in the definition of the IEG, the numerator represents the instantaneous channel gain observed at the physical BS array under consideration, whereas the denominator represents, for the same UE and the same propagation channel, the instantaneous channel gain observed at the reference array. Therefore, the IEG measures the channel gain of an array of physical elements relative to an isotropic array with no MC. It is worthwhile to note that an expression similar to (4) is obtained for the downlink when assuming that the downlink transmit power is defined as \(p_{i}\|\mathbf{w}_{i}\|^{2}=P_{\mathrm{DL}}/N_{\mathrm{ref}}\) for all \(i\in\{1,\ldots,K\}\). In this case, expressions for uplink and downlink SINR are equivalent if \(P_{\mathrm{UL}}=P_{\mathrm{DL}}\). For the sake of conciseness, we only focus on the uplink.
The principle behind SINR-dependent QoS-based array layout design in single-cell systems can be understood from (4). The stochastic propagation channel, the deterministic BS array antenna, and the applied signal processing algorithms (e.g., user scheduling and beamforming) jointly determine the statistics of the coefficients \(G_{i}^{\text{UL}}\) and \(|\omega_{ki}|^{2}\), \(i\in\{1,\ldots,K\}\). Both coefficients are RVs in general, and consequently, \(\text{SINR}_{k}^{\text{UL}}\) is an RV as well. Through proper design of the array antenna, the probability distributions of \(G_{i}^{\text{UL}}\) and \(|\omega_{ki}|^{2}\) can be shaped to optimize the design objective, which is typically a specific statistic of (a function of) \(\text{SINR}_{k}^{\text{UL}}\).
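As a numerical sanity check of this decomposition, the following self-contained NumPy sketch (our own illustration, not the authors' code) draws i.i.d. complex Gaussian stand-in channels for the array of interest and for the reference array, applies the normalization of (3) and ZF combining, and verifies that the direct SINR of (2) coincides with the IEG/BCC form of (4); all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_ref, K, P_ul = 32, 32, 8, 10.0          # sigma_UL = 1 is assumed

# Stand-in (non-normalized) channels for the physical and the reference array
f_hat = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
f_ref = (rng.normal(size=(N_ref, K)) + 1j * rng.normal(size=(N_ref, K))) / np.sqrt(2)

H = np.sqrt(N_ref) * f_hat / np.linalg.norm(f_ref, axis=0)   # normalization, Eq. (3)
W = H @ np.linalg.inv(H.conj().T @ H)                        # zero forcing: W^H H = I
p = P_ul / N_ref                                             # equal transmit powers

G = np.linalg.norm(f_hat, axis=0) ** 2 / np.linalg.norm(f_ref, axis=0) ** 2  # IEG, Eq. (6)
for k in range(K):
    wk = W[:, k]
    sig = p * np.abs(wk.conj() @ H[:, k]) ** 2
    intf = sum(p * np.abs(wk.conj() @ H[:, i]) ** 2 for i in range(K) if i != k)
    sinr_direct = sig / (intf + np.linalg.norm(wk) ** 2)     # Eq. (2) with sigma_UL = 1
    omega2 = np.abs(wk.conj() @ H) ** 2 / (np.linalg.norm(wk) ** 2 * np.linalg.norm(H, axis=0) ** 2)
    sinr_fom = P_ul * G[k] * omega2[k] / (P_ul * np.sum(np.delete(G * omega2, k)) + 1.0)  # Eq. (4)
    assert np.isclose(sinr_direct, sinr_fom)
```

With ZF combining the interference term vanishes because \(\omega_{ki}=0\) for \(i\neq k\), so the check effectively reduces to \(\text{SINR}_{k}^{\text{UL}}=P_{\text{UL}}G_{k}^{\text{UL}}|\omega_{kk}|^{2}\).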
_Channel Model and Signal Processing_: The theory presented in this paper applies to arbitrary channel models. However, we limit ourselves to pure line-of-sight (LoS) far-field channels for conciseness. Furthermore, we assume that all antenna elements are purely vertically polarized. The UEs use isotropic antennas, whereas the BS uses a physical array antenna. Hence, the channel between UE \(k\) and the BS can be represented as [11]
\[\hat{\mathbf{f}}_{k}=\mathbf{a}\left(\phi_{k},\,\theta_{k}\right)=\mathbf{g}\left(\phi_{k},\,\theta_{k}\right)\odot\mathbf{a}_{\text{iso}}\left(\phi_{k},\,\theta_{k}\right), \tag{7}\]
where \(\mathbf{g}(\phi_{k},\theta_{k})\) contains the embedded element patterns of the BS array evaluated in the direction of UE \(k\), and \(\mathbf{a}_{\text{iso}}(\phi_{k},\theta_{k})\) denotes the corresponding isotropic array response.
For the dipole elements, the corresponding pattern-normalization integral is set equal to \(4\pi\), ultimately resulting in the well-known \(2.15\) dBi gain in the horizontal plane [18]. Note that the impedance ratio appearing in the integral can alternatively be taken into account by introducing a corresponding factor in the definition of the MCM, see, e.g., [14]. For the cosine elements, we apply \(\int_{-\pi}^{\pi}\int_{-\pi/2}^{\pi/2}|\gamma_{R}(\phi,\theta)|^{2}\cos{(\theta)}\,d\theta\,d\phi=4\pi\). We consider a 2-dimensional horizontal geometry with a BS serving \(K=8\) UEs, which are uniformly distributed over a \(120^{\circ}\) sector. The azimuth and elevation AoAs are defined as \(\phi_{k}\sim U(-60^{\circ},60^{\circ})\) and \(\theta_{k}=0^{\circ}\), \(k=1,\ldots,K\), respectively. We simulate \(10^{4}\) realizations with BS-UE channels modelled as (7), or as (11) for the isotropic reference array.
_Results:_ Results are presented in Fig. 1(a) and Fig. 1(b) for the scenarios without and with user dropping, respectively. They are discussed below.
The first subplots of Fig. 1(a) and Fig. 1(b) show, in dB-scale, the cumulative distribution functions (CDFs) of the IEG \(G_{k}^{\text{UL}}\) (6). The percentile at which a graph ends in Fig. 1(b) indicates the probability of a user being dropped. Clearly, this probability is the lowest for the NULAs with \(d_{\mathrm{avg}}=2.4\). Looking at the served (i.e., non-dropped) UEs alone, it can be seen that user dropping has a negligible impact on the statistics of the IEG \(G_{k}^{\text{UL}}\). Furthermore, it is observed that the isotropic reference array has an IEG of \(0\) dB. This is expected, as it represents the gain of the reference array relative to itself. On the contrary, for the dipole arrays, the IEGs vary. Variations are larger for \(\lambda/2\)-spaced arrays than for \(2\lambda\)-spaced arrays. At large spacings, the MC becomes negligible, and hence the EEPs become approximately equal to isolated dipole patterns, which are omni-directional with a gain of \(2.15\) dBi. At small spacings, however, the MC shapes the EEPs such that the gain towards a certain UE depends on its AoA. The cosine elements achieve the highest IEGs for both inter-element spacings. However, they also come with the largest variations, inherent to their directive element patterns.
The second subplots of Fig. 1(a) and Fig. 1(b) show, in dB-scale, the CDFs of the ZF BCC \(|\omega_{kk}|^{2}\) (13). Contrary to what was observed for \(G_{k}^{\text{UL}}\), user dropping has a significant impact on the statistics of \(|\omega_{kk}|^{2}\) of the served UEs: it reduces the variation drastically. Moreover, it is observed that for the considered array antennas, \(|\omega_{kk}|^{2}\) is determined to a great extent by the array layout, whereas the element type has only a small effect. Finally, it is observed that in the scenario without user dropping, the \(2\lambda\)-spaced NULAs significantly reduce the probability of having a low BCC. The same arrays also provide a lower probability of dropping users. In the case of ZF combining, a high GSCC \(|\tilde{\alpha}_{kk}|^{2}\), and thus a low BCC \(|\omega_{kk}|^{2}\) (12), implies that suppressing the intra-cell interference of the \(k\)-th UE causes the \(k\)-th UE itself to be suppressed as well, ultimately resulting in low SINRs. To reduce the probability of having high GSCCs, one could apply scheduling (here, user dropping). However, as can be concluded from Fig. 1(a), one could also exploit the array layout, thereby reducing the dropping probability.
The third subplots show the CDFs of \(\text{SINR}_{k}^{\text{UL}}\) under ZF combining. As expected, they show great correspondence to the individual CDFs of the IEG and the BCC. The resulting UE rates \(R_{k}^{\text{UL}}\) are presented in the fourth subplots for \(P_{\text{UL}}=10\) dB. The dots indicate the average UE rates \(E\left\{R_{k}^{\text{UL}}\right\}\) (computed with the UE rate of a dropped UE set to \(0\)), such that the ergodic sum rate is found through multiplication by \(K\). Of the arrays considered in this work, the \(2\lambda\)-spaced NULAs achieve the highest ergodic (sum) rate in the scenario without user dropping. Since the UE rate is a concave function of the SINR (Fig. 1), it intuitively follows that arrays providing a low probability of a low \(\text{SINR}_{k}^{\text{UL}}\) benefit the ergodic sum rate. Although the latter can also be accomplished by employing signal processing (here, user dropping), Fig. 1(b) shows that NULAs are still beneficial since they reduce the probability of a user being dropped. Since the cosine element arrays provide the largest IEGs, the \(2\lambda\)-spaced NULA of cosine elements can be considered the optimal array from the ones considered here.
_Conclusions and Future Work:_ It has been shown that SINR-dependent QoS-based array design in single-cell systems is a matter of shaping the probability distributions of two RVs, i.e., the IEG and the BCC. The concept is illustrated in detail for a FD ZF system, for which the latter is merely a function of the GSCC. It is shown that ergodic sum rate enhancements reported for unconventional array layouts mainly result from a reduced probability of a high GSCC and that such arrays can reduce the need for scheduling. In the future, we plan to use the presented concepts to design new array layouts rather than analyzing existing ones.
|
2309.04249 | Random singlets and permutation symmetry in the disordered spin-2
Heisenberg chain: A tensor network renormalization group study | We use a tensor network renormalization group method to study random $S=2$
antiferromagnetic Heisenberg chains with alternating bond strength
distributions. In the absence of randomness, bond alternation induces two
quantum critical points between the $S=2$ Haldane phase, a partially dimerized
phase and a fully dimerized phase, depending on the strength of dimerization.
These three phases, called ($\sigma$,$4-\sigma$)=(2,2), (3,1) and (4,0) phases,
are valence-bond solid (VBS) states characterized by $\sigma$ valence bonds
forming across even links and $4-\sigma$ valence bonds on odd links. Here we
study the effects of bond randomness on the ground states of the dimerized spin
chain, calculating disorder-averaged twist order parameters and spin
correlations. We classify the types of random VBS phases depending on the
strength of bond randomness $R$ and dimerization $D$ using the twist order
parameter, which has a negative/positive sign for a VBS phase with odd/even
$\sigma$. Our results demonstrate the existence of a multicritical point in the
intermediate disorder regime with finite dimerization, where (2,2), (3,1) and
(4,0) phases meet. This multicritical point is at the junction of three phase
boundaries in the $R$-$D$ plane: the (2,2)-(3,1) and (3,1)-(4,0) boundaries
that extend to zero randomness, and the (2,2)-(4,0) phase boundary that
connects another multicritical point in the undimerized limit. The undimerized
multicritical point separates a gapless Haldane phase and an
infinite-randomness critical line with the diverging dynamic critical exponent
in the large $R$ limit at $D=0$. Furthermore, we identify the (3,1)-(4,0) phase
boundary as an infinite-randomness critical line even at small $R$, and find
the signature of infinite randomness at the (2,2)-(3,1) phase boundary only in
the vicinity of the multicritical point. | Yen-Tung Lin, Shao-Fu Liu, Pochung Chen, Yu-Cheng Lin | 2023-09-08T10:28:40Z | http://arxiv.org/abs/2309.04249v2 | Random singlets and permutation symmetry in the disordered spin-2 Heisenberg chain: A tensor network renormalization group study
###### Abstract
We use a tensor network renormalization group method to study random \(S=2\) antiferromagnetic Heisenberg chains with alternating bond strength distributions. In the absence of randomness, an imposed dimerization with bond alternation induces two quantum critical points between the \(S=2\) Haldane phase, a partially dimerized phase and a fully dimerized phase, depending on the strength of dimerization. These three phases, called (\(\sigma\),\(4-\sigma\))=(2,2), (3,1) and (4,0) phases, are valence-bond solid (VBS) states characterized by \(\sigma\) valence bonds (effective spin-1/2 singlet pairs) forming across even links and \(4-\sigma\) valence bonds on odd links. Here we study the effects of bond randomness on the ground states of the dimerized spin chain, calculating disorder-averaged twist order parameters and spin correlations. We classify the types of random VBS phases depending on strength of bond randomness \(R\) and the dimerization \(D\) using the twist order parameter, which has a negative/positive sign for a VBS phase with odd/even \(\sigma\). Our results demonstrate the existence of a multicritical point in the intermediate disorder regime with finite dimerization, where (2,2), (3,1) and (4,0) phases meet. This multicritical point is at the junction of three phase boundaries in the \(R\)-\(D\) plane: the (2,2)-(3,1) and (3,1)-(4,0) boundaries that extend to zero randomness, and the (2,2)-(4,0) phase boundary that connects another multicritical point in the undimerized limit. The undimerized multicritical point separates a gapless Haldane phase and an infinite-randomness critical line with the diverging dynamic critical exponent in the large \(R\) limit at \(D=0\). Furthermore, we identify the (3,1)-(4,0) phase boundary as an infinite-randomness critical line even at small \(R\), and find the signature of infinite randomness at the (2,2)-(3,1) phase boundary only in the vicinity of the multicritical point.
## I Introduction
The ground state properties of antiferromagnetic Heisenberg spin chains have attracted a lot of attention for many decades, in particular after Haldane's conjecture [1; 2] that half-integer and integer spin chains are distinct from each other. Half-integer spin chains with Heisenberg interactions have ground-state properties generically similar to the exactly solvable spin-1/2 chain [3], which has a gapless excitation spectrum [4] and power-law decaying spin correlations. Integer spin chains, on the other hand, have gapped ground states with exponentially decaying spin correlations [1; 2] and a hidden topological order that is characterized by a nonlocal string correlation function [5]. Qualitative differences are also found between half-integer and integer spin chains when the coupling constants alternate between two different values [6; 7]. The ground state is dimerized by infinitesimal alteration for half-integer spins, while the Haldane phase in an integer spin chain changes to a dimer state via a phase transition only with sufficiently strong bond alternation.
Another theme of high interest in condensed matter physics is the interplay between disorder, interactions and quantum fluctuations. The ground-state properties of low-dimensional quantum systems are often modified dramatically by introducing quenched disorder (i.e. time-independent disorder). Remarkably, there are properties resulting from quenched disorder which are universal for a broad class of quantum spin chains, independent of whether the spin is integer or half-integer. The so-called random-singlet (RS) phase [8] is one of the disorder-induced phases that possess such universal properties. This phase describes the ground state of a spin-1/2 Heisenberg chain with any amount of disorder in the couplings, and is also the ground-state phase of the spin-1 chain in the strong disorder limit, where the excitation spectrum becomes gapless and the string topological order vanishes [9; 10; 11; 12; 13].
The RS phase was first found in the ground state of the random spin-1/2 chain by using the strong-disorder renormalization group (SDRG) method [14; 15; 8; 16]. The SDRG method for the spin-1/2 chain consists of iteratively locking the strongest coupled spin pair into a singlet state, which decouples from the rest of the chain after effective couplings are generated among the remaining spins. This SDRG scheme ultimately flows toward an RS fixed point [8] that asymptotically represents the system's ground state, in which pairs of strongly entangled spins form singlets over all length scales, mostly short ranged but occasionally very long ranged. Those long-ranged singlets are rare; however, they dominate the
_average_ spin-spin correlations that decay asymptotically with distance \(L\) as an inverse-square form \(L^{-2}\). By contrast, the correlations between typical pairs of widely separated spins are very weak and decay exponentially with the square root of their distance. Furthermore, the characteristic energy scale, \(\epsilon\), and length scale, \(L\), of the singlets in the RS phase follow:
\[-\ln\epsilon\sim L^{\psi}\,, \tag{1}\]
with \(\psi=1/2\). This energy-length scaling is very different from the standard dynamic scaling, \(\epsilon\sim L^{-z}\), and implies that the dynamic exponent diverges \(z\to\infty\). The RS fixed point is one example of an infinite-randomness fixed point [17], which is characterized by extremely broad distributions of physical properties, even on a logarithmic scale, leading to the distinction between average and typical behaviors.
The SDRG method has been extended to higher spin chains [18; 19; 20; 10; 9; 11]. In particular, Damle and Huse applied an extended SDRG scheme to disordered spin-\(S\) Heisenberg chains with arbitrary \(S\) and obtained a class of infinite-randomness fixed points, called permutation symmetric fixed points [19]. In the valence-bond (VB) picture for spin \(S>1/2\), each spin \(S\) is replaced with \(2S\) virtual spin-\(1/2\) variables, and valence bonds (singlets) are pairwise created between spin-\(1/2\) variables that belong to different spin-\(S\) sites. For a spin-\(S\) chain, there are \(2S+1\) distinct valence-bond solid (VBS) domains, denoted by \((\sigma,2S-\sigma)\) with \(\sigma\in\{0,1,\cdots,2S\}\); a VBS domain of type \((\sigma,2S-\sigma)\) (type \(\sigma\)) consists of \(\sigma\) spin-\(1/2\) singlets over each even bond and \(2S-\sigma\) spin-\(1/2\) singlets over each odd bond (Fig 1). Among these generalized VBS states, the Haldane phase in an integer spin-\(S\) chain is associated with the symmetric \((S,S)\) domain. A dimerized state \((\sigma,2S-\sigma)\) with \(\sigma\neq S\) can be realized in chains with alternating bond strength distributions. In the extended SDRG scheme, the degrees of freedom are effective spins of magnitude
\[S_{\sigma,\sigma^{\prime}}=\frac{|\sigma-\sigma^{\prime}|}{2}\,,\]
localized at the boundaries between distinct domains of type \(\sigma\) and \(\sigma^{\prime}\). These domain-wall spins interact with spins in neighboring domain walls through effective couplings that can be antiferromagnetic or ferromagnetic, depending on the types of environment domains. The renormalization of strong effective couplings between domain-wall spins under the RG leads to reconfigurations of associated domains. Using this domain-wall picture, Damle and Huse have predicted a series of infinite-randomness multicritical points, \(\mathcal{P}_{n}\), that result from a competition between \(n\) domains. At a \(\mathcal{P}_{n}\) multicritical point, \(n\) domains appear with equal probability; hence the multicritical point is called permutation symmetric critical point. For a \(\mathcal{P}_{n}\) multicritical point, we have
\[\psi_{n}=1/n\,, \tag{2}\]
for the energy-length scaling exponent given in Eq. (1); this is a generalization of the RS fixed point with \(n=2\), where two domains occur with equal probability under the action of the RG.
The Griffiths singularity (also known as the Griffiths-McCoy singularity for quantum systems) [21; 22; 23] is another interesting phenomenon arising from the interplay between quantum fluctuations and randomness. This phenomenon is characterized by singular low-energy behavior of various observable, such as the susceptibility and the specific heat, in an off-critical phase. Griffiths effects occur when there are rare but arbitrarily large spatial regions that are locally in a phase \(B\) due to disorder fluctuations, while the system is overall in a phase \(A\). Quantum fluctuations enhance the effects of the rare regions, leading to a power-law behavior of the density of low-energy excitations such that [24]
\[\rho(\epsilon)\sim\epsilon^{-1+1/z} \tag{3}\]
for a one-dimensional system, with a non-universal continuously varying exponent \(z\) that also describes the length scale and the energy scale through \(\epsilon\sim L^{-z}\). This power-law density of states is responsible for power-law singularities of certain observables at low temperatures. Using the SDRG analysis [18; 25; 9], the generalized VBS states in dimerized spin-\(S\) chains, including dimer phases for any \(S\) and the Haldane phase in an integer-\(S\) chain, in the presence of sufficiently strong disorder have been identified as Griffiths phases, where the energy gaps are filled in yet the topological order (Haldane order and dimer order) persists.
Considerable numerical efforts have been devoted to examine the theoretical predictions about the ground-state phases of random spin \(S>1/2\) chains, but so far have been mainly focused on the \(S=1\) chain [26; 27; 28; 29; 12; 13]. Here we first use a tensor network strong-disorder renormalization group (tSDRG) method [29] to study the zero-temperature phases of the random spin-\(2\) chain with alternating bond strength distributions, which to our knowledge have not been previously investigated numerically.
The paper is organized as follows. In Sec. II we define the model and summarize some known properties of the
Figure 1: Five distinct VBS phases (\(\sigma\), \(4\)-\(\sigma\)) with \(\sigma=0,1,2,3,4\) for a spin-\(2\) chain, where \(\sigma\) denotes the number of spin-\(1/2\) singlets (indicated by the arches) over each even bond.
ground-state phases. In Sec. III we outline the SDRG and tSDRG methods. In Sec. IV we provide our numerical results for the ground-state phases depending on randomness and dimerization, focusing on a VBS order parameter and end-to-end correlations. We conclude in Sec. V with a summary and discussion.
## II The model
The model we consider is the spin-2 antiferromagnetic Heisenberg chain, described by the Hamiltonian:
\[H=\sum_{i}J_{i}\,\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}\,, \tag{4}\]
where \(\mathbf{S}_{i}\) is the spin-2 operator at site \(i\), and \(J_{i}\) are random coupling constants given by
\[J_{i}=K_{i}[1+(-1)^{i}D]\,, \tag{5}\]
where the parameter \(D\), with \(|D|\leq 1\), measures the strength of bond alternation (dimerization), and \(K_{i}\) are random positive variables with the following power-law distribution:
\[P(K)=\begin{cases}R^{-1}K^{-1+1/R},\quad\text{for}\;\;0\leq K\leq 1\,,\\ 0,\quad\text{otherwise}.\end{cases} \tag{6}\]
where \(R>0\), being the standard deviation of \(\ln(K)\), parameterizes the strength of the randomness. This power-law distribution of bond randomness at \(D=0\) has been widely used in previous numerical studies for disordered systems [12; 13; 30; 27; 31].
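Couplings with this distribution can be drawn by inverse-transform sampling; the short sketch below (an illustrative snippet of ours, not the authors' implementation) uses the fact that \(K=u^{R}\) with \(u\sim U(0,1)\) has exactly the density of Eq. (6), and then applies the bond alternation of Eq. (5).

```python
import numpy as np

def random_couplings(L, R, D, rng):
    u = rng.random(L)
    K = u ** R                             # P(K) = R^{-1} K^{-1+1/R} on (0, 1], Eq. (6)
    i = np.arange(1, L + 1)
    return K * (1.0 + (-1.0) ** i * D)     # alternating (dimerized) bonds, Eq. (5)

rng = np.random.default_rng(42)
J = random_couplings(L=512, R=1.5, D=0.1, rng=rng)
# std of ln(K) equals R, consistent with the definition of the randomness strength
```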
The spin-2 model hosts a variety of ground-state phases, depending upon the dimerization \(D\) and the randomness \(R\). In the absence of randomness (i.e. \(R=0\)), there are five gapped states; they are the VBS phases of type \(\sigma=0,1,2,3,4\), changing successively as the dimerization parameter \(D\) varies from \(-1\) to \(1\). The valence bond picture suggests that the elementary excitations at all four domain walls are effective spin-1/2 variables and the phase transitions belong to the level-1 SU(2) Wess-Zumino-Witten universality class [32; 33]. The locations of the domain walls have been determined using the level-crossing method [34] and the ground-state expectation value of a so-called twist operator [35]. The critical values for the \((2,2)-(3,1)\) and \((3,1)-(4,0)\) transitions are found at \(D_{c,1}\approx 0.18\) and \(D_{c,2}\approx 0.55\), respectively. Under interchanging even and odd bonds, the \((1,3)-(2,2)\) and \((0,4)-(1,3)\) transitions occur at \(-D_{c,1}\) and \(-D_{c,2}\), respectively.
According to the Damle-Huse domain-wall picture [19], random-singlet RS\({}_{S}\) phases out of spin \(S=1/2,1,2\) variables and multicritical points \(\mathcal{P}_{n}\) with possible \(n=3,4,5\) can occur when disorder comes into play. Since the undimerized spin-1/2 chain with any weak randomness is in the RS\({}_{1/2}\) phase, the critical domain walls at \(|D_{c,1}|\) and \(|D_{c,2}|\) with effective spin-1/2 are expected to evolve into an RS\({}_{1/2}\) state for \(R>0\). On the other hand, the RS\({}_{2}\) phase, arising from a competition between the \((4,0)\) domain and the \((0,4)\) domain, occurs only in the strong disorder limit at \(D=0\). In the RS\({}_{2}\) phase, spin-2 singlets connect sites on different sublattices over arbitrarily long distances in a random fashion, completely analogous to the RS\({}_{1/2}\) state.
For the spin-2 chain, there are three possible arrangements of random-singlet RS\({}_{S}\) phases and multicritical points \(\mathcal{P}_{n}\) in the \(R\)-\(D\) plane between weak and very strong disorder, as shown in Fig. 2, among which the phase diagram with the maximally symmetric multicritical point \(\mathcal{P}_{5}\) may occur only with some additional fine-tuned parameter in the model [19]. All multicritical points \(\mathcal{P}_{n}\) belong to a set of infinite-randomness fixed points with the critical exponent \(\psi_{n}\) given in Eq. (2), and the correlation-length exponent given by [9; 10; 19]
\[\nu_{n}=\frac{2}{\sqrt{4n+1}-1}\frac{1}{\psi_{n}}\,. \tag{7}\]
In particular, the random-singlet RS\({}_{S}\) phases for all \(S\) correspond to the special case with \(n=2\), where we have \(\psi=1/2\) and \(\nu=2\).
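For later reference, inserting \(\psi_{n}=1/n\) into Eq. (7) gives \(\nu_{n}=2n/(\sqrt{4n+1}-1)\), so that
\[\nu_{2}=\frac{4}{\sqrt{9}-1}=2\,,\qquad\nu_{3}=\frac{6}{\sqrt{13}-1}\simeq 2.3\,,\]
the latter being the value used in the finite-size extrapolation of Sec. IV.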
The off-critical regions, including the Haldane phase associated with the \((2,2)\) domain and the dimerized phases, have excitation gaps and exhibit topological order in the absence of randomness. The topological order in the Haldane phase can be detected by a nonzero limiting value of the generalized string order parameter [36; 5]
\[O^{z}_{j,k}=-\left\langle S^{z}_{j}\exp(i\theta\sum_{l=j+1}^{k-1}S^{z}_{l})S^{ z}_{k}\right\rangle \tag{8}\]
in the \(|j-k|\rightarrow\infty\) limit, where \(S^{z}_{j}\) is the \(z\) component of the spin-2 operator at site \(j\); the angle parameter \(\theta=\pi/2\) is most suitable for the spin-2 case, since \(O^{z}_{j,k}\) then becomes a smooth function of the distance \(|j-k|\)[37], similar to \(\theta=\pi\) for the spin-1 case [5].
The main goal of our numerical study is to identify which diagram in Fig. 2 corresponds to the phase diagram of our spin-2 random chain.
## III The numerical method
### Sdrg
The Hamiltonian of a disordered quantum many-body system has an intrinsic separation in energy scales, which
allows us to find its ground state using the SDRG technique. The essential idea of the SDRG method, introduced in Ref. [14] and [15], is to find the largest term in the Hamiltonian successively and put the subsystem associated with this term into its ground state. The couplings between this subsystem and the rest of the system are treated by perturbation theory and effective couplings across the subsystem are generated. For example, for the random spin-1/2 Heisenberg antiferromagnetic chain, an effective coupling is generated between spins \(k-1\) and \(k+2\) with strength
\[\tilde{J}=\frac{1}{2}\frac{J_{k-1}J_{k+1}}{J_{k}}\,, \tag{9}\]
when \(J_{k}\) is the strongest coupling that locks spins \(k\) and \(k+1\) into a singlet at a certain step of RG. The new energy scale is then the strength of the strongest remaining coupling: \(\Omega=\max\{\tilde{J}\}\). By repeating this RG procedure, we gradually lower the energy scale and reduce the number of degrees of freedom in the system. The RG flow equation describing the evolution of the probability distribution under the RG process has been solved by D. S. Fisher in Ref. [8] for the spin-1/2 chain. The multiplicative relation in Eq. (9) suggests that it is more convenient to measure bond strengths on a logarithmic scale. In terms of logarithmic variables defined as \(\Gamma=-\log(\Omega)\) and \(\zeta=\log(\Omega/\tilde{J})\), the fixed point distribution for the spin-1/2 chain corresponds to
\[P(\zeta)=\frac{1}{\Gamma}\,e^{-\zeta/\Gamma}\,, \tag{10}\]
For higher spin chains, renormalized couplings in the conventional SDRG method may become stronger than the decimated couplings when the randomness is not sufficiently strong, which makes perturbation theory invalid. Thus, several extended SDRG methods, based on effective \(S=1/2\) models with both antiferromagnetic and ferromagnetic couplings, have been proposed for higher spin chains [9; 10; 11; 19; 38]. Using a domain-wall model, Damle and Huse have extended Fisher's RG analysis to arbitrarily high spin. In the domain-wall model, one defines \(\rho_{\sigma}\) to be the probability that a specific domain is of type \(\sigma\) and \(W_{\sigma\sigma^{\prime}}\) to be the probability of a domain of type \(\sigma\) followed by one of type \(\sigma^{\prime}\), which are required to formulate the RG flow equations. The fixed point solution, which controls the multicritical point \(\mathcal{P}_{n}\), is found to be given by the bond strength distribution:
\[P_{\sigma}(\zeta)=\frac{n-1}{\Gamma}\,e^{-(n-1)\zeta/\Gamma}\,,\quad\forall\sigma \tag{11}\]
and
\[\begin{split}&\rho_{\sigma}=1/n,\quad\forall\sigma\,,\\ & W_{\sigma\sigma^{\prime}}=1/(n-1),\quad\forall\sigma\neq\sigma ^{\prime}\,,\end{split} \tag{12}\]
indicating the domain permutation symmetry. From the fixed point solution, one can deduce the energy-length scaling relation \(-\ln\epsilon\sim L^{\psi_{n}}\) with \(\psi_{n}\) given in Eq. (2).
### Tensor networks and SDRG
Here we use a tree tensor-network generalization of the SDRG, referred to as tSDRG, to study the spin-2 random chain. This generalized SDRG scheme, proposed by Goldsborough and Romer [29], formulates the RG procedure as a tree tensor network and refines the perturbative approximation of SDRG by including higher energy states at each RG step, along the lines of a previous SDRG extension [39]. The tSDRG method has been applied in studies of the quantum Ising chain [40] and spin-1 chains [13], where accurate results that are compatible with the results obtained by non-approximate quantum Monte Carlo calculations [41] and the density matrix renormalization group [28; 12] are achieved.
The starting point of tSDRG is to express a one-dimensional Hamiltonian of \(L\) sites as a sum of two-site Hamiltonian in terms of matrix product operators (MPOs) [42]:
\[H=\sum_{i}H_{i,i+1}=W^{[1]}W^{[2]}\cdots W^{[L]}\,, \tag{13}\]
where an MPO \(W^{[i]}\) at site \(i\) is a matrix of operators. Specifically for the model considered in this work, the two-site Hamiltonian reads
\[\begin{split} H_{i,i+1}&=J_{i}\mathbf{S}_{i}\cdot \mathbf{S}_{i+1}\\ &=J_{i}[\frac{1}{2}(S_{i}^{+}S_{i+1}^{-}+S_{i}^{-}S_{i+1}^{+})+S_ {i}^{z}S_{i+1}^{z}]\,.\end{split} \tag{14}\]
where \(S^{+}\) and \(S^{-}\) are the ladder operators. For a chain
Figure 2: Three possible phase diagrams of spin-2 chains in the \(R\)-\(D\) plane, predicted in Ref. [19].
with open boundary conditions (OBC), we have the following explicit form of the \(W\) tensors:
\[W^{[i]}=\begin{pmatrix}1&0&0&0&0\\ S_{i}^{+}&0&0&0&0\\ S_{i}^{-}&0&0&0&0\\ S_{i}^{z}&0&0&0&0\\ 0&(J_{i}/2)S_{i}^{-}&(J_{i}/2)S_{i}^{+}&J_{i}S_{i}^{z}&1\end{pmatrix}\, \tag{15}\]
for a site in the bulk, i.e. \(i\neq 1,L\), and
\[W^{[1]}=\begin{pmatrix}0&(J_{1}/2)S_{1}^{-}&(J_{1}/2)S_{1}^{+}&J_{1}S_{1}^{z}& 1\end{pmatrix}\, \tag{16}\]
\[W^{[L]}=\begin{pmatrix}1\\ S_{L}^{+}\\ S_{L}^{-}\\ S_{L}^{z}\\ 0\end{pmatrix}. \tag{17}\]
for two edge sites \(i=1\) and \(L\). For a chain with periodic boundary conditions (PBC), the MPO tensors for \(i=1\cdots L\) are all bulk tensors as given in Eq. (15), where the coupling \(J_{L}\) links between two end sites \(L\) and \(1\). An important observation is that the two-site Hamiltonian \(H_{i,i+1}\) is encoded in the local matrix product \(W^{[i]}W^{[i+1]}\). It is easy to verify that
\[W^{[i]}W^{[i+1]}=\begin{pmatrix}1&0&0&0&0\\ S_{i}^{+}&0&0&0&0\\ S_{i}^{-}&0&0&0&0\\ S_{i}^{z}&0&0&0&0\\ H_{i,i+1}&\frac{J_{i+1}}{2}S_{i+1}^{-}&\frac{J_{i+1}}{2}S_{i+1}^{+}&J_{i+1}S_{ i+1}^{z}&1\end{pmatrix}. \tag{18}\]
That is, \(H_{i,i+1}=\left(W^{[i]}W^{[i+1]}\right)_{1,5}\). The essential information required for the SDRG procedure is the list of MPOs \(W^{[i]}\) and the list of two-site Hamiltonians \(H_{i,i+1}\).
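As a concrete check of Eqs. (14), (15) and (18), the self-contained NumPy sketch below (ours, not the authors' code) builds the spin-2 operators and the bulk \(W\) tensor, multiplies two neighbouring tensors, and verifies that the corner block of the operator-valued matrix in Eq. (18) that contains \(H_{i,i+1}\) indeed equals \(J_{i}\,\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}\); with the zero-based row/column ordering used here that block is `prod[4, 0]`.

```python
import numpy as np

def spin_ops(s=2):
    d = int(2 * s + 1)
    m = s - np.arange(d)                       # m = s, s-1, ..., -s
    sz = np.diag(m.astype(float))
    sp = np.zeros((d, d))
    for i in range(1, d):                      # <m+1| S^+ |m> = sqrt(s(s+1) - m(m+1))
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return sz, sp, sp.T                        # S^z, S^+, S^-

def bulk_W(J, sz, sp, sm):
    d = sz.shape[0]
    W = np.zeros((5, 5, d, d))                 # 5x5 matrix of d x d operators, Eq. (15)
    W[0, 0] = W[4, 4] = np.eye(d)
    W[1, 0], W[2, 0], W[3, 0] = sp, sm, sz
    W[4, 1], W[4, 2], W[4, 3] = 0.5 * J * sm, 0.5 * J * sp, J * sz
    return W

def mpo_product(W1, W2):
    D, d = W1.shape[0], W1.shape[2]
    out = np.zeros((D, D, d * d, d * d))
    for a in range(D):
        for c in range(D):
            for b in range(D):                 # contract the shared virtual index
                out[a, c] += np.kron(W1[a, b], W2[b, c])
    return out

sz, sp, sm = spin_ops()
Ji = 0.73
prod = mpo_product(bulk_W(Ji, sz, sp, sm), bulk_W(1.0, sz, sp, sm))
H_bond = Ji * (0.5 * (np.kron(sp, sm) + np.kron(sm, sp)) + np.kron(sz, sz))   # Eq. (14)
assert np.allclose(prod[4, 0], H_bond)         # the corner block encodes H_{i,i+1}, Eq. (18)
```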
Similar to the conventional SDRG, in each RG iteration one selects a pair of adjacent sites to be renormalized, depending on the local energy spectrum. Here the selection is based on the largest energy gap, rather than the strongest coupling (corresponding to the lowest gap) [43; 44; 39]. For each two-site Hamiltonian \(H_{i,i+1}\) we identify the energy gap \(\Delta_{i,i+1}\), which is measured as the difference between the highest energy of the \(\chi^{\prime}\)-lowest eigenstates that would be kept and the energy of the \((\chi^{\prime}+1)\)-th eigenstate. We set a _bond dimension_\(\chi\) as the upper bound of the number \(\chi^{\prime}\) to control the accuracy of the calculation. The actual number \(\chi^{\prime}\) is adjusted to keep full SU(2) multiplets. By increasing the bond dimension \(\chi\), we can obtain more accurate results but at the cost of computational resources and time.
After obtaining all local gaps described above we identify the two sites with largest gap. These two sites will be merged into a renormalized site as follows. We first express the \(k\)-th eigenstate \(|\Psi_{k}\rangle\) of \(H_{i,i+1}\) in terms of the local two-site product basis
\[|\Psi_{k}\rangle=\sum_{s_{i},s_{i+1}}\Psi_{k,s_{i}s_{i+1}}|s_{i}\rangle|s_{i+ 1}\rangle, \tag{19}\]
where \(|s_{i}\rangle|s_{i+1}\rangle\) is the local product basis. We then construct a projector that projects the two-site space to the renormalized site space with dimension \(\chi^{\prime}\),
\[V\equiv\sum_{k=1}^{\chi^{\prime}}\sum_{s_{i},s_{i+1}}|s_{i}\rangle|s_{i+1}\rangle\Psi_{k,s_{i}s_{i+1}}\langle\Psi_{k}|. \tag{20}\]
In terms of matrix notation one can write \(V\) as
\[V=\left(\begin{array}{cccc}|\Psi_{1}\rangle&|\Psi_{2}\rangle&\cdots&|\Psi_{ \chi^{\prime}}\end{array}\right)\,, \tag{21}\]
and its tensor network diagram is given in Fig. 3. Note that the tensor \(V\) has the isometric property \(V^{\dagger}V=1\). To obtain the renormalized MPO \(\tilde{W}^{[i,i+1]}\) associated with the renormalized site, one uses \(V\) to renormalize each element of the product \(W^{[i]}W^{[i+1]}\) as follows
\[\left(W^{[i]}W^{[i+1]}\right)_{b,b^{\prime}}\rightarrow\tilde{W}^{[i,i+1]}_{b,b^{\prime}}\equiv V^{\dagger}\left(W^{[i]}W^{[i+1]}\right)_{b,b^{\prime}}V. \tag{22}\]
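A minimal numerical illustration of the truncation in Eqs. (19)-(22) (our sketch; a random Hermitian matrix stands in for \(H_{i,i+1}\), and the SU(2)-multiplet bookkeeping described above is omitted) is:

```python
import numpy as np

d, chi = 5, 10                                 # local dimension and number of kept states
rng = np.random.default_rng(3)
A = rng.normal(size=(d * d, d * d))
H2 = (A + A.T) / 2                             # stand-in Hermitian two-site block
evals, evecs = np.linalg.eigh(H2)              # eigenvalues in ascending order
V = evecs[:, :chi]                             # columns = chi lowest eigenstates, Eq. (21)
assert np.allclose(V.T @ V, np.eye(chi))       # isometric property V^dagger V = 1
H2_renorm = V.T @ H2 @ V                       # renormalization as in Eq. (22)
assert np.allclose(H2_renorm, np.diag(evals[:chi]))
```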
The two-site Hamiltonians that contain the renormalized site can be decoded as follows:
\[I_{i-1}\otimes H_{i,i+1}\rightarrow\tilde{H}_{i-1,[i,i+1]}=\left(W^{[i-1]} \tilde{W}^{[i,i+1]}\right)_{1,5}, \tag{23}\]
\[H_{i,i+1}\otimes I_{i+2}\rightarrow\tilde{H}_{[i,i+1],i+2}=\left(\tilde{W}^{[ i,i+1]}W^{[i+2]}\right)_{1,5}. \tag{24}\]
This completes one iteration of tSDRG. We have now an updated list of MPOs and two-site Hamiltonians. Conceptually, one can re-label the sites so that the renormalized MPO and two-site Hamiltonian are labeled as \(W^{[i]}\) and \(H_{i,i+1}\) respectively. In practice the relabeling is not necessary.
To obtain the ground state of the system, we should repeat the tSDRG iteration until the system contains only two renormalized sites. At this stage we diagonalize the top Hamiltonian \(H_{\text{top}}\) to obtain its ground state \(|\Psi_{1}^{\text{top}}\rangle\). From \(|\Psi_{1}^{\text{top}}\rangle\) and the projector \(V\) at each iteration, one
Figure 3: (a) The three-leg tensor \(V\) (triangles), built from the \(\chi^{\prime}\) lowest energy eigenvectors, truncates a two-site tensor into a renormalized site. The blue-shaded squares are local tensors of the MPO. The vertical legs denote physical indices; the horizontal legs denote virtual indices; (b) Isometric property \(V^{\dagger}V=1\) of the tensor \(V\).
can generate an inhomogeneous binary tree tensor network as sketched in Fig. 4(a). The expectation value of a product of local operators can be obtained by contracting the operators with the tree and its conjugate as sketched in Fig. 4(b). The contraction can be evaluated efficiently thanks to the property that \(V^{\dagger}V=1\).
## IV Numerical results
In this section we use the tSDRG method to explore ground-state phases of the random spin-2 chain with alternating bond strength distributions as defined in Eqs. (5) and (6), focusing on two observables: the VBS order parameter based on a unitary operator appearing in the Lieb-Schultz-Mattis theorem Lieb and Schultz (1963) and the end-to-end correlation function.
### VBS order parameter
We consider a unitary operator, called the twist operator, defined for a chain with \(L\) spins as
\[U=\exp\left[i\frac{2\pi}{L}\sum_{j=1}^{L}jS_{j}^{z}\right]\,, \tag{25}\]
which creates spin-wave-like excitations by rotating each spin about the \(z\) axis with a relative angle. The twist operator was first introduced in the Lieb-Schultz-Mattis theorem Lieb and Schultz (1963); Mattis (1963); Lieb and Schultz (1963), which states that for the ground state, \(\Psi_{\rm GS}\), of a half-integer spin chain, one has
\[z_{L}\equiv\langle\Psi_{\rm GS}|U|\Psi_{\rm GS}\rangle=0\,, \tag{26}\]
in the \(L\to\infty\) limit, indicating a gapless excitation spectrum. Furthermore, the ground-state expectation value of this operator has been found to be capable of detecting and characterizing VBS order Kells _et al._ (2015). For a VBS state of type-\(\sigma\), the asymptotic form of the expectation value is given by Kells _et al._ (2015)
\[z_{L}=(-1)^{\sigma}\left[1-\mathcal{O}(1/L)\right]\,, \tag{27}\]
that is, it is positive (negative) in the \(L\to\infty\) limit if \(\sigma\) is even (odd). Using the properties of \(z_{L}\), ground-state phase diagrams of dimerized spin-\(S\) chains with \(S=1/2,1,3/2\), and \(2\) in the absence of randomness were determined in Ref. Kells _et al._ (2015). Remarkably, this order parameter is applicable also for strongly disordered systems, as demonstrated in a quantum Monte Carlo study for the random spin-1 Heisenberg chain Gogolin _et al._ (2015).
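For a very small chain, \(z_{L}\) can also be evaluated by brute force; the sketch below (ours, for illustration only, and not the tSDRG calculation used in this work) exactly diagonalizes a random dimerized spin-2 chain of \(L=4\) sites with PBC and computes \(z_{L}=\langle\Psi_{\rm GS}|U|\Psi_{\rm GS}\rangle\), using the fact that \(U\) is diagonal in the \(S^{z}\) product basis.

```python
import numpy as np

def spin_ops(s=2):
    d = int(2 * s + 1)
    m = s - np.arange(d)
    sz = np.diag(m.astype(float))
    sp = np.zeros((d, d))
    for i in range(1, d):
        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    return sz, sp, sp.T

def site_op(op, j, L, d=5):
    mats = [np.eye(d)] * L
    mats[j] = op                                     # embed a one-site operator at site j
    out = mats[0]
    for M in mats[1:]:
        out = np.kron(out, M)
    return out

L, R, D = 4, 1.0, 0.9
rng = np.random.default_rng(0)
K = rng.random(L) ** R                               # power-law disorder, Eq. (6)
J = K * (1.0 + (-1.0) ** np.arange(1, L + 1) * D)    # dimerized couplings, Eq. (5)

sz, sp, sm = spin_ops()
Sz = [site_op(sz, j, L) for j in range(L)]
Sp = [site_op(sp, j, L) for j in range(L)]
Sm = [site_op(sm, j, L) for j in range(L)]
H = sum(J[i] * (0.5 * (Sp[i] @ Sm[(i + 1) % L] + Sm[i] @ Sp[(i + 1) % L])
                + Sz[i] @ Sz[(i + 1) % L]) for i in range(L))      # Eq. (4) with PBC
gs = np.linalg.eigh(H)[1][:, 0]                      # ground state (dimension 5^4 = 625)
phase = sum((j + 1) * np.diag(Sz[j]) for j in range(L))
z_L = np.vdot(gs, np.exp(1j * 2 * np.pi / L * phase) * gs)          # Eq. (25)
print(z_L)
```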
Here we calculate \(z_{L}\) using the tSDRG method for the random spin-2 chain with periodic boundary conditions. We explore the behavior of the disorder-averaged order parameter, \(\overline{z}_{L}\), for a wide range of randomness and dimerization, parametrized by \(R\) and \(D\). We restrict ourselves to the \(D>0\) region because the results for \(D<0\) can be obtained simply via the parity symmetry. We have considered system sizes up to \(L=512\) and more than 5,000 random coupling samples to obtain the disorder average. Figure 5 shows the disorder-averaged order parameter for different system sizes at \(R=1.5,1.0\), and \(0.5\) as a function of \(D\). Here, in the cases with \(R=0.5\) and \(R=1.0\), one can clearly observe that the order parameter changes its sign at certain values of \(D\); there are two such zero-crossings for each \(L\), which can be identified as two phase transition points between different random VBS states. The sign of \(\overline{z}_{L}\) indicates that the domains from left (small \(D\)) to right (large \(D\)) are successively in the \((2,2)\)-, \((3,1)\)- and \((4,0)\)-dominated phases. By
Figure 4: (a) The tSDRG algorithm as a tree tensor network for a chain of 8 sites with periodic boundary conditions. The squares indicate the \(W\)-tensors in the MPO representation of the system’s Hamiltonian, the triangles are the \(V\)-tensors used to truncate the two-site operators, and the circles represent the top Hamiltonian encoded in the final two-site tensor. The RG iteration proceeds upwards in the vertical dimension. The part below the W-tensors is the conjugate of the upper part. (b) A tree tensor network for the ground-state expectation value of the end-to-end correlator which acts only on two edge sites. Since \(V^{\dagger}V=1\), only those isometric tensors (orange triangles) that affect the two edge sites are considered.
varying \(R\), the two domain walls, located at the zero-crossings of \(\overline{z}_{L}\), form critical lines that connect the two critical points in the clean limit, i.e. \(D_{c,1}\approx 0.18\) and \(D_{c,2}\approx 0.55\) at \(R=0\). The distance between these two critical lines decreases as \(R\) increases. For the case with \(R=1.5\), one can observe that the minimum value of \(\overline{z}_{L}\) for \(L=512\) (the largest system size that we consider here) is about zero; we can then identify the \((R_{p},D_{p})\) associated with this minimum as the junction of the two critical lines for the finite chain.
The results described above for the disorder-averaged twist order parameter suggest that there is a multicritical point, at which three phases \((2,2)\), \((3,1)\) and \((4,0)\) meet, in the region of \(D\neq 0\). This implies that the diagram in Fig. 2(c) corresponds to the \(R\)-\(D\) phase diagram of our model. To determine the location of this multicritical point in the limit of \(L\to\infty\), we find the \((R_{p}(L),D_{p}(L))\) point at which the disorder-averaged twist order parameter reaches its minimum at \(\overline{z}=0\), and then estimate the critical values for \(R_{p}\) and \(D_{p}\) from an extrapolation to \(L\to\infty\) using
\[R_{p}(L)-R_{p}(\infty)\sim L^{-1/\nu}\,, \tag{28}\]
and
\[D_{p}(L)-D_{p}(\infty)\sim L^{-1/\nu}\,, \tag{29}\]
with \(\nu=2.3\) for a multicritical point \(\mathcal{P}_{3}\) (see Eq. (7)). By doing so, we obtain \(R_{p}(\infty)\approx 1.15\) and \(D_{p}(\infty)\approx 0.04\), as shown in Fig. 6. We note that, for a control parameter \(\lambda\), the deviation of a finite-size pseudocritical point \(\lambda_{c}(L)\) from the true critical point \(\lambda_{c}(\infty)\) in the limit of \(L\to\infty\) is often parameterized as \(\ln(\lambda_{c}(L))-\ln(\lambda_{c}(\infty))\) for an infinite randomness fixed point [8; 48; 49; 50]. For the bond strength distributions given in Eq. (5) and Eq. (6), the distance from the critical point defined in Eq. (28) is thus consistent with such logarithmic parameterization; the distance given in Eq. (29) is also suitable because
Figure 5: Disorder-averaged twist order parameters \(\overline{z}_{L}\) for different system sizes at \(R=1.5\), \(R=1.0\) and \(R=0.5\), plotted versus the dimerization \(D\).
Figure 6: Finite-size scaling of the location \((R_{p},D_{p})\) at which the twist order parameter reaches its minimum at \(\overline{z}_{L}=0\). The multicritical point in the thermodynamic limit is estimated as \(R_{p}\approx 1.15\) and \(D_{p}\approx 0.04\) from an extrapolation to \(L\to\infty\), using Eqs. (28) and (29) with \(\nu=2.3\).
Figure 7: Disorder-averaged twist order parameters \(\overline{z}_{L}\) for different system sizes at \(D=0\), plotted versus the randomness \(R\).
the values of \(D\) we consider here are so small that the approximation \(\ln(1+D)\approx D\) is valid.
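For illustration, the extrapolation of Eqs. (28) and (29) amounts to a simple power-law fit; a minimal SciPy sketch is shown below, where the listed finite-size values are hypothetical placeholders rather than the data of Fig. 6.

```python
import numpy as np
from scipy.optimize import curve_fit

nu = 2.3  # multicritical exponent used in Eqs. (28) and (29)

def scaling_form(L, x_inf, a):
    # x(L) = x(inf) + a * L^(-1/nu)
    return x_inf + a * L ** (-1.0 / nu)

# Hypothetical finite-size estimates (placeholders, not the paper's data).
L_vals = np.array([64.0, 128.0, 256.0, 512.0])
Rp_vals = np.array([1.45, 1.36, 1.28, 1.22])

(Rp_inf, _), _ = curve_fit(scaling_form, L_vals, Rp_vals)
print("extrapolated R_p(infinity) =", Rp_inf)
```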
Now we turn to the undimerized region with \(D=0\). Figure 7 shows the dependence of disorder-averaged twist order parameters on \(R\). Here the twist order parameter for each system size \(L\) is positive before converging to \(\overline{z}_{L}=0\) in the large \(R\) region, consistent with the scenario in which the system's ground state changes from a \((2,2)\) phase to an RS phase when the randomness exceeds a critical value; this critical point at \(D=0\) is a multicritical point \(\mathcal{P}_{3}\) at which the three phases \((2,2)\), \((4,0)\) and \((0,4)\) meet (see Fig. 2(c)). Since the order parameter \(\overline{z}_{L}\) does not change its sign for the \((2,2)\)-RS phase transition nor for a nearby \((2,2)\)-\((4,0)\) transition, it is difficult to determine the transition point accurately by \(\overline{z}_{L}\). Nevertheless, the results shown in Fig. 7 (and also in Fig. 5 (a)) suggest that the critical \(R\) value at \(D=0\) for the largest size \(L=512\) is about \(R\approx 1.5\) (or slightly higher); thus the multicritical point at \(D=0\) and the one at \(D>0\) have critical \(R\) values that are not far apart.
As seen in Fig. 5(c) and Fig. 7, there are intersection points developing at some nonzero values of \(\overline{z}_{L}\). In Ref. [27], such intersection points of \(\overline{z}_{L}\), instead of zero-crossings, were used to identify the multicritical point and the RS critical line for the random spin-1 chain. Here, by exploring a wide range of parameters \((R,D)\) and accessing larger system sizes, we have found that the \(\overline{z}_{L}\) curve crossings appear only in the region of small \(R\), or the crossing points tend toward \(\overline{z}_{L}=0\). Thus, zero order parameter, \(\overline{z}_{L}=0\), turns out to be a more reasonable indicator for a transition point even in disordered systems.
### End-to-end correlations
In this subsection we investigate the end-to-end correlations in an open chain, which considers correlations between two end spins. This quantity is defined as
\[C_{1}(L)=(-1)^{L-1}\langle\Psi_{\rm GS}|\mathbf{S}_{1}\cdot\mathbf{S}_{L}|\Psi_ {\rm GS}\rangle\,, \tag{30}\]
for the ground state \(|\Psi_{\rm GS}\rangle\) of a chain with \(L\) spins and open boundary conditions.
Here we first summarize some previous results for the end-to-end correlations. The end-to-end correlation of an open chain is closely related to the energy gap [51; 52]. In the RS phase the end-to-end correlations are very broadly distributed, with the typical behavior [8],
\[-\ln C_{1}(L)\sim L^{\psi}\,, \tag{31}\]
where \(\psi=1/2\). On the other hand, the average end-to-end correlation function decays as a power of \(L\) at criticality [51],
\[\overline{C}_{1}(L)\sim 1/L^{\eta_{1}}\,, \tag{32}\]
with \(\eta_{1}=1\) in the RS phase. The behavior of the average end-to-end correlation in Eq. (32) was first derived for the infinite-randomness fixed point of the random transverse-field Ising spin chain [51] and is also valid for the RS phase, as verified numerically in Ref. [12; 13]. At the multicritical point \(\mathcal{P}_{3}\), typical end-to-end correlations go like Eq. (31) but with \(\psi=1/3\), according to SDRG analytical results. There have been so far no theoretical conjectures about the exponent in Eq. (32) for the average end-to-end correlations at \(\mathcal{P}_{3}\); nevertheless, previous numerical results [12; 13] for the spin-1 chain estimated
Figure 9: Average end-to-end correlation versus system size for the undimerized case (\(D=0\)) with various \(R\) in a log-log plot. All lines are fits to the form \(aL^{-\eta}\). From the slopes of the fitting lines, corresponding to the exponents \(\eta_{1}\), we estimate \(R\approx 1.2\) for the multicritical \(\mathcal{P}_{3}\) point. For \(R\geq 1.5\), the slope approaches \(\eta_{1}=1\), indicating that the system enters an RS phase; here it is the RS\({}_{2}\) phase.
Figure 8: End-to-end correlations at the multicritical point (\(D=0.04,R=1.15\)). (a): Finite-size dependence of the average correlation; the solid red line has a slope of \(\eta_{1}\approx 0.69\). (b): Distributions of end-to-end correlations for different sizes. (c): A scaling plot for the data in (b), assuming \(-\ln(C_{1})\sim L^{\psi}\) with \(\psi=1/3\).
\(\eta_{1}\approx 0.69-0.7\) for the \(\mathcal{P}_{3}\) multicritical point, which is expected also for the \(\mathcal{P}_{3}\) point in the spin-2 chain. Away from an infinite-randomness critical point, the distribution of end-to-end correlations has a power-law tail that behaves as [52]
\[P(C_{1})\sim C_{1}^{-1+z}\,, \tag{33}\]
with a finite dynamic exponent \(z\); in a Griffiths phase, the singular low-energy behavior of various observables is characterized by a large and continuously variable dynamic exponent \(z>1\).
We first examine the behavior of end-to-end correlations at the \(\mathcal{P}_{3}\) points. For the dimerized cases, we consider open chains with odd numbers of spins to balance the numbers of strong and weak couplings. Figure 8(a) shows the average of the correlations at \(R=1.15\) and \(D=0.04\), which is the location of a multicritical point \(\mathcal{P}_{3}\) according to the finite-size scaling of the twist order parameter discussed in Sec. IV.1. Here we estimate \(\eta_{1}\approx 0.69\) from the results for the average correlation as a function of the chain length \(L\), in good agreement with previous numerical results [12; 13]. Also the distributions of the logarithmic correlations (shown in Fig. 8(b)), which become broader with increasing size, can collapse onto each other by using the scaled variable
\[x=\ln C_{1}/L^{\psi}, \tag{34}\]
with \(\psi=1/3\) [Fig. 8(c)], consistent with the theoretical prediction [19].
For the undimerized case \(D=0\), we show the average correlations plotted against \(L\) for various values of \(R\) in Fig. 9; all data here decay as a power law: \(C_{1}(L)\sim L^{-\eta_{1}}\). From the slopes of the fitting lines, corresponding to the exponents \(\eta_{1}\), we estimate \(R\approx 1.2\) for the multicritical \(\mathcal{P}_{3}\) point by comparing the value of \(\eta_{1}\) with previous numerical results for the \(\mathcal{P}_{3}\) point in the random spin-1 chain [12; 13]. For stronger randomness, such as \(R=1.5\) and \(R=1.6\), the slopes approach \(\eta_{1}=1\), which is the theoretical value for an RS phase; in this case, it is the RS\({}_{2}\) phase.
Now we turn to the correlations on the critical lines at \(D>0\) that are the (2,2)-(3,1) and (3,1)-(4,0) boundaries, determined using the zero-crossings of the \(\overline{z}_{L}\) order parameter for the largest system size \(L=512\). Figure 10 and Figure 11 show the averages and the distributions of end-to-end correlations at four such finite-size critical points. For the critical points, (\(R=1.0,D=0.34\)) and (\(R=0.5,D=0.48\)), at the boundary between the \((3,1)\) and \((4,0)\) phases, our numerical results are found to be in good agreement with the analytic predictions for an RS phase: the average end-to-end correlations decay as \(\overline{C}_{1}(L)\sim 1/L\), as shown in Fig. 10(c) and (d), and the distributions of the correlations, shown in Fig. 11(c) and (d), are extremely broad and become broader with increasing \(L\). Furthermore, the distributions for different system sizes collapse well onto each other by using the scaled variable defined in Eq. (34) with the theoretical value \(\psi=1/2\) for the RS phase, as shown in Fig. 12(c)and (d).
On the other hand, the results for the (2,2)-(3,1) phase boundary are not fully compatible with the scenario of
Figure 10: Finite-size dependence of the average end-to-end correlations at critical points, determined by the zero-crossings of the order parameter \(\overline{z}_{L}\) for \(L=512\), in the region of \(D>0\) at the (2,2)-(3,1) phase boundary [(a) and (b)] and the (3,1)-(4,0) boundary [(c) and (d)]. The mean correlations have a power-law decay; the exponent \(\eta_{1}\) at the (3,1)-(4,0) boundary appears to approach 1, consistent with the theoretical prediction for the RS\({}_{1/2}\) phase, while \(\eta_{1}\) at the (2,2)-(3,1) boundary is much smaller than 1.
Figure 11: The distributions of the end-to-end correlations for different sizes at the same critical points as investigated in Fig. 10. The distributions for (\(R=1.0,D=0.12\)) [Subfigure (a)] located at the (2,2)-(3,1) boundary with sufficiently strong disorder and the distributions at the (3,1)-(4,0) boundary [(c) and (d)] are broad and become broader with increasing \(L\). The correlations for (\(R=0.5,D=0.14\)) [(b)] at the (2,2)-(3,1) boundary with smaller \(R\), on the other hand, are not very broadly distributed, and the data curves for different sizes are very similar, implying the critical phase is not of infinite-randomness type.
an RS phase. The mean end-to-end correlations \(\overline{C}_{1}(L)\) at the (2,2)-(3,1) phase boundary, as shown in Fig. 10(a) and (b) for the two critical points (\(R=1.0,D=0.12\)) and (\(R=0.5,D=0.14\)), appear to decay much slower than \(1/L\), contrary to the behavior in an RS phase. In particular, the distribution of the correlations in the region of small \(R\), such as at (\(R=0.5,D=0.14\)) (see Fig. 11(b)), does not broaden with increasing size, implying that the phase is not associated with an infinite-randomness critical point. Assuming the scaling form given in Eq. (33), we estimate the dynamic exponent \(z\approx 0.41\) from the slope of the power-law tail of the distribution, which gives the scaling plot shown in Fig. 12(b). With stronger disorder at (\(R=1.0,D=0.12\)), the distribution becomes broader with increasing size (see Fig. 11(a)), showing the signature of infinite randomness. We have used the scaled variable in Eq. (34) and \(\psi=1/3\) to achieve good data collapse, as shown in Fig. 12(a). Strong finite-size effects and the close distance from the multicritical point at \(R=1.15\) may lead to the discrepancy between the exponent \(\psi\) found here and the theoretically predicted value \(\psi=1/2\) for an RS\({}_{1/2}\) phase.
The small dynamic exponent \(z<1\) estimated for (\(R=0.5,D=0.14\)) does not lead to divergence of the local susceptibility [24; 12], indicating there is no pronounced disorder-induced singular behavior. Another way, suggested in Ref. [12], to identify a nonsingular region with \(z<1\) or a singular region with \(z>1\) is via the so-called inverse average of the end-to-end correlation, defined as
\[C_{1}^{\rm inv}=\left(\overline{C_{1}^{-1}}\right)^{-1}\,, \tag{35}\]
where \(\overline{C_{1}^{-1}}\) is the average of the inverses. The inverse average of \(C_{1}\) is finite, \(C_{1}^{\rm inv}>0\) if \(z<1\), while it is zero if \(z>1\). As shown in Fig. 13, the inverse average \(C_{1}^{\rm inv}\) at \(R=0.5\) is finite for a wide range of \(D\) before it converges to zero at \(D\approx 0.4\) for larger \(L\), while the inverse average vanishes at \(R=1\) independent of \(D\). The results for the inverse average \(C_{1}^{\rm inv}\) also imply that there is no random-singlet phase at \(R=0.5\) in the region with small dimerization \(D<0.4\). This large nonsingular region also gives rise to the weak signature of infinite randomness in the larger \(R\) region at the (2,2)-(3,1) phase boundary.
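For completeness, the inverse average of Eq. (35) is straightforward to evaluate from the disorder samples, e.g.:

```python
import numpy as np

def inverse_average(C1_samples):
    """Inverse average of Eq. (35): ( mean(1/C_1) )^(-1) over disorder samples."""
    C1_samples = np.asarray(C1_samples, dtype=float)
    return 1.0 / np.mean(1.0 / C1_samples)
```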
## V Summary and Discussion
Using a tensor-network SDRG method, we have explored the ground-state phases of the random spin-2 antiferromagnetic chain with alternating bond strength distributions. We have calculated the twist order parameter, defined as the ground-state expectation value of the unitary operator in Eq. (25), to classify the types of random VBS phases depending on the strength of bond randomness \(R\) and the dimerization \(D\). For a disorder-free VBS phase (\(\sigma,4-\sigma\)) in a clean system, the twist order parameter is positive if \(\sigma\) is even and negative if \(\sigma\) is odd [35]. In a random VBS domain, there is nonzero residual VBS order (dimerization) that can be detected by the disorder average of this order parameter, as we have demonstrated in this paper. Therefore, the zero-crossing of the disorder-average twist order parameter can serve to determine the phase transition point between different random VBS states, in the same way as for clean
Figure 12: Scaling plots of the distributions in Fig. 11. The correlations in (c) and (d), at the the (3,1)-(4,0) phase boundary, are rescaled as \(\ln C_{1}/L^{\psi}\) with \(\psi=1/2\), and the data in (a) for a critical point at the (2,2)-(3,1) phase boundary are plotted in terms of the same rescaled variable but with \(\psi=1/3\) to achieve good data collapse. The correlations in (b), for a critical point at the (2,2)-(3,1) phase boundary with smaller \(R\) and \(D\), are rescaled by assuming \(P(C_{1})\sim C_{1}^{-1+1/z}\) with \(z=0.41\), which is estimated from the slopes of the tails at small values of \(C_{1}\).
Figure 13: The inverse average of the end-to-end correlation, defined in Eq. (35), versus the strength of dimerization \(D\).
systems [35]. Our results largely agree with the phase diagram sketched in Fig. 2(c). There is a multicritical point in the intermediate disorder regime with finite dimerization, where the three phases (2,2), (3,1) and (4,0) meet. The (2,2)-(3,1) phase boundary and the (3,1)-(4,0) boundary extend to \(R=0\) and are predicted to be in the RS\({}_{1/2}\) phase for any \(R>0\) [19]. However, from the results for end-to-end correlations we see no signs of an RS phase at the (2,2)-(3,1) boundary with small \(R\) and have instead found a large nonsingular region, characterized by \(z<1\), in the small \(R\) regime. Such a nonsingular region with \(z<1\) in the weak disorder limit has previously also been identified in a DMRG study [12] and in a tSDRG study [13] of the random undimerized (\(D=0\)) spin-1 chain; both studies used the same power-law distribution of bond randomness, which is also identical to the distribution we consider here for the undimerized case.
The nonsingular behavior in the weak disorder and weak dimerization limit and the absence of the RS\({}_{1/2}\) phase there can have more than one source. Compared to the Haldane gap value of \(\approx 0.41J\) in the spin-1 case with nearest-neighbor coupling \(J\), the finite gap in the clean spin-2 chain is much smaller, only about \(0.09J\). Even so, this small energy gap appears to have considerable impact on the ground-state properties of the (2,2) phase at weak disorder. Furthermore, we did not constrain the strength difference between odd and even bonds when using the bond strength distribution given in Eq. (5) in our calculations; this may produce unsharp dimerization for small systems, especially in the small \(D\) limit, which in turn leads to a weak critical signature or the absence of the RS\({}_{1/2}\) phase at the (2,2)-(3,1) boundary. In this regard we present in Fig. 14 the average energy gap of the chain with PBC at \(R=0.5\), obtained from the lowest-lying excitation of the top Hamiltonian \(H_{\rm top}\), versus the dimerization \(D\). Here we observe a clear (local) minimum of the average energy gap at \(D=0.48\) for all system sizes, which is also the (3,1)-(4,0) transition point (indicated by the right dashed line) as estimated from the zero-crossing of the twist order parameter. On the other hand, in the small \(D\) region (corresponding to the nonsingular region), the curve for the largest size \(L=512\) develops a less clear minimum around the estimated (2,2)-(3,1) transition point (indicated by the left dashed line), while the curves for smaller sizes are flat in this region, showing strong finite-size effects.
The twist order parameter is a useful quantity, also for disordered systems, to determine the phase transition point between different random VBS states based on changes of the sign according to the valence-bond configuration. However, the phase diagram of the spin-2 chain considered here has a (2,2)-(4,0) phase boundary where the twist order parameter does not change the sign, which makes it difficult to detect this phase transition. It would be desirable to find an order parameter that can accurately determine the (2,2)-(4,0) phase boundary which connects two \(\mathcal{P}_{3}\) multicritical points.
Recently, there has been increasing interest in the properties of higher-spin materials from both the theoretical and experimental perspective, especially in the context of Kitaev models and quantum spin liquids [53; 54; 55]. Like the spin-2 chain that we consider here, the quasi-one-dimensional versions of these higher-spin materials exhibit rich phase diagrams [56; 57]. Concerning disorder effects on the low-temperature phases, the tSDRG method and its variants [58; 59; 60; 61; 62; 63] are certainly the most promising numerical tools for studying large-scale systems with accuracy.
###### Acknowledgements.
This work was supported by National Science and Technology Council (NSTC) of Taiwan under Grants No. 111-2119-M-007-009, 112-2119-M-007-008, 112-2112-M-004-008, 111-2112-M-004-005, 110-2112-M-007-037-MY3. We also acknowledge support from the NCTS.
|
2309.15422 | Computing Permanents and Counting Hamiltonian Cycles Faster | We show that the permanent of an $n\times n$ matrix of
$\operatorname{poly}(n)$-bit integers and the number of Hamiltonian cycles of
an $n$-vertex graph can both be computed in time $2^{n-\Omega(\sqrt{n})}$,
improving an earlier algorithm of Bj\"orklund, Kaski, and Williams
(Algorithmica 2019) that runs in time $2^{n - \Omega\left(\sqrt{n/\log \log
n}\right)}$.
A key tool of our approach is to design a data structure that supports fast
"$r$-order evaluation" of permanent and Hamiltonian cycles, which cooperates
with the new approach on multivariate multipoint evaluation by Bhargava, Ghosh,
Guo, Kumar, and Umans (FOCS 2022). | Baitian Li | 2023-09-27T06:05:59Z | http://arxiv.org/abs/2309.15422v1 | # Computing permanents and counting Hamiltonian cycles faster
###### Abstract.
We show that the permanent of an \(n\times n\) matrix of \(\operatorname{poly}(n)\)-bit integers and the number of Hamiltonian cycles of an \(n\)-vertex graph can both be computed in time \(2^{n-\Omega(\sqrt{n})}\), improving an earlier algorithm of Bjorklund, Kaski, and Williams (Algorithmica 2019) that runs in time \(2^{n-\Omega\left(\sqrt{n/\log\log n}\right)}\).
A key tool of our approach is to design a data structure that supports fast "\(r\)-order evaluation" of permanent and Hamiltonian cycles, which cooperates with the new approach on multivariate multipoint evaluation by Bhargava, Ghosh, Guo, Kumar, and Umans (FOCS 2022).
Institute for Interdisciplinary Information Sciences, Tsinghua University
_E-mail address_: [email protected]
## 1. Introduction
Given an \(n\times n\) matrix \(A\) over a commutative ring \(R\), the \(R\)-Permanent is defined by
\[\operatorname{per}A=\sum_{\sigma\in S_{n}}\prod_{i=1}^{n}A_{i,\sigma(i)},\]
where \(S_{n}\) denotes the permutations of order \(n\). Similarly, the \(R\)-HamCycles is defined by
\[\operatorname{hc}A=\sum_{\begin{subarray}{c}\sigma\in S_{n}\\ c(\sigma)=1\end{subarray}}\prod_{i=1}^{n}A_{i,\sigma(i)},\]
where \(c(\sigma)\) denotes the number of cycles in \(\sigma\).
The permanent and Hamiltonian cycles are two fundamental problems in computer science. The problem of deciding whether a given graph has a Hamiltonian cycle is one of Karp's 21 NP-complete problems [11]. Valiant proved that, over the integers, computing the permanent is #P-complete, even when the entries of the matrix are restricted to 0 and 1 [15], and counting Hamiltonian cycles is also #P-complete [16].
Ryser's formula [14] shows that the permanent can be computed with \(O(n2^{n})\) arithmetic operations. It remains a prominent open problem whether the permanent can be computed by arithmetic circuits of size less than \(2^{n}\), as mentioned by Knuth in _The Art of Computer Programming_ [13, Exercise 4.6.4.11].
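For reference, a direct (unoptimized) transcription of Ryser's formula is sketched below; it uses \(O(2^{n}n^{2})\) operations, whereas the \(O(n2^{n})\) bound requires iterating over the subsets in Gray-code order.

```python
from itertools import combinations

def ryser_permanent(A):
    """Permanent of a square matrix A (list of lists) via Ryser's formula:
    per(A) = sum over nonempty S of (-1)^(n-|S|) * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        sign = (-1) ** (n - r)
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)
            total += sign * prod
    return total

assert ryser_permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3
```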
Indeed, beyond the confines of arithmetic operations, faster algorithms for computing the permanent have emerged. Bax and Franklin [1] gave an algorithm that computes the 01-permanent in \(2^{n-\Omega(n^{1/3}/\log n)}\) expected time. Bjorklund [4] introduced the self-reduction paradigm for both the permanent and Hamiltonian cycles. Leveraging tabulation and the Chinese remainder theorem, Bjorklund's algorithm achieved time complexity \(2^{n-\Omega\left(\sqrt{n/\log n}\right)}\). His work was subsequently improved by Bjorklund, Kaski and Williams [6], who apply the construction of Kakeya sets to reduce the tabulation size for multivariate polynomial evaluation and give an algorithm running in time \(2^{n-\Omega\left(\sqrt{n/\log\log n}\right)}\); furthermore, their algorithm applies to a more general kind of polynomial called the fermionant.
## 2. Preliminaries
### Notation
We consider the permanent case of a finite field \(\mathbb{F}_{q}\) with \(q\) elements.
For an \(n\times m\) matrix \(A\), for subsets \(S\subseteq[n]\) and \(T\subseteq[m]\), we use \(A_{S,T}\) to denote the submatrix of \(A\) with rows indexed by \(S\) and columns indexed by \(T\).
Let \(\binom{n}{\downarrow m}\) to denote the partial sum of binomials, i.e.,
\[\binom{n}{\downarrow m}=\sum_{0\leq i\leq m}\binom{n}{i}.\]
### Inequality for Binomial
We need the estimate of the partial sum of binomials, see [10, Lemma 3.13] for a proof.
**Lemma 1**.: _For \(\alpha\in(0,1/2)\), we have_
\[\binom{n}{\downarrow\alpha n}\leq 2^{nH(\alpha)},\]
_where \(H(\alpha)=-\log_{2}(\alpha^{\alpha}(1-\alpha)^{1-\alpha})\)._
### Hermite Interpolation
We need the following lemma for Hermite interpolation, see [17, Section 5.6] for a proof.
**Lemma 2**.: _Let \(f(t)\in\mathbb{F}[t]\) be a polynomial of degree less than \(d\), let \(\tau_{1},\ldots,\tau_{m}\) be \(m\) distinct points in \(\mathbb{F}\), and let the multiplicities \(e_{1},\ldots,e_{m}\) be positive integers such that \(e_{1}+\cdots+e_{m}=d\). Given \(f\bmod(t-\tau_{i})^{e_{i}}\) for each \(i\in[m]\), \(f\) can be recovered in \(\operatorname{poly}(d)\)\(\mathbb{F}\)-operations._
In particular, our algorithm uses the case where the points \(\tau_{1},\ldots,\tau_{m}\) are all the elements of a finite field \(\mathbb{F}_{q}\), and \(e_{i}=r\) for all \(i\).
**Corollary 2**.: _Let \(f(t)\) be a polynomial of degree less than \(qr\), and given \(f\bmod(t-\alpha)^{r}\) for all \(\alpha\in\mathbb{F}_{q}\), \(f\) can be recovered in \(\operatorname{poly}(qr)\)\(\mathbb{F}_{q}\)-operations._
### Multimodular Reduction
Our algorithm uses the Chinese remainder theorem to reduce the problem to small finite fields.
**Theorem 2**.: _Let \(p_{1},\ldots,p_{n}\) be distinct primes, and \(a_{1},\ldots,a_{n}\) be integers such that \(0\leq a_{i}<p_{i}\). Let \(M=p_{1}\cdots p_{n}\). Then there exists a unique integer \(a\) with \(0\leq a<M\) such that \(a\equiv a_{i}\pmod{p_{i}}\) for every \(i\in[n]\). Moreover, \(a\) can be computed in time \(\operatorname{poly}(\log M)\)._
See [17, Section 10.3] for a proof.
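A naive sketch of the reconstruction in Theorem 2 for pairwise coprime moduli is given below; the stated \(\operatorname{poly}(\log M)\) bound refers to the algorithms cited above, not to this illustration.

```python
def crt(residues, moduli):
    """The unique 0 <= a < prod(moduli) with a = residues[i] (mod moduli[i]),
    for pairwise coprime moduli (naive reconstruction)."""
    M = 1
    for p in moduli:
        M *= p
    a = 0
    for r, p in zip(residues, moduli):
        Mi = M // p
        a = (a + r * Mi * pow(Mi, -1, p)) % M   # pow(Mi, -1, p): modular inverse
    return a

assert crt([2, 3, 2], [3, 5, 7]) == 23
```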
We also need an estimate on the product of primes.
**Lemma 3**.: _For an integer \(N\geq 2\), the product of the primes \(\leq 16\log N\) is greater than \(N\)._
See [12, Lemma 2.4] for a proof.
## 3. Common Framework
In this section, we set up the common framework for computing permanents and counting Hamiltonian cycles.
### Self Reduction
We borrow the following two lemmas from [4, Lemma 3, Lemma 4].
**Lemma 4**.: _Suppose \(|\mathbb{F}|\geq k^{2}+1\), given a matrix \(A\in\mathbb{F}^{n\times n}\), one can compute \(m=2^{n-k}n^{O(1)}\) instances \(a_{i}\in\mathbb{F},F_{i}\in\mathbb{F}^{k\times k}\) such that_
\[\operatorname{per}(A)=\sum_{i=1}^{m}a_{i}\operatorname{per}(F_{i}).\]
_And the computation of these instances takes \(2^{n-k}n^{O(1)}\)\(\mathbb{F}\)-operations._
**Lemma 5**.: _Suppose \(|\mathbb{F}|\geq k^{2}+1\), given a matrix \(A\in\mathbb{F}^{n\times n}\), one can compute \(m=2^{n-k}n^{O(1)}\) instances \(a_{i}\in\mathbb{F},F_{i}\in\mathbb{F}^{k\times k}\) such that_
\[\operatorname{hc}(A)=\sum_{i=1}^{m}a_{i}\operatorname{hc}(F_{i}).\]
_And the computation of these instances takes \(2^{n-k}n^{O(1)}\)\(\mathbb{F}\)-operations._
### Kakeya Set
We borrow the definition and construction of Kakeya sets mentioned in [6].
**Definition 1**.: A set \(K\subseteq\mathbb{F}_{q}^{m}\) is said to be a _Kakeya set_ of degree \(u\), if for every \(a_{1},\ldots,a_{m}\in\mathbb{F}_{q}\), there exists degree \(u\)-polynomials \(g_{1},\ldots,g_{m}\), such that the degree \(u\) coefficient of \(g_{i}\) is \(a_{i}\), and the set
\[\{(g_{1}(\tau),\ldots,g_{m}(\tau)):\tau\in\mathbb{F}_{q}\}\]
is a subset of \(K\).
**Theorem 3**.: _Let \(u\) be a positive integer such that \(u+1\) divides \(q-1\). Then there is a Kakeya set \(K\) of degree \(u\) in \(\mathbb{F}_{q}^{m}\) of size at most \((\frac{q-1}{u+1}+1)^{m+1}\). Such \(K\) can be constructed in time \(O(q|K|)\) and for each point \(\mathbf{a}=(a_{1},\ldots,a_{m})\in\mathbb{F}_{q}^{m}\), the coefficients of the corresponding polynomials \(g_{1},\ldots,g_{m}\) can be computed in time \(\operatorname{poly}(u,m)\)._
### Reveal Information From Derivative
We first define \(r\)-order evaluation.
**Definition 2**.: Let \(P\) be a polynomial over \(m\) indeterminates. We call a data structure that supports the following operation an \(r\)_-order evaluation_ of \(P\) at \(\mathbf{a}\in\mathbb{F}_{q}^{m}\): given a tuple of polynomials \(\mathbf{f}(t)=(f_{1}(t),\ldots,f_{m}(t))\), where each \(f_{i}(t)\in\mathbb{F}_{q}[t]\) has degree less than \(r\) and \(\mathbf{f}(0)=\mathbf{a}\), compute \(P(\mathbf{f}(t))\bmod t^{r}\).
We rephrase the idea of [2] to reveal information from derivative in the language of \(r\)-order evaluation.
**Theorem 4**.: _Let \(P\) be a homogeneous degree \(k\) polynomial over \(m\) indeterminates, and \(b\) be a positive integer such that \(q\equiv 1\pmod{b}\). Let \(u=(q-1)/b-1\) and \(r=\lceil k/b\rceil\). Let \(K\) be a Kakeya set of degree \(u\), with constructed data structures for \(r\)-order evaluation at all points of \(K\). Given any point \(\mathbf{a}\) and the associated curve \(C_{\mathbf{a}}(t)=(g_{1}(t),\ldots,g_{m}(t))\), we can compute \(P(\mathbf{a})\) within \(q\) queries of \(r\)-order evaluation and \(\operatorname{poly}(k,q)\) arithmetic operations over \(\mathbb{F}_{q}\)._
Proof.: By the definition of Kakeya sets, it's guaranteed that \(C_{\mathbf{a}}(\tau)\in K\) for all \(\tau\in\mathbb{F}_{q}\). The polynomial \(P(C_{\mathbf{a}}(t))\) is of degree \(ku\). Write \(P(x_{1},\ldots,x_{m})\) with
\[P(x_{1},\ldots,x_{m})=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\in\mathbb{ N}\\ i_{1}+\cdots+i_{m}=k\end{subarray}}p_{i_{1},\ldots,i_{m}}x_{1}^{i_{1}}\cdots x _{m}^{i_{m}},\]
since \(g_{i}(t)=a_{i}t^{u}+O(t^{u-1})\), we have
\[P(C_{\mathbf{a}}(t)) =\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\in\mathbb{N}\\ i_{1}+\cdots+i_{m}=k\end{subarray}}p_{i_{1},\ldots,i_{m}}g_{1}(t)^{i_{1}}\cdots g_{m}(t)^{i_{m}}\] \[=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\in\mathbb{N}\\ i_{1}+\cdots+i_{m}=k\end{subarray}}p_{i_{1},\ldots,i_{m}}(a_{1}t^{u}+O(t^{u-1}))^{i_{1}}\cdots(a_{m}t^{u}+O(t^{u-1}))^{i_{m}}\] \[=\sum_{\begin{subarray}{c}i_{1},\ldots,i_{m}\in\mathbb{N}\\ i_{1}+\cdots+i_{m}=k\end{subarray}}p_{i_{1},\ldots,i_{m}}(a_{1}^{i_{1}}\cdots a_{m}^{i_{m}}t^{ku}+O(t^{ku-1}))\] \[=P(\mathbf{a})t^{ku}+O(t^{ku-1}),\]
we have the coefficient of \(t^{ku}\) in \(P(C_{\mathbf{a}}(t))\) is \(P(\mathbf{a})\).
By the choice of \(u\), we have \(ku=k((q-1)/b-1)<qk/b\leq qr\). Let \(Q(t)=P(C_{\mathbf{a}}(t))\). If we are given \(Q\bmod(t-\tau)^{r}\) for each \(\tau\in\mathbb{F}_{q}\), by Hermite interpolation, we can recover \(Q\) in \(\operatorname{poly}(qr)\) time. So the problem reduces to compute \(P(C_{\mathbf{a}}(t))\bmod(t-\tau)^{r}\) for each \(\tau\in\mathbb{F}_{q}\).
In order to compute \(Q(t)\bmod(t-\tau)^{r}\), one can write \(Q(t)=R(t)+(t-\tau)^{r}D(t)\) where \(\deg R<r\); then \(R(t)\) is the desired result. Thus we have \(Q(t+\tau)=R(t+\tau)+t^{r}D(t+\tau)\), so we can compute \(Q(t+\tau)\bmod t^{r}\), and then reveal \(Q(t)\bmod(t-\tau)^{r}\) by substituting \(t\gets t-\tau\). The conversion of coefficients only takes \(\operatorname{poly}(r)\) arithmetic operations over \(\mathbb{F}_{q}\). Thus we only need to compute \(P(C_{\mathbf{a}}(t+\tau))\bmod t^{r}\) for each \(\tau\in\mathbb{F}_{q}\); this is exactly an \(r\)-order evaluation at \(C_{\mathbf{a}}(\tau)\).
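The coefficient conversion used above is a Taylor shift; a naive \(O(d^{2})\) sketch over a prime field is shown below (the function name and the quadratic running time are choices of this illustration only).

```python
from math import comb

def taylor_shift(coeffs, tau, p):
    """Coefficients (lowest degree first) of f(t + tau) mod p, given those of f(t).
    Uses f(t + tau) = sum_i c_i * sum_k C(i, k) * tau^(i-k) * t^k."""
    n = len(coeffs)
    out = [0] * n
    for i, c in enumerate(coeffs):
        for k in range(i + 1):
            out[k] = (out[k] + c * comb(i, k) * pow(tau, i - k, p)) % p
    return out

# f(t) = t^2 + 1 over F_7, shifted by tau = 3: f(t + 3) = t^2 + 6t + 10 = t^2 + 6t + 3 (mod 7)
assert taylor_shift([1, 0, 1], 3, 7) == [3, 6, 1]
```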
## 4. Data Structure for Permanent
**Lemma 6**.: _For a commutative ring \(R\) and matrices \(A,B\in R^{n\times n}\), we have_
\[\operatorname{per}(A+B)=\sum_{\begin{subarray}{c}S,T\subseteq[n]\\ |S|=|T|\end{subarray}}\operatorname{per}(B_{S,T})\operatorname{per}(A_{[n] \setminus S,[n]\setminus T}).\]
Proof.: We give a combinatorial proof. The permanent \(\operatorname{per}(A+B)\) is a summation, over the perfect matchings of the complete bipartite graph \(K_{n,n}\), of the product of edge weights. By expanding the product of each \((A+B)_{i,j}\), this is equivalent to coloring each selected edge with one of two colors \(A\) and \(B\), and taking the product of the weights of the edges with the selected color. On the other hand, we can first determine the vertices whose matching edge is colored by \(B\); let those vertices on the left side be \(S\) and those on the right side be \(T\). Then the contribution of such a coloring is \(\operatorname{per}(B_{S,T})\operatorname{per}(A_{[n]\setminus S,[n]\setminus T})\).
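As a sanity check (not part of the algorithm), Lemma 6 can be verified by brute force on a small random instance:

```python
import numpy as np
from itertools import permutations, combinations

def per(M):
    """Permanent by definition (fine for tiny matrices)."""
    n = M.shape[0]
    if n == 0:
        return 1
    return sum(np.prod([M[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(0)
n = 3
A = rng.integers(0, 5, (n, n))
B = rng.integers(0, 5, (n, n))

rhs = 0
for k in range(n + 1):
    for S in combinations(range(n), k):
        for T in combinations(range(n), k):
            Sc = [i for i in range(n) if i not in S]
            Tc = [j for j in range(n) if j not in T]
            rhs += per(B[np.ix_(S, T)]) * per(A[np.ix_(Sc, Tc)])

assert per(A + B) == rhs
```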
**Theorem 5**.: _Given a matrix \(A\in\mathbb{F}^{k\times k}\) and a positive integer \(r\), one can precompute a data structure in time \(\tilde{O}\left({k\choose{\downarrow r}}^{2}2^{k}\right)\) and then answer each \(r\)-order evaluation query \(\operatorname{per}(F(t))\bmod t^{r}\) in time \(\tilde{O}\left({k\choose{\downarrow r}}^{2}\right)\). Both bounds are measured in \(\mathbb{F}\)-operations._
Proof.: We write \(F(t)=A+B(t)\), where \(B(t)\) has no constant term.
Note that since \(B(t)\) has no constant term, \(\operatorname{per}(B_{S,T})\) is divisible by \(t^{|S|}\), so when \(|S|\geq r\) the term \(\operatorname{per}(B_{S,T})\) does not contribute to the result modulo \(t^{r}\). Let \(f(S,T)=\operatorname{per}(B_{S,T})\); these values can be computed via dynamic programming, described as follows.
For the base case, we have \(f(\emptyset,\emptyset)=1\).
For \(0<|S|=|T|<r\), let \(s\) be a member of \(S\), by enumerating the matching vertex \(t\) of \(s\), we have
\[f(S,T)=\sum_{t\in T}B_{s,t}f(S\setminus\{s\},T\setminus\{t\}).\]
After computing all \(f(S,T)\), we can compute
\[\operatorname{per}(A+B)\equiv\sum_{\begin{subarray}{c}S,T\subseteq[k]\\ |S|=|T|<r\end{subarray}}f(S,T)g_{S,T}\pmod{t^{r}},\]
where each \(g_{S,T}=\operatorname{per}(A_{[k]\setminus S,[k]\setminus T})\) can be precomputed via Ryser's formula [14] in time \(\tilde{O}(2^{k})\).
The precomputation time is
\[\sum_{0\leq j<r}{k\choose j}^{2}\cdot\tilde{O}(2^{k})\leq\left(\sum_{0\leq j< r}{k\choose j}\right)^{2}\cdot\tilde{O}(2^{k})=\tilde{O}\left({k\choose{ \downarrow r}}^{2}2^{k}\right),\]
and each query takes time
\[\tilde{O}\left(\sum_{0\leq j<r}{k\choose j}^{2}\right)=\tilde{O}\left({k \choose{\downarrow r}}^{2}\right).\qed\]
Note that when \(r=\alpha k\) for some \(0<\alpha<1/2\), by Lemma 1, precomputation takes time \(\tilde{O}(2^{(1+2H(\alpha))k})\), and each query takes time \(\tilde{O}(2^{2H(\alpha)k})\).
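For concreteness, a sketch of the subset dynamic programming from the proof of Theorem 5 is given below; it works over plain Python numbers, whereas in the algorithm the entries of \(B\) are truncated power series over \(\mathbb{F}_{q}\).

```python
from itertools import combinations

def small_permanents(B, r):
    """f(S, T) = per(B_{S,T}) for all |S| = |T| < r via the recurrence
    f(S, T) = sum_{t in T} B[s][t] * f(S - {s}, T - {t}), with s a fixed element of S."""
    k = len(B)
    f = {(frozenset(), frozenset()): 1}
    for size in range(1, r):
        for S in combinations(range(k), size):
            s, rest = S[0], frozenset(S[1:])
            for T in combinations(range(k), size):
                f[(frozenset(S), frozenset(T))] = sum(
                    B[s][t] * f[(rest, frozenset(T) - {t})] for t in T
                )
    return f
```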
## 5. Data Structure for Hamiltonian Cycles
In [8], Hamiltonian cycles were counted via degree-restricted spanning trees, which was used to count undirected Hamiltonian cycles in time exponential in the treewidth. We give a directed version.
Let \(\sigma\in S_{n}\) be a permutation, let \(P_{\sigma}\) denote the permutation matrix associated with \(\sigma\), such that \((P_{\sigma})_{i,j}=[j=\sigma(i)]\).
**Lemma 7**.: _For a permutation \(\sigma\in S_{n}\),_
\[\det(I-P_{\sigma})_{[n]\setminus\{1\},[n]\setminus\{1\}}=[c(\sigma)=1].\]
Proof.: Consider a directed graph \(G\) with directed edges \((i,\sigma(i))\), then \(L=I-P_{\sigma}\) is exactly the Laplacian of the graph \(G\). By the directed version of matrix tree theorem, \(\det(L_{[n]\setminus\{1\},[n]\setminus\{1\}})\) is the number of directed spanning trees rooted at vertex \(1\). When \(c(\sigma)=1\), then clearly there is exactly one spanning tree, otherwise there is no spanning tree. Thus we can conclude the claimed equality.
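Lemma 7 can be checked exhaustively for small \(n\); the sketch below uses floating-point determinants, which suffice for this purpose.

```python
import numpy as np
from itertools import permutations

def num_cycles(sigma):
    """Number of cycles of a permutation given as a tuple (0-indexed)."""
    seen, cycles = set(), 0
    for start in range(len(sigma)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = sigma[j]
    return cycles

n = 5
for sigma in permutations(range(n)):
    P = np.zeros((n, n))
    P[np.arange(n), list(sigma)] = 1
    minor = np.linalg.det((np.eye(n) - P)[1:, 1:])   # delete row/column of vertex 1
    assert round(minor) == (1 if num_cycles(sigma) == 1 else 0)
```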
Therefore, we use the above characterization of Hamiltonian cycles to compute HamCycles.
**Theorem 6**.: _Given a matrix \(A\in\mathbb{F}^{k\times k}\) and a positive integer \(r\), one can precompute a data structure in time \(\tilde{O}\left(\binom{k}{\downarrow r}4^{k}\right)\) and then answer each \(r\)-order evaluation query \(\operatorname{hc}(F(t))\bmod t^{r}\) in time \(\tilde{O}\left(\binom{k}{\downarrow r}^{3}\right)\). Both bounds are measured in \(\mathbb{F}\)-operations._
Proof.: By the definition of HamCycles and Lemma 7, we have
\[\operatorname{hc}(A)=\sum_{\sigma\in S_{k}}\left(\prod_{i=1}^{k}A_{i,\sigma(i )}\right)\det(I-P_{\sigma})_{[n]\setminus\{1\},[n]\setminus\{1\}}.\]
We also expand \(\det(I-P_{\sigma})_{[n]\setminus\{1\},[n]\setminus\{1\}}\) by Leibniz formula, i.e.,
\[\det(I-P_{\sigma})_{[n]\setminus\{1\},[n]\setminus\{1\}}=\sum_{\begin{subarray} {c}\tau\in S_{k}\\ \tau(1)=1\end{subarray}}\operatorname{sgn}(\tau)\prod_{i=2}^{k}(I-P_{\sigma}) _{i,\tau(i)}.\]
Combining the above two equations, and interpreting \(\operatorname{sgn}(\tau)\) as \((-1)^{\operatorname{inv}(\tau)}\), where \(\operatorname{inv}(a)\) denotes the number of inversions of a sequence \(a\), we have
\[\operatorname{hc}(A) =\sum_{\begin{subarray}{c}\sigma,\tau\in S_{k}\\ \tau(1)=1\end{subarray}}\operatorname{sgn}(\tau)\left(\prod_{i=1}^{k}A_{i,\sigma(i)}\right)\left(\prod_{i=2}^{k}(I-P_{\sigma})_{i,\tau(i)}\right)\] \[=\sum_{\begin{subarray}{c}\sigma,\tau\in S_{k}\\ \tau(1)=1\end{subarray}}(-1)^{\operatorname{inv}(\tau)}\left(\prod_{i=1}^{k}A_{i,\sigma(i)}\right)\left(\prod_{i=2}^{k}\left([i=\tau(i)]-[\sigma(i)=\tau(i)]\right)\right).\]
Now consider dynamic programming. For \(S\subseteq[k]\setminus\{1\}\) and \(T\subseteq[k]\) with \(s=|S|=|T|\), let \(f(S,T)\) count only over the last \(s\) values of \(\sigma\) and \(\tau\), restricted to \(\{\tau(k-s+1),\ldots,\tau(k)\}=S\) and \(\{\sigma(k-s+1),\ldots,\sigma(k)\}=T\), and with only the inversions among the last \(s\) values of \(\tau\) counted, i.e.,
\[f(S,T)=\sum_{\sigma,\tau}(-1)^{\operatorname{inv}(\tau)}\left(\prod_{i=k-s+1}^{k}A_{i,\sigma(i)}\right)\left(\prod_{i=k-s+1}^{k}\left([i=\tau(i)]-[\sigma(i)=\tau(i)]\right)\right). \tag{1}\]
We let \(a\leftarrow^{+}b\) denote \(a\gets a+b\) for simplicity in describing the updating rules. The base case is simply \(f(\emptyset,\emptyset)=1\), and for each \(s<k-1\), we use the computed values of \(f(S,T)\) with \(|S|=|T|=s\) to compute \(f(S,T)\) with \(|S|=|T|=s+1\) by the following rules. Let \(i=k-s\). For each \(j\notin T\), we can choose \(\sigma(i)\) to be \(j\), then there are two choices of \(\tau(i)\):
* If \(i\notin S\), update with \[f(S\cup\{i\},T\cup\{j\})\leftarrow^{+}(-1)^{\operatorname{inv}(i,S)}A_{i,j}f(S,T),\] corresponding to the choice in which the term \([i=\tau(i)]\) contributes in Eqn. (1).
* If \(j\notin S\), update with \[f(S\cup\{j\},T\cup\{j\})\leftarrow^{+}(-1)^{1+\operatorname{inv}(j,S)}A_{i,j}f(S,T),\] corresponding to the choice in which the term \([\sigma(i)=\tau(i)]\) contributes in Eqn. (1).
Here \(\operatorname{inv}(v,S)\) means the number of elements \(x\in S\) such that \(v>x\).
At last, we have the choice of \(\sigma(1)\), thus
\[\operatorname{hc}(A)=\sum_{i=1}^{k}A_{1,i}f([k]\setminus\{1\},[k]\setminus\{i \}).\]
This dynamic programming takes \(\tilde{O}(4^{k})\), which is slower than the usual one, but its dependence on the rows of \(A\) is explicitly graded by \(s\), so is useful for our purpose.
Now suppose the first \(j\) rows are left undetermined. We can first preprocess all the \(f(S,T)\) with \(s\leq k-j\) in time \(\tilde{O}(4^{k})\), since their values do not depend on the first \(j\) rows. Then each query, i.e., the evaluation given the first \(j\) rows, can be answered in time
\[\tilde{O}\left(\sum_{i=1}^{j}\binom{k}{i}\binom{k}{i-1}\right)=\tilde{O}\left( \binom{k}{\downarrow j}^{2}\right).\]
Write \(F=A+B(t)\), where \(B(t)\) has no constant term, by the multilinearity on rows of \(\operatorname{hc}(\cdot)\), we have
\[\operatorname{hc}(F(t))=\operatorname{hc}(A+B)=\sum_{S\subseteq[k]} \operatorname{hc}(\operatorname{rep}_{S}(A,B)),\]
where \(\operatorname{rep}_{S}(A,B)\) denotes the matrix obtained by replacing the rows of \(A\) indexed by \(S\) with the corresponding rows of \(B\). The terms with \(|S|\geq r\) do not contribute to the result modulo \(t^{r}\). For each \(|S|<r\), we can reorder the rows and columns simultaneously so that \(S\) becomes the first \(|S|\) rows, and use the above dynamic programming for precomputation and queries.
There are \(\binom{k}{\downarrow r}\) ways to choose \(S\), so the precomputation needs \(\tilde{O}\left(\binom{k}{\downarrow r}4^{k}\right)\) time, and \(\tilde{O}\left(\binom{k}{\downarrow r}^{3}\right)\) for each query.
Note that when \(r=\alpha k\) for some \(0<\alpha<1/2\), by Lemma 1, precomputation takes time \(\tilde{O}(2^{(2+H(\alpha))k})\), and each query takes time \(\tilde{O}(2^{3H(\alpha)k})\).
## 6. The Algorithms
We first prove Theorem 1 under some restrictions, and then remove the restrictions by bootstrapping the results.
**Lemma 8**.: _Let \(q\) be a prime power with \(q\geq n^{2}+1\) and \(q\equiv 1\pmod{b}\), where \(b\geq 10\). There is an algorithm that computes the permanent \(\operatorname{per}(A)\) of a given matrix \(A\in\mathbb{F}_{q}^{n\times n}\) in time \(2^{n-\delta_{b}\sqrt{n}}q^{O(1)}\), for some \(\delta_{b}>0\)._
Proof.: Let \(\theta=\sqrt{\log(1.9)/\log(1+b)}\) and \(k=\lfloor\theta\sqrt{n}\rfloor\), consider the following algorithm.
1. First compute the Kakeya set by Theorem 3 over \(k^{2}\) variables of degree \(u=(q-1)/b-1\).
2. Precompute the data structure for \(r\)-order evaluation for \(r=\lceil k/b\rceil\) at each point of \(K\).
3. Use the self-reduction of permanent to reduce the problem to \(m=2^{n-k}n^{O(1)}\) instances of size \(k\times k\).
4. For each instance, use Theorem 4 to compute the permanent.
Then we analyze the time complexity. In the precomputation phase, by Theorem 3, the Kakeya set has size \((\frac{q-1}{u+1}+1)^{k^{2}+1}\leq(b+1)^{\theta^{2}n+1}=O(1.9^{n})\), and by Theorem 5, each data structure takes \(2^{O(k)}\) time to precompute, so the total time of the first two steps is \(1.9^{n+O(\sqrt{n})}q^{O(1)}\).
The data structure can answer an \(r\)-order evaluation in time \(\tilde{O}(2^{2H(\alpha)k})\); here we have \(\alpha=1/b\leq 0.1\), thus \(2H(\alpha)\leq 2H(0.1)<0.94\), and the total time in the last two steps is
\[2^{n-k}n^{O(1)}\cdot O(2^{0.94k})q^{O(1)}=2^{n-0.06k}q^{O(1)}=2^{n-0.06\theta \sqrt{n}}q^{O(1)}.\]
In conclusion, \(\delta_{b}=0.06\theta\) satisfies the requirement.
**Lemma 9**.: _Let \(q\) be a prime power with \(q\geq n^{2}+1\) and \(q\equiv 1\pmod{b}\), where \(b\geq 17\). There is an algorithm that computes \(\operatorname{hc}(A)\) for a given matrix \(A\in\mathbb{F}_{q}^{n\times n}\) in time \(2^{n-\delta_{b}\sqrt{n}}q^{O(1)}\), for some \(\delta_{b}>0\)._
Proof.: The algorithm is similar to that in the proof of Lemma 8, with the data structure for the permanent replaced by the one for Hamiltonian cycles.
By Theorem 6, the data structure can answer \(r\)-order evaluation of Hamiltonian cycles in time \(\tilde{O}(2^{3H(\alpha)k})\), and one can compute that now \(3H(\alpha)\leq 3H(1/17)<0.97\).
Then the total time in the last two steps is
\[2^{n-k}n^{O(1)}\cdot O(2^{0.97k})q^{O(1)}=2^{n-0.03k}q^{O(1)}=2^{n-0.03\theta \sqrt{n}}q^{O(1)}.\]
In conclusion, \(\delta_{b}=0.03\theta\) satisfies the requirement.
### Proof of Theorem 1
To prove Theorem 1, we only need to remove the conditions of Lemma 8 and Lemma 9 on \(q\), namely that \(q\geq n^{2}+1\) and \(q\equiv 1\pmod{m}\) for some fixed modulus \(m\).
Note that for some integer \(\ell\), we can embed \(\mathbb{F}_{q}\) into a larger finite field \(\mathbb{F}_{q^{\ell}}\). We only need to satisfy \(q^{\ell}\geq n^{2}+1\) and \(q^{\ell}\equiv 1\pmod{m}\). When \(q\) is coprime with \(m\), taking \(\ell=\varphi(m)\) is enough to satisfy the second condition, where \(\varphi\) is the Euler totient function. Taking \(\ell\) to be the smallest multiple of \(\varphi(m)\) such that \(q^{\ell}>n^{2}\), we have \(q^{\ell}\leq q^{\varphi(m)}n^{2}\).
For the permanent, since \(q\) is a prime power, it must be coprime with one of \(m=10\) or \(m=11\). For Hamiltonian cycles, \(q\) must be coprime with one of \(m=17\) or \(m=18\). Therefore, we have \(q^{\ell}=q^{O(1)}n^{2}\) since we only consider finitely many possibilities for \(m\).
Therefore, by calling the algorithms in Lemma 8 and Lemma 9 through the finite field \(\mathbb{F}_{q^{\ell}}\), we can compute the permanent and Hamiltonian cycles in time \(2^{n-\Omega(\sqrt{n})}(q^{\ell})^{O(1)}=2^{n-\Omega(\sqrt{n})}q^{O(1)}\).
To support the computation in the finite field \(\mathbb{F}_{q^{\ell}}\), we need to find an irreducible polynomial \(f\) and identify \(\mathbb{F}_{q^{\ell}}\) with \(\mathbb{F}_{q}[t]/(f)\). We can enumerate the polynomials of degree \(\ell\) over \(\mathbb{F}_{q}\) and test each for irreducibility; by [17, Theorem 14.37], the time complexity of testing irreducibility is \(\operatorname{poly}(\ell,\log q)\). The time for finding an irreducible polynomial is \(O(q^{\ell}\operatorname{poly}(\log q^{\ell}))\), so this is not a bottleneck.
### Proof of Corollary 1
The absolute values of \(\operatorname{per}(A)\) and \(\operatorname{hc}(A)\) are trivially bounded by \(C=n!M^{n}\). Let \(p_{1},\ldots,p_{r}\) be distinct prime numbers such that \(D:=\prod_{i}p_{i}>2C+1\); then if we can compute \(\operatorname{per}(A)\) and \(\operatorname{hc}(A)\) modulo \(D\), the values of \(\operatorname{per}(A)\) and \(\operatorname{hc}(A)\) are uniquely determined.
By Chinese remainder theorem, we only need to compute \(\operatorname{per}(A)\) and \(\operatorname{hc}(A)\) modulo \(p_{i}\) for each \(i\), and then combine them to get the result modulo \(D\).
By Lemma 3, the primes not greater than \(16\log D=O(n\log M)\) have their product greater than \(D\), so we only need to compute \(\operatorname{per}(A)\) and \(\operatorname{hc}(A)\) under finite fields \(\mathbb{F}_{p}\) with \(p=O(n\log M)\). By Theorem 1, we can compute them in time \(2^{n-\Omega(\sqrt{n})}p^{O(1)}=2^{n-\Omega(\sqrt{n})}(\log M)^{O(1)}\). There are \(O(n\log M)\) instances to compute. Since the product of chosen primes has \(O(n\log M)\) bits, by Theorem 2, it takes \(\operatorname{poly}(n\log M)\) time to combine them, which is not a bottleneck. So the total time is \(2^{n-\Omega(\sqrt{n})}(\log M)^{O(1)}\).
|
2301.13671 | Enhancing Hyper-To-Real Space Projections Through Euclidean Norm
Meta-Heuristic Optimization | The continuous computational power growth in the last decades has made
solving several optimization problems significant to humankind a tractable
task; however, tackling some of them remains a challenge due to the
overwhelming amount of candidate solutions to be evaluated, even by using
sophisticated algorithms. In such a context, a set of nature-inspired
stochastic methods, called meta-heuristic optimization, can provide robust
approximate solutions to different kinds of problems with a small computational
burden, such as derivative-free real function optimization. Nevertheless, these
methods may converge to inadequate solutions if the function landscape is too
harsh, e.g., enclosing too many local optima. Previous works addressed this
issue by employing a hypercomplex representation of the search space, like
quaternions, where the landscape becomes smoother and supposedly easier to
optimize. Under this approach, meta-heuristic computations happen in the
hypercomplex space, whereas variables are mapped back to the real domain before
function evaluation. Despite this latter operation being performed by the
Euclidean norm, we have found that after the optimization procedure has
finished, it is usually possible to obtain even better solutions by employing
the Minkowski $p$-norm instead and fine-tuning $p$ through an auxiliary
sub-problem with neglecting additional cost and no hyperparameters. Such
behavior was observed in eight well-established benchmarking functions, thus
fostering a new research direction for hypercomplex meta-heuristic
optimization. | Luiz C. F. Ribeiro, Mateus Roder, Gustavo H. de Rosa, Leandro A. Passos, João P. Papa | 2023-01-31T14:40:49Z | http://arxiv.org/abs/2301.13671v1 | # Enhancing Hyper-To-Real Space Projections Through Euclidean Norm Meta-Heuristic Optimization+
###### Abstract
The continuous computational power growth in the last decades has made solving several optimization problems significant to humankind a tractable task; however, tackling some of them remains a challenge due to the overwhelming amount of candidate solutions to be evaluated, even by using sophisticated algorithms. In such a context, a set of nature-inspired stochastic methods, called meta-heuristic optimization, can provide robust approximate solutions to different kinds of problems with a small computational burden, such as derivative-free real function optimization. Nevertheless, these methods may converge to inadequate solutions if the function landscape is too harsh, e.g., enclosing too many local optima. Previous works addressed this issue by employing a hypercomplex representation of the search space, like quaternions, where the landscape becomes smoother and supposedly easier to optimize. Under this approach, meta-heuristic computations happen in the hypercomplex space, whereas variables are mapped back to the real domain before function evaluation. Despite this latter operation being performed by the Euclidean norm, we have found that after the optimization procedure has finished, it is usually possible to obtain even better solutions by employing the Minkowski \(p\)-norm instead and fine-tuning \(p\) through an auxiliary sub-problem with neglecting additional cost and no hyperparameters. Such behavior was observed in eight well-established benchmarking functions, thus fostering a new research direction for hypercomplex meta-heuristic optimization.
Keywords:Hypercomplex Space, Real-Valued Projection, Euclidean Norm, Meta-Heuristic Optimization, Benchmarking Functions
## 1 Introduction
Humanity sharpened their mathematical skills over several years of evolution by researching and studying formal and elegant tools to model world events' behavior. In such a context, when dealing with non-trivial problems, it is common to apply mathematical programming to overcome the before-mentioned tasks or even to streamline the process. Furthermore, once any prior knowledge might not be available, mathematical programming, commonly known as optimization [19], provides an attractive approach to tackle the burden of empirical setups.
In the past decades, a new optimization paradigm called meta-heuristic has been used to solve several optimization problems [21]. Essentially, a meta-heuristic is a high-level abstraction of a procedure that generates and selects a heuristic that aims to provide a feasible solution to the problem. It combines concepts of _exploration_ and _exploitation_, i.e., globally searching throughout the space and enhancing a promising solution based on its neighbors, respectively, constituted of complex learning procedures and simple searches usually inspired by biological behaviors. Additionally, they do not require specific domain knowledge and provide mechanisms to avoid susceptibility to local optima convergence.
Although meta-heuristic techniques seem to be an exciting proposal, they still might perform poorly on challenging objective functions, being trapped in local optima and not achieving the most suitable solutions. Some attempts as hybrid variants [12], aging mechanisms [1], and fitness landscape analysis [3] try to deal with this issue. Relying on more robust search spaces, such as representing each decision variable as a hypercomplex number, is an alternative approach that is not fully explored in the literature.
One can perceive that handling hypercomplex spaces is based on the likelihood of having more natural fitness landscapes, although mathematically not proved yet. The most common representations are the quaternions [7] and octonions [6], which have compelling traits to describe the object's orientation in \(n\)-dimensional search spaces, being extremely useful in performing rotations in such spaces [8]. These representations have been successfully used in different areas, as in deep learning [14], feature selection [17], special relativity [11] and quantum mechanics [4]. Regarding meta-heuristic optimization, interesting results have been achieved in global optimization [5, 13, 16], although not yet mathematically guaranteed.
Notwithstanding, hypercomplex optimization also has its particular problems, i.e., before attempting to feed quaternions or octonions to a real-valued objective function, one needs to project their values onto a real-valued space, usually accomplished by the Euclidean norm function. However, to the best of our knowledge, there is no work in the literature regarding how using the standard Euclidean norm function might affect the loss of information when projecting one space onto another. Thus, we are incredibly interested in exploring the possibility of employing the \(p\)-norm function and finding the most suitable \(p\) value that minimizes the loss of information throughout the projection.
In this work, we investigate how employing the \(p\)-norm to refine the solution found by a standard hypercomplex meta-heuristic can affect the obtained result. In short, we optimize a real function using the standard quaternion-based variant of the Particle Swarm Algorithm (Q-PSO) [15], i.e., meta-heuristic operations are performed in the hypercomplex space. In contrast, decision variables are mapped to the real domain through the Euclidean Norm for function evaluation. Notwithstanding, the best solution is refined by finding a more suitable projection between domains using the \(p\)-norm. The rationale for this decision lies in the fact that this operation is a Euclidean norm generalization. Hence, we resort to fine-tuning a new, yet unexplored, hyperparameter in the optimization procedure, thus allowing more robust solutions to be found. Regardless, such a procedure can be applied to any hypercomplex-based meta-heuristic. Therefore, this work's main contributions are twofold: (i) to introduce a generic and inexpensive procedure to refine solutions found by hypercomplex meta-heuristics; and (ii) to foster research regarding how to better map hypercomplex values to real ones in the context of meta-heuristic optimization.
The remainder of this paper is organized as follows. Sections 2 and 3 present the theoretical background related to hypercomplex-based spaces (quaternions and Minkowski \(p\)-norm) and meta-heuristic optimization, respectively. Section 4 discusses the methodology adopted in this work, while Section 5 presents the experimental results. Finally, Section 6 states conclusions and future works1.
Footnote 1: The source code is available online at [https://github.com/lzfelix/lio](https://github.com/lzfelix/lio).
## 2 Hypercomplex Representation
A quaternion \(q\) is a hypercomplex number composed of real and imaginary parts, \(q=a+bi+cj+dk\), where \(a,b,c,d\in\mathbb{R}\) and \(i,j,k\) are the fundamental quaternion units. The fundamental relation that defines the quaternion units is:
\[i^{2}=j^{2}=k^{2}=ijk=-1. \tag{1}\]
Essentially, a quaternion \(q\) is a four-dimensional space representation over the real numbers, i.e., \(\mathbb{R}^{4}\). Given two arbitrary quaternions \(q_{1}=a+bi+cj+dk\) and \(q_{2}=\alpha+\beta i+\gamma j+\delta k\) and a scalar \(\kappa\in\mathbb{R}\), we define the quaternion algebra [2] used throughout this work:
\[\begin{split} q_{1}+q_{2}&=(a+bi+cj+dk)+(\alpha+ \beta i+\gamma j+\delta k)\\ &=(a+\alpha)+(b+\beta)i+(c+\gamma)j+(d+\delta)k,\end{split} \tag{2}\]
\[\begin{split} q_{1}-q_{2}&=(a+bi+cj+dk)-(\alpha+ \beta i+\gamma j+\delta k)\\ &=(a-\alpha)+(b-\beta)i+(c-\gamma)j+(d-\delta)k,\end{split} \tag{3}\]
\[\begin{split}\kappa q_{1}&=\kappa(a+bi+cj+dk)\\ &=\kappa a+(\kappa b)i+(\kappa c)j+(\kappa d)k.\end{split} \tag{4}\]
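The element-wise operations in Equations 2-4 translate directly into code. The following minimal Python sketch stores a quaternion as a 4-vector of its real coefficients; the function names are illustrative and not part of any released implementation.

```python
import numpy as np

# A quaternion q = a + bi + cj + dk stored as a 4-vector of real coefficients.
def q_add(q1, q2):
    return np.asarray(q1, dtype=float) + np.asarray(q2, dtype=float)   # Equation 2

def q_sub(q1, q2):
    return np.asarray(q1, dtype=float) - np.asarray(q2, dtype=float)   # Equation 3

def q_scale(kappa, q):
    return kappa * np.asarray(q, dtype=float)                           # Equation 4

q1 = np.array([1.0, 2.0, 3.0, 4.0])   # a, b, c, d
q2 = np.array([0.5, 0.1, 0.2, 0.3])   # alpha, beta, gamma, delta
print(q_add(q1, q2), q_sub(q1, q2), q_scale(2.0, q1))
```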
### Minkowski \(p\)-norm
Another crucial operator that needs to be defined is the \(p\)-norm, which is responsible for mapping hypercomplex values to real numbers. Let \(q\) be a hypercomplex number with real coefficients \(\{z_{d}\}_{d=0}^{D-1}\), one can compute the Minkowski \(p\)-norm as follows:
\[\|q\|_{p}=\left(\sum_{d=0}^{D-1}|z_{d}|^{p}\right)^{1/p}, \tag{5}\]
where \(D\) is the number of dimensions of the space (2 for complex numbers, and 4 for quaternions, for instance) and \(p\geq 1\). Common values for the latter variable are 1 or 2 for the Taxicab and Euclidean norms, respectively. Hence, one can see the \(p\)-norm as a generalization of such norm operators.
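For concreteness, the following short Python sketch evaluates Equation 5 for a quaternion (\(D=4\)); the helper name `p_norm` is an illustrative assumption rather than code from the paper.

```python
import numpy as np

def p_norm(q, p):
    """Minkowski p-norm of a hypercomplex number given its real coefficients (Equation 5)."""
    q = np.abs(np.asarray(q, dtype=float))
    return float(np.sum(q ** p) ** (1.0 / p))

q = [0.3, 0.8, 0.1, 0.5]      # quaternion coefficients, D = 4
print(p_norm(q, 1))           # Taxicab norm
print(p_norm(q, 2))           # Euclidean norm
print(p_norm(q, 1.5))         # an intermediate norm of the kind explored in this work
```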
## 3 Meta-Heuristic Optimization
Optimization is the task of selecting, among a set of possible solutions, the solution that best fits a function. Several methods have been applied in this context, such as grid search and gradient-based methods. Nevertheless, these methods carry a heavy computational cost and become intractable for more complex problems, e.g., exponential and NP-complete problems.
An attempt to overcome such limitations is to employ a meta-heuristic-based approach. Meta-heuristic techniques are nature-inspired stochastic algorithms that mimic intelligent behavior often observed in groups of animals, in humans, or in nature. Such approaches combine exploration and exploitation mechanisms in order to achieve near-optimal solutions with low computational effort.
In this work, we employed the quaternion variant of the state-of-the-art Particle Swarm Optimization (PSO) [10] algorithm for function optimization. On the other hand, since fine-tuning the \(p\) hyperparameter is a single-variable optimization task with a small search interval, we resort to the hyperparameter-less Black Hole (BH) [9] algorithm.
## 4 Methodology
This section discusses how the presented meta-heuristics can be combined with quaternions to perform the so-called "hypercomplex-based meta-heuristic optimization". The proposed approach, designated "Last Iteration Optimization" (LIO), is presented along with the benchmarking functions used to evaluate it and the experimental setup.
### Hypercomplex Optimization
In their original formulation, meta-heuristic algorithms were conceived to optimize real-valued target functions with multiple real parameters. However, one may decide to represent each decision variable as a quaternion.
In this case, each decision variable is represented by a quaternion with its real coefficients randomly initialized from a uniform distribution in the interval \([0,1]\). Furthermore, the mapping from quaternions to real numbers for function evaluation becomes a paramount operation, which is usually carried out through the Euclidean norm. Still, care must be taken to ensure that this transformation does not yield numbers outside the feasibility region. Hence, hypercomplex coefficients are clipped individually to the real interval \([0,1]\) and the mapping for each decision variable is performed by the following mapping function:
\[\begin{split}\boldsymbol{\hat{q}}_{j}&=M( \boldsymbol{q}_{j},p)\\ &=\boldsymbol{l}_{j}+(\boldsymbol{u}_{j}-\boldsymbol{l}_{j}) \,\frac{\|\boldsymbol{q}_{j}\|_{p}}{D^{1/p}},\end{split} \tag{6}\]
such that \({j=\{1,2,\ldots,n\}}\), \(D\) is the number of hypercomplex dimensions (4 for quaternions), \(\boldsymbol{l}_{j}\) and \(\boldsymbol{u}_{j}\) are the lower and upper bounds for each decision variable, respectively, and \(p=2\) in this particular case.
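A minimal Python sketch of the mapping in Equation 6 is given below. The function name `to_real` and the explicit clipping step are illustrative assumptions; the mapping itself follows the equation above with \(D=4\) for quaternions.

```python
import numpy as np

def to_real(q, lower, upper, p=2.0, D=4):
    """Map one quaternion decision variable to the real interval [lower, upper] (Equation 6).
    Coefficients are assumed to lie in [0, 1]; they are clipped here for safety."""
    q = np.clip(np.asarray(q, dtype=float), 0.0, 1.0)
    norm = float(np.sum(np.abs(q) ** p) ** (1.0 / p))
    return lower + (upper - lower) * norm / (D ** (1.0 / p))

# One decision variable of the Sphere function, bounded in [-10, 10]:
print(to_real([0.2, 0.9, 0.4, 0.7], -10.0, 10.0, p=2.0))
```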
### Last Iteration Optimization
The main goal of this work consists of refining the solution found by a hypercomplex-based meta-heuristic using a low-cost procedure. To that end, given a fitness function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\), we first optimize it through the Q-PSO algorithm, which consists in representing each decision variable as a quaternion with the relations defined in Equations 2, 3, and 4. Once this step is finished, we have the best candidate solution \(\boldsymbol{q}^{\star}\) with a real representation \(\boldsymbol{\hat{q}}^{\star}\in\mathbb{R}^{n}\), which is obtained through Equation 6 with \(p=2\). In short, one can compute the best solution fitness \(\mu\) as follows:
\[\mu=f\Big{(}M(\boldsymbol{q}_{1}^{\star},2),M(\boldsymbol{q}_{2}^{\star},2), \ldots,M(\boldsymbol{q}_{n}^{\star},2)\Big{)}. \tag{7}\]
where \(M(\cdot)\) is computed according to Equation 6.
We propose a second phase in the optimization pipeline, where the best solution found, \(\boldsymbol{q}^{\star}\), is kept fixed, while the hyperparameter \(p\) is unfrozen. Such an approach allows obtaining a better real representation of \(\boldsymbol{q}^{\star}\), which translates to an even smaller fitness value \(\mu^{\star}\). Namely, we aim at solving the following auxiliary optimization problem:
\[\begin{split} p^{\star}=&\operatorname*{arg\,min}_{p }\,f\Big{(}M(\boldsymbol{q}_{1}^{\star},p),M(\boldsymbol{q}_{2}^{\star},p), \ldots,M(\boldsymbol{q}_{n}^{\star},p)\Big{)},\\ &\text{st. }1\leq p\leq p_{\text{max}},\end{split} \tag{8}\]
where \(p_{\text{max}}\) denotes the maximum possible value for parameter \(p\). If \(p_{\text{max}}=2\), for instance, the problem consists in finding a suitable norm between the Taxicab and Euclidean ones.
Since the new search interval is usually small, as discussed in Section 4.4, we resort to the traditional BH algorithm, since it does not contain hyperparameters to be tuned, thus making the process even simpler. As this procedure is performed for a single decision variable in a small search space, the time spent in this phase is negligible compared to the Q-PSO step. Furthermore, since this new step is performed as the new last iteration of the optimization pipeline, we name it Last Iteration Optimization (LIO).
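The two-phase pipeline can be summarized by the sketch below, which reuses the `to_real` mapping from the earlier sketch. A plain grid search over \(p\) stands in for the Black Hole algorithm used in the paper, so the helper names and the search strategy are illustrative assumptions only.

```python
import numpy as np

def fitness_via_p(f, q_star, bounds, p):
    """Evaluate f after projecting every frozen quaternion in q_star with the p-norm (Equation 8)."""
    reals = [to_real(q, lo, hi, p=p) for q, (lo, hi) in zip(q_star, bounds)]
    return f(np.array(reals))

def last_iteration_optimization(f, q_star, bounds, p_max=5.0, n_candidates=200):
    """Refine the Q-PSO solution by searching over p only; q_star stays fixed throughout."""
    best_p, best_fit = 2.0, fitness_via_p(f, q_star, bounds, 2.0)     # Equation 7 baseline
    for p in np.linspace(1.0, p_max, n_candidates):
        fit = fitness_via_p(f, q_star, bounds, p)
        if fit < best_fit:
            best_p, best_fit = p, fit
    return best_p, best_fit

# Example with the Sphere function and two quaternion decision variables bounded in [-10, 10]:
sphere = lambda x: float(np.sum(x ** 2))
q_star = [np.array([0.2, 0.9, 0.4, 0.7]), np.array([0.5, 0.5, 0.5, 0.5])]
print(last_iteration_optimization(sphere, q_star, [(-10.0, 10.0)] * 2))
```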
### Benchmarking Functions
Table 1 introduces the eight benchmarking functions used to evaluate the proposed approach.
### Experimental Setup
The proposed approach divides function optimization into two parts: global and fine-tuning phases, which correspond to finding \(\mu\) using Q-PSO and \(\mu^{\star}\) by solving Equation 8 through the BH algorithm.
Regarding the first phase, we use the same experimental setup as [16]. Namely, each benchmark function is optimized with \(n\in\{10,25,50,100\}\) decision variables, for \((2000\cdot n)\) iterations using 100 agents. As the number of iterations grows considerably fast, we adopt an early stopping mechanism. Such a strategy allows detecting if the optimization is stuck for too long in a
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Function** & **Equation** & **Bounds** & \(\mathbf{f}(\mathbf{x}^{*})\) \\ \hline Sphere & \(f_{1}(x)=\sum\limits_{i=1}^{n}x_{i}^{2}\) & \(-10\leq x_{i}\leq 10\) & \(0\) \\ Csendes & \(f_{2}(x)=\sum_{i=1}^{n}x_{i}^{6}\left(2+\sin\frac{1}{x_{i}}\right)\) & \(-1\leq x_{i}\leq 1\) & \(0\) \\ Salomon & \(f_{3}(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\right)+0.1\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\) & \(-100\leq x_{i}\leq 100\) & \(0\) \\ Ackley \#1 & \(f_{4}(x)=-20e^{-0.02\sqrt{n^{-1}\sum_{i=1}^{n}x_{i}^{2}}}-e^{n^{-1}\sum_{i=1}^{n}\cos(2\pi x_{i})}+20+e\) & \(-35\leq x_{i}\leq 35\) & \(0\) \\ Alpine \#1 & \(f_{5}(x)=\sum_{i=1}^{n}|x_{i}\sin(x_{i})+0.1x_{i}|\) & \(-10\leq x_{i}\leq 10\) & \(0\) \\ Rastrigin & \(f_{6}(x)=10n+\sum_{i=1}^{n}\left[x_{i}^{2}-10\cos(2\pi x_{i})\right]\) & \(-5.12\leq x_{i}\leq 5.12\) & \(0\) \\ Schwefel & \(f_{7}(x)=\left(\sum\limits_{i=1}^{n}x_{i}^{2}\right)^{\sqrt{\pi}}\) & \(-100\leq x_{i}\leq 100\) & \(0\) \\ Brown & \(f_{8}(x)=\sum_{i=1}^{n-1}\left[(x_{i}^{2})^{(x_{i+1}^{2}+1)}+(x_{i+1}^{2})^{(x_{i}^{2}+1)}\right]\) & \(-1\leq x_{i}\leq 4\) & \(0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Benchmarking functions.
local optimum and is unlikely to find a better solution, saving computational time. If the difference in fitness between two consecutive iterations is smaller than \(\delta=10^{-5}\) for 50 iterations or more, the optimization is halted, and the best fitness found so far is deemed the solution. Although these values were determined empirically, they often yield the same results as those obtained using all available iterations, while consuming at most 4% of the iterations in the extreme case of \(n=100\). For the Q-PSO hyperparameters we use \(w=0.7\) and \(c_{1}=c_{2}=1.7\), as is well established in the literature.
In the second phase, optimization is performed with \(p_{\max}=5\), using 20 agents for 50 iterations, values which were determined in preliminary experiments. Further, we do not rely on early stopping for this phase since it runs much faster than the previous one. Finally, we compare the results obtained by Q-PSO and Q-PSO with LIO. Each experiment is executed 15 times, and the best results with significance smaller than 0.05, according to the Wilcoxon signed-rank test [20], are highlighted in bold. Regarding the implementation, we used the Opytimizer [18] library.
## 5 Experimental Results
Experimental results are presented in Table 2, where the average fitness values obtained by Q-PSO are compared against their refined versions, computed with LIO. More specifically, we ran Q-PSO, stored the results, and then continued with LIO (denoted as Q-PSO + LIO).
### Overall Discussion
Experimental results provided in Table 2 confirm the robustness of the proposed approach, since Q-PSO + LIO outperformed the standard Q-PSO in the vast majority of benchmarking functions and configurations. One can highlight, for instance, that LIO obtained the best overall results, considering all dimensional configurations, in half of the functions, i.e., Sphere, Csendes, Schwefel, and Brown. Besides, Alpine #1 and Rastrigin can also be considered, although Q-PSO obtained statistically similar results. Further, LIO also obtained the best results considering all functions over three out of four configurations, i.e., 25, 50, and 100 dimensions, with Q-PSO being statistically similar in only two of them.
On the other hand, Q-PSO obtained the best results over two functions, i.e., Salomon and Rastrigin, considering a 10-dimensional configuration. Such behavior is very interesting, since Q-PSO performed better over two functions which share similar characteristics: both are continuous, differentiable, non-separable, scalable, and multimodal, and were evaluated at the same dimensionality, which may point to some specific limitation of the model.
Finally, as an overview, the proposed approach can significantly improve Q-PSO with an almost negligible computational burden, whose growth is insignificant compared to the increase in the number of dimensions, as discussed in the next section.
### Computational Burden
time consumed by Q-PSO, which amounts to \(0.31\) seconds, while decreasing the fitness value by a factor of \(1.7\).
### How does \(p\) Influence Projections?
From the results in Table 2, one can highlight the variation in the \(p\)-norm value. As expected, this variable is highly correlated with the optimization performance, since small changes in its value resulted in better function minima. On the other hand, one can notice that more expressive changes in \(p\) may also support performance improvements, as in the Brown function. Besides that, as \(p\) is changed, the mapping process, i.e., the projection from the hypercomplex space to the real one, becomes "less aggressive", since the proposed approach allows a smoother fit of the values obtained in the hypercomplex space.
Therefore, examining the performance on the optimization functions, one can observe that employing LIO's projection yields different optimization landscapes, and such a process provides a better representation of the values from the hypercomplex search space. It is worth observing that for the Rastrigin, Alpine #1, and Ackley #1 functions, LIO found optimal \(p\) values with mean 2 and minimal standard deviations, showing this parameter's sensitivity for some benchmarking functions. Moreover, only the LIO optimization for the Schwefel function with 10 dimensions showed a large standard deviation for this hyperparameter. In the remaining cases, no norm larger than 3 was found, suggesting that even smaller search intervals (with \(p_{\max}=3\), for instance) could be employed in further experiments.
## 6 Conclusion
In this work, we introduced the Last Iteration Optimization (LIO) procedure, which consists of refining the solution found by a hypercomplex-based meta-heuristic optimization algorithm by solving a low-cost hyperparameter-less auxiliary problem after the primary heuristic has found the best candidate solution. Such a procedure provided robust results in various benchmarking functions, showing statistically significant gains in 24 out of 32 experiments, over functions with diverse characteristics. Since LIO has a low computational burden and is easy to implement, it can be readily incorporated into other works.
In future studies, we intend to investigate how changing the \(p\) parameter during the global optimization procedure can affect the obtained results. Furthermore, LIO can be extended to find a different \(p\) for each decision variable, making it more flexible, and even other functions can be employed (or learned) to perform the hypercomplex-to-real mapping process. Ultimately, fine-tuning the \(p\) hyper-parameter of the Minkowski norm opens new research directions for hypercomplex-based meta-heuristic function optimization methods. |
2309.08315 | i-Octree: A Fast, Lightweight, and Dynamic Octree for Proximity Search | Establishing the correspondences between newly acquired points and
historically accumulated data (i.e., map) through nearest neighbors search is
crucial in numerous robotic applications. However, static tree data structures
are inadequate to handle large and dynamically growing maps in real-time. To
address this issue, we present the i-Octree, a dynamic octree data structure
that supports both fast nearest neighbor search and real-time dynamic updates,
such as point insertion, deletion, and on-tree down-sampling. The i-Octree is
built upon a leaf-based octree and has two key features: a local spatially
continuous storing strategy that allows for fast access to points while
minimizing memory usage, and local on-tree updates that significantly reduce
computation time compared to existing static or dynamic tree structures. The
experiments show that i-Octree outperforms contemporary state-of-the-art
approaches by achieving, on average, a 19% reduction in runtime on realworld
open datasets. | Jun Zhu, Hongyi Li, Zhepeng Wang, Shengjie Wang, Tao Zhang | 2023-09-15T11:14:07Z | http://arxiv.org/abs/2309.08315v2 | # _i-Octree_: A Fast, Lightweight, and Dynamic Octree for Proximity Search
###### Abstract
Establishing the correspondences between newly acquired points and historically accumulated data (i.e., map) through nearest neighbors search is crucial in numerous robotic applications. However, static tree data structures are inadequate to handle large and dynamically growing maps in real-time. To address this issue, we present the _i-Octree_, a dynamic octree data structure that supports both fast nearest neighbor search and real-time dynamic updates, such as point insertion, deletion, and on-tree down-sampling. The _i-Octree_ is built upon a leaf-based octree and has two key features: a local spatially continuous storing strategy that allows for fast access to points while minimizing memory usage, and local on-tree updates that significantly reduce computation time compared to existing static or dynamic tree structures. The experiments show that _i-Octree_ surpasses state-of-the-art methods by reducing run-time by over 50% on real-world open datasets.
## I Introduction
Nearest neighbors search (NNS) is necessary in many robotic applications, such as real-time LiDAR-based simultaneous localization and mapping (SLAM) and motion planning, where the data is sampled sequentially and real-time mapping is essential. For example, in LiDAR-based SLAM, NNS is crucial to compute features [1, 2], estimate normals [3, 4], and match new points to the map [5, 6, 7, 8]. Recent advances in LiDAR technology have not only significantly reduced the cost, size, weight, and power of LiDAR but also improved its performance, making it an essential sensor for robots [9]. However, this also poses challenges for NNS. Current LiDAR sensors can produce a large number of 3D points per second with centimeter accuracy at measuring ranges of hundreds of meters. This large amount of sequential data must be processed in real time, which is a quite challenging task on robots with limited onboard computing resources. To guarantee the efficiency of NNS, maintaining a large map that supports highly efficient queries and dynamic updates with newly arriving points in real time is of vital importance.
Although various static tree data structures have been proposed, they struggle to meet these demands. R-tree [10] and R\(*\)-tree [11] partition the data by grouping nearby points into their minimum bounding rectangle in the next higher level of the tree. The k-d tree [12] is a well-known instance of splitting the space. The octree recursively splits the space equally into eight axis-aligned cubes, which form the volumes represented by eight child nodes. Although the k-d tree is a favored data structure in general k-nearest neighbors (KNN) search libraries, it is hard to draw any final conclusion as to whether the k-d tree is better suited for NNS than other data structures. Comparative studies [13, 14] show that the performance of different implementations of k-d trees can be diverse, and the octree is amongst the best performing algorithms, especially for radius search, due to its regular partitioning of the search space. Despite its performance, the octree has not been fully exploited. Elseberg et al. [15] proposed an efficient octree to store and compress 3D data without loss of precision. Behley et al. [16] proposed an index-based octree that significantly improves radius neighbor search in three-dimensional data, while KNN search and dynamic updates are not enabled. When incorporating these static trees in real robotic applications, repeatedly re-building the entire tree from scratch [17] is inevitable, which is so time-consuming that real-time operation may fail.
In this paper, we propose a dynamic octree structure called _i-Octree_, which incrementally updates the octree with new points and enables fast NNS. In addition, our _i-Octree_ is highly efficient in both time and memory, adaptable to various types of points, and allows for on-tree down-sampling and box-wise deletion. We conduct validation experiments on both randomized data and real-world open datasets to assess the effectiveness of _i-Octree_. In the randomized data experiments, our _i-Octree_ demonstrates significant improvements in runtime compared to the state-of-the-art incremental k-d tree (i.e., the recently proposed _ikd-Tree_[9]). Specifically, it reduces run-time by 64% for building the tree, 66% for point insertion, 30% for KNN search, and 56% for radius neighbors search. Moreover, when applied to real-world data in LiDAR-based SLAM, _i-Octree_ showcases remarkable time performance enhancements. It achieves over twice the speed of the original method while often maintaining higher accuracy levels. Furthermore, our implementation of _i-Octree_ is open-sourced on Github1.
Footnote 1: [https://sites.google.com/view/I-Octree](https://sites.google.com/view/I-Octree)
The remaining paper is organized as follows: the design of _i-Octree_ is described in Section II. Experiments are shown in Section III, followed by conclusions in Section IV.
## II _i-Octree_ Design and Implementation
The _i-Octree_ takes sequential point clouds as input with two objectives: dynamically maintaining a global map and performing fast NNS (i.e., KNN search and radius neighbors search) on the map. Fig. 1 illustrates a typical application of the _i-Octree_. The range sensors continuously perceive
their surroundings and generate sequential 3D range data periodically. The initial scan of the range data is utilized to construct the _i-Octree_ and define the global coordinate frame. The _i-Octree_ then facilitates the establishment of correspondences between the newly arriving data and the historical data through KNN search or radius neighbors search. Based on these correspondences, the poses of the new data can be estimated, and the 3D points with pose are added to the _i-Octree_. To prevent the map size in _i-Octree_ from growing uncontrollably, only map points within a large local region (i.e., axis-aligned box) centered around the current position are maintained. In the following, we first describe the data structure and construction of the _i-Octree_, then we focus on dynamic updates and NNS.
### _Data Structure and Construction_
An _i-Octree_ node has up to eight child nodes, each corresponding to one octant of the overlying axis-aligned cube. An octant, starting with an axis-aligned bounding box with center \(\mathbf{c}\in\mathbb{R}^{3}\) and equal extents \(\mathbf{e}\in\mathbb{R}\), is subdivided recursively into smaller octants of extent \(\frac{1}{2}\mathbf{e}\) until it contains fewer points than a given bucket size \(b\), or its extent is less than a minimal extent \(e_{min}\). For memory efficiency, octants without points are not created. In addition, we only keep the indices and coordinates of points in each leaf octant, and there are no points in non-leaf octants. To enable extremely fast access to each point in each leaf octant, we propose a local spatially continuous storing strategy (as shown in Fig. 2) which reallocates a segment of continuous memory for storing the point information in the leaf octant after subdivision. Furthermore, the reallocation facilitates box-wise deletion and incremental update, as it allows operating on a segment of memory without influencing others.
Based on above consideration, an octant \(C_{o}\) of our _i-Octree_ contains the center \(\mathbf{c}_{o}\in\mathbb{R}^{3}\), the extent \(e_{o}\), the points \(\mathbf{P}_{o}\) storing coordinates and indices, and a pointer pointing to the address of the first child octant. The subscript "o" is used to distinguish between different octants. Particularly, let \(C_{r}\) be the root octant, and \(\mathbf{P}_{r},e_{r}\), and \(\mathbf{c}_{r}\) are the points, the extent, and the center, respectively.
As for building an incremental octree, we first eliminate invalid points, calculate the axis-aligned bounding box of all valid points, and keep only the indices and coordinates of the valid points. Then, starting at the root, the _i-Octree_ recursively splits the axis-aligned bounding box at its center into eight cubes indexed by Morton codes [18] and subdivides all the points in the current octant into each cube according to their computed cube indices. When a stopping criterion is satisfied, a leaf octant is created and a segment of continuous memory is allocated to store the information of the points in the leaf node.
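The construction above can be summarized by the following Python sketch of a leaf-based octree (the released implementation is in C++; the class and function names here are illustrative assumptions, and `extent` denotes the half side length of the cube).

```python
import numpy as np

class Octant:
    """A node of the leaf-based octree: centre, half-extent, point indices (leaves only), children."""
    def __init__(self, center, extent, indices):
        self.center = np.asarray(center, dtype=float)
        self.extent = float(extent)
        self.indices = indices          # stored contiguously only in leaf octants
        self.children = [None] * 8

def build(points, indices, center, extent, bucket_size=32, min_extent=0.01):
    """Recursively build an octant over `indices` into `points`."""
    node = Octant(center, extent, indices)
    if len(indices) <= bucket_size or extent <= min_extent:
        return node                      # leaf octant: keep its point indices contiguously
    node.indices = None                  # non-leaf octants store no points
    groups = [[] for _ in range(8)]
    for i in indices:                    # Morton-style child code of the point
        code = sum((points[i][d] > node.center[d]) << d for d in range(3))
        groups[code].append(i)
    for code, idx in enumerate(groups):
        if not idx:
            continue                     # empty octants are never created
        offset = np.array([extent / 2 if (code >> d) & 1 else -extent / 2 for d in range(3)])
        node.children[code] = build(points, idx, node.center + offset, extent / 2,
                                    bucket_size, min_extent)
    return node
```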
### _Dynamic Updates_
The dynamic updates include insertion of one or more points (i.e, incremental update) and delete of all points in an axis-aligned box (i.e., box-wise delete). The insertion is integrated with down-sampling, which maintains the octree at a pre-determined resolution.
#### Iii-B1 Incremental Update
When inserting new points, we have to consider the situation where some points lie beyond the boundary of the axis-aligned bounding box of the original tree. Once there are points out of the range of the octree, we expand the bounding box by creating a new root octant whose children contain the current root octant. This process may be executed several times to ensure that all new points are within the range of the tree. Then, the new points are added to the expanded octree (see Fig. 3).
To support efficient point queries in robotic applications, _i-Octree_ supports down-sampling that executes simultaneously with point insertion. The down-sampling focuses on the new points and deletes those that satisfy a certain condition: they are subdivided into a leaf octant whose extent is less than \(2e_{min}\) and whose size is larger than \(b/8\).
The process of adding new points to an octant \(C_{o}\) is similar to the construction of an octant. If \(C_{o}\) is a leaf node and it
Fig. 1: An example of using _i-Octree_ in odometry. The _i-Octree_ and odometry collaborate to estimate the poses of the 3D data obtained from the range sensors. The _i-Octree_ provides a robust and efficient data structure for storing and querying the 3D data, while the odometry enables the estimation of the poses of the data points.
Fig. 3: Fig. (a) and (b) illustrate insertion of new points (red) out of the range to the _i-Octree_. In (a), the left yellow cube is the root octant as well as the leaf octant with points (black) in the beginning. After inserting, the root octant becomes the light purple cube. In (b), the purple node is the root octant and it is updated after inserting new octants (dashed black rectangle).
Fig. 2: Illustration of the locations of an octant’s points in memory. (a) scattered locations; (b) continuous locations.
satisfies the subdivision criteria, all points (i.e., old points and newly added points) in \(C_{o}\) are recursively subdivided into child octants. If down-sampling is enabled, \(e_{o}\leq 2e_{min}\), and \(|\mathbf{P}_{o}|>b/8\), the new points will be deleted later instead of being added to \(C_{o}\). Otherwise, a segment of continuous memory is allocated for the updated points. If \(C_{o}\) has child octants, the problem becomes assigning the newly added points to the various children, and only the new points need to be subdivided. This process is similar to the one mentioned above, except for the recursive updating of octants.
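A simplified Python sketch of this incremental update is shown below, reusing the `Octant` and `build` helpers from the construction sketch above. Root expansion for out-of-range points is omitted, and the down-sampling check is simplified, so this is an illustration of the logic rather than the authors' implementation.

```python
import numpy as np

def insert(node, points, new_indices, bucket_size=32, min_extent=0.01, downsample=True):
    """Insert new point indices into octant `node` (sketch of the incremental update)."""
    if all(c is None for c in node.children):               # leaf octant
        if downsample and node.extent <= 2 * min_extent and len(node.indices) > bucket_size / 8:
            return                                           # on-tree down-sampling: drop the new points
        merged = list(node.indices) + list(new_indices)
        if len(merged) <= bucket_size or node.extent <= min_extent:
            node.indices = merged                            # reallocate one contiguous segment
            return
        rebuilt = build(points, merged, node.center, node.extent, bucket_size, min_extent)
        node.indices, node.children = None, rebuilt.children  # subdivision criteria met
        return
    groups = [[] for _ in range(8)]                          # non-leaf: route only the new points
    for i in new_indices:
        code = sum((points[i][d] > node.center[d]) << d for d in range(3))
        groups[code].append(i)
    for code, idx in enumerate(groups):
        if not idx:
            continue
        child = node.children[code]
        if child is None:
            offset = np.array([node.extent / 2 if (code >> d) & 1 else -node.extent / 2
                               for d in range(3)])
            node.children[code] = build(points, idx, node.center + offset, node.extent / 2,
                                        bucket_size, min_extent)
        else:
            insert(child, points, idx, bucket_size, min_extent, downsample)
```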
#### Ii-B2 Box-wise Delete
In certain robotic applications, such as SLAM, only the points near the agent are required to estimate the state. Consequently, the points located far away from the agent in the _i-Octree_ are not essential and can be removed for efficiency reasons.
When it comes to removing unnecessary points in an axis-aligned cuboid, instead of directly searching for points within the given space and deleting them, the _i-Octree_ first checks whether the octants are inside the given box. All octants inside the given box are directly deleted without searching for points in them, which significantly reduces the deletion time. The deletion of octants has no influence on others thanks to the local spatially continuous storing strategy. For the leaf octants overlapping with the given box, we delete the points within the box and allocate a new segment of memory for the remaining points. If a leaf octant contains no points after deletion, it is deleted, as shown in Fig. 4.
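The box-wise delete can be sketched as the recursion below, again reusing the `Octant` structure from the construction sketch. The function name and return convention (returning `None` for a discarded octant) are illustrative assumptions.

```python
import numpy as np

def box_delete(node, points, box_min, box_max):
    """Delete every point inside the axis-aligned box [box_min, box_max]; returns the pruned
    octant, or None when the whole octant can be discarded."""
    box_min, box_max = np.asarray(box_min), np.asarray(box_max)
    lo, hi = node.center - node.extent, node.center + node.extent
    if np.all(lo >= box_min) and np.all(hi <= box_max):
        return None                                          # octant fully inside the box: drop it outright
    if np.any(lo >= box_max) or np.any(hi <= box_min):
        return node                                          # no overlap: nothing to do
    if all(c is None for c in node.children):                # overlapping leaf: filter its points
        kept = [i for i in node.indices
                if not (np.all(points[i] >= box_min) and np.all(points[i] <= box_max))]
        if not kept:
            return None                                      # empty leaf octants are removed
        node.indices = kept                                  # reallocate a fresh contiguous segment
        return node
    node.children = [c if c is None else box_delete(c, points, box_min, box_max)
                     for c in node.children]
    return node if any(c is not None for c in node.children) else None
```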
### _K-Nearest Neighbors Search_
Using _i-Octree_, we can retrieve the \(k\) nearest neighbors of an arbitrary query point \(\mathbf{q}\in\mathbb{R}^{3}\). The nearest neighbor search on the _i-Octree_ is an exact search [19] rather than an approximate one as in [20]. We maintain a priority queue \(\mathbf{h}\) with a maximal length of \(k\) to store the k-nearest neighbors encountered so far and their distances to the query point \(\mathbf{q}\). The last element of \(\mathbf{h}\) always has the largest distance, regardless of pushing or popping. The axis-aligned box of each octant is utilized effectively to accelerate the nearest neighbor search using a "bounds-overlap-ball" test [21] and a proposed priority search order pre-computed according to the fixed indices of child octants.
Firstly, we recursively search down the _i-Octree_ from its root node until we reach the leaf node closest to \(\mathbf{q}\). Then the distances from \(\mathbf{q}\) to all points in the leaf node, together with the corresponding indices, are pushed to the priority queue \(\mathbf{h}\). All leaf nodes encountered so far are searched before \(\mathbf{h}\) is full. If \(\mathbf{h}\) is full and the search ball \(\mathbf{S}(\mathbf{q},d_{max})\), defined by \(\mathbf{q}\) and the largest distance \(d_{max}\) in \(\mathbf{h}\), is inside the axis-aligned box of the current octant, the search is over. If an octant \(C_{k}\) does not contain the search ball \(\mathbf{S}(\mathbf{q},d_{max})\), then one of the following three conditions must be satisfied:
\[e_{k}-|q_{x}-c_{k,x}|<d_{max}, \tag{1}\]
\[e_{k}-|q_{y}-c_{k,y}|<d_{max}, \tag{2}\]
\[e_{k}-|q_{z}-c_{k,z}|<d_{max}, \tag{3}\]
where \(\mathbf{q}=(q_{x},q_{y},q_{z})^{T},\mathbf{c}_{k}=(c_{k,x},c_{k,y},c_{k,z})^{T}\). If none of the above conditions hold, the search ball is inside the octant.
We update \(\mathbf{h}\) by investigating octants overlapping the search ball \(\mathbf{S}(\mathbf{q},d_{max})\), since only these could potentially contain points that are also inside the desired neighborhood. We define the distance \(d\) between \(\mathbf{q}\) and \(C_{k}\) as below:
\[d=\lVert\sigma(|\mathbf{q}-\mathbf{c}_{o}|-\mathbf{1}e_{o})\rVert_{2}, \tag{4}\]
where \(\mathbf{1}=(1,1,1)^{T}\) and \(\sigma(x)=x\) if \(x>0\), otherwise \(\sigma(x)=0\). \(d<d_{max}\) indicates that \(C_{k}\) overlaps \(\mathbf{S}(\mathbf{q},d_{max})\). In order to speed up the process, we sort the candidate child octants of \(C_{o}\) according to their distances to \(C_{k}\) and obtain 8 different sequences stored in \(\mathbf{I}_{order}\), such that closer octants are searched earlier and the search terminates sooner.
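The geometric tests used during the KNN search, i.e. conditions (1)-(3) and the octant-to-query distance of Equation 4, can be sketched as below. The child ordering shown is a simple heuristic that approximates the pre-computed priority order \(\mathbf{I}_{order}\); all function names are illustrative assumptions.

```python
import numpy as np

def ball_inside_octant(q, center, extent, d_max):
    """True when the search ball S(q, d_max) lies entirely inside the octant,
    i.e. none of conditions (1)-(3) holds."""
    return bool(np.all(extent - np.abs(np.asarray(q) - center) >= d_max))

def octant_to_query_distance(q, center, extent):
    """Distance from the query q to the octant's axis-aligned box (Equation 4)."""
    return float(np.linalg.norm(np.maximum(np.abs(np.asarray(q) - center) - extent, 0.0)))

def child_search_order(q, center):
    """Heuristic stand-in for I_order: children closer to the child containing q come first."""
    first = sum((q[d] > center[d]) << d for d in range(3))
    return sorted(range(8), key=lambda c: bin(c ^ first).count("1"))
```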
### _Radius Neighbors Search_
For an arbitrary query point \(\mathbf{q}\in\mathbb{R}^{3}\) and radius \(r\), the radius neighbors search method finds every point \(\mathbf{p}\) satisfying \(\lVert\mathbf{p}-\mathbf{q}\rVert_{2}<r\). The process is similar to the KNN search except for a fixed radius and an unlimited \(k\). We adopt the pruning strategy proposed by Behley et al. [16] with improvements to reduce the computation cost. Before testing whether \(\mathbf{S}(\mathbf{q},r)\) completely contains an octant \(C_{k}\), we make a simple test of whether \(r^{2}\) is larger than \(3e_{k}^{2}\); this simple test is a cheap necessary condition for \(\mathbf{S}(\mathbf{q},r)\) to contain \(C_{k}\). Besides, we try to avoid extracting square roots in the algorithm. These two tricks, together with our local spatially continuous storing strategy, play key roles in accelerating the search.
## III Application Experiments
We compare our _i-Octree_ to publicly available implementations of static k-d trees (i.e., the k-d tree used in the Point Cloud Library (PCL)), an incremental k-d tree (i.e., _ikd-Tree_), and the PCL octree. We conduct the experiments on randomized data and various open real-world datasets. We first evaluate our _i-Octree_ against the PCL octree and the state-of-the-art incremental k-d tree, i.e., _ikd-Tree_, for tree construction, point insertion, KNN search, radius neighbors search, and box-wise deletion on randomized three-dimensional (3D) point data of different sizes. Then, we validate the _i-Octree_ in actual robotic applications on real-world datasets by replacing the static k-d tree in LiDAR-based SLAM without any refinement and evaluating the time performance and accuracy. All experiments are performed on a PC with an Intel i7-13700K CPU at 3.40GHz.
Fig. 4: Illustration of box-wise delete. The blue box is the given box, and the numbers 0 to 7 (Morton codes) are the indices of the octants. Octants with indices 0, 1, 4, and 5 are directly deleted. Octant 7 is also deleted due to having no points.
### _Randomized Data Experiments_
The efficiency of our _i-Octree_ is fully investigated by the following experiments on randomized incremental data.
#### Iv-A1 Performance Comparison
In this experiment, we investigate the time performance of three implementations of dynamic data structures (i.e., the _i-Octree_, the _ikd-Tree_, and the PCL octree). Both the _ikd-Tree_ and the PCL octree are state-of-the-art implementations with high efficiency that support point insertion, which is the key reason why we choose them for a fair comparison with our _i-Octree_. In this experiment, we adopt the same setup as above with fixed \(b\) and \(e_{min}\). Besides, down-sampling is not enabled on the _i-Octree_ and the _ikd-Tree_. We record the time for tree construction and point insertion, the total time of KNN search for 200 points, and the total time of radius neighbors search for another 200 points at each step.
The dynamic data structure comparison over different tree size is shown in Fig. 5 and the Table I shows the comparison of average time consumption. When the tree size increases from 200,000 to 400,000, the time for point insertion (without down-sampling) of _i-Octree_ and PCL octree remains stable at 0.8\(ms\) while that for the _ikd-Tree_ is 3 times larger and grows linearly with the tree size.
Even the peak insertion time of _i-Octree_ is less than the regular insertion time of _ikd-Tree_. The time for KNN search of the three implementations is stable. Our method runs 6 times faster than the PCL octree and reduces the run time by 33% over the _ikd-Tree_. As for the time taken for radius neighbors search, the _i-Octree_ exhibits the slowest growth, running approximately 2 times faster than the _ikd-Tree_ and 5 times faster than the PCL octree. Besides, the construction time of the _i-Octree_ is only 36% of that of _ikd-Tree_ and 31% of that of PCL octree.
#### Iv-A2 Box-wise Delete
The _ikd-Tree_ is chosen to be compared with the _i-Octree_ since both support the box-wise delete. This experiment investigates the time performance of deleting all points in an axis-aligned box and the time performance of KNN serach and radius neighbors search after delete. In the experiment, we sample 400,000 points randomly in a \(10m\times 10m\times 10m\) space (i.e., the workspace) to initialize the incremental octree. Then 1,00 test operations are conducted on the trees. In each test operation, we randomly sample 200 points in the workspace for KNN search and another 200 points for radius neighbors search. For every 20 test operations, one axis-aligned box is sampled in the workspace with side length of \(1.0m\) and points contained in the box are deleted from the trees. We record the tree size after each delete, the time of box-wise delete, the total time of kNN search for 200 points, and the total time of radius neighbors search for another 200 points at each step.
The time of box-wise delete and the tree size are shown in Table II. The run time performance of _i-Octree_ and _ikd-Tree_ on KNN search and radius neighbors search is shown in Fig. 6. In this experiment, it appears that the box-wise delete operation on the _ikd-Tree_ does not remove all points within the given box as shown in Table II. The results show that our _i-Octree_ runs over 10 times faster than the _ikd-Tree_ on the box-wise delete and approximately 3 times faster on the radius neighbors search. Besides, the _i-Octree_ reduces the run time of KNN search by 58% over the _ikd-Tree_.
### _Real-world Data Experiments_
We test our developed _i-Octree_ in actual robotic applications (e.g., LiDAR-inertial SLAM [8, 22, 23] and pure LiDAR SLAM [5, 6, 24]). In these experiments, we directly replace the static k-d tree in the LiDAR-based
Fig. 5: Dynamic data structure comparison over different tree size.
Fig. 6: The comparison of time consumption.
SLAM with our proposed _i-Octree_ without any refinement and evaluate the time performance and accuracy. To ensure a fair and complete comparison, we test on three datasets with different sensor setups: M2DGR [25], the Newer College Dataset [26] and NCLT [27]. M2DGR is a low-beam LiDAR dataset, the Newer College Dataset is a high-beam LiDAR dataset, and NCLT is a long-term dataset. They all have ground truth trajectories, such that we can evaluate the accuracy of the LiDAR trajectories using the absolute trajectory error (ATE) [28].
For the sake of simplicity, we focus on two representative LiDAR-based SLAM algorithms: LIO-SAM [22], which combines LiDAR and inertial measurements (LiDAR-inertial SLAM), and FLOAM [24], which relies solely on LiDAR data (pure LiDAR SLAM). Both are state-of-the-art algorithms and use the static k-d tree for nearest point search, which is crucial to match a point in a new LiDAR scan to its correspondences in the map (or the previous scan). For LIO-SAM, we choose LIO_SAM_6AXIS1, which is modified from LIO-SAM to support a wider range of sensors (e.g., 6-axis IMU and low-cost GNSS) without accuracy loss. Besides, the loop closure module of LIO_SAM_6AXIS is deactivated. The parameter setups, shown in Table III, are the same for every sequence in a dataset. Both algorithms build two independent maps, a surface map and an edge map, for which two k-d trees or _i-Octrees_ are built. The SurfRes and EdgeRes parameters determine the minimal extents of the two _i-Octree_ structures, respectively.
Footnote 1: [https://github.com/JokerJohn/LIO_SAM_6AXIS](https://github.com/JokerJohn/LIO_SAM_6AXIS)
#### Iv-B1 Low-beam LiDAR Data
The first dataset is M2DGR, which is a large-scale dataset collected by an unmanned ground vehicle (UGV) with a full sensor-suite including a Velodyne VLP-32C LiDAR sampled at 10 \(Hz\), a 9-axis Handsfree A9 inertial measurement unit (IMU) sampled at 150 \(Hz\), and other sensors. The dataset comprises 36 sequences captured in diverse scenarios including both indoor and outdoor environments on the university campus. The ground truth trajectories of the outdoor sequences are obtained by the Xsens MTI 680G GNSS-RTK suite while for the indoor environment, a motion-capture system named Vicon Vero 2.2 whose localization accuracy is \(1mm\) is used to collect the ground truth. We choose several sequences from this dataset and the detailed information is summarized in Table IV.
The maximal and average time consumption of the incremental update, as well as the maximal and average total time consumption for each scan, are listed in Table V. For LOAM running in the outdoor environment, the size of the map grows gradually, making it computationally costly when using the static k-d tree. The rebuilding process may cost a lot when the map is large, which sharply deteriorates the real-time performance. The time for rebuilding the entire k-d tree accounts for a large percentage of the total time per scan, as shown in Fig. 7. After replacing the k-d tree with the _i-Octree_, the incremental update time reduces to less than 10\(ms\) and real-time performance is approximately guaranteed. FLOAM with the _i-Octree_ runs over 5 times faster than the original one, sometimes with a slight accuracy loss, as shown in Table VI. LIO_SAM_6AXIS with the _i-Octree_ runs over 2 times faster than the original one on almost all sequences, and the accuracy is usually improved. Besides, the _i-Octree_ reduces the peak time, which would otherwise degrade the real-time ability.
#### Iv-B2 High-beam LiDAR Data
The second dataset is the Newer College Dataset, which is a multi-camera LiDAR-inertial dataset covering 4.5 \(km\) of walking distance. The dataset is collected by a handheld device equipped with an Ouster OS0-128 LiDAR sampled at 10 \(Hz\) with an embedded IMU sampled at 100 \(Hz\). Precise, centimetre-accurate ground truth is provided, calculated by registering each undistorted LiDAR frame to a prior map scanned by a 3D imaging laser scanner, the Leica BLK360. Table VII shows the 6 sequences selected from the dataset. These sequences contain small and narrow
Fig. 7: The Comparison of time consumption on the _street_02_ sequence.
passages, large scale open spaces, as well as vegetated areas. Besides, challenging situations such as aggressive motion are presented.
In this test, we only record the average total time consumption of each scan, as shown in Table VIII, and calculate the RMSE of the absolute translational errors, as shown in Table IX. We find that the _i-Octree_ shows promising compatibility with LIO_SAM_6AXIS, since it not only reduces the time consumption considerably but also improves the accuracy. For LOAM, the _i-Octree_ significantly accelerates the processing speed per scan, but sometimes at the cost of degraded accuracy, as it directly replaces the static k-d tree without any refinement.
#### Iii-C3 Long-term LiDAR Data
Finally, we test on a long sequence called _2012-01-08_ from a large-scale dataset named NCLT, captured by a Velodyne HDL-32E LiDAR sampled at 10 \(Hz\), a Microstrain 3DM-GX3-45 IMU sampled at 100 \(Hz\), and other sensors. The length of the sequence is 6.4\(km\) and the duration is 5633\(s\). The average run time of each scan is added to Table VIII, and the absolute translational errors are added to Table IX. FLOAM drifts severely on this large-scale sequence, while LIO_SAM_6AXIS (_i-Octree_) shows surprisingly good performance, except for a slight drift along the z-axis, even without enabling loop closure. LIO_SAM_6AXIS (static k-d tree) can only use the most recent key frames to estimate the poses, whereas LIO_SAM_6AXIS (_i-Octree_) can take advantage of point information from the start, which contributes to its better performance.
## IV Conclusion
In this paper, we propose a novel dynamic octree data structure, _i-Octree_, which supports incremental point insertion with on-tree down-sampling, box-wise deletion, and fast NNS. Besides, a large number of experiments on both randomized data and open datasets show that our _i-Octree_ achieves the best overall performance among state-of-the-art tree data structures. |
2309.08301 | RaSpectLoc: RAman SPECTroscopy-dependent robot LOCalisation | This paper presents a new information source for supporting robot
localisation: material composition. The proposed method complements the
existing visual, structural, and semantic cues utilized in the literature.
However, it has a distinct advantage in its ability to differentiate
structurally, visually or categorically similar objects such as different
doors, by using Raman spectrometers. Such devices can identify the material of
objects it probes through the bonds between the material's molecules. Unlike
similar sensors, such as mass spectroscopy, it does so without damaging the
material or environment. In addition to introducing the first material-based
localisation algorithm, this paper supports the future growth of the field by
presenting a gazebo plugin for Raman spectrometers, material sensing
demonstrations, as well as the first-ever localisation data-set with benchmarks
for material-based localisation. This benchmarking shows that the proposed
technique results in a significant improvement over current state-of-the-art
localisation techniques, achieving 16\% more accurate localisation than the
leading baseline. | Christopher Thomas Thirgood, Oscar Alejandro Mendez Maldonado, Chao Ling, Jonathan Storey, Simon J Hadfield | 2023-09-15T10:45:59Z | http://arxiv.org/abs/2309.08301v2 | # RaSpectLoc: Raman SPECTroscopy-dependent robot LOCalisation
###### Abstract
This paper presents a new information source for supporting robot localisation: material composition. The proposed method complements the existing visual, structural, and semantic cues utilized in the literature. However, it has a distinct advantage in its ability to differentiate structurally [23], visually [25] or categorically [1] similar objects such as different doors, by using Raman spectrometers. Such devices can identify the material of objects it probes through the bonds between the material's molecules. Unlike similar sensors, such as mass spectroscopy, it does so without damaging the material or environment. In addition to introducing the first material-based localisation algorithm, this paper supports the future growth of the field by presenting a gazebo plugin for Raman spectrometers, material sensing demonstrations, as well as the first-ever localisation data-set with benchmarks for material-based localisation. This benchmarking shows that the proposed technique results in a significant improvement over current state-of-the-art localisation techniques, achieving 16% more accurate localisation than the leading baseline.
The code and dataset will be released at: [https://github.com/ThirgoodC/RaSpectLoc](https://github.com/ThirgoodC/RaSpectLoc)
## I Introduction
Mobile robots have historically relied on depth sensors for localisation tasks; more recently, visual (RGB) sensors have become ubiquitous. However, these traditional RGB approaches face challenges in urban environments where large planar regions of uniform colour often dominate and distinct visual landmarks are lacking. This is especially problematic in self-similar spaces such as hotels with identical rooms and rotationally symmetric floorplans.
A novel solution is to sense the material composition of the environment, which can help distinguish visually similar objects, such as different doors. Additionally, small deviations in material composition, such as impurities in concrete or uneven levels of corrosion, can serve as local landmarks. These observations can be made and the material composition identified using sensors such as mass spectrometers and Raman spectrometers.
In this paper, we propose a new localisation approach that leverages the capabilities of Raman spectroscopy. Raman probes are commonly used in physics as active sensors to analyze materials at a distance without damaging the subject matter. They work by analyzing molecular interactions and bonds through light scattering. This results in a Raman spectrum featuring peaks corresponding to specific molecular bond vibrations.
We present an approach that uses spectral responses produced by a Raman spectrometer as its central sensing unit. It can work with or without range information and demonstrates superior performance to vision and depth-based methods. Furthermore, it effectively handles inaccuracies introduced in raw responses from the sensor.
Fig. 1 demonstrates a visualisation of RaSpectLoc, where spectra are compared against each other within a material map.
In summary, the contributions of this paper are:
1. A novel approach in mobile robotics that utilizes material-based spectroscopic data for localisation.
2. A plug-in for the popular simulator, Gazebo, to simulate Raman spectrometers.
3. A new dataset containing maps and recordings of material composition from a test environment.
4. A benchmarked performance evaluation of our approach, which achieves a far lower average trajectory error than current state-of-the-art localisation approaches that utilize RGBD sensors.
## II Literature Review
Mobile robot applications have traditionally used scanning approaches to localise within a known map. These approaches have historically been dominated by particle filter-based methods, such as AMCL [23] and GMapping [26] which have been popular solutions due to the integration with the ROS navigation stack [4]. The technique
Fig. 1: Visualisation of the RaSpectLoc system in action. Spectral readings are compared between the environment and Map along a number of bearing vectors.
is range-based (RMCL) and is highly adaptable, adjusting the number of hypotheses generated by the particle filter as confidence in the result improves. However, AMCL may fail if the map and the sensed environment differ. Many 'Simultaneous Localisation and Mapping' (SLAM) systems intend to solve this by generating a map at run-time and identifying 'loop closures' to correct drift [2, 3] in static environments. For typical dynamic scenes, such as industrial work-spaces [5, 19], these approaches have attempted to track the pose using an a-priori map and a secondary short-term map at two different timescales. The problem with such methods is that they are too computationally complex for mobile robots to reliably localise over long periods with accumulated errors. Our approach aims to resolve this problem via the use of efficient spectral comparison functions which can provide continuous similarity scores. These are robust to misalignments.
Many RMCL approaches seek to enhance MCL localisation for mobile robots by using 3D point clouds generated from 3D LiDAR systems. Maken et al. [20] utilized Iterative Closest Point (ICP) with LiDAR sensors for pose estimation and localisation with MCL. Although improvements were evident in the results, the accuracy of the covariance estimation of the ICP output poses is dramatically affected by the positioning accuracy. In recent deployments, machine learning models have become common in localisation systems that utilize point clouds to improve accuracy and reduce covariance estimation errors. For example, SegMap, proposed by Dube et al. [7], uses a Convolutional Neural Network (CNN) to identify and segment geometric primitives in point clouds. However, the use of local and global descriptors leads to slow performance due to the expensive descriptor similarity networks. As a solution, SemSegMap [6], proposed by Cramariuc et al., enriches point clouds with CNN segmentation overlays from RGB cameras and adds them to the map. These approaches still require a compact and computationally efficient representation of the objects for segmentation, providing a high representation strength. Although SegMap and SemSegMap do not use an MCL approach, many segmentation-based MCL approaches have been used in combination with geometry and range, as in Hendrikx et al. [16]. Another example is the 3D geometry in BIM models, which can be used to estimate the camera pose. The models are then segmented to gain local information about the area using range readings from a 2D range-based sensor. Mendez et al. proposed SeDAR [1], which aims to simplify these models with a more robust approach to RMCL by removing the depth readings altogether. With a semantic understanding of the environment obtained through a simple CNN, the weights of the particles are adjusted based on the likelihood of particle observations in the segmented map. In contrast to the discrete semantic label comparisons used in SeDAR [1], our proposed material comparison offers a much finer level of matching granularity. Furthermore, computing likelihoods directly from the Raman spectra in the environment without visual classification networks significantly reduces computational costs.
There have been attempts to extend the applicability of AMCL to non-visual sensing modalities. One such example is seen with haptic sensors, which can provide touch-based sensing of the environment [32]. Buchanan et al. [8] proposed to use data segments from the surroundings with a neural network to update MCL from a map of different terrains. However, this approach requires extensive data on the environment's terrain and prolonged pre-training, which can be challenging. Another unconventional sensing approach was proposed by Serrano et al. [9], who used the Wi-Fi signal strength of mobile devices for MCL. However, this approach depends on the Wi-Fi signal strength, which can be unreliable in complex indoor environments where signals are absorbed by many walls before being sensed by the receiver. Visible Light Positioning (VLP) sensors have also been used, with W. Guan et al. [10] being able to identify objects in a room using LED patterns from lights and other devices. Despite their ingenuity, these unconventional sensing methods still face challenges with even minor changes in the environment, leading to significant inaccuracies in the MCL algorithm. In contrast, Raman spectrometers have the advantage of treating every part of the environment as an identifying landmark due to their material fingerprint sensing.
Raman spectrometers are remote scanning instruments often used to identify materials or substances. Noise in the sensor comes in three forms: shot, dark, and read noise, which arise from the Poisson statistics of the data, the background, and the electronics of the device, respectively [21]. Such noise can be overwhelmed by Gaussian noise from the device during the scanning process with modern probes. To identify or recognise similar Raman spectra, most approaches use a large database with neural networks, incurring long training and inference times [14, 30]. In contrast, our approach simplifies and extends this by using the likelihood between spectra stored in the map and the latest spectra read from the environment. De-noising methods such as those proposed by Lussier et al. and Horgan et al. [11, 14] show the benefits of using deep-learning models to remove noise, but de-noising can also be performed via other methods, as suggested by Zhao et al. [17]. Analytical likelihood similarity algorithms have been proposed to identify the type of material from the peaks of the spectra, as proposed by Foster et al. [12]. We propose alternative Raman spectral similarity functions in RaSpectLoc, providing new algorithms while still delivering a system with low computational complexity. The most effective similarity functions for Raman spectra necessitate peak classification through algorithmic convolutions [15]. This is infeasible for MCL approaches, as they tend to take too long to process each hypothesis.
The main focus of Raman spectrometry in the literature is remote sensing of chemicals using mobile robots [13], whereas we go further and apply this research to a robot localisation approach. This paper proposes a novel approach, utilizing innovative localisation techniques and algorithmic similarity functions for mobile robots equipped with underutilized Raman spectral sensors.
## III Problem definition
Monte-Carlo Localisation by Dellaert et al. [23] provides a framework which performs matching to generate pose hypotheses in a map. The process is as follows:
1. Particles are sampled, either uniformly around a space or in a Gaussian around a pose hypothesis
2. Particles are propagated through a motion model, typically the robot's odometry with added Gaussian noise.
3. Each particle is given a weight based on the accuracy of its observations against the map, typically matching a range and bearing scan line to an existing map
4. A re-sampling step is performed, proportional to the particle's likelihood, before the process repeats.
The current pose \(x_{t}\in SE(2)\), of the robot, can be estimated as a set of possible samples: \(\mathbb{S}_{t}=\{s_{t}^{i};i=1\dots N\}\), given the wheel odometry measurements: \(\mathbb{U}_{t}=\{u_{j};j=1\dots t\}\) and depth sensor measurements: \(\mathbb{Z}_{t}=\{z_{j};j=1\dots t\}\) and a 2D map \(\mathbb{V}\). If all previous odometry and depth measurements are equally weighted, the posterior probability \(P(s_{t}^{i}|\mathbb{U}_{t},\mathbb{Z}_{t},\mathbb{V})\) can be decomposed into an online sequential process:
\[P(s_{t}^{i}|\mathbb{U}_{t},\mathbb{Z}_{t},\mathbb{V})=P(s_{t}^{i}|u_{t},s_{t-1 }^{i})P(\mathbb{Z}_{t-1}|\mathbb{U}_{t-1},s_{t-1}^{i},\mathbb{V}) \tag{1}\]
The motion model in the localisation process is determined by the odometry measurements received from the robot, represented by \(u_{t}\). This information is used to "shift" the particles, assigning a likelihood based on the probability of the final position given the measured odometry. The particles are propagated based on \(u_{t}\), with Gaussian noise added to model sensor noises, as follows:
\[P\left(s_{t}^{i^{\prime}}|u_{t},s_{t-1}^{i}\right)\sim\mathcal{N}\left(u_{t}+ s_{t-1}^{i},\Upsilon_{t}\right) \tag{2}\]
where \(\Upsilon_{t}\) is the covariance of the odometry and \(\mathcal{N}\) is a normal distribution across the dimensions of SE(2).
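A minimal Python sketch of this propagation step is given below; it mirrors the additive form of Equation 2, treating each particle as an SE(2) triple \((x, y, \theta)\). The function name and the noise covariance values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def propagate(particles, u_t, cov):
    """Shift SE(2) particle hypotheses (x, y, theta) by the odometry u_t plus zero-mean
    Gaussian noise with covariance cov (Equation 2)."""
    noise = np.random.multivariate_normal(np.zeros(3), cov, size=len(particles))
    moved = np.asarray(particles, dtype=float) + np.asarray(u_t, dtype=float) + noise
    moved[:, 2] = np.arctan2(np.sin(moved[:, 2]), np.cos(moved[:, 2]))   # wrap heading
    return moved

# 500 particles near the origin, propagated by a small forward-and-turn odometry step:
particles = np.zeros((500, 3))
cov = np.diag([0.02, 0.02, 0.01]) ** 2
print(propagate(particles, np.array([0.10, 0.0, 0.05]), cov)[:3])
```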
The sensor model \(P(\mathbb{Z}_{t-1}|\mathbb{U}_{t-1},s_{t-1}^{i},\mathbb{V})\) measures how well the range scan fits the pose hypothesis. The probability of each range-scan \((r_{t})\) is estimated under the assumption all scans are independent. Two sensor models which are commonly used with AMCL are the "Beam" and "Likelihood-field" models. The Beam model is a raycasting operation where a ray is cast starting from hypothesis \(s_{t}^{i}\) along the bearing \(\theta_{t}^{k}\) and terminates when an occupied cell is reached. The likelihood is estimated as:
\[P_{R}\left(\mathbb{Z}_{t}^{k}|s_{t}^{i^{\prime}}\right)=\exp\left(\frac{- \left(r_{t}^{k}-r_{t}^{k*}\right)^{2}}{2\sigma_{o}^{2}}\right) \tag{3}\]
where \(\sigma_{o}\) is the standard deviation of the sensor measurement noise, \(r_{t}^{k*}\) is the expected range returned by the raycasting operation, and \(r_{t}=r_{t}^{k}\,\forall k=0\ldots N\). The likelihood field model uses a field similar to a chamfer map to quickly estimate the distance to the nearest geometry in the floorplan, eliminating the need for costly raycasting operations. The chamfer map is defined as:
\[C_{i,j}=\min_{k,l}|[i-k,j-l]|,\mathbb{V}_{k,l}\neq 0 \tag{4}\]
where \(C_{i,j}\) is the cost value of the chamfer map at position \((i,j)\) and the minimum is taken over all occupied cells \((k,l)\) of the map (\(\mathbb{V}_{k,l}\neq 0\)). In the likelihood-field model, assuming a Gaussian error distribution, the weight of each particle, \(s^{\prime}\), can be estimated as:
\[P_{r}(\mathbb{Z}_{t}^{k}|s_{t}^{i^{\prime}},\mathbb{V})=\exp\left(-\frac{ \delta_{o}^{2}}{2\sigma_{o}^{2}}\right) \tag{5}\]
where \(\delta_{o}\) is the value obtained from the distance map and \(\sigma_{o}\) is dictated by the noise characteristics of the sensor. During runtime, the endpoint of each bearing-range tuple is computed for each pose hypothesis, and its probability is related to the distance stored in the chamfer map. The likelihood field model is faster and produces more accurate results compared to the beam model as it is more robust to orientation errors.
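The two range-based weights of Eqs. 3 and 5 can be sketched as follows (NumPy; the chamfer map is assumed to be a pre-computed 2D array of distances to the nearest occupied cell):

```python
import numpy as np

def beam_likelihood(r_measured, r_raycast, sigma_o):
    """Eq. 3: Gaussian likelihood of a measured range given the raycast range."""
    return np.exp(-((r_measured - r_raycast) ** 2) / (2 * sigma_o ** 2))

def likelihood_field(chamfer_map, endpoints, sigma_o):
    """Eq. 5: look up the distance delta_o at each scan endpoint (integer grid
    indices) and convert it to a likelihood, avoiding per-particle raycasting."""
    delta_o = chamfer_map[endpoints[:, 0], endpoints[:, 1]]
    return np.exp(-(delta_o ** 2) / (2 * sigma_o ** 2))
```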
RaSpectLoc is similar to AMCL but replaces many of the operations listed above for material-based sensing. Our method includes adapting a floorplan for Raman spectra, a novel spectral-based raycasting and a particle weight calculation method for each hypothesis.
## IV Methodology
Fig. 3 illustrates an overview of the RaSpectLoc system. The system requires input odometry of the robot and a materials map with the Raman spectra embedded. After initialisation, section IV-B details how Raman spectra are accumulated around the robot. Section IV-C describes how the weight is calculated based on the spectral input from the mounted probe of the mobile robot. The particles propagate according to the motion model of Eq. 2. The system then repeats this process measuring the materials in the local environment around the robot with the Raman Probe and comparing them against the map.
### _Material Floorplans_
The RaSpectLoc system offers a more innovative solution, utilizing spectral data obtained from a Raman probe. The system compares this data against previously recorded Raman spectra that are incorporated into the map. In this work, each cell on the floorplan is associated with a unique Raman spectrum measurement. This map is used in the ray
Fig. 2: Material Map used in experimentation
casting operation to determine the likelihood between the data and ends of the rays. The materials represented in our floorplan for this work include various spectra for painted walls, laminate wood, painted metals, wood and plastic. These materials are broadly grouped based on similarity for visualization purposes, depicted in Fig. 2.
The map is converted to an occupancy grid when initially uploaded to a map server. If \(\mathbb{M}\) is a set of possible 2D positions, the map can then be defined as \(\mathbb{V}=\{v_{m};m\in\mathbb{M}\}\). Each embedded spectrum is formally described by \(\mathbb{I}_{i}=\{i_{0},\ldots,i_{n}\}\), where each element \(i_{j}\) reflects the intensity at a particular wavelength.
### _Raman Sensor_
At the time \(t\), the sensor on the robot generates a message consisting of a tuple of ranges, bearings, and Raman spectra, \(\mathbb{Z}_{t}\). Modern Raman spectra allow a non-invasive technique with background noise scattering removal for high-accuracy readings with little post-processing for smoothed spectral results. RaSpectLoc can also operate without an explicit range sensor (i.e. only using bearing and spectra tuples) as explained in section IV-D. In this case, raycasting is used as with the "beam" sensor model, and the range component of the likelihood is omitted. The measurements are configured as \(\mathbb{Z}_{t}=[\langle r_{t}^{k},\Theta_{t}^{k},\mathbb{I}_{t}^{k}\rangle;k=1 \ldots k]\), which represents the range from the depth camera, the bearing along the scan line, and the Raman spectrum respectively. The scan line is assumed to be parallel to the ground plane and aligned with the horizontal axis of the mobile robot's sensor mounting point. Raman measurements are taken by rotating the joint around the robot to complete a single scan message with \(k\) measurements. In the gazebo simulator, our gazebo plug-in can be mounted anywhere on a modelled robot and produce a rotating scan line.
Our approach in RaSpectLoc aims to tackle the challenges faced in localisation for modular robots, which are currently dominated by geometry primitive recognition methods that require higher-end hardware. We leverage the similarity between spectra that are read from the environment after normalisation and baseline correction is performed on them. The information we gather is converted into chamfer maps, enabling the RaSpectLoc system to calculate the similarity for each particle. Despite the advanced technology, current Raman probes still have a level of noise that must be accounted for when computing the similarity between observation and map. The shot, readout, thermal background, and baseline noise are Poisson distributed and assumed to be proportional to the square root of the number of photons detected. Most Raman systems require proximity to the target in question. The IS-Instruments RP1000 probe, coupled to a HES2000, provides the capability of making Raman measurements at ranges over 1m. Such a probe is ideal for robot deployment and an ideal candidate for this application.
The noise in low response areas of spectra substantially affects the results using many comparison functions. When there is a significant number of (photonic) events, Poisson noise is indistinguishable from Gaussian noise as described by Larkin [27] and also by Lewis et al. [28]. Therefore,
Fig. 4: RP1000 Raman Probe [31]
Fig. 3: RaSpectLoc system diagram
we use a Squared exponential mapping function to convert spectrum distance to a likelihood, \(P_{s}\), mapped between 0 \(\rightarrow\) 1:
\[P_{s}\left(\mathbb{Z}_{t}|s_{t},\mathbb{V}\right)=\exp\left(\frac{-f\left( \mathbb{I}_{t}^{k},\mathbb{I}^{m}\right)^{2}}{K}\right) \tag{6}\]
where \(f\left(\mathbb{I}_{t}^{k},\mathbb{I}^{m}\right)\) is the distance function between the observed spectrum \(\mathbb{I}_{t}^{k}\) and \(\mathbb{I}^{m}\), the spectrum at the endpoint of a raycasted beam \(m\), while \(K\) is a constant used to scale the different distance metrics into the same range. Our approach supports previous state-of-the-art mathematical similarity functions for spectra [15], in addition to the newly proposed methods.
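A sketch of Eq. 6, where the distance function \(f\) is any of the similarity functions of the next subsection and \(K\) is the user-chosen scaling constant:

```python
import numpy as np

def spectral_likelihood(spectrum_obs, spectrum_map, f, K):
    """Eq. 6: squared-exponential mapping from a spectral distance to a
    likelihood P_s in [0, 1]."""
    d = f(spectrum_obs, spectrum_map)
    return float(np.exp(-(d ** 2) / K))
```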
### _Spectral Similarity Functions_
In RaSpectLoc, various similarity functions can be used to compare Raman spectra and calculate the particle weights. We will describe these functions in detail and their advantages and disadvantages. The similarity functions described in this section are the Spectral Linear Kernel, Modified Euclidean distance [15], Wasserstein distance [18], Kullback-Leibler distance and Spectral Angle Mapping (SAM).
#### IV-C1 Spectral Linear Kernel
The spectral linear kernel (SLK), proposed by Conroy et al. [18], was designed for comparing Raman spectra. It can be expressed by:
\[f_{slk}(\mathbb{I}_{t}^{k},\mathbb{I}^{m})=\sum_{i_{n}^{k}\in\mathbb{I}_{t}^{k }}\left(i_{n}^{k}\cdot i_{n}^{m}+\sum_{j=n-W}^{j=n+W}(i_{n}^{k}-i_{j}^{k})(i_{n }^{m}-i_{j}^{m})\right) \tag{7}\]
where \(\mathbb{I}^{k}\) and \(\mathbb{I}^{m}\) are the input spectra. The kernel considers the original intensity values at each wave number and includes the difference between the intensity of its neighbouring points on the spectrum. The windowed difference product helps compare the relative shapes of the two spectra. However, relative to other functions mentioned in this paper it is slow and results can be poor when calculating the similarity over long windows of non-similar regions in the two spectra.
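A sketch of Eq. 7, assuming equal-length intensity vectors and a window half-width \(W\) (the boundary handling is our own choice):

```python
import numpy as np

def slk(spec_a, spec_b, W=5):
    """Eq. 7: intensity dot-product plus a windowed product of local differences."""
    total = float(np.dot(spec_a, spec_b))
    n = len(spec_a)
    for i in range(n):
        lo, hi = max(0, i - W), min(n, i + W + 1)
        total += float(np.sum((spec_a[i] - spec_a[lo:hi]) * (spec_b[i] - spec_b[lo:hi])))
    return total
```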
#### IV-C2 Modified Euclidean Spectral Similarity metric (Mod. L2)
The approach introduced by Khan et al. [15] is an alternative to the Euclidean similarity calculated between each intensity. This approach gives equal importance to the peaks of each spectrum but also rewards spectra that have shorter distances between the peaks. The squared distance between intensities of the spectra can be calculated via \(D_{\mathbf{i}}^{l}=(\mathbb{I}_{t}^{k}-\mathbb{I}_{t}^{m})^{2}\). This can then be used under the conditions in equation (8) to calculate the Mod. L2 distance:
\[f_{me}(\mathbb{I}_{t}^{k},\mathbb{I}^{m})^{2}=\begin{cases} \sum_{i_{n}^{k}\in\mathbb{I}_{t}^{k}}\frac{1}{w}*D_{\mathbf{i}}^{l},&\text{if }x_{i} \neq 0\text{ AND }z_{i}>x_{i}\\ \sum_{i_{n}^{k}\in\mathbb{I}_{t}^{k}}^{i}w*D_{\mathbf{i}}^{l},&\text{if }x_{i} =0\text{ AND }z_{i}>x_{i}\\ \sum_{i_{n}^{k}\in\mathbb{I}_{t}^{k}}^{i}D_{\mathbf{i}}^{l},&\text{ otherwise}\end{cases} \tag{8}\]
The function above calculates a distance between the spectra using a weight, \(w\), found by:
\[w=\frac{max(\mathbb{I}_{t}^{k})}{1-max(\mathbb{I}_{t}^{k})} \tag{9}\]
The weight \(w\) is utilized as a penalty/reward scheme for relative changes in individual intensities at each wavelength. The drawback of this function is that the weight does not work when the maximum intensity \(\max(\mathbb{I}_{t}^{k})\) is less than 0.5. To resolve this issue, the weight is inverted to correct the error from the original equation.
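A sketch of Eqs. 8-9, assuming intensities normalised to \([0,1)\) and taking \(x_{i}\) and \(z_{i}\) to be the map and observed intensities respectively (this reading of the conditions is our assumption):

```python
import numpy as np

def mod_l2(obs, ref):
    """Eqs. 8-9: squared distances re-weighted by a penalty/reward weight w."""
    m = float(np.max(obs))
    w = m / (1.0 - m)          # Eq. 9
    if m < 0.5:                # inversion described in the text for small maxima
        w = 1.0 / w
    d2 = (obs - ref) ** 2
    reward = (ref != 0) & (obs > ref)    # peak present in the map, observed stronger
    penalty = (ref == 0) & (obs > ref)   # observed intensity where the map is empty
    d2 = np.where(reward, d2 / w, np.where(penalty, d2 * w, d2))
    return float(np.sqrt(np.sum(d2)))
```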
#### IV-C3 Wasserstein Distance
The Wasserstein distance, also known as the Earth Mover's Distance, is a metric that quantifies the effort required to transform one distribution into another. It was suggested in previous literature to be an effective method of evaluating the similarity of spectra by Gao et al. [28]. Despite its accuracy, the Wasserstein distance is complex and slow compared to the other functions mentioned in this section. This is explored in Section V.
\[f_{W}(\mathbb{I}_{t}^{k},\mathbb{I}^{m})=min\{E(t)|t:\mathbb{I}_{t}^{k}\to \mathbb{I}^{m}\} \tag{10}\]
Where \(t\) is a transportation plan that maps the elements of \(\mathbb{I}_{t}^{k}\) to the elements of \(\mathbb{I}^{m}\), and \(E(t)\) is the cost of the transportation plan \(t\), which is typically represented as the sum of the distances between the corresponding vector elements in \(\mathbb{I}_{t}^{k}\) and \(\mathbb{I}^{m}\).
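A sketch using SciPy, assuming non-negative intensities treated as weights over the shared wavenumber axis:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd(spec_a, spec_b, wavenumbers):
    """Eq. 10: 1-D earth mover's distance between two spectra."""
    return wasserstein_distance(wavenumbers, wavenumbers,
                                u_weights=spec_a, v_weights=spec_b)
```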
#### IV-C4 Kullback-Leibler Divergence
The KL-Divergence is a highly efficient yet effective distance to compare two equally binned distributions. Formally, it is a generalisation of the L2 norm and is defined as:
\[f_{kl}(\mathbb{I}_{t}^{k}||\mathbb{I}^{m})=\sum_{i_{n}^{k}\in\mathbb{I}_{t}^{k }}i_{n}^{k}\log\frac{i_{n}^{k}}{i_{n}^{m}} \tag{11}\]
The KL-divergence quantifies the information lost when approximating \(\mathbb{I}_{t}^{k}\) with \(\mathbb{I}^{m}\). It is a non-symmetric metric, meaning that \(f_{KL}(\mathbb{I}_{t}^{k}||\mathbb{I}^{m})\neq f_{KL}(\mathbb{I}^{m}||\mathbb{I}_{t}^{k})\). The KL divergence is sensitive to tiny differences between distributions, which is desirable in the context of Raman spectra, where small variations in intensity values can indicate the presence of different materials.
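A sketch of Eq. 11, assuming both spectra are normalised to unit area; the small epsilon guarding empty bins is our own addition:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Eq. 11: information lost when approximating spectrum p with spectrum q."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```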
#### IV-C5 Spectral Angle Mapping
The Spectral Angle Mapping (SAM) function is a similarity metric commonly used to calculate the angle between two spectra in a high-dimensional space. Each dimension represents a different Raman shift. The angle between the two spectra shows their similarity, with a smaller angle indicating higher similarity. Formally the function can be expressed by:
\[f_{sam}(\mathbb{I}_{t}^{k}||\mathbb{I}^{m})=\cos^{-1}\left(\frac{\mathbb{I}_{t }^{k}\cdot\mathbb{I}^{m}}{\left\|\mathbb{I}_{t}^{k}\right\|\left\|\mathbb{I} ^{m}\right\|}\right) \tag{12}\]
SAM is founded on the idea that similar spectra should have similar shapes and relative intensities across different Raman shifts, so the returned angle reflects the similarity of the peak shapes. Furthermore, the function is scale-invariant, meaning it is not affected by differences in the absolute intensity of the spectra. However, SAM is quite sensitive to noise and variations of the baseline of the Raman spectra, since these affect the shape of the peaks. SAM also ignores the absolute intensity of the spectra and assumes spectra are linearly mixed, which may not be the case for some materials in the environment.
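A sketch of Eq. 12:

```python
import numpy as np

def sam(spec_a, spec_b):
    """Eq. 12: angle between two spectra viewed as vectors; smaller means more similar."""
    cos_angle = np.dot(spec_a, spec_b) / (np.linalg.norm(spec_a) * np.linalg.norm(spec_b))
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```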
### _Sensor Models_
We adapt both the "beam" and "likelihood-field" sensor models for use with Raman Spectra.
The Beam model depends on ranges as part of the weight for each particle, because it compares the depth of the scan from the environment with the end of the beam projected into the map, \(\mathbb{V}\). The probability of an observation given the map and pose for the beam model can be expressed as a weighted sum of two probabilities:
\[P_{r}(\mathbb{Z}_{t}^{k}|s_{t}^{i},\mathbb{V})=\epsilon_{R}P_{R}(\mathbb{Z}_{t }^{k}|s_{t}^{i},\mathbb{V})+\epsilon_{m}P_{m}(\mathbb{Z}_{t}^{k}|s_{t}^{i}, \mathbb{V}) \tag{13}\]
For the Beam-model of RaSpectLoc, the likelihood of a particle is defined by two user-defined weights, \(\epsilon_{R}\) and \(\epsilon_{m}\). When \(\epsilon_{m}\) is zero, the likelihood is equivalent to the standard RMCL. The optimal weight configuration is evaluated in Section V. Unlike range scanners, the standard deviation \(\sigma^{{}^{\prime}}\) of equation 2 cannot be linked to the sensor's physical properties. Instead, it is estimated from the prior of each material spectra on the map, avoiding the need for tuning. However, the Beam Model has drawbacks to its approach as it lacks smoothness since it depends on the resolution of the map. This is worse for mobile robots as it becomes more memory dependent to increase the number of beams and resolution of the map.
The likelihood-field model provides a smoother approach, providing gradients between each cell. Furthermore, it can use ranges, bearings and material likelihoods, or only bearings and material likelihoods. The likelihood field model in our approach generates a chamfer map to the nearest occupied cell for each material based on equation 4:
\[C_{i,j}\left(\mathbb{I}\right)=\min_{k,l}\left(C_{k,l}\left(\mathbb{I}\right)+ f\left(\mathbb{I}_{i,j},\mathbb{I}_{k,l}\right)\right) \tag{14}\]
Where \(f\) is the chosen spectral similarity function described in Section IV-C. For every \(\mathbb{Z}_{t}=[\langle\Theta_{t}^{k},\mathbb{I}_{t}^{k}\rangle;k=1\ldots k]\), the raycasting is performed. Instead of comparing the range with the map, the spectral likelihood can be used to calculate the cost:
\[P_{r}(\mathbb{Z}_{t}^{k}|s_{t}^{i},\mathbb{V})=P_{m}(\mathbb{Z}_{t}^{k}|s_{t}^{ i},\mathbb{V}) \tag{15}\]
This method is a combination of the beam and likelihood model. Hence, in Eq. 13, when \(\epsilon_{R}\) is zero, the approach solely relies on the Raman spectra information in the floorplan and material similarity likelihood.
The nearest occupied cell is identified through similar AMCL range-based raycasting. The distances obtained are utilized to improve the smoothness of Eq. 13, which suggests that the likelihood of the spectra similarity is proportional to the distribution of material similarity in terms of angle. As a result, this approach is scale-invariant, provided that the aspect ratio of the map is maintained.
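Putting the pieces together, the per-particle weight of Eq. 13 can be sketched as below, assuming beams are treated as independent as in standard MCL; setting \(\epsilon_{R}=0\) recovers the material-only weighting of Eq. 15:

```python
import numpy as np

def particle_weight(ranges, spectra, raycast_ranges, map_spectra,
                    f, K, sigma_o, eps_R=0.5, eps_m=0.5):
    """Eq. 13: mix the range term (Eq. 3) and the material term (Eq. 6) per beam."""
    p_range = np.exp(-((np.asarray(ranges) - np.asarray(raycast_ranges)) ** 2)
                     / (2 * sigma_o ** 2))
    p_material = np.array([np.exp(-(f(z, m) ** 2) / K)
                           for z, m in zip(spectra, map_spectra)])
    per_beam = eps_R * p_range + eps_m * p_material
    return float(np.prod(per_beam))
```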
## V Experiments and Results
Our research focuses on MCL-based localisation using Raman spectra from a single probe. This section presents the overall performance improvements of our system over other state-of-the-art localisation approaches. The dataset will be released to the community to support future research. RaSpectLoc can choose between the beam or the likelihood field model, which are described in Section IV-D. The ablation and parameter exploration section, V-B, illustrates the difference in the performance of RaSpectLoc due to changes in likelihood functions and configurations of \(\epsilon\), whereas the Quantitative Results section V-C compares RaSpectLoc with various state-of-the-art baselines. These include AMCL [23], SeDAR [1], visual pose recognition (PoseNet [25]), range-based SLAM (GMapping [26]) and monocular SLAM (ORBSLAM 3 [24]).
### _Experimental Simulation Setup_
The following sections evaluate the performance of RaSpectLoc using a dataset consisting of 5 mobile robot trajectories around a University. The dataset includes Raman Spectra, RGBD images and odometry information. Additionally, a material-embedded floorplan with Raman spectra at each coordinate is provided.
To quantitatively evaluate the performance of RaSpectLoc and state-of-the-art methods, the Absolute Trajectory Error (ATE) and the Relative Pose Error (RPE) metrics presented by Sturm et al. [22] are used. The ATE aligns the two paths with a rigid transformation and measures the residual error between them. The RPE evaluates the relative rotation and relative translation for all points in the path, ignoring the effects of drift over time in the trajectory. The RMSE, mean, median, standard deviation, minimum distance and maximum distance values of the residual position error are provided for the resulting ATE and RPE of the experiments. For evaluation in Table I, AMCL, SeDAR [1] and RaSpectLoc are given a coarse initialisation with standard deviations of 2.0 m in (x, y) and 2.0 radians in \(\theta\). The system was run with a maximum of 1000 particles placed around the covariance ellipse. The error was recorded for each new set of Raman spectra accumulated into the published tuple \(\mathbb{Z}_{t}\).
### _Ablation and parameter exploration_
This section focuses on the performance of the RaSpectLoc system using various spectral similarity functions and optimal weights, \(\epsilon\), to accurately compare against modern localisation approaches. We evaluate the performance using the ATE and RPE of the produced trajectories with the likelihood-field model. Additionally, we consider the accuracy of determining the correct spectrum after adding
Spectral Similarity Function Analysis (m)

| Function | RMSE | Mean | Median | SD | Min | Max |
|---|---|---|---|---|---|---|
| SLK | 0.18 | 0.17 | 0.16 | 0.07 | 0.03 | 0.38 |
| Mod. L2 | **0.14** | **0.13** | 0.12 | **0.06** | 0.03 | 0.41 |
| Wasserstein | 0.17 | 0.14 | 0.13 | 0.09 | 0.02 | 0.44 |
| KLD | 0.15 | 0.13 | **0.11** | 0.08 | **0.02** | **0.34** |
| SAM | 0.18 | 0.16 | 0.14 | 0.09 | 0.03 | 0.42 |

TABLE I: Spectral Similarity Function Experiments
Raman-probe shot noise. This approach should provide a more realistic evaluation of the similarity functions compared to post-processed spectra that do not contain these intensity artefacts or noise. For Eq. 12 the value of \(\beta\) is set to 1. The results in Tables I and II indicate that the Mod. L2 function produces the best accuracy and performance for RaSpectLoc. The Mod. L2 function performs well because it provides a descriptive way of examining the similarity between peaks with a reward and punishment scheme. In testing, the Wasserstein distance takes significantly longer to process than other functions, often leading to performance issues during periods of high message frequency. This is highlighted in Fig. 5 by the logarithmic axis for the time taken to complete a 5min 55sec rosbag recording. One obvious limitation of RaSpectLoc from this graph is the increased run-time compared to AMCL, although this comes with a significant reduction in error.
As shown in Table III, the performance of RaSpectLoc varies with the weight distribution. The best-performing weights are 0.5 for the material and ranges.
Compared to Table I, the results still show that a 'spectra-only' approach outperforms the combined range and spectral data approach seen in Table III. This is likely because the geometric cues in a hallway environment are relatively weak compared to the fine-grained spectral data, which provide a greater understanding of the current pose for the system. Findings from these experiments are carried forward into Section V-C, where we compare RaSpectLoc's performance against other state-of-the-art systems.
### _Quantitative Results_
Tables IV and V showcase the results of our tests against various state-of-the-art localisation approaches in terms of Absolute Trajectory Error (ATE) and Relative Pose Error (RPE) metrics. Experiments are compared with \(\epsilon_{R}\) and \(\epsilon_{m}\) set to 0.5 and 0.5, respectively, as per the optimal configuration found in Section V-B for our combined localisation model with the Mod. L2 likelihood function (Section IV-C2). The tests are performed using the same RGB images and depth scan messages along a single trajectory. This separation allows us to compare our results against three non-MCL approaches: ORBSLAM, GMapping and PoseNet.
Our experiments revealed that RaSpectLoc consistently outperforms the other approaches in accuracy, robustness, ATE and RPE metrics. The results showed that our method was able to accurately estimate the 3-DoF pose of the robot even in challenging environments with noise and dynamic obstacles. Furthermore, our experiments highlighted the ability of RaSpectLoc to scale to larger environments and handle noise-riddled Raman Spectra, making it a highly competitive solution for real-world applications.
## VI Conclusion
In conclusion, this paper has presented a novel approach for mobile robot localisation that harnesses the power of Raman spectra. With its ability to recognise subtle material-composition landmarks, RaSpectLoc can outperform traditional RGBD systems. This recognition has significant implications for localisation in challenging environments, evident
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Average Trajectory Error (m/deg)} \\ \hline Approach & RMSE & Mean & Median & SD & Min & Max \\ \hline AMCL & 0.072/6.08 & 0.067/1.08 & 0.064/1.08 & 0.052/2.09 & 0.371/1.89 \\ \hline SoDAR & 0.062/2.21 & 0.01/1.5 & 0.040/9.03 & 0.037/1.72 & 0.217/2.47 \\ \hline ORBSLAM3 (MOMO) & 0.471/4.74 & 0.418/5.78 & 0.410/3.06 & 0.101/3.13 & 0.771/6.16 \\ \hline CMapping & 0.402/6.55 & 0.131/5.03 & 0.318/5.03 & 0.352/5.23 & 1.499/2.65 \\ \hline RoSpectLoc (combined) & 0.067/2.31 & 0.067/2.32 & 0.036/1.04 & 0.037/1.74 & 0.219/7.77 \\ \hline RaSpectLoc (Materials) & **0.057/2.03** & **0.041/3.34** & **0.041** & **0.021**/**0.52** & **0.161/0.86** \\ \hline \end{tabular}
\end{table} TABLE V: RPE: Baseline localisation comparisons
Spectral Similarity Function Analysis (RPE)

| Function | Trans. RMSE (m) | Rot. RMSE (deg) | Trans. Mean (m) | Rot. Mean (deg) | Trans. Median (m) | Rot. Median (deg) | Trans. SD (m) | Rot. SD (deg) | Trans. Max (m) | Rot. Max (deg) |
|---|---|---|---|---|---|---|---|---|---|---|
| Wasserstein | 0.06 | **2.02** | 0.05 | 1.39 | 0.04 | 0.91 | 0.03 | **1.46** | 0.22 | **9.67** |
| SLK | 0.07 | 2.32 | 0.06 | 1.58 | 0.05 | 0.97 | 0.04 | 1.70 | 0.20 | 11.9 |
| KL-D | **0.05** | 2.03 | **0.04** | **1.34** | 0.04 | **0.81** | **0.02** | 1.52 | **0.16** | 10.86 |
| Mod. L2 | 0.06 | 2.23 | 0.05 | 1.5 | **0.04** | 0.95 | 0.03 | 1.64 | 0.17 | 10.17 |
| SAM | 0.07 | 2.34 | 0.06 | 1.57 | 0.05 | 0.96 | 0.04 | 1.73 | 0.31 | 9.94 |

TABLE II: Spectral Similarity Function Analysis (RPE)
Fig. 5: ATE:RMSE error against time taken to complete the trajectory
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{
\begin{tabular}{c} Relative Pose Error (m/deg)} \\ \hline Approach \\ \end{tabular} } & \multicolumn{2}{c|}{RMSE} & \multicolumn{1}{c|}{Mean} & Median & SD & Min & Max \\ \hline AMCL & 0.24 & 0.21 & 0.2 & 0.11 & 0.04 & 0.95 \\ \hline SoDAR & 0.19 & 0.16 & 0.14 & 0.10 & **0.02** & 0.55 \\ \hline ORBSLAM3 (MOMO) & 7.17 & 6.54 & 0.81 & 2.93 & 0.55 & 11.57 \\ \hline GMapping & 0.71 & 0.63 & 0.57 & 0.31 & 0.52 & 1.43 \\ \hline PoseNet & 4.64 & 2.58 & 1.42 & 3.85 & 0.02 & 5.56 \\ \hline RaSpectLoc (combined) & 0.18 & 0.15 & 0.13 & 0.13 & 0.10 & 0.03 & 0.50 \\ \hline RaSpectLoc (Materials only) & **0.14** & **0.13** & **0.12** & **0.06** & 0.03 & **0.41** \\ \hline \end{tabular}
\end{table} TABLE IV: ATE: Baseline localisation comparisons
Weight configuration tests (ATE)

| \(\epsilon_{R}\) | \(\epsilon_{m}\) | RMSE | Mean | Median | SD | Min | Max |
|---|---|---|---|---|---|---|---|
| 0.2 | 0.8 | 0.19 | 0.17 | 0.15 | **0.09** | **0.01** | 0.43 |
| 0.4 | 0.6 | 0.19 | 0.17 | 0.16 | 0.09 | 0.04 | **0.38** |
| 0.5 | 0.5 | **0.18** | **0.15** | **0.13** | 0.10 | 0.03 | 0.50 |
| 0.6 | 0.4 | 0.22 | 0.20 | 0.20 | 0.10 | 0.08 | 0.47 |
| 0.8 | 0.2 | 0.21 | 0.19 | 0.19 | 0.11 | 0.08 | 0.45 |

TABLE III: Weight experiments ablation tests
from our practical experimental section (V-C). The future of this field is promising: we envision robots that exploit spectral data to navigate and map hazardous environments, such as nuclear decommissioning sites. Such a SLAM system would provide a more detailed understanding of the environment while reducing the risk posed to human workers. Raman spectra could also assist in detecting anomalies in these high-risk locations, with the potential to revolutionize the field and greatly benefit society.
## VII Acknowledgements
This work was partially supported by Industrial 3D Robotics (I3D), IS-Instruments and partially funded by the EPSRC under grant agreement EP/S035761/1.
|
2301.13665 | Variational Amplitude Amplification for Solving QUBO Problems | We investigate the use of amplitude amplification on the gate-based model of
quantum computing as a means for solving combinatorial optimization problems.
This study focuses primarily on QUBO (quadratic unconstrained binary
optimization) problems, which are well-suited for qubit superposition states.
Specifically, we demonstrate circuit designs which encode QUBOs as `cost
oracle' operations $U_{\textrm{C}}$, which when combined with the standard
Grover diffusion operator $U_{\textrm{s}}$ lead to high probabilities of
measurement for states corresponding to the optimal and near optimal solutions.
In order to achieve these probabilities, a single scalar parameter
$p_{\textrm{s}}$ is required, which we show can be found through a variational
quantum-classical hybrid approach. | Daniel Koch, Massimiliano Cutugno, Saahil Patel, Laura Wessing, Paul M. Alsing | 2023-01-31T14:33:40Z | http://arxiv.org/abs/2301.13665v2 | # Variational Amplitude Amplification for Solving QUBO Problems
###### Abstract
We investigate the use of amplitude amplification on the gate-based model of quantum computing as a means for solving combinatorial optimization problems. This study focuses primarily on QUBO (quadratic unconstrained binary optimization) problems, which are well-suited for qubit superposition states. Specifically, we demonstrate circuit designs which encode QUBOs as 'cost oracle' operations \(U_{\mathrm{C}}\), which when combined with the standard Grover diffusion operator \(U_{\mathrm{s}}\) lead to high probabilities of measurement for states corresponding to the optimal and near optimal solutions. In order to achieve these probabilities, a single scalar parameter \(p_{\mathrm{s}}\) is required, which we show can be found through a variational quantum-classical hybrid approach.
## I Introduction
Amplitude amplification is a quantum algorithm strategy that is capable of circumventing one of quantum computing's most difficult challenges: probabilistic measurements. Originally proposed by Grover in 1996 [1], and later shown to be optimal [2; 3], the combination of his oracle \(U_{\mathrm{G}}\) and 'diffusion' \(U_{\mathrm{s}}\) operators is able to drive a quantum system to a superposition state where one (or multiple) basis state(s) has nearly 100% probability of being measured. Since then, many researchers have contributed to the study of \(U_{\mathrm{G}}\) and \(U_{\mathrm{s}}\)[4; 5; 6; 7; 8; 9], seeking to better understand how the fundamental nature of amplitude amplification is dependent on these two operators. Similarly, the aim of this study is to further extend the capabilities of amplitude amplification as a means for solving combinatorial optimization problems using gate-based quantum computers.
The results of this paper are a continuation of our previous work [10], in which we demonstrated an oracle design which was capable of encoding and solving a weighted directed graph problem. The motivation for this oracle was to address a common criticism of \(U_{\mathrm{G}}\)[11; 12; 13; 14; 15], namely that the circuit construction of oracles too often hardcodes the solution it aims to find, negating the use of quantum entirely. Similar to other recent studies [16; 17; 18; 19; 20; 21; 22], we showed that this problem can be solved at the circuit depth level by avoiding gates such as control-Z for constructing the oracle, and instead using phase and control-phase gates (\(\mathrm{P}(\theta)\) and \(\mathrm{CP}(\theta)\)). However, simply changing the phase produced from \(U_{\mathrm{G}}\) to something other than \(\pi\) is not enough [23; 24; 25; 26; 27; 28]. Our oracle construction applies phases to not only a desired marked state(s), but _all_ states in the full \(2^{N}\) Hilbert Space. The phase each basis state receives is proportional to the solutions of a weighted combinatorial optimization problem, for which the diffusion operator \(U_{\mathrm{s}}\) can be used to boost the probability of measuring states that correspond to optimal solutions.
The consequence of using an oracle operation that applies phases to every basis state is an interesting double-edged sword. As we show in sections II. - IV., and later in section VII., the use of phase gates allows for amplitude amplification to encode a broad scope of combinatorial optimization problems into oracles, which we call 'cost oracles' \(U_{\mathrm{c}}\). In particular, we demonstrate the robustness of amplitude amplification for solving these kinds of optimization problems with asymmetry and randomness [29; 30; 31]. However, the tradeoff for solving more complex problems is twofold. Firstly, in contrast to Grover's oracle, using \(U_{\mathrm{c}}\) is only able to achieve peak measurement probabilities up to 70-90%. In section VI. we show that these probabilities are still high enough for quantum to reliably find optimal solutions, which notably are achieved using the same O( \(\frac{\pi}{4}\sqrt{N/M}\) ) iterations as standard Grover's [1; 2; 3].
The second, more challenging tradeoff when using \(U_{\mathrm{c}}\) is that the success of amplitude amplification is largely dependant on the correct choice of a single free parameter \(p_{\mathrm{s}}\)[10]. This scalar parameter is multiplied into every phase gate for the construction of \(U_{\mathrm{c}}\) (\(\mathrm{P}(\theta\cdot p_{\mathrm{s}})\) and \(\mathrm{CP}(\theta\cdot p_{\mathrm{s}})\)), and is responsible for transforming the numeric scale of a given optimization problem to values which form a range of approximately \(2\pi\). This in turn is what allows for reflections about the average amplitude via \(U_{\mathrm{s}}\) to iteratively drive the probability of desired solution states up to 70-90%. The significance of \(p_{\mathrm{s}}\), and the challenges in determining it experimentally, are a major motivation for this study. In particular, the results of section V. demonstrate that there is a range of \(p_{\mathrm{s}}\) values for which many optimal solutions can be made to become highly probable. Additionally, our simulations show that there is an observed correlation between the numerical cost function value of these solutions and the \(p_{\mathrm{s}}\) values where they achieve peak probabilities. This underlying correlation supports the idea of using amplitude amplification for a variational model of hybrid quantum-classical computing, which is the core finding of this study.
### Layout
The layout of this study is as follows. Section II. begins with the mathematical formalism for the optimization problem we will seek to solve using amplitude amplification. Sections III. & IV. discuss the construction
of the problem as a quantum circuit, the varying degrees of success one can expect from optimization problems generated using random numbers, and the conditions for which these successes can be experimentally realized. In section V. we explore the role of \(p_{\text{s}}\) from a heuristic perspective, whereby we demonstrate that many near optimal solutions are capable of reaching significant probabilities of measurement. Section VI. is a primarily speculative discussion, theorizing how the collective results of section V. can be coalesced into a hybrid quantum-classical variational algorithm. And finally, section VII. completes the study with additional optimization problems that can be constructed as oracles and solved using amplitude amplification.
## II QUBO definitions
We begin by outlining the optimization problem which will serve as the focus for this study: QUBO (quadratic unconstrained binary optimization). The QUBO problem has many connections to important fields of computer science [32; 33; 34; 35; 36], making it relevant for demonstrating quantum's potential for obtaining solutions. To date, the two most successful quantum approaches to solving QUBOs are annealing [37; 38; 39; 40] and QAOA [41; 42; 43; 44], with a lot of interest in comparing the two [45; 46; 47]. Shown below in equation 1 is the QUBO cost function C(X) which we shall seek to solve using our quantum algorithm.
\[\text{C(X)}=\sum_{i}^{N}W_{i}x_{i}+\sum_{\{i,j\}\in\mathbb{S}}w_{ij}x_{i}x_{j} \tag{1}\]
The function C(X) evaluates a given binary string X of length \(N\), composed of individual binary variables \(x_{i}\). Together, the total number of unique solutions to each QUBO is \(2^{N}\), which is also the number of quantum states producible from \(N\) qubits. Throughout this study we will use subscripts X\({}_{i}\) and C(X\({}_{i}\)) when referring to individual solutions, and C(X) when discussing a cost function more generally.
As shown in equation 1, a QUBO is defined by two separate summations of weighted values. The first summation evaluates weights \(W_{i}\) associated with each individual binary variable, while the second summation accounts for pairs of variables which share a weighted connection \(w_{ij}\). In this study we adopt the typical interpretation of QUBOs as graph problems, whereby each binary variable \(x_{i}\) represents a node. We can then define the connectivity of a QUBO graph using the set S, which itself is a collection of sets that describe each pair of nodes \(x_{i}\) and \(x_{j}\) that share a connection. See figure 1 below for an example.
The interest of this study is to use a quantum algorithm to find either X\({}_{\text{min}}\) or X\({}_{\text{max}}\), which are the solutions which minimize / maximize the cost function C(X) respectively. For all QUBOs analyzed in the coming sections, the weight values \(W_{i}\) and \(w_{ij}\) are restricted to integers, randomly selected from a uniform distribution as shown below in equations 2 and 3.
\[W_{i},w_{ij}\in\mathbb{Z} \tag{2}\] \[W_{i},w_{ij}\in[-100,100] \tag{3}\]
In section V. we discuss the consequences of choosing weight values in this manner and its advantage for quantum. However, nearly all of the results shown throughout this study are applicable to the continuous cases for \(W_{i}\) and \(w_{ij}\) as well, with the one exception being the results of section V.D.
### Linear QUBO
The cost function given in equation 1 is applicable to any graph structure \(\mathbb{S}\), so long as every node and edge is assigned a weight. For this study we will focus on one specific \(\mathbb{S}\), which we refer to as a 'linear QUBO.' The connectivity of these graphs is as follows:
\[\mathbb{S}=\{\{n,n+1\}\mid 1\leq n\leq N-1\} \tag{4}\]
As the name suggests, linear QUBOs are graphs for which every node has connectivity with exactly two neighboring nodes, except for the first and final nodes. The motivation for studying QUBOs of this nature is their efficient realizability as quantum circuits, given in the next section.
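A minimal sketch of generating and evaluating such a linear QUBO according to equations 1 - 4 (NumPy; the full enumeration at the end is a brute-force pass over all \(2^{N}\) solutions and is only practical for modest \(N\)):

```python
import numpy as np

def random_linear_qubo(n, rng):
    """Eqs. 2-4: integer node weights W_i and edge weights w_{i,i+1} in [-100, 100]."""
    W = rng.integers(-100, 101, size=n)
    w = rng.integers(-100, 101, size=n - 1)
    return W, w

def cost(x, W, w):
    """Eq. 1 restricted to the linear connectivity of Eq. 4; x is a 0/1 array."""
    return int(np.dot(W, x) + np.dot(w, x[:-1] * x[1:]))

rng = np.random.default_rng(0)
W, w = random_linear_qubo(20, rng)
all_costs = np.array([cost(np.array([(i >> b) & 1 for b in range(20)]), W, w)
                      for i in range(2 ** 20)])  # the full solution space distribution
```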
## III Amplitude Amplification
The quantum strategy for finding optimal solutions to C(X) investigated in this study is amplitude amplification [4; 5; 6; 7; 8; 9], which is the generalization of Grover's algorithm [1]. The full algorithm is shown below in Alg. 1, which notably is almost identical to Grover's algorithm except for the replacement of Grover's oracle \(U_{\text{G}}\) with our cost oracle \(U_{\text{c}}\).
Figure 1: (top) An example 3-qubit linear QUBO with weighted nodes and edges. (bottom) The set \(\mathbb{S}\) containing the complete connectivity of the QUBO.
By interchanging different oracle operations into the Alg. 1, various problem types can be solved using amplitude amplification. For example, Grover's original oracle solves an unstructured search, whereas here we are interested in optimal solutions to a cost function. Later in section VII. we discuss further oracle adaptations and the problems they solve. For all oracles, we use the standard diffusion operator \(U_{\mathrm{s}}\), given below in equation 5.
\[U_{\mathrm{s}}=2|s\rangle\langle s|-\mathbb{I} \tag{5}\]
This operation achieves a reflection about the average amplitude, whereby every basis state in \(|\Psi\rangle\) is reflected around their collective mean in the complex plane. This operation causes states' distance from the origin to increase or decrease based on their location relative to the mean, which in turn determines their probability of measurement. Therefore, a successful amplitude amplification is able to drive the desired basis state(s) as far from the origin as possible, up to a maximum distance of 1 (measurement probability of 100%).
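In a classical statevector simulation, this reflection about the mean takes a particularly simple form (a NumPy sketch):

```python
import numpy as np

def diffusion(psi):
    """Eq. 5 acting on a statevector: U_s|psi> = 2|s><s|psi> - |psi>, i.e. every
    basis-state amplitude is reflected about the mean amplitude of the state."""
    return 2.0 * psi.mean() - psi
```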
### Solution Space Distribution
A prerequisite for the success of amplitude amplification as demonstrated in this study is an optimization problem's underlying solution space distribution; that is, the manner in which all possible solutions to the problem are distributed with respect to one another. For QUBOs, these are the \(2^{N}\) possible C(X\({}_{i}\)) cost function values. Shown below in figure 2 is a histogram of one such solution space distribution, for the case of a length 20 linear QUBO according to equations 1 - 4. The x-axis represents all possible cost function evaluations, and the y-axis is the corresponding number of unique X\({}_{i}\) solutions that result in the same C(X\({}_{i}\)) value.
Depicted in figure 2 are all \(2^{20}\) possible solutions to an example linear QUBO. Because this QUBO was generated from randomized weights, the combination of the Law of Large Numbers [48] and Central Limit Theorem [49] predicts that its underlying solution space should be approximately gaussian [50] in shape, given by equation 6.
\[\mathrm{G}(x)=\alpha\mathrm{e}^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}} \tag{6}\]
Indeed, the histogram shown is approximately gaussian, but importantly it has imperfections resulting from the randomized weights. At large enough problem sizes (around \(N\geq 20\)), these imperfections have minimal impact on a problem's aptitude for amplitude amplification, which was a result from our previous study [10]. Similarly, another recent study [18] demonstrated that in addition to symmetric gaussians, solution space distributions for both skewed gaussians and exponential profiles also lead to successful amplitude amplifications. The commonality between these three distribution shapes is that they all possess large clusters of solutions that are sufficiently distanced from the optimal solutions we seek to boost. This can be seen in figure 2 as the location of X\({}_{\mathrm{min}}\) and X\({}_{\mathrm{max}}\) as compared to the central peak of the gaussian. When appropriately encoded as an oracle \(U_{\mathrm{c}}\), these clusters serve to create a mean point in the complex plane which the optimal solution(s) use to reflect about and increase in probability.
### Cost Oracle \(U_{\mathrm{c}}\)
In order to use algorithm 1 for finding the optimal solution to a given cost function, we must construct a cost oracle \(U_{\mathrm{c}}\) which encodes the weighted information and connectivity of the problem. In our previous study we referred to this operation as a 'phase oracle' \(U_{\mathrm{P}}\)[10], and similarly it has also been called a'subdivided phase oracle' SPO [17; 18] or 'non-boolean oracle' [19]. How one constructs \(U_{\mathrm{c}}\) is problem specific, but the general strategy is to primarily use two quantum gates, shown below in equations 7 and 8.
Figure 2: Example of a solution space distribution for a 20 node linear QUBO, with weights according to equations 2 and 3.
\[\mathrm{P}(\theta) = \begin{bmatrix}1&0\\ 0&\mathrm{e}^{i\theta}\end{bmatrix} \tag{7}\] \[\mathrm{CP}(\theta) = \begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&\mathrm{e}^{i\theta}\end{bmatrix} \tag{8}\]
The single and two-qubit gates \(\mathrm{P}(\theta)\) and \(\mathrm{CP}(\theta)\) are referred to as phase gates, also known as \(\mathrm{R}_{\mathrm{z}}(\theta)\) and \(\mathrm{CR}_{\mathrm{z}}(\theta)\) for their effect of rotating a qubit's state around the z-axis of the Bloch sphere. Mathematically they are capable of applying complex phases as shown below.
\[\mathrm{P}(\theta)|1\rangle = \mathrm{e}^{i\theta}|1\rangle \tag{9}\] \[\mathrm{CP}(\theta)|11\rangle = \mathrm{e}^{i\theta}|11\rangle \tag{10}\]
Applying \(\mathrm{P}(\theta)\) to a qubit only affects the \(|1\rangle\) state, leaving \(|0\rangle\) unchanged, and similarly only \(|11\rangle\) for \(\mathrm{CP}(\theta)\). However, this is exactly what we need in order to construct \(\mathrm{C}(\mathrm{X})\) from equation 1. When evaluating a particular binary string \(\mathrm{X}_{i}\) classically, only instances where the binary values \(x_{i}\) are equal to 1 yield non-zero terms in the summations. For quantum, each binary string \(\mathrm{X}_{i}\) is represented by one of the \(2^{N}\) basis states \(|\mathrm{X}_{i}\rangle\). Thus, our quantum cost oracle \(U_{\mathrm{c}}\) can replicate \(\mathrm{C}(\mathrm{X})\) by using \(\mathrm{P}(\theta)\) and \(\mathrm{CP}(\theta)\) to only affect basis states with qubits in the \(|1\rangle\) and \(|11\rangle\) states.
Shown above in figure 3 is an example of a 4-qubit QUBO cost oracle, where the weighted values \(W_{i}\) and \(w_{ij}\) are used as the \(\theta\) parameters for the various phase gates. Although incomplete, we will use this oracle circuit to demonstrate quantum's ability to encode a cost function \(\mathrm{C}(\mathrm{X})\). For example, consider the binary solution \(\mathrm{X}_{i}=1101\) and the corresponding quantum basis state \(|1101\rangle\). The classical evaluation of this solution is as follows:
\[\mathrm{C}(1101) = -8+18-22-12 \tag{11}\] \[= -24\]
Now let us compare this to the phase of \(|1101\rangle\) after applying \(U_{\mathrm{c}}\):
\[U_{\mathrm{c}}|1101\rangle = \mathrm{e}^{i(-8+18-22-12)}|1101\rangle \tag{12}\] \[= \mathrm{e}^{-24i}|1101\rangle\]
The phase acquired in equation 12 is equivalent to the classical evaluation shown in 11, which means that \(U_{\mathrm{c}}\) is an accurate encoding of \(\mathrm{C}(\mathrm{X})\). If we were to now apply \(U_{\mathrm{c}}\) to the equal superposition state \(|\mathrm{s}\rangle\) (step 2 in Alg. 1), all \(2^{N}\) basis states would receive phases equal to their cost function value. This is the advantage that quantum has to offer: simultaneously evaluating all possible solutions of a cost function through superposition.
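A sketch of this construction for a linear QUBO, assuming the Qiskit library and mapping node \(x_{i+1}\) to qubit \(i\) (an ordering choice of ours); the scale factor defaults to 1, reproducing the unscaled oracle of figure 3:

```python
from qiskit import QuantumCircuit

def cost_oracle(W, w, p_s=1.0):
    """Build U_c for a linear QUBO with node weights W (length N) and
    edge weights w (length N-1), each phase pre-multiplied by p_s."""
    n = len(W)
    qc = QuantumCircuit(n)
    for i, W_i in enumerate(W):
        qc.p(p_s * W_i, i)            # P(p_s * W_i): only affects |1> on qubit i
    for i, w_ij in enumerate(w):
        qc.cp(p_s * w_ij, i, i + 1)   # CP(p_s * w_{i,i+1}): only affects |11>
    return qc
```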
### Scaling Parameter \(p_{\mathrm{s}}\)
While the cost oracle shown in figure 3 is capable of reproducing \(\mathrm{C}(\mathrm{X})\), its use in algorithm 1 will not yield the optimal solution \(\mathrm{X}_{\mathrm{min}}\) or \(\mathrm{X}_{\mathrm{max}}\). This is because quantum phases are \(2\pi\) modulo, which is problematic if the numerical scale of \(\mathrm{C}(\mathrm{X})\) exceeds a range of \(2\pi\). Consequently if two quantum states receive phases that differ by a multiple of \(2\pi\), then they will both undergo the amplitude amplification process identically. If this happens unintentionally via \(U_{\mathrm{c}}\), then our cost oracle cannot be used to minimize or maximize \(\mathrm{C}(\mathrm{X})\).
In order to construct \(U_{\mathrm{c}}\) such that it is usable for amplitude amplification, a scalar parameter \(p_{\mathrm{s}}\) must be included in all of the phase gates. The value of \(p_{\mathrm{s}}\) is problem specific, but its role is always the same: scaling the cumulative phases applied by \(U_{\mathrm{c}}\) down (or up) to a range where [\(\mathrm{C}(\mathrm{X}_{\mathrm{min}})\), \(\mathrm{C}(\mathrm{X}_{\mathrm{max}})\)] is approximately [x, x+2\(\pi\)]. This range does not have to be [0, \(2\pi\)] exactly, so long as the phases acquired by \(|\mathrm{X}_{\mathrm{min}}\rangle\) and \(|\mathrm{X}_{\mathrm{max}}\rangle\) are roughly \(2\pi\) different. See figure 4 below for an example of \(p_{\mathrm{s}}\) in \(U_{\mathrm{c}}\)'s construction.
Figure 4: The 4-qubit linear QUBO cost oracle \(U_{\mathrm{c}}\) from figure 3, now scaled by \(p_{\mathrm{s}}\).
Figure 3: (top) Example of a 4-qubit linear QUBO with weighted nodes and edges. (bottom) The same QUBO encoded into a cost oracle \(U_{\mathrm{c}}\) without scaling. Each unitary in the circuit is \(\mathrm{P}(\theta)\) (single qubit gate) or \(\mathrm{CP}(\theta)\) (2-qubit gate).
Using the scaled oracle shown in figure 4 above, let us now show how this new \(U_{\rm c}\) acts on the basis state \(|1101\rangle\) from before:
\[U_{\rm c}|1101\rangle = {\rm e}^{i(-8.p_{\rm s}+18.p_{\rm s}-22.p_{\rm s}-12.p_{\rm s})}|1101\rangle \tag{13}\] \[= {\rm e}^{i(-8+18-22-12).p_{\rm s}}|1101\rangle\] \[= {\rm e}^{-24i.p_{\rm s}}|1101\rangle\]
As shown in equation 13 above, multiplying \(p_{\rm s}\) into every phase gate has the net effect of scaling the cumulative phase applied by \(U_{\rm c}\): \({\rm e}^{-24i}\rightarrow{\rm e}^{-24i.p_{\rm s}}\). Note that this is \(not\) a global phase, which would have an additive effect on all states rather than a multiplicative one like shown above.
Finding the optimal \(p_{\rm s}\) value for boosting \({\rm X}_{\rm min}\) or \({\rm X}_{\rm max}\) is non-trivial, and was a major focus of our previous study [10], as well as this one. In general, the scale of \(p_{\rm s}\) needed for finding the optimal solution can be obtained using equation 14 below, which scales the numerical range of a problem [C(\({\rm X}_{\rm min}\)), C(\({\rm X}_{\rm max}\))] to exactly [x, x+2\(\pi\)].
\[p_{\rm s}=\frac{2\pi}{\rm C(X_{\rm max})-C(X_{\rm min})} \tag{14}\]
Although equation 14 above is guaranteed to solve the \(2\pi\) modulo phase problem mentioned previously, it is almost never the \(p_{\rm s}\) value which can be used to find \({\rm X}_{\rm min}\) or \({\rm X}_{\rm max}\). Only in the case of a perfectly symmetric solution space distribution is equation 14 the optimal \(p_{\rm s}\) value, in which case the states \(|{\rm X}_{\rm min}\rangle\) and \(|{\rm X}_{\rm max}\rangle\) undergo the amplitude amplification process together. However, realistic optimization problems can be assumed to have a certain degree of randomness or asymmetry to their solution space, producing distributions more akin to figure 7. For this reason, equation 14 is better thought of as the starting point for finding the true optimal \(p_{\rm s}\), which we discuss later in section IV.B. For now, equation 14 is sufficient for demonstrating \(p_{\rm s}\)'s role in creating an average amplitude suitable for boosting \(|{\rm X}_{\rm min}\rangle\) or \(|{\rm X}_{\rm max}\rangle\), shown in figure 5.
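Both equation 14 and the resulting amplitude amplification can be sketched with a compact classical simulation, assuming the full array of costs \(\mathrm{C(X}_{i})\) is available, as in the simulations reported below:

```python
import numpy as np

def starting_ps(costs):
    """Eq. 14: scale the cost range [C(X_min), C(X_max)] onto a window of 2*pi."""
    return 2 * np.pi / (costs.max() - costs.min())

def simulate(costs, p_s, iterations):
    """Alg. 1: apply the cost-oracle phases, then reflect about the mean (Eq. 5)."""
    psi = np.full(len(costs), 1.0 / np.sqrt(len(costs)), dtype=complex)
    phases = np.exp(1j * p_s * costs)
    for _ in range(iterations):
        psi = phases * psi            # cost oracle U_c
        psi = 2.0 * psi.mean() - psi  # diffusion U_s
    return np.abs(psi) ** 2           # measurement probabilities for every X_i
```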
The bottom plot in figure 5 shows \(|\Psi\rangle\) after the first application of \(U_{\rm c}\) in algorithm 1. Note the location of the average amplitude (red 'x'), which is only made possible by the majority of quantum states which receive phases near the center of the gaussian in the top plot. Optimal amplitude amplification occurs when the desired state for boosting differs in phase from the mean by exactly \(\pi\) [2; 3], which is very close to the situation seen in figure 5. However, since this \(U_{\rm c}\) is derived from a QUBO with randomized weights, the \(p_{\rm s}\) value provided from equation 14 does not exactly produce a \(\pi\) phase difference between the optimal states (black star) and the mean amplitude (red 'x'). Consequently, the state(s) which become highly probable from amplitude amplification for this particular \(p_{\rm s}\) are not \(|{\rm X}_{\rm min}\rangle\) and \(|{\rm X}_{\rm max}\rangle\), which will be the subject of the coming two sections.
## IV Gaussian amplitude amplification
The amplitude space plot depicted at the bottom of figure 5 is useful for visualizing how a gaussian solution space distribution can be used for boosting, but the full amplitude amplification process is far more complicated. This is especially true for the QUBOs of this study, which are generated with randomized weights. Consequently, all of the results which follow throughout the remainder of this study are produced from classical simulations of amplitude amplification using cost oracles derived from linear QUBOs according to equations 1 - 4. For a deeper mathematical insight into these processes, please see [16; 17; 18].
### Achievable Probabilities
Amplitude amplification is an appealing quantum algorithm because it solves one of the most fundamental
Figure 5: (top) The 20-qubit linear QUBO histogram from figure 2, scaled by \(p_{\rm s}\) according to equation 14. (bottom) All \(2^{20}\) quantum states after applying \(U_{\rm c}|{\rm s}\rangle\), plotted in amplitude space (the complex plane). The red-blue color scale shows the density of quantum states in the bottom plot, corresponding to the y-axis of the top histogram. The states \(|{\rm X}_{\rm min}\rangle\) and \(|{\rm X}_{\rm max}\rangle\) are marked with a black star, the origin a black ‘+’, and average amplitude with a red ‘x’.
problems of quantum computing: measurement probability. For example, a single marked state using Grover's oracle with 30 qubits is capable of achieving a final probability that is only less than 100% by one billionth of a percent [1]. Thus, a natural question to ask when using \(U_{\mathrm{c}}\) is what kinds of probabilities can it produce for \(|\mathrm{X}_{\mathrm{min}}\rangle\) or \(|\mathrm{X}_{\mathrm{max}}\rangle\)? To answer this we conducted a statistical study of linear QUBOs ranging from length \(N=17\) to 27. For each \(N\) we generated numerous QUBOs according to equations 1 - 4, totals given in appendix **A**. We then let a classical simulator find the \(p_{\mathrm{s}}\) value which maximized the probability of measuring \(|\mathrm{X}_{\mathrm{min}}\rangle\) for each QUBO (and for certain cases the optimal \(p_{\mathrm{s}}\) for \(|\mathrm{X}_{\mathrm{max}}\rangle\) as well). Results for each problem size are shown below in figure 6.
\[\mu =\frac{1}{2^{N}}\sum_{i}^{2^{N}}\mathrm{C}(\mathrm{X}_{i}) \tag{15}\] \[\sigma =\sqrt{\frac{\sum_{i}^{2^{N}}(\mathrm{C}(\mathrm{X}_{i})-\mu)^{2 }}{2^{N}}}\] (16) \[\sigma^{\prime} =\sigma\cdot p_{\mathrm{s}} \tag{17}\]
Figure 6 tracks three noteworthy trends found across the various QUBO sizes: the average peak probability achievable for \(|\mathrm{X}_{\mathrm{min}}\rangle\) (black triangle), the highest recorded probability for \(|\mathrm{X}_{\mathrm{min}}\rangle\) (red star), and the average scaled standard deviation \(\sigma^{\prime}\) (blue circle). For clarity, the derivation of \(\sigma^{\prime}\) is given by equations 15 - 17. This quantity is the standard deviation of a QUBO's solution space distribution after being scaled by \(p_{\mathrm{s}}\), making it a comparable metric for all QUBO sizes. In our previous study we demonstrated a result in agreement with figure 6, which is the correlation between higher achievable probabilities for \(|\mathrm{X}_{\mathrm{min}}\rangle\) (red star) and smaller scaled standard deviations \(\sigma^{\prime}\) (blue circle) [10]. The latter is what is responsible for increasing the distance between \(|\mathrm{X}_{\mathrm{min}}\rangle\) and the average amplitude like shown in figure 5.
### Solution Space Skewness
The relation between \(N\), \(\sigma^{\prime}\), and highest prob.(\(|\mathrm{X}_{\mathrm{min}}\rangle\)) from figure 6 can be summarized as follows: larger problem sizes tend to produce smaller standard deviations, which in turn lead to better probabilities produced from amplitude amplification. However, there is a very apparent disconnect between the probabilities capable of each problem size (red stars) versus the average (black triangle). To explain this, we must first introduce the quantity \(\mathrm{X}_{\Delta}\) given in equation 18 below.
\[\mathrm{X}_{\Delta}=2\mu-(\mathrm{C}(\mathrm{X}_{\mathrm{max}})+\mathrm{C}( \mathrm{X}_{\mathrm{min}})) \tag{18}\]
The quantity \(\mathrm{X}_{\Delta}\) from equation 18 is the difference between \(\mathrm{C}(\mathrm{X}_{\mathrm{min}})\) and \(\mu\) (the mean) minus the difference between \(\mu\) and \(\mathrm{C}(\mathrm{X}_{\mathrm{max}})\). A positive value for \(\mathrm{X}_{\Delta}\) indicates that the mean is closer to \(\mathrm{C}(\mathrm{X}_{\mathrm{max}})\), and vice versa for a negative valued \(\mathrm{X}_{\Delta}\). In essence, it is a measure of skewness that describes the asymmetry of a solution space distribution. Figure 7 shows example QUBO distributions for three cases of \(\mathrm{X}_{\Delta}\), for \(N=25\), demonstrating the impact \(\mathrm{X}_{\Delta}\) has on the ability to boost \(|\mathrm{X}_{\mathrm{min}}\rangle\) versus \(|\mathrm{X}_{\mathrm{max}}\rangle\). While \(\sigma^{\prime}\) is a strong indicator of a problem's overall aptitude for amplitude amplification, \(\mathrm{X}_{\Delta}\) determines whether the optimal minimum or maximum solution is boostable, and which is not. Further evidence of this can be seen in figure 8, which shows 1000 randomly generated linear QUBOs of length \(N=23\), and the peak probabilities achievable for \(|\mathrm{X}_{\mathrm{min}}\rangle\) and \(|\mathrm{X}_{\mathrm{max}}\rangle\) as a function of \(\mathrm{X}_{\Delta}\).
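Classically, the skewness indicator is immediate to compute (a sketch, given the full solution space):

```python
import numpy as np

def x_delta(costs):
    """Eq. 18: positive values favour boosting |X_min>, negative values |X_max>."""
    return 2 * costs.mean() - (costs.max() + costs.min())
```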
If we compare the average peak probabilities for \(|\mathrm{X}_{\mathrm{min}}\rangle\) from figure 6 with the full data of QUBOs shown in figure 8, we can see why the average peak probability is significantly lower than the highest recorded. Across the 1000 QUBOs studied, it is clear that \(\mathrm{X}_{\Delta}=0\) is a dividing point for whether \(|\mathrm{X}_{\mathrm{min}}\rangle\) or \(|\mathrm{X}_{\mathrm{max}}\rangle\) is capable of reaching a significant probability of measurement through amplitude amplification. For \(N=23\), the average prob.(\(|\mathrm{X}_{\mathrm{min}}\rangle\)) reported in figure 6 is approximately 64%. However, if instead we only consider QUBOs with \(\mathrm{X}_{\Delta}>0\) from figure 8, then the average peak probability for \(|\mathrm{X}_{\mathrm{min}}\rangle\) is around 86%, and likewise for \(|\mathrm{X}_{\mathrm{max}}\rangle\) when \(\mathrm{X}_{\Delta}<0\).
Together, figures 7 and 8 demonstrate the significance of knowing \(\mathrm{X}_{\Delta}\) from an experimenter's perspective. Depending on the optimization problem of interest, it is reasonable to assume that an experimenter may be interested in finding only \(\mathrm{X}_{\mathrm{min}}\) or \(\mathrm{X}_{\mathrm{max}}\). But without any
Figure 6: Results from studying randomly generated linear QUBOs of various sizes \(N\). The number of QUBOs studied per \(N\) is provided in appendix **A**. For each QUBO, the optimal \(p_{\mathrm{s}}\) value for producing the highest probability of measurement for \(|\mathrm{X}_{\mathrm{min}}\rangle\) was used to record three trends: average probability of \(|\mathrm{X}_{\mathrm{min}}\rangle\) (black triangle), highest recorded probability (red star), and average scaled standard deviation (blue circle). Error bars showing one standard deviation of each \(\sigma^{\prime}\) are provided aswell.
_a priori_ knowledge of a problem's underlying solution space, specifically \(\mathrm{X}_{\Delta}\), the experimenter may unknowingly be searching for a solution which is probabilistically near impossible to find through amplitude amplification. For example, consider the QUBO distribution illustrated in the top plot of figure 7, and the peak probability for boosting \(|\mathrm{X}_{\mathrm{max}}\rangle\): 0.16%. Although it is ideal to have insight into a particular problem's \(\mathrm{X}_{\Delta}\) before using amplitude amplification, as we demonstrate in section V., information about \(\mathrm{X}_{\Delta}\) can be inferred through measurement results.
### Sampling for \(p_{\mathrm{s}}\)
If a particular optimization problem is suitable for amplitude amplification, then the speed of the quantum algorithm outlined in this study is determined by how quickly the optimal \(p_{\mathrm{s}}\) value can be found. Here we shall show that sampling a cost function \(\mathrm{C(X)}\) can provide reliable information for approximating \(p_{\mathrm{s}}\) from equation 14, which can then be used to begin the variational approach outlined in sections V. and VI. Importantly, the number of cost function evaluations needed is significantly smaller than the number required to actually solve the problem, either classically or through amplitude amplification. The strategy outlined in equations 19 - 29 below can be used for approximating \(p_{\mathrm{s}}\) when the experimenter is expecting an underlying solution space describable by a gaussian function (equation 6). If another type of distribution is expected, then the function used in equation 22 could in principle be modified accordingly (for example, sinusoidal, polynomial, exponential [18]).
Suppose we sample a particular cost function \(\mathrm{C(X)}\)\(M\) times, where \(M<<2^{N}\). We will define the set \(\mathbb{M}\) as the collection of values \(\mathrm{C(X_{i})}\) obtained from these samples.
Figure 8: A total of 1000 randomly generated linear QUBOs of size \(N=23\). For each QUBO, the highest achievable probability for \(|\mathrm{X}_{\mathrm{min}}\rangle\) (black circle) and \(|\mathrm{X}_{\mathrm{max}}\rangle\) (red triangle) are plotted as a function of \(\mathrm{X}_{\Delta}\). The top plot includes both data points per QUBO, while the bottom plot only shows the higher of the two values.
Figure 7: Three randomly generated QUBO distributions for \(N=25\), illustrating \(\mathrm{X}_{\Delta}\) cases for largely positive (top), largely negative (middle), and near zero (bottom). In all three plots the exact \(\mathrm{X}_{\Delta}\) value is reported, as well as the highest achievable probability for \(|\mathrm{X}_{\mathrm{min}}\rangle\) and \(|\mathrm{X}_{\mathrm{max}}\rangle\) (each from a different \(p_{\mathrm{s}}\) value). Also shown in each plot are the values for \(\mathrm{C(X_{\mathrm{min}})}\) and \(\mathrm{C(X_{\mathrm{max}})}\), and their numerical distance to the mean \(\mu\) (red-dashed line).
\[\mathbb{M}=\{\text{C(X}_{1}),\text{C(X}_{2}),...,\text{C(X}_{M})\} \tag{19}\]
Using these \(M\) values, we can compute an approximate mean and standard deviation.
\[\tilde{\mu} =\frac{1}{M}\sum_{c\in\mathbb{M}}c \tag{20}\] \[\tilde{\sigma} =\sqrt{\frac{\sum_{c\in\mathbb{M}}\left(c-\tilde{\mu}\right)^{2}} {M}} \tag{21}\]
In order to use equation 14 for obtaining \(p_{\text{s}}\), we need approximations for C(X\({}_{\text{min}}\)) and C(X\({}_{\text{max}}\)). If we assume an underlying gaussian structure to the problem's solution space, then we can write down the following equation to describe it:
\[2^{N}=\int_{-\infty}^{\infty}\tilde{\alpha}\,\mathrm{e}^{-\frac{\left(x-\tilde{\mu}\right)^{2}}{2\tilde{\sigma}^{2}}}dx \tag{22}\] \[=-\tilde{\alpha}\tilde{\sigma}\sqrt{\frac{\pi}{2}}\,\mathrm{erf}\left(\frac{\tilde{\mu}-x}{\sqrt{2}\tilde{\sigma}}\right)\Bigg{|}_{-\infty}^{\infty} \tag{23}\] \[=-\tilde{\alpha}\tilde{\sigma}\sqrt{\frac{\pi}{2}}\cdot\left[-1-1\right] \tag{24}\]
where erf() is the gaussian error function. Using equation 24, we can rearrange terms and solve for an approximation to the height of the gaussian.
\[\tilde{\alpha}=\frac{2^{N-1}}{\tilde{\sigma}\sqrt{\frac{\pi}{2}}} \tag{25}\]
With the values \(\tilde{\mu}\), \(\tilde{\sigma}\), and \(\tilde{\alpha}\) obtained from sampling, we can now approximate C(X\({}_{\text{min}}\)) and C(X\({}_{\text{max}}\)) using equation 26 below.
\[\tilde{\mathrm{G}}(x)=\tilde{\alpha}\,\mathrm{e}^{-\frac{\left(x-\tilde{\mu}\right)^{2}}{2\tilde{\sigma}^{2}}}=1 \tag{26}\]
Solving for \(x\) yields the following two values:
\[x_{\pm}=\tilde{\mu}\pm\tilde{\sigma}\sqrt{-2\text{ln}\left(\frac{1}{\tilde{ \alpha}}\right)} \tag{27}\]
which can be expressed in terms of the two quantities originally derived from sampling:
\[x_{\pm}=\tilde{\mu}\pm\tilde{\sigma}\sqrt{-2\text{ln}\left(\frac{\tilde{ \sigma}\sqrt{\pi/2}}{2^{N-1}}\right)} \tag{28}\]
And finally, the solutions \(x_{\pm}\) can be used to obtain \(p_{\text{s}}\).
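A compact sketch of this sampling procedure is given below (Python). It follows equations 19 - 28 directly and returns \(x_{\pm}\) as estimates of C(X\({}_{\text{min}}\)) and C(X\({}_{\text{max}}\)); converting these into \(\tilde{p}_{\text{s}}\) requires equation 14, which is not reproduced in this section, so that final step is left out. The toy cost function in the usage example is an arbitrary linear one, chosen only for illustration.

```python
import numpy as np

def estimate_extrema(samples, N):
    """Estimate C(X_min) and C(X_max) from M sampled cost values (equations 19-28),
    assuming the full solution space is approximately gaussian."""
    samples = np.asarray(samples, dtype=float)
    M = samples.size
    mu_t = samples.mean()                                     # eq. 20
    sigma_t = np.sqrt(np.sum((samples - mu_t) ** 2) / M)      # eq. 21
    alpha_t = 2 ** (N - 1) / (sigma_t * np.sqrt(np.pi / 2))   # eq. 25
    spread = sigma_t * np.sqrt(-2.0 * np.log(1.0 / alpha_t))  # eq. 27/28
    return mu_t - spread, mu_t + spread                       # x_-, x_+

# Hypothetical usage on a toy linear cost function C(X) = sum_i W_i x_i
rng = np.random.default_rng(1)
N, M = 23, 500
W = rng.integers(-50, 51, size=N)
samples = [float(W @ rng.integers(0, 2, size=N)) for _ in range(M)]
x_minus, x_plus = estimate_extrema(samples, N)  # feed into equation 14 to obtain p_s
```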
The significant result from table 1 is that sampling just 100 - 500 times, for a cost function with \(2^{23}\) possible solutions, is accurate enough to produce an approximate \(\tilde{p}_{\text{s}}\) value with an average error of roughly 6 - 7%. And as we show in the next section, this is enough accuracy to use for either a heuristic or variational approach for finding optimal solutions.
## V Variational amplitude amplification
The results of sections II - IV. demonstrate quantum's aptitude for encoding and solving a QUBO problem using amplitude amplification. In this section we discuss how this potential can be realized from an experimental perspective. In particular, we focus on amplitude amplification's ability to find optimal solutions under realistic circumstances with limited information. The results of this section are then used to motivate section VI., in which we discuss how amplitude amplification can be used in a hybrid classical-quantum model of computing, similar to other successful variational approaches [41, 42, 51].
| \(M\) | 100 | 500 | 1000 | 2000 |
| --- | --- | --- | --- | --- |
| Average \(\tilde{p}_{\text{s}}\) Error | 7.28% | 6.37% | 6.31% | 6.29% |

Table 1: Average error in approximating \(p_{\text{s}}\) using equations 19 - 29. Each value comes from 50,000 independent sampling trials on linear QUBOs of size \(N\) = 23.
### Boosting Near-Optimal Solutions
The results shown in figures 6 - 8 focus on quantum's potential for finding \(|\mathrm{X}_{\mathrm{min}}\rangle\) and \(|\mathrm{X}_{\mathrm{max}}\rangle\), the optimal solutions which minimize/maximize a given cost function C(X). However, in order to understand how amplitude amplification can be used in a variational model, it is equally important that non-optimal \(|\mathrm{X}_{i}\rangle\) states are also capable of being boosted.
As discussed in the conclusion of our previous study [10], as well as sections III.C and IV.C, the most difficult aspect of using algorithm Alg.1 from an experimental standpoint is finding \(p_{\mathrm{s}}\). More specifically, finding an optimal \(p_{\mathrm{s}}\) for boosting \(|\mathrm{X}_{\mathrm{min}}\rangle\) or \(|\mathrm{X}_{\mathrm{max}}\rangle\) is a challenge due to the limited amount of information that one can learn through measurements alone. An example of this can be seen in figure 9, which shows the peak achievable probabilities of the three lowest \(|\mathrm{X}_{i}\rangle\) states as a function of \(p_{\mathrm{s}}\) (\(|\mathrm{X}_{\mathrm{min}}\rangle\) and the next two minimum solutions), for the QUBO corresponding to \(\mathrm{X}_{\Delta}=331.5\) from figure 7.
The challenge presented in figure 9 is the narrow range of \(p_{\mathrm{s}}\) values for which each \(|\mathrm{X}_{i}\rangle\) state is able to achieve meaningful probabilities of measurement. From an experimental perspective, the \(p_{\mathrm{s}}\) regions outside these bands are only capable of producing quantum superposition states which are slightly better than \(|\mathrm{s}\rangle\), the equal superposition starting state. Thus, an experimenter could use a \(p_{\mathrm{s}}\) value that is incredibly close to optimal, but only see seemingly random measurement results through repeated implementations of Alg.1.
Our proposed solution to the \(p_{\mathrm{s}}\) problem as described above is twofold: 1) We must widen our view of useful \(p_{\mathrm{s}}\) values and see where other \(|\mathrm{X}_{i}\rangle\) states become highly probable, and 2) put less burden on quantum to find optimal solutions alone when an assisting classical approach may be more suitable. In this section we focus on addressing (1), which will then motivate (2) in section VI.
Suppose we aren't solely interested in using quantum to find the exact optimal solution C(X\({}_{\mathrm{min}}\)), but instead are content with any X\({}_{i}\) within the best 50 answers (50 lowest C(X) values). In order for amplitude amplification to be viable in this heuristic context, it requires significant probabilities of measurement for these non-optimal solution states, similar to figure 9. Additionally, an experimenter must be able to quickly and reliably find the \(p_{\mathrm{s}}\) values which produce them. Shown below in figure 10 is a plot which provides insight into the feasibility of both of these questions, for the QUBO corresponding to figure 9.
Figure 10 shows the full \(p_{\mathrm{s}}\) range for which an experimenter could find the 50 best solutions for minimizing C(X) via quantum measurements. The black circles indicate on the x-axis the \(p_{\mathrm{s}}\) values where each \(|\mathrm{X}_{i}\rangle\) state (or multiple states) is maximally probable, aligning with its corresponding C(X\({}_{i}\)) value along the y-axis. Numeric values for peak probabilities of the best 20 solutions are provided in the table below the plot, as well as a linear regression best fit (red-dotted line) for the overall 50 data points. The reported R correlation value is given by equation B5 in appendix **B**.
There are several significant results displayed in figure 10, the first of which requires returning to equation 2. By limiting the allowed weight values \(W_{i}\) and \(w_{ij}\) to integers, all solutions to C(X) are consequently integers as well. This means that the linear correlation shown in the figure can in principle be used to predict \(p_{\mathrm{s}}\) values where integer C(X\({}_{i}\)) solutions must exist. If \(W_{i}\) and \(w_{ij}\) are instead allowed to take on float values, which is more representative of realistic optimization problems, the linearity of the solutions still persists, but it cannot be used to predict allowed C(X) values as reliably.
The linear best fit shown in figure 10 is accurate for the top 50 solutions, but extending the \(p_{\mathrm{s}}\) scale further reveals that it is only an approximation applicable to a small percentage of states. This is shown in figure 11 below, which once again is a \(p_{\mathrm{s}}\) vs. C(X) plot for the same QUBO, but now for the best 400 minimizing solutions. It is clear from the data in this figure that the top 400 solutions are in no way linearly aligned, which
Figure 9: Plots of \(|\mathrm{X}_{i}\rangle\) state probability from amplitude amplification as a function of \(p_{\mathrm{s}}\), for \(|\mathrm{X}_{\mathrm{min}}\rangle\) (blue-solid) and the next two minimal solutions (black and red-dashed). Cost function values C(X\({}_{i}\)) are reported next to each state’s plot, corresponding to the QUBO from the top plot in figure 7. The bottom plot is a zoomed in scale of the top plot, depicting the same data points.
is a more expected result given the complicated nature of these imperfect gaussian distributions undergoing amplitude amplification. However, although the data is not linear, there is very clearly a curved structure that could be utilized to predict \(p_{\rm s}\) values in the same manner described above.
It is important to note that in both figures 10 and 11, the manner in which the solution states \(|{\rm X}_{i}\rangle\) are found to be most probable is sequential. This means that if a particular state \(|{\rm X}_{i}\rangle\) is most probable at a certain value \(p_{\rm s}=x\), all solutions \({\rm C}({\rm X}_{j})<{\rm C}({\rm X}_{i})\) will have peak probabilities at values \(p_{\rm s}<x\). However, the bottom plot in figure 11 shows that the further a solution state is from \(|{\rm X}_{\rm min}\rangle\) the lower its achievable peak probability. This means that there is a limit to how many solutions are viable for amplitude amplification to find. As we discuss in the coming subsections, these are the key underlying features that we must consider when constructing a variational amplitude amplification algorithm.
### Constant Iterations
In order to construct an algorithm which capitalizes on the structure and probabilities shown in figure 11, we must consider an additional piece of information not illustrated in the figure: step 3 of Alg. 1, iterations \(k\). The data points in the figure are indeed the \(p_{\rm s}\) values which produce the highest probabilities of measurements, but unfortunately they are achieved using different iteration counts. In principle this means that an experimenter must decide both \(p_{\rm s}\) and \(k\) before each amplitude amplification attempt, further complicating the information learned from measurement results.
Unlike \(p_{\rm s}\), which is difficult to learn because it depends on the collective \(2^{N}\) solutions to \({\rm C}({\rm X})\), approximating a good iteration count \(k\) is easier. It turns out that the standard number of Grover iterations \(k_{\rm G}=\frac{\pi}{4}\sqrt{2^{N}/M}\), where \(2^{N}\) is the total number of quantum states and \(M\) is the number of marked states, is equally applicable when
Figure 11: (top) A plot of the 400 lowest \({\rm C}({\rm X}_{i})\) values as a function of \(p_{\rm s}\), for the \({\rm X}_{\Delta}=331.5\) QUBO from figure 7. Each data point represents the \(p_{\rm s}\) value where the \(|{\rm X}_{i}\rangle\) state(s) is most probable. The red box in the lower left corner represents the \(p_{\rm s}\) region depicted in figure 10. (bottom) The probabilities achieved for these 400 lowest \(|{\rm X}_{i}\rangle\) states using the \(p_{\rm s}\) values shown in the top plot. Each state is plotted in order of its rank from 1 (\({\rm X}_{\rm min}\)) to 400 (\(400^{\rm th}\) lowest \({\rm C}({\rm X}_{i})\) solution).
Figure 10: (top) A plot of the 50 lowest \({\rm C}({\rm X}_{i})\) values as a function of \(p_{\rm s}\), for the \({\rm X}_{\Delta}=331.5\) QUBO from figure 7. Each data point represents the \(p_{\rm s}\) value where the \(|{\rm X}_{i}\rangle\) state(s) is most probable. A linear regression best-fit is shown by the red-dotted line, with its R correlation value reported at the top (equation B5 from appendix **B**). (bottom) A table of values for the 20 best solutions. Each entry reports: the cost function value \({\rm C}({\rm X}_{i})\), the peak probability for the \(|{\rm X}_{i}\rangle\) state(s), and the number of unique \({\rm X}_{i}\) solutions that result in the same \({\rm C}({\rm X}_{i})\) value.
using \(U_{\rm c}\) as well. If an experimenter can use \(k\approx k_{\rm G}\) iterations for a cost oracle \(U_{\rm c}\) and find significant probabilities of measurement, then a variational amplitude amplification strategy can be reduced to a single-parameter problem: \(p_{\rm s}\). Figure 12 demonstrates that this is indeed viable, showcasing \(|{\rm X}_{i}\rangle\) solution state probabilities as a function of \(p_{\rm s}\) for three different choices of \(k\).
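For orientation, the short snippet below (Python) evaluates this iteration count for the 25-qubit QUBO considered in figure 12; the result is consistent with the \(k_{\rm G}\approx 4500\) quoted in the next subsection, and is included only as a sanity check of the formula above.

```python
import math

def grover_iterations(n_qubits, marked=1):
    """Standard Grover count k_G = (pi/4) * sqrt(2^n / M)."""
    return (math.pi / 4.0) * math.sqrt(2 ** n_qubits / marked)

print(grover_iterations(25))  # ~4550 iterations for 25 qubits with a single marked state
```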
The QUBO corresponding to figure 12 is the same \(N=25\) example for figures 10 and 11. For instances where multiple states correspond to the same numerical solution (C(X\({}_{i}\)) = C(X\({}_{j}\))), the solid-color line shown represents their joint probability: Prob.( \(|{\rm X}_{i}\rangle\) ) + Prob.( \(|{\rm X}_{j}\rangle\) ) (note that these individual probabilities are always equal). Examples of this can also be seen in the table included in figure 10. Additionally, a black-dashed line is shown in the top three plots, tracking the collective
Figure 12: Plots of \(|{\rm X}_{i}\rangle\) state probabilities as a function of \(p_{\rm s}\), for the \(N=25\) QUBO shown in figures 10 and 11. The top three panels show individual state probabilities as solid-colored lines, for three different constant \(k\) iterations (1000, 2000, and 3000) across the \(p_{\rm s}\) region depicted on the x-axis. An additional black-dashed line is also shown, which records the cumulative probability of the five most probable solutions \(|{\rm X}_{i}\rangle\) at any given \(p_{\rm s}\) value. These cumulative probabilities are also replotted in the bottom most panel for comparison.
probability of the five most probable solutions at any given \(p_{\rm s}\). These three lines are then replotted in the bottom panel for comparison.
The \(p_{\rm s}\) region shown in figure 12 was chosen to illustrate a scenario where variational amplitude amplification is most viable. For \(p_{\rm s}>0.00291\), nearly every possible integer solution \({\rm C}({\rm X}_{i})\geq-1497\) exists via some binary combination for this particular QUBO problem. The exceptions where certain integer solutions do not exist can be seen clearly in the \(p_{\rm s}\) regions with very low probability, for example \(0.0029065\leq p_{\rm s}\leq 0.002907\). In contrast to the region shown in this figure, once \(p_{\rm s}\) becomes closer to where \(|{\rm X}_{\rm min}\rangle\) is maximally probable, measurement probabilities become more akin to figure 9. Thus, it is more strategic for a hybrid algorithm to start in a \(p_{\rm s}\) region like figure 12, where measurement results can consistently yield useful information.
### Information Through Measurements
From an experimental perspective, a significant result from figure 12 is the set of black-dashed lines shown in the top three plots. At \(k=3000\) (\(k_{\rm G}\approx 4500\) for 25 qubits, \(M=1\)), the black-dashed line is almost entirely composed of the single most probable solution state(s). With probabilities around \(70-80\%\) for many of the states shown, it is realistic that the same \(|{\rm X}_{i}\rangle\) state could be measured twice in only \(2-4\) amplitude amplification attempts. Two measurements yielding the same \({\rm C}({\rm X}_{i})\) value (possibly from different \({\rm X}_{i}\)) is a strong experimental indicator that the \(p_{\rm s}\) value used is very close to optimal for that solution, corresponding to the data points from figures 10 and 11. Confirming \(3-4\) different data points in this manner can then be used to approximate the underlying curved structure of these figures, which in turn could be used to predict \(p_{\rm s}\) values where \(|{\rm X}_{\rm min}\rangle\) may exist.
While using \(k\) closer to \(k_{\rm G}\) is good for getting the maximal probability out of solution states, the \(k=1000\) and \(2000\) plots in figure 12 support a different strategy for quantum. At \(k=2000\), the black-dashed line is still primarily composed of the single most probable \(|{\rm X}_{i}\rangle\) state(s), but critically it does not have the same dips in probability between neighboring solutions. Instead, the cumulative probability stays just as high for these in-between \(p_{\rm s}\) regions, sometimes even higher! If we now look at the \(k=1000\) plot, this trend becomes even more prevalent, whereby the cumulative probability plot is on average \(20-30\%\) higher than any individual \(|{\rm X}_{i}\rangle\) state. Interestingly, the bottom panel of figure 12 shows that the cumulative probability plot for \(k=1000\) is higher than the \(k=3000\) line in many regions. Thus, if the role of quantum is to simply provide a heuristic answer [52], not necessarily \(|{\rm X}_{\rm min}\rangle\), then using lower \(k\) values is favorable for a few reasons. Firstly, we can anticipate solutions in a \(p_{\rm s}\) region where multiple states share the same cost function value, so one can expect \(M>1\) more frequently when using \(k_{\rm G}=\frac{\pi}{4}\sqrt{2^{N}/M}\). Secondly, the amplitude amplification process itself is faster due to smaller \(k\), which makes it more achievable on noisy qubits due to shallower circuit depths.
The optimal use of \(k\) is a non-trivial challenge for an experimenter. However, as illustrated in figure 12, amplitude amplification can still be effective with a wide range of different \(k\) values. To further demonstrate this, figure 13 shows three plots of simulated measurements over the \(p_{\rm s}\) range depicted in figure 12. Using the \(k\) values \(1000\), \(2000\), and \(3000\), each plot shows data points representing probabilistic measurements at regular intervals of \(p_{\rm s}\). In order to compare each \(k\) value's effectiveness more equally, the number of measurements taken per \(p_{\rm s}\) value, \(t\), was chosen such that \(t\cdot k=12000\) is consistent across all three experiments. Thus, each of the three plots in figure 13 represents the same total number of amplitude amplification iterations divided among \(t\) experimental runs.
The data points shown in figure 13 are separated into two categories, which are easily recognizable from an experimental perspective. Measurements which yielded \({\rm C}({\rm X}_{i})<-1350\) are plotted as red circles, while all other measurements are plotted as black triangles. As illustrated for all three values of \(k\), the red data points can be seen as producing near linear slopes, all of which would signal to the experimenter that these measurement results are leading to \({\rm X}_{\rm min}\). The motivation for figure 13 is to demonstrate that the same underlying information can be experimentally realized using different \(k\) values. Thus, when to use \(k=3000\) versus \(k=1000\) is a matter of optimization, which we discuss in section VI. as the role of a classical optimizer for a hybrid model.
### Quantum Verification
The results of the previous subsections demonstrate the capacity for amplitude amplification as a means for finding a range of optimal \(\mathrm{X}_{i}\) solutions. However, regardless of whether these solutions are found via quantum or classical, a separate problem lies in verifying if a given solution is truly the global minimum \(\mathrm{X}_{i}=\mathrm{X}_{\mathrm{min}}\). If it is not, then \(\mathrm{X}_{i}\) is referred to as a local minimum. Classically, evolutionary (or genetic) algorithms [53; 54; 55; 56] are one example strategy for overcoming local minima. Similarly, quantum algorithms have also demonstrated success in this area for both annealing [57; 58] and gate-based [59; 60; 61] approaches.
The strategy for verifying a local versus global minimum using amplitude amplification can be seen by comparing the region \(0.0029\leq p_{\rm s}\leq 0.00291\) in figures 12 and 13. For the linear QUBO corresponding to these figures, there exists a solution \({\rm C}({\rm X}_{i})=-1497\) which becomes maximally probable at \(p_{\rm s}\approx 0.002914\), followed by the next lowest solution \({\rm C}({\rm X}_{i})=-1491\) at \(p_{\rm s}\approx 0.002892\). Because there are no binary combinations \({\rm X}_{i}\) that can produce values \(-1492\geq{\rm C}({\rm X}_{i})\geq-1496\), the \(p_{\rm s}\) region that _would_ correspond to their solutions instead produces
nothing measurably significant. This can be seen by the low cumulative probabilities in figure 12, as well as experimentally in figure 13 as a gap in red data points for this \(p_{\rm s}\) region across all three simulations.
The ability for quantum to determine if an X\({}_{i}\) solution is locally or globally minimum is achieved by searching past the \(p_{\rm s}\) value corresponding to the solution. Doing so will result in one of two outcomes: either a lower C(X\({}_{j}\)) value will be probabilistically found (confirming X\({}_{i}\) was a local minimum), or the experimenter will only find random measurement results (X\({}_{i}\) was the global minimum). Examples of this can be seen in figure 14, showcasing simulated measurement results as an experimenter searches past the optimal \(p_{\rm s}\) value for \(|\)X\({}_{\rm min}\)\(\rangle\).
The simulated experiments shown in figure 14 were chosen to highlight both favorable (bottom) and unfavorable (top) cases for quantum. The commonality between both experiments is that there is a clear point in \(p_{\rm s}\) (grey line) in which decreasing \(p_{\rm s}\) further results in only noisy random measurements. However, determining this cutoff point using measurement results alone is challenging. The top plot corresponds to the QUBO from figures 10 - 12, which is the non-ideal situation in which there are significant gaps in solutions between the best 20 minimizing C(X\({}_{i}\)). Experimentally this manifests as numerous \(p_{\rm s}\) regions that could be wrongly interpreted as the X\({}_{\rm min}\) cutoff point. Conversely, the bottom plot represents the ideal case where the best minimizing C(X\({}_{i}\)) solutions are all closely clustered together. This leads to a much more consistent correlation of measurement results leading to X\({}_{\rm min}\), followed by an evident switch to randomness.
The significance here is that amplitude amplification has an experimentally verifiable means for identifying the global minimum X\({}_{\rm min}\) of a cost function. Similarly, the same methodology can be in principle used to check for the existence of an X\({}_{i}\) solution corresponding to any given cost function value, which we discuss further in section VII.C. However, the obvious drawback is that this verification technique relies on numerous amplitude amplification measurements finding nothing, which costs further runtime as well as being probabilistic. As we discuss in
Figure 14: Simulated measurement results for \(p_{\rm s}\) regions above and below the optimal point for finding \(|\)X\({}_{\rm min}\)\(\rangle\). Each plot corresponds to a different linear QUBO of size \(N=25\), \(k=4000\), with X\({}_{\Delta}\) values reported for each (top plot corresponds to the QUBO from figures 9 - 13). The point where X\({}_{\rm min}\) is measured is indicated in both plots by the intersection of the blue (horizontal) and grey (vertical) lines. Red-circle data points represent measurement results within the best 30 minimizing solutions to C(X), otherwise as black triangles.
Figure 13: Simulated measurement results corresponding to the probabilities shown in figure 12, produced by amplitude amplification for various values of \(p_{\rm s}\) (x-axis) and \(k\) (1000, 2000, and 3000). At each of the \(p_{\rm s}\) values simulated, the number of measurements per experiment \(t\) was chosen based on \(k\) as follows (\(t\),\(k\)): (4,3000), (6,2000), (12,1000), such that \(t\cdot k=12000\). Measurement results which yielded C(X\({}_{i}\))\(<-1350\) are plotted as red circles, otherwise as black triangles. Blue lines for C(X\({}_{\rm min}\)) and C(X\({}_{\rm max}\)) are plotted as well.
the next section, a more realistic application of this quantum feature is to help steer a classical algorithm past local minima, leaving the verification of \(\mathrm{X}_{\mathrm{min}}\) as a joint effort between both quantum and classical.
## VI Hybrid Solving
The results of section V. were all features of amplitude amplification using \(U_{\mathrm{c}}\) that were found through classical simulations of quantum systems. They represent the primary motivation of this study, which is to demonstrate amplitude amplification's potential and the conditions under which it can be experimentally realized. By contrast, the discussions of section VI. here are more speculative. Given all of the results from sections III. - V., we now discuss how the strengths and weaknesses of amplitude amplification synergize with a parallel classical computer.
The plots shown in figures 13 and 14 represent a very non-optimal approach to finding \(\mathrm{X}_{\mathrm{min}}\), functionally a quantum version of an exhaustive search. If the ultimate goal is to solve a cost function problem as quickly as possible, then it is in our best interest to use any and all tools available. This means using a quantum computer when it is advantageous, and similarly also recognizing when the use of a classical computer is more appropriate. In this section we discuss this interplay between quantum and classical, and the situations in which an experimenter may favor one or the other. Shown below in figure 15 is the general outline of a variational amplitude amplification model which relies solely on quantum to produce \(\mathrm{X}_{\mathrm{min}}\).
Given the current state of qubit technologies [62, 63, 64], performing one complete amplitude amplification circuit should be considered a scarce resource. As such, it is the role of a classical optimizer to determine the most effective use of this resource, choosing \(p_{\mathrm{s}}\) and \(k\) values which will probabilistically get the most value out of each attempt. Determining optimal values to adjust a quantum circuit is the typical hybrid strategy found among other popular variational models of quantum computing [41, 42, 51]. The majority of the computational workload is placed on the QPU (quantum processing unit), while a classical optimizer is used in between runs to adjust quantum circuit parameters accordingly. As evidenced by figures 13 and 14, this model is possible for amplitude amplification as well. However, there is a different model of hybrid computing which better utilizes both quantum and classical's strengths, shown below in figures 16 and 17.
The advantage to hybrid computing using the model shown in figure 16 is that both processors are working in tandem to solve the same problem, utilizing information gained from one another. Information obtained through amplitude amplification measurements can be used to speed up a classical algorithm, and vice versa. As we discuss further in the next subsection, this pairing of quantum and classical is maximally advantageous when the strengths of both computers complement each other's weaknesses.
### Supporting Greedy Algorithms
One notable strength of classical computing is 'greedy' algorithms, which oftentimes provide heuristic solutions for use cases ranging from biology and chemistry [52, 65] to finance [66]. Greedy algorithms are particularly viable for problems that possess certain structures which can be exploited [67]. The key feature of these algorithms is that they focus on making locally optimal decisions, each of which yields the maximal immediate gain toward the optimum. Consequently, they are very good at finding near-optimal solutions quickly, but are also prone to getting bottlenecked into local minima [68].
The motivation for pairing amplitude amplification with a classical greedy algorithm is best exemplified by figures 12 and 13. The quantum states illustrated in these figures represent \(\mathrm{|X_{i}\rangle}\) states which rank as the \(30^{\mathrm{th}}-80^{\mathrm{th}}\)
Figure 16: Workflow of a hybrid model of computing, utilizing both a quantum and classical computer. Both the QPU and CPU are run in parallel, and the information obtained from both are fed into the same classical optimizer, which in turn determines the most effective use for each processor.
Figure 15: The general outline of a variational amplitude amplification workflow. Information from amplitude amplification in the form of measurements is fed to a classical optimizer between runs. The optimizer then processes this information to supply the quantum computer with the next set of values \(p_{\mathrm{s}}\) and \(k\), repeating this process until \(\mathrm{X}_{\mathrm{min}}\) or another suitable solution is found.
best minimizing solutions to C(X). Under the right conditions it is reasonable to expect that a quantum computer could yield a solution in this range within \(1-5\) amplitude amplification attempts. The question then becomes how quickly a classical greedy algorithm could achieve the same feat? Without problem specific structures to exploit, and as problem sizes scale like \(2^{N}\), it becomes increasingly unlikely that classical can compete heuristically with quantum, which we argue is quantum's first advantage over classical in a hybrid model.
Now, supposing that amplitude amplification does yield a low C(X\({}_{i}\)) solution faster than classical, the problem then flips back to being classically advantageous. This is because the X\({}_{i}\) solution provided by quantum is now new information available to the classical greedy algorithm. As such, beginning the greedy approach from this new binary string is likely to yield even lower C(X\({}_{i}\)) solutions in a time frame faster than amplitude amplification. For example, this is the exact scenario in which genetic algorithms shine [53, 54, 55, 56, 66], where a near-optimal solution is provided from which they can manipulate and produce more solutions. And if a fast heuristic solution is all that is needed, then quantum's job is done, and the best minimal solution found by the classical greedy algorithm completes the hybrid computation.
But if a heuristic solution is not enough, then we can continue to use a hybrid quantum-classical strategy for finding X\({}_{\text{min}}\). Referring back now to figures 13 and 14, the strategy for quantum is to use multiple amplitude amplification trials and measurements to approximate the underlying correlation from figures 10 and 11. The fastest means for achieving this is to work in a \(p_{\text{s}}\) region analogous to figure 12, where experimentally one has the highest probabilities of measuring useful information. Simultaneously, the classical greedy algorithm can also find X\({}_{i}\) solutions in this area as it searches for X\({}_{\text{min}}\). Knowledge of these X\({}_{i}\) solutions can be directly fed back to quantum, which can be used to predict \(p_{\text{s}}\) values where solutions are known to exist, speeding up the process of determining a \(p_{\text{s}}\) vs. C(X) correlation. Thus, after quantum initially aided classical, subsequent information obtained from classical is then used to speed up quantum.
In the time it takes for quantum to experimentally verify enough \(p_{\text{s}}\) and C(X\({}_{i}\)) values to create a predictive correlation, we expect the classical algorithm to find a new lowest C(X\({}_{i}\)) solution, labeled \(\mathrm{X}^{\dagger}_{i}\) in figure 17. After investing additional trials into the amplitude amplification side of the computation, it is now time for quantum's second advantage: verifying local versus global minima. Using an approximate \(p_{\text{s}}\) vs C(X) best-fit, the quantum computer can skip directly to the \(p_{\text{s}}\) value corresponding to the best currently known \(\mathrm{X}^{\dagger}_{i}\) solution. As discussed in section V.D, searching past this \(p_{\text{s}}\) value will result in one of two outcomes. Either the quantum computer will find a new best solution C(X\({}_{j}\)) \(<\) C(\(\mathrm{X}^{\dagger}_{i}\)), or confirm that \(\mathrm{X}^{\dagger}_{i}\) is indeed the global minimum X\({}_{\text{min}}\). In the former case, the greedy algorithm now starts again from the new lowest solution X\({}_{j}\), repeating this cycle between quantum and classical until X\({}_{\text{min}}\) is found. Figure 17 below shows a workflow outline of this hybrid strategy.
The biggest advantage to using a hybrid model like shown in figure 17 is that it can be adapted to each problem's uniqueness. Problems with known fast heuristic techniques can lean on the classical side of the computation more heavily [69, 70], while classically difficult problems can put more reliance on quantum [71, 72]. But above all else, this model of computation incorporates and synergizes the best known classical algorithms with quantum, rather than competing against them.
## VII More oracle problems
All of the results from sections III. - V. were derived from linear QUBOs according to equations 1 - 4. However, these results can be applied to more challenging and realistic optimization problems provided that 1) all
Figure 17: Workflow for a hybrid model of computing between quantum amplitude amplification and a classical greedy algorithm. The full strategy is broken up into three phases: 1) Amplitude amplification provides the first heuristic solution X\({}_{i}\). 2) A classical greedy algorithm uses X\({}_{i}\) to find a more optimal solution \(\mathrm{X}^{\dagger}_{i}\). Simultaneously, other near optimal solutions X\({}_{j}\) are used to assist amplitude amplification in determining a \(p_{\text{s}}\) vs. C(X) correlation (see figures 10 - 13). 3) The correlation best-fit is used to predict \(p_{\text{s}}\) values where solutions C(X\({}_{j}\)) \(<\) C(\(\mathrm{X}^{\dagger}_{i}\)) must exist (or C(X\({}_{j}\)) \(>\) C(\(\mathrm{X}^{\dagger}_{i}\)) for maximization problems). Amplitude amplification attempts for these \(p_{\text{s}}\) values will either produce a new best X\({}_{j}\) for the greedy classical algorithm to use, or confirm \(\mathrm{X}^{\dagger}_{i}\) = X\({}_{\text{min}}\).
possible solutions can be encoded via phases by an appropriate oracle operation \(U_{\mathrm{c}}\), and 2) the distribution of all possible answers is suitable for boosting the solution we seek (gaussian, polynomial, exponential, etc. [18]). Here we will briefly note some additional optimization problems which meet both of these criteria.
### Weighted & Unweighted Max-Cut
The Maximum Cut problem ('Max-Cut') is famously NP-Hard [71], where the objective is to partition every vertex in a graph \(\mathbb{S}\) into two subsets \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) such that the number of edges between them is maximized. In the weighted Max-Cut version of the problem, each edge is given a weight \(w_{ij}\), and the goal is to maximize the sum of weights contained on edges between \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\). The unweighted Max-Cut problem has already been demonstrated as a viable use for amplitude amplification [17], which we will build upon further here via the weighted version. Equation 31 below is the cost function \(\mathrm{C}(\mathrm{X})\) for the weighted Max-Cut problem, which can be converted to the unweighted case by setting every edge weight \(w_{ij}=1\). The binary variables \(x_{i}\) here represent being partitioned into \(\mathbb{P}_{1}\) or \(\mathbb{P}_{2}\).
\[\mathrm{C}(\mathrm{X})=\sum_{\{i,j\}\in\mathbb{S}}w_{ij}|x_{i}-x_{j}| \tag{31}\]
Shown in figure 18 is an example graph \(\mathbb{S}\) and one of its solutions. This example graph is composed of 10 vertices, labeled 1 - 10, and a total of 15 connecting edges. Encoding this graph requires one qubit per vertex, where the basis states \(|1\rangle\) and \(|0\rangle\) represent belonging to the subsets \(\mathbb{P}_{1}\) and \(\mathbb{P}_{2}\) respectively. See the bottom graph in figure 18 for an example solution state.
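To make equation 31 concrete, the short sketch below (Python) evaluates the weighted Max-Cut cost for an arbitrary partition and brute-forces a toy graph. The edge list used here is a hypothetical 4-cycle, not the 10-node graph \(\mathbb{S}\) of figure 18, whose edges are only given pictorially.

```python
import itertools

def maxcut_cost(x, edges):
    """Weighted Max-Cut cost from equation 31: an edge (i, j) contributes w_ij
    exactly when its endpoints fall in different partitions (x_i != x_j)."""
    return sum(w * abs(x[i] - x[j]) for (i, j), w in edges.items())

# Hypothetical toy graph: an unweighted 4-cycle (all w_ij = 1)
edges = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1}
best = max(itertools.product([0, 1], repeat=4), key=lambda x: maxcut_cost(x, edges))
print(best, maxcut_cost(best, edges))  # (0, 1, 0, 1) cuts all 4 edges
```

The complementary assignment (1, 0, 1, 0) achieves the same value, a small instance of the solution pairing discussed below.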
The cost oracle \(U_{\mathrm{c}}\) for solving Max-Cut must correctly evaluate all \(2^{N}\) solution states \(|\mathrm{X}_{i}\rangle\) based on the edges of \(\mathbb{S}\) according to equation 31. For example, if vertices 1 and 2 are partitioned into different sets, then \(U_{\mathrm{c}}\) needs to affect their combined states \(|\mathrm{Q}_{1}\mathrm{Q}_{2}\rangle=|01\rangle\) and \(|10\rangle\) with the correct phase, weighted or unweighted. Just like figure 3 from earlier, we can achieve this with a control-phase gate \(\mathrm{CP}(\theta)\), with the intent of scaling by \(p_{\mathrm{s}}\) later (see figure 4). The caveat here is that we need this phase on \(|01\rangle\) and \(|10\rangle\), not \(|11\rangle\), which means that additional \(\mathrm{X}\) gates are required for the construction of \(U_{\mathrm{c}}\), shown below in equation 32.
\[\mathrm{X}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix} \tag{32}\]
For the complete \(U_{\mathrm{c}}\) quantum circuit which encodes the graph \(\mathbb{S}\) in figure 18, please see appendix **C**. Once properly scaled by \(p_{\mathrm{s}}\), the solutions which are capable of boosting are determined by the underlying solution space distribution of the problem, which can be seen in figure 19 below. The histogram in this figure shows all \(2^{10}\)\(\mathrm{C}(\mathrm{X}_{i})\) solutions to the graph \(\mathbb{S}\) from figure 18. Even for a 10 qubit problem size such as this, it is clear that the underlying solution space distribution shows gaussian-like structure.
One interesting feature of Max-Cut is that all solutions come in equal and opposite pairs. For example, the optimal solutions to \(\mathbb{S}\) from figure 19 are \(|0100101110\rangle\) and \(|1011010001\rangle\), which both yield 13 'cuts'. Mathematically there is no difference between swapping all vertices in \(\mathbb{P}_{1}\)
Figure 19: A histogram of all \(2^{10}\) solutions for an unweighted Max-Cut on graph \(\mathbb{S}\) from figure 18.
Figure 18: (top) A graph \(\mathbb{S}\) composed of 10 nodes and 15 connections. Each node is labeled 1 - 10, corresponding to the qubits \(\mathrm{Q}_{1}\) - \(\mathrm{Q}_{10}\) shown below. (bottom) An example Max-Cut solution \(\mathrm{X}_{i}\), along with its quantum state representation \(|\mathrm{X}_{i}\rangle\). Nodes colored red correspond to the partition \(\mathbb{P}_{1}\), quantum state \(|1\rangle\), while nodes colored white correspond to partition \(\mathbb{P}_{2}\), quantum state \(|0\rangle\). ‘Cuts’ are represented in the graph as dashed lines, totaling 8 for this example.
and \(\mathbb{P}_{2}\), but physically it means that there are always two optimal solution states. Consequently, these states will always share the effect of amplitude amplification together, which is something an experimenter must be aware of when choosing iterations \(k\).
Finally, moving from the unweighted to weighted version of Max-Cut increases the problem's difficulty, but notably does not change the circuit depth of \(U_{\mathrm{c}}\). Rather than using \(\theta=1\) for all of the control-phase gates, each \(\theta\) now corresponds to a weighted edge \(w_{ij}\) of the graph. Similar to the QUBO distributions shown in figure 7, this increase in complexity allows for more distinct C(X\({}_{i}\)) solutions, and consequently more variance in features such as \(\sigma^{\prime}\) and X\({}_{\Delta}\).
### Graph Coloring
A similar optimization problem to Max-Cut is Graph Coloring, also known as Vertex Coloring [71], which extends the number of allowed partition sets \(\mathbb{P}_{i}\) up to any integer number \(k\) (\(k=2\) is equivalent to Max-Cut). Given a graph of vertices and edges \(\mathbb{S}\), the goal is to assign every vertex to a set \(\mathbb{P}_{i}\) such that the number of edges between vertices within the same sets is minimized. Shown below in equation 33 is the cost function C(X) for a \(k\)-coloring problem, where the values of each vertex \(x_{i}\) are no longer binary, but can take on \(k\) different integer values. The quantity inside the parentheses is equal to \(1\) if \(x_{i}=x_{j}\), and \(0\) for all other combinations \(x_{i}\neq x_{j}\). Just like with Max-Cut, setting all \(w_{ij}=1\) is the unweighted version of the problem.
\[\text{C(X)}=\sum_{\{i,j\}\in\mathbb{S}}w_{ij}\left(1-\Big{\lceil}\frac{|x_{i} -x_{j}|}{k}\Big{\rceil}\right) \tag{33}\]
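A direct transcription of equation 33 is given below (Python); the ceiling term acts as an indicator that the two endpoints received the same color. The triangle graph used in the check is a hypothetical example, not taken from figure 20.

```python
import math

def coloring_cost(x, edges, k):
    """k-coloring cost from equation 33: edge (i, j) contributes w_ij only
    when both endpoints are assigned the same color (x_i == x_j)."""
    return sum(w * (1 - math.ceil(abs(x[i] - x[j]) / k)) for (i, j), w in edges.items())

triangle = {(0, 1): 1, (1, 2): 1, (0, 2): 1}
print(coloring_cost((0, 1, 2), triangle, k=3))  # 0: a proper 3-coloring
print(coloring_cost((0, 0, 1), triangle, k=3))  # 1: one monochromatic edge
```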
The name 'coloring' is in reference to the problem's origins, whereby the sets \(\mathbb{P}_{i}\) all represent different colors to be applied to a diagram, such as a map. Shown below in figure 20 is an example picture composed of overlapping shapes, where each section must be assigned one of \(k\) colors such that the number of adjacent sections with the same color is minimized. Example solutions for \(k=3\) and \(k=4\) are shown, along with their vertex and quantum state representations of the problem.
In order to encode graph coloring as an oracle \(U_{\mathrm{c}}\), the choice of \(k\) determines whether qubits or another form of quantum computational unit is appropriate. While qubits are capable of producing superposition between two quantum states, qudits are the generalized unit of quantum information capable of achieving superposition between \(d\) states [73; 74; 75; 76]. To see why this is necessary, let us compare the \(k=3\) and \(4\) examples from figure 20, and the quantum states needed to represent partitioning each vertex.
For \(k=4\), we need four distinct quantum states to represent a vertex belonging to one of the \(\mathbb{P}_{i}\) partitions. While a single qubit can't do this, a pair of qubits can. Thus, every vertex in \(\mathbb{S}\) can be encoded as a pair of qubits, letting the basis states \(|00\rangle\), \(|01\rangle\), \(|10\rangle\), and \(|11\rangle\) each represent a different color. Alternatively, we could use a \(d=4\) qudit to represent each vertex, assigning each partition a unique basis state: \(|0\rangle\), \(|1\rangle\), \(|2\rangle\), or \(|3\rangle\), such as the state shown in figure 20. Mathematically the two encodings are identical, so the choice between whether to use qubits or qudits is a matter of experimental realization (i.e. which technology is easier to implement).
For \(k=3\) however, two qubits provide too many states, and a single qubit is not enough. So in order to represent three colors exactly in quantum, the appropriate unit is a 'qutrit' (the common name for a \(d=3\) qudit). Similarly, all prime numbers \(d\) can only be encoded as their respective \(d\)-qudit, while all composite values can be built up from combinations of smaller qudits. Once an appropriate mixed-qudit quantum system is determined,
Figure 20: (top) On the left, a two dimensional bounded picture with overlapping geometric shapes. On the right, a graph \(\mathbb{S}\) representing the \(12\) distinct regions of the picture as nodes. Connections between nodes in \(\mathbb{S}\) represent regions in the picture which share a border, not counting adjacent corners. (middle) A \(k=3\) coloring example, with a corresponding \(d=3\) qudit state representation. (bottom) A \(k=4\) coloring example, with a corresponding \(d=4\) qudit state representation.
constructing \(U_{\rm c}\) is the same as for the Max-Cut problem from earlier, but now with \(k\) state-state interactions. For an example of qudit quantum circuits and their use for amplitude amplification, please see Wang et al. [73] and our previous work on the Traveling Salesman problem [10].
### Subset Sum
For all of the oracles discussed thus far, the circuit depth and total gate count for \(U_{\rm c}\) is determined by the size and connection complexity of \(\mathbb{S}\), the graphical representation of the problem. By contrast, the simplest possible quantum circuit that can be used as \(U_{\rm c}\) corresponds to the Subset Sum problem [71]. The cost function for this problem is given in equation 34.
\[\rm{C(X)}=\sum_{i}^{N}W_{i}x_{i} \tag{34}\]
Rather than optimizing equation 34, which is trivial, the Subset Sum problem is to determine if there exists a particular combination such that \(\rm{C(X_{i})}=T\), where \(T\) is some target sum value. The boolean variables \(x_{i}\) represent which \(W_{i}\) values to use as contributors to the sum. Figure 21 below shows an \(N=10\) example. Note that this problem is equally applicable to any of the other oracles discussed thus far, whereby we can ask if a target value \(T\) exists for some graph \(\mathbb{S}\).
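For comparison with the oracle-based approach described next, the sketch below (Python) is a naive classical check of whether some subset reaches a target \(T\). The ten integer weights are hypothetical stand-ins, since the values in figure 21 are only given graphically.

```python
from itertools import combinations

def subset_sum_exists(weights, target):
    """Brute-force test of equation 34: does some choice of x_i in {0, 1}
    give C(X) = sum_i W_i x_i equal to the target T?"""
    return any(sum(c) == target
               for r in range(len(weights) + 1)
               for c in combinations(weights, r))

weights = [1, 2, 4, 5, 7, 9, 11, 13, 16, 20]   # hypothetical W_i values
print(subset_sum_exists(weights, 22))           # True, e.g. 2 + 4 + 16
```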
The reason why equation 34 is the simplest \(U_{\rm c}\) oracle one can construct is because the cost function doesn't contain any weights \(w_{ij}\) that depend on two variables. Consequently, the construction of \(U_{\rm c}\) doesn't use any 2-qubit phase gates \(\rm{CP}(\theta)\), instead only requiring a single qubit phase gate \(\rm{P}(\theta)\) for every qubit. In principle, all of these single qubit operations can be applied in parallel, such as in figure 3, which means that the circuit depth of \(U_{\rm c}\) is exactly one.
Although this is the most gate efficient \(U_{\rm c}\), using it to solve the Subset Sum problem comes with some limitations. Firstly, it can only solve for \(T\) values within a limited range. This is illustrated by the results of figure 11, which demonstrate that amplitude amplification can only produce meaningful probabilities of measurement up to a certain threshold away from \(\rm{X_{min}}\) or \(\rm{X_{max}}\). Consequently, one can only use \(U_{\rm c}\) here if the target sum value \(T\) is within this threshold distance from the extrema.
The second limitation to consider is the discussion from section V.D, whereby the information of whether a state \(\rm{C(X_{i})}=T\) exists or not may rely on measurements finding nothing. Previously we discussed how an experimenter might iteratively decrease \(p_{\rm s}\) and eventually expect to find regions where cost function values do not exist (see figure 14) as one approaches \(\rm{X_{min}}\). Here things are easier, since an experimenter can test for \(p_{\rm s}\) values above and below where \(\rm{C(X_{i})}=T\) (except for the case where \(T\) is the global extrema). Using a \(p_{\rm s}\) vs. \(\rm{C(X)}\) correlation in this manner can confirm exactly where the \(p_{\rm s}\) value for \(\rm{C(X_{i})}=T\) must be. Testing this \(p_{\rm s}\) window will then either confirm the existence of a solution for \(T\) via a measurement, or conversely confirm no solution exists through multiple trials of random measurement results.
## VIII Conclusion
The results of this study demonstrate that the gate-based model of amplitude amplification is a viable means for solving combinatorial optimization problems, particularly QUBOs. The ability to encode information via phases and let the \(2^{N}\) superposition of qubits naturally produce all possible combinations is a feature entirely unique to quantum. Harnessing this ability into a useful algorithmic form was the primary motivation for this study, and as we've shown, is not without its own set of challenges. In particular, the discussions of sections IV.A & IV.B highlight that this algorithm is not a 'one size fits all' strategy that can be blindly applied to any QUBO. Depending on how the numerical values of a given problem form a solution space distribution, it may simply be impossible for amplitude amplification to find one extremum or the other. Figure 8 shows that at least one of the extrema solutions is always viable for quantum to find; it just may not happen to be the one that is of interest to the experimenter.
For cases where the desired solution is well-suited for quantum to find, that is \(\rm{|X_{min}}\) or \(\rm{|X_{max}}\)) is capable of achieving a high probability of measurement, a different challenge lies in finding the correct \(p_{\rm s}\) value to use in order to boost these states. However, the results of section V. illustrate that this challenge is solvable via quantum measurement results. If the best an experimenter could do is simply guess at \(p_{\rm s}\) and hope for success, then amplitude amplification would not be a practical algorithm. But the correlations shown in figures 10 and 11 illustrate that that is not the case, and that information about \(p_{\rm s}\) can be experimentally learned and used to find extrema solutions. How quickly this information can be experimentally produced, analyzed, and used is exactly how quickly quantum can find the optimal solution, which is an open question for further research.
While the free parameter \(p_{\rm s}\) can be considered the
Figure 21: (top) A set of 10 integer values, shown in ascending order, from which we are interested in solving the Subset Sum problem for \(T=22\). (bottom) An example solution state \(\rm{|X_{i}\rangle}\) corresponding to the cost function value \(\rm{C(X)}=22\).
bottleneck of our algorithm for finding optimal solutions, there is a second important metric by which we can judge the usefulness of amplitude amplification: as a heuristic algorithm. A major finding of this study is depicted in figure 12, which shows that there is a wide range of \(p_{\rm s}\) values for which quantum can find an answer within the best \(1-5\%\) of all solutions. And as we demonstrated in section IV.C with sampling, it is not unrealistic that a classical computation can estimate this \(p_{\rm s}\) region very quickly. The question then becomes how this compares to classical greedy algorithms, and how quickly they can achieve the same feat relative to quantum's O(\(\frac{\pi}{4}\sqrt{2^{N}/M}\)) iterations for problem sizes of \(2^{N}\). The answer to this question will vary from problem to problem, but certainly in some cases, such as highly interconnected QUBOs, we view this as the first practical use for amplitude amplification.
And finally, there is one important sentiment from section VI that we would like to reiterate here, namely that amplitude amplification is a technique that benefits tremendously from working in parallel with a classical computer. The information learned through quantum measurements can be used to speed up the quantum routine as well as a classical algorithm. And vice versa, information learned through a classical greedy algorithm can be used to speed up quantum. The goal of this hybrid computing model is to utilize the advantages both computers have to offer, and ultimately to find optimal solutions faster than either computer can achieve alone. Understanding which optimization problems this scenario may be applicable to is the future direction of our research.
###### Acknowledgements.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of AFRL.
## Data & Code Availability
The data and code files that support the findings of this study are available from the corresponding author upon reasonable request. |
2307.16493 | A Note on the Soft Group Category | The main purpose of this paper is to introduce the structure of soft group
category. In this category, we determine some special objects and morphisms
having a universal structure such as the final object and product. Therefore,
the category of soft groups is a symmetric monoidal category. | Nazmiye Alemdar, Hasan Arslan | 2023-07-31T08:38:12Z | http://arxiv.org/abs/2307.16493v1 | ###### Abstract
The main purpose of this paper is to introduce the structure of soft group category. In this category, we determine some special objects and morphisms having a universal structure such as the final object and product. Therefore, the category of soft groups is a symmetric monoidal category.
**A Note on the Soft Group Category**
Nazmiye Alemdar\({}^{*,1}\), Hasan Arslan\({}^{*,2}\)
\({}^{*}\)_Department of Mathematics, Faculty of Science, Erciyes University, 38039, Kayseri, Turkey_
\({}^{1}\)[email protected]_\({}^{2}\)[email protected]_
**Keywords**: Soft groups, category, split monic, hyperoctahedral group
**2020 Mathematics Subject Classification**: 18A05, 18A10, 18A20, 19D23, 20F55
## 1 Introduction
The real world is too complex for us to understand and interpret directly. Therefore, one tries to obtain simplified models of reality. But these mathematical models are also very complex, and it is very difficult to analyze them. Uncertainty in data makes classical methods unsuccessful when modeling problems in engineering, physics, computer science, economics, the social sciences, the health sciences, etc. Therefore, it is not entirely appropriate to use classical set theory, which is based on exact cases, when solving problems with such uncertainties. To deal with these problems, many mathematical theories such as fuzzy set theory, intuitionistic fuzzy set theory, indeterminate set theory, mathematical time theory and rough set theory have been defined. These theories are used as tools for handling uncertain situations. However, it has been seen that all of these theories have their own problems.
According to Molodtsov, these difficulties are most likely due to the inadequacy of the parameterization tools of these theories. To avoid these troubles, Molodtsov put forward soft set theory as a new mathematical theory in 1999 in [1]. In his works, he successfully applied this new theory and its results to other fields such as probability theory, Perron integration, Riemann integration, operations research, game
theory, etc. In most games one needs to model human behavior. There are many approaches to describing human behavior in game theory, such as payoff and selection functions. A selection function is a transformation that relates a set of strategies to a particular situation. Molodtsov defined the _soft function_ as a mathematical tool that retains all the good sides of the selection function and eliminates the drawbacks of the payoff function and the selection function. Tripathy et al. [7] also gave the basic definitions and concepts for soft sets in the decision-making process and demonstrated the use of soft sets in decision making based on game theory.
Recently, many authors have worked on the algebraic structures of soft sets. Aktas and Cagman [2] introduced the soft group structure based on the definition of soft sets given by Molodtsov and showed a way to construct algebraic structures from the concept of a soft set.
As for category theory, it was introduced by Samuel Eilenberg and Saunders Mac Lane in the 1940s. The main purpose of category theory is to model and solve some problems in a simpler way by using objects and morphisms.

Category theory is a comprehensive area of study in mathematics that examines, in an abstract way, the basic common language used to describe structures occurring in different contexts. These ideas also play a very important role in computer science, for instance in programming studies, logic and authentication.
In category theory, all information about objects is encoded with morphisms between them. In order to examine the internal structure of an object, not only the object itself but also the relations of this object with other objects in the category are considered. The characterization of the relations between any particular type of object and the rest of the universe in which it is located is called a universal structure and this situation is very common in category theory. For this reason, it should be investigated whether the category has special objects and morphisms, if any, in order to create the universal structure associated with the category.
If a category has finite products and a final object, and moreover the product is a monoidal product and the final object is the unit of this category, then the category is called a _cartesian monoidal category_[3]. Any cartesian monoidal category is a symmetric monoidal category [4]. Open games can be considered as morphisms of a symmetric monoidal category whose objects are pairs of sets. Moreover, morphisms in a symmetric monoidal category can be depicted by diagrams analogous to Feynman diagrams in quantum field theory.
In this paper, we obtain the category of soft groups and investigate the structures of both special objects (final object and product) and morphisms (epimorphism, monomorphism) in the category of soft groups as an analogue to Mac Lane's study [3].
## 2 Preliminaries
**Definition 2.1**.: _Let \(U\), \(P(U)\) and \(E\) be a universal set, the power set of \(U\) and a set of parameters, respectively. If \(F:A\to P(U)\) is a function, then the pair \((F,A)\) is said to be a soft set over \(U\), where \(A\subseteq E\)[1]._
**Definition 2.2**.: _Let \((F,A)\) and \((G,B)\) be any two soft sets over the same universal set \(U\). If the following conditions are satisfied, then \((F,A)\) is called a soft subset of \((G,B)\) and we denote it by \((F,A)\sqsubseteq(G,B)\)[5]:_
1. \(A\subseteq B\)__
2. \(F(a)\) _and_ \(G(a)\) _are identical approximations for every_ \(a\in A\)_._
See [1], [2], [5] for more detailed information about soft sets.
**Definition 2.3**.: _Let \(G\) be any group and let \((F,A)\) be soft set over \(G\). If \(F(a)\) is a subgroup of \(G\) for each \(a\in A\), then the pair \((F,A)\) is called a soft group over \(G\)[2]._
As a convention, throughout this paper, we denote by \((F,A)_{G}\) the soft group \((F,A)\) over the group \(G\).
**Definition 2.4**.: _Let \((F,A)_{G}\) be a soft group. If \(F(x)=\{e\}\) for all \(x\in A\), then \((F,A)_{G}\) is said to be a trivial or identity soft group, where \(e\) stands for the identity element of \(G\)[2]._
**Definition 2.5**.: _Let \((F,A)_{G}\) be a soft group. If \(F(x)=G\) for all \(x\in A\), then \((F,A)_{G}\) is called completely soft group [2]._
**Definition 2.6**.: _Let \((F,A)_{G}\) and \((H,B)_{K}\) be two soft groups. If there exists a group homomorphism \(f:G\to K\) and a function \(p:A\to B\) such that_
\[\hat{f}\circ F=H\circ p\]
_then the pair \((f,p)\) is said to be a soft group homomorphism from \((F,A)_{G}\) to \((H,B)_{K}\), where \(\hat{f}:P(G)\to P(K)\) denotes the map induced by \(f\) on the power set \(P(G)\)[6]._
In other words, the pair \((f,p)\) is a soft group homomorphism if and only if the following diagram is commutative:
\[\begin{array}{ccc}A&\xrightarrow{F}&P(G)\\ p\downarrow&&\downarrow\hat{f}\\ B&\xrightarrow{H}&P(K)\end{array}\tag{1}\]
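For concreteness, the commutativity condition \(\hat{f}\circ F=H\circ p\) can be checked mechanically on small finite examples. The following Python sketch is our own illustration (the chosen groups, soft sets and function names are assumptions, not taken from the paper); it encodes soft groups as dictionaries from parameters to subgroups and tests the condition:

```python
# Soft groups over Z4 and Z2, encoded as {parameter: subgroup (frozenset)}.
Z4 = frozenset(range(4))
Z2 = frozenset(range(2))

F = {"a": frozenset({0, 2}), "b": Z4}          # (F, A) over G = Z4
H = {"x": frozenset({0}),    "y": Z2}          # (H, B) over K = Z2

f = lambda g: g % 2                            # group homomorphism f: Z4 -> Z2
p = {"a": "x", "b": "y"}                       # map on the parameter sets p: A -> B

def f_hat(subset):
    """Induced map on power sets: f_hat(S) = {f(s) : s in S}."""
    return frozenset(f(s) for s in subset)

# (f, p) is a soft group homomorphism iff f_hat(F(a)) == H(p(a)) for all a in A.
print(all(f_hat(F[a]) == H[p[a]] for a in F))   # True
```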
**Definition 2.7**.: _Let the pair \((f,p)\) be a soft group homomorphism from \((F,A)_{G}\) to \((H,B)_{K}\). If \(f\) is a group isomorphism and \(p\) is a bijection then the soft groups \((F,A)_{G}\) and \((H,B)_{K}\) are called isomorphic and written \((F,A)_{G}\cong(H,B)_{K}\)[6]._
**Theorem 2.8**.: _The composition of two soft group homomorphisms is a soft group homomorphism [6]._
**Definition 2.9** ([2]).: _Let \((F,A)_{G}\) and \((H,B)_{K}\) be two soft groups. Soft product of the soft groups \((F,A)_{G}\) and \((H,B)_{K}\) is defined as_
\[U(x,y)=F(x)\times H(y)\]
_for all \((x,y)\in A\times B\), and it is represented by_
\[(F,A)_{G}\hat{\times}(H,B)_{K}=(U,A\times B)_{G\times K}.\]
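As a quick illustration (again our own toy example with assumed names, not taken from the paper), the soft product simply pairs the parameter sets and takes Cartesian products of the corresponding subgroups:

```python
from itertools import product

# Soft groups (F, A) over Z4 and (H, B) over Z2, as {parameter: subgroup}.
F = {"a": frozenset({0, 2}), "b": frozenset(range(4))}
H = {"x": frozenset({0}),    "y": frozenset(range(2))}

# Soft product (U, A x B) with U(s, t) = F(s) x H(t), a subgroup of Z4 x Z2.
U = {(s, t): frozenset(product(F[s], H[t])) for s in F for t in H}

print(sorted(U[("a", "y")]))   # [(0, 0), (0, 1), (2, 0), (2, 1)]
```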
This concept can be generalized to finitely many soft groups in the following way:
**Definition 2.10**.: _Let \((F_{1},A_{1})_{G_{1}},(F_{2},A_{2})_{G_{2}},\cdots,(F_{n},A_{n})_{G_{n}}\) be soft groups. Soft product of these soft groups is defined as_
\[U(x_{1},x_{2},\cdots,x_{n})=F_{1}(x_{1})\times F_{2}(x_{2})\cdots\times F_{n }(x_{n})\]
_for all \((x_{1},x_{2},\cdots,x_{n})\in A_{1}\times A_{2}\cdots\times A_{n}\), and denoted by_
\[(F_{1},A_{1})_{G_{1}}\hat{\times}(F_{2},A_{2})_{G_{2}}\hat{\times}\cdots\hat{\times}(F_{n},A_{n})_{G_{n}}=(U,A_{1}\times A_{2}\times\cdots\times A_{n})_{G_{1}\times G_{2}\times\cdots\times G_{n}}.\]
## 3 Soft Group Category
The objects of the soft group category are soft groups and the morphisms between these objects are soft group homomorphisms. The composition in this category is defined as the composition of soft group homomorphisms.
**Proposition 3.1**.: _Soft groups and soft group homomorphisms between them form a category._
Proof.: The proof is clear from the equation (1) and Theorem 2.8.
Note here that in this category, for each object \((F,A)_{G}\), the identity morphism is defined as the soft group homomorphism \((1_{G},1_{A})\), where \(1_{G}:G\to G\) is the identity group homomorphism and \(1_{A}:A\to A\) is the identity map. Throughout this paper, this category is denoted by \(SGp\).
Inspired by group theory, we now give the definition of the soft kernel, which is not found in the literature and will be needed to prove that a monic morphism in the category of soft groups is one-to-one.
**Definition 3.2**.: _Let \((F,A)_{G}\) and \((H,B)_{K}\) be any two soft groups. Let \((f,p):(F,A)_{G}\to(H,B)_{K}\) be a soft group homomorphism. Let \(A^{\prime}\) be the set consisting of \(x\in A\) such that \(F(x)=Kerf\). Having fixed_
\[A^{\prime}=\{x\in A:F(x)=Kerf\}, \tag{2}\]
_we define \(F^{\prime}\) as the restriction function of \(F\) to \(A^{\prime}\). Then the soft group \((F^{\prime},A^{\prime})_{G}\) is called the soft kernel of \((f,p)\)._
**Remark 3.3**.: _The soft kernel of a soft group homomorphism \((f,p):(F,A)_{G}\to(H,B)_{K}\) may not always be defined. One can construct a soft group homomorphism with the soft kernel \((f,p):(F,A)_{G}\to(H,B)_{K}\) by defining \(F(x)=Kerf\) for some \(x\in A\)._
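The soft kernel is straightforward to compute in small finite cases. The sketch below is our own illustration (the specific group, soft set and names are assumptions): it restricts \(F\) to those parameters whose image equals \(\ker f\).

```python
Z4 = frozenset(range(4))

F = {"a": frozenset({0, 2}), "b": Z4, "c": frozenset({0, 2})}   # (F, A) over Z4
f = lambda g: g % 2                                             # f: Z4 -> Z2

ker_f = frozenset(g for g in Z4 if f(g) == 0)                   # Ker f = {0, 2}

# Soft kernel (F', A'): A' = {x in A : F(x) = Ker f}, F' = F restricted to A'.
A_prime = {x for x in F if F[x] == ker_f}
F_prime = {x: F[x] for x in A_prime}

print(sorted(A_prime))   # ['a', 'c']
```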
**Theorem 3.4**.: _Let \((F,A)_{G}\) and \((H,B)_{K}\) be any two soft groups. Assume that \((f,p):(F,A)_{G}\to(H,B)_{K}\) is a soft group homomorphism such that \(F(x)=Kerf\) for some \(x\in A\). Then \(f\) is injective if and only if \((F^{\prime},A^{\prime})_{G}\), which is the soft kernel of \((f,p)\), is the trivial soft subgroup._

Proof.: Let \(f:G\to K\) be a monomorphism and let \(F^{\prime}\) be the restriction of the function \(F\) to \(A^{\prime}\), where \(A^{\prime}\) is defined as in the equation (2). Since \(f\) is injective, we have \(Kerf=\{e\}\), and hence \(F^{\prime}(x)=Kerf=\{e\}\) for all \(x\in A^{\prime}\). Thus the pair \((F^{\prime},A^{\prime})_{G}\) is the trivial soft subgroup of \((F,A)_{G}\). Conversely, let \((F^{\prime},A^{\prime})_{G}\) be the trivial soft subgroup, where \((F^{\prime},A^{\prime})_{G}\) is the soft kernel of \((f,p)\). We can write \(F^{\prime}(x)=Kerf=\{e\}\) for all \(x\in A^{\prime}\). Therefore, \(f\) is an injective group homomorphism.
**Definition 3.5**.: _Let \((f,p)\) and \((g,q)\) be two soft group homomorphisms. Then \((f,p)\) is equal to \((g,q)\) if and only if \(f=g\) and \(p=q\)._
**Theorem 3.6**.: _Let \((f,p):(F,A)_{G}\to(H,B)_{K}\) be a soft group homomorphism such that \(F(x)=Kerf\) for some \(x\in A\). If \((f,p)\) is monic, then \(f\) and \(p\) are injective._
Proof.:
Suppose that \(f\) is not injective, so that \(Kerf\neq\{e\}\). Let \(i_{Kerf}:Kerf\to G\) be the inclusion and let \(f^{\prime}:Kerf\to G,\ f^{\prime}(x)=e\). Then \((f,p)\circ(i_{Kerf},i_{A^{\prime}})=(f,p)\circ(f^{\prime},i_{A^{\prime}})\) although \((i_{Kerf},i_{A^{\prime}})\neq(f^{\prime},i_{A^{\prime}})\). Hence the map \((f,p)\) is not monic.

Now suppose that \(p\) is not injective. Then there exist two different \(x,y\in A\) such that \(p(x)=p(y)\).
If we define the maps \(p_{1},p_{2}:A\to A\) as
\[p_{1}(a)=\begin{cases}a,&a\in A-\{x,y\}\\ x,&a\notin A-\{x,y\}\end{cases}\quad\text{and}\quad\quad p_{2}(a)=\begin{cases}a,&a\in A-\{x,y\}\\ y,&a\notin A-\{x,y\}\end{cases}\]
then we have \(p\circ p_{1}=p\circ p_{2}\). Now we want to show that the map \((1_{G},p_{1}):(F,A)_{G}\to(F,A)_{G}\) is a soft group homomorphism. In what follows, we need to prove that \(\hat{1_{G}}\circ F=F\circ p_{1}\), i.e. that the corresponding diagram is commutative. Suppose \(a\in A-\{x,y\}\). Then we get
\[(F\circ p_{1})(a)=F(a)=\hat{1_{G}}(F(a))=(\hat{1_{G}}\circ F)(a).\]
Due to the fact that \(\hat{f}\circ F=H\circ p\) and \(f\) is injective, we conclude \(\hat{f}(F(x))=H(p(x))=H(p(y))=\hat{f}(F(y))\) and so
\[F(x)=F(y). \tag{3}\]
Now we first take \(a=x\): we have \(F(p_{1}(x))=F(x)=\hat{1_{G}}(F(x))\), and so \((F\circ p_{1})(x)=(\hat{1_{G}}\circ F)(x)\). Secondly, if \(a=y\), then by (3) we get \(F(p_{1}(y))=F(x)=F(y)=\hat{1_{G}}(F(y))\), and again \((F\circ p_{1})(y)=(\hat{1_{G}}\circ F)(y)\). Therefore \((1_{G},p_{1})\) is a soft group homomorphism.
In a similar way, we can prove that the map \((1_{G},p_{2})\) is a soft group homomorphism. Consequently, we have
\[(f,p)\circ(1_{G},p_{1}) =(f\circ 1_{G},p\circ p_{1})\] \[=(f\circ 1_{G},p\circ p_{2})\] \[=(f,p)\circ(1_{G},p_{2}).\]
However, \((f,p)\) is not monic since \((1_{G},p_{1})\neq(1_{G},p_{2})\). This completes the proof.
**Theorem 3.7**.: _Let \((f,p):(F,A)_{G}\rightarrow(H,B)_{K}\) be a morphism in the soft group category \(SGp\). If both \(f\) and \(p\) are injective, then the morphism \((f,p)\) is monic._
Proof.: Assume that \(f\) and \(p\) are injective. Let \((f_{1},p_{1}),\ (f_{2},p_{2}):(L,C)_{M}\rightarrow(F,A)_{G}\) be any two morphisms in \(SGp\) such that \((f,p)\circ(f_{1},p_{1})=(f,p)\circ(f_{2},p_{2})\). Thus it can be easily seen that \((f\circ f_{1},p\circ p_{1})=(f\circ f_{2},p\circ p_{2})\). Then we get \(f\circ f_{1}=f\circ f_{2}\) and \(p\circ p_{1}=p\circ p_{2}\). Since both \(f\) and \(p\) are one-to-one, we get \(f_{1}=f_{2}\) in \(Gp\) and \(p_{1}=p_{2}\) in \(Set\), where \(Gp\) and \(Set\) represent the group category and the set category, respectively. Therefore the morphism \((f,p)\) is monic in \(SGp\).
**Theorem 3.8**.: _If a soft group morphism \((f,p)\) in \(SGp\) is split monic, then both \(f\) and \(p\) are injective._

Proof.: Let \((F,A)_{G}\) and \((H,B)_{K}\) be any two soft groups and let \((f,p):(F,A)_{G}\rightarrow(H,B)_{K}\) be a split monic soft morphism. Since \((f,p)\) is a split monic morphism, there exists \((g,q):(H,B)_{K}\rightarrow(F,A)_{G}\) such that \((g,q)\circ(f,p)=1_{(F,A)_{G}}=(1_{G},1_{A})\). Because of Definition 3.5, we have \(g\circ f=1_{G}\) and \(q\circ p=1_{A}\). It follows that \(f\) is a split monic morphism in the group category \(Gp\) and \(p\) is a split monic morphism in the set category \(Set\). Thus, both \(f\) and \(p\) are injective.
**Theorem 3.9**.: _Let \((f,p):(F,A)_{G}\rightarrow(H,B)_{K}\) be a morphism in the soft group category \(SGp\). If \(f\) is an epimorphism in \(Gp\) and \(p\) is an onto function in \(Set\), then the morphism \((f,p)\) is epic._
Proof.: Assume that \((g_{1},q_{1}),\ (g_{2},q_{2}):(H,B)_{K}\rightarrow(T,D)_{N}\) are any two morphisms in \(SGp\) such that \((g_{1},q_{1})\circ(f,p)=(g_{2},q_{2})\circ(f,p)\). Thus we deduce that \((g_{1}\circ f,q_{1}\circ p)=(g_{2}\circ f,q_{2}\circ p)\). From this, we get \(g_{1}\circ f=g_{2}\circ f\) and \(q_{1}\circ p=q_{2}\circ p\). We obtain \(g_{1}=g_{2}\) in \(Gp\) and \(q_{1}=q_{2}\) in \(Set\) due to the facts that \(f\) is an epimorphism and \(p\) is onto. Hence, the morphism \((f,p)\) is epic in \(SGp\).
## 4 Properties of the Soft Group Category
In this section, we will show that the category \(SGp\) is a symmetric monoidal category by proving the existence of universal structures such as the final object and the product.
**Theorem 4.1**.: _Let \(\{e\}\) be the trivial group and let \(A=\{a\}\) be a singleton. Let \((F,\{a\})_{\{e\}}\) be a soft group such that \(F:\{a\}\to P(\{e\}),\ \ F(a)=\{e\}\). Then \((F,\{a\})_{\{e\}}\) is a final object in the soft group category \(SGp\)._
Proof.: Let \((H,B)_{K}\) be an object in the soft group category \(SGp\). Then the following diagram
is commutative, where \(p(b)=a\) for every \(b\in B\) and \(f:K\to\{e\}\) is the group homomorphism defined by \(f(k)=e\) for each \(k\in K\). Thus \((f,p)\) is the unique soft group homomorphism that can be defined from \((H,B)_{K}\) to \((F,\{a\})_{\{e\}}\). As a result, any soft group constructed from a single-element parameter set and the one-element group \(\{e\}\) is a final object in the soft group category \(SGp\).
**Proposition 4.2**.: _Let \((F_{1},A_{1})_{G_{1}}\) and \((F_{2},A_{2})_{G_{2}}\) be soft groups and let \((U,A_{1}\times A_{2})_{G_{1}\times G_{2}}=(F_{1},A_{1})_{G_{1}}\hat{\times}(F_{2},A_{2})_{G_{2}}\) be the soft product of them. For \(i=1,2\) let \(\Pi_{i}:A_{1}\times A_{2}\to A_{i}\) be the \(i\)-th projection function on the parameter sets and let \(p_{i}:G_{1}\times G_{2}\to G_{i}\) be the \(i\)-th projection homomorphism on groups. Then the map \((p_{i},\Pi_{i}):(U,A_{1}\times A_{2})_{G_{1}\times G_{2}}\to(F_{i},A_{i})_{G_{i}}\) for each \(i=1,2\) is a soft group homomorphism._
Proof.: To prove that \((p_{i},\Pi_{i})\) is a soft group homomorphism for each \(i=1,2\) we need to show that the following diagram is commutative:
For this purpose, it is sufficient to show that the equality \(\hat{p_{i}}\circ U=F_{i}\circ\Pi_{i}\) is satisfied. For any pair \((a_{1},a_{2})\), we get
\[(\hat{p_{i}}\circ U)(a_{1},a_{2}) =\hat{p_{i}}(F_{1}(a_{1})\times F_{2}(a_{2}))\] \[=F_{i}(a_{i})\] \[=F_{i}(\Pi_{i}(a_{1},a_{2}))\] \[=(F_{i}\circ\Pi_{i})(a_{1},a_{2}),\]
and so the proof is completed.
We can generalize the above proposition as follows:
**Corollary 4.3**.: _Let \((F_{1},A_{1})_{G_{1}},(F_{2},A_{2})_{G_{2}},\cdots,(F_{n},A_{n})_{G_{n}}\) be soft groups and let \((U,A_{1}\times A_{2}\times\cdots\times A_{n})_{G_{1}\times G_{2}\times\cdots\times G_{n}}=(F_{1},A_{1})_{G_{1}}\hat{\times}(F_{2},A_{2})_{G_{2}}\hat{\times}\cdots\hat{\times}(F_{n},A_{n})_{G_{n}}\) be the product of these soft groups in the sense of Definition 2.10. For each \(i=1,2,\cdots,n\), let \(\Pi_{i}:A_{1}\times\cdots\times A_{n}\to A_{i}\) be the \(i\)-th projection function on the parameter sets and let \(p_{i}:G_{1}\times\cdots\times G_{n}\to G_{i}\) be the \(i\)-th projection homomorphism on groups. Then the map \((p_{i},\Pi_{i}):(U,A_{1}\times\cdots\times A_{n})_{G_{1}\times\cdots\times G_{n}}\rightarrow(F_{i},A_{i})_{G_{i}}\) for each \(i=1,2,\cdots,n\) is a soft group homomorphism._
**Theorem 4.4**.: _Let \((F_{1},A_{1})_{G_{1}}\) and \((F_{2},A_{2})_{G_{2}}\) be any two objects in \(SGp\) and let \((U,A_{1}\times A_{2})_{G_{1}\times G_{2}}=(F_{1},A_{1})_{G_{1}}\hat{\times}(F_{2},A_{2})_{G_{2}}\) be the soft product of \((F_{1},A_{1})_{G_{1}}\) and \((F_{2},A_{2})_{G_{2}}\). Then the product of \((F_{1},A_{1})_{G_{1}}\) and \((F_{2},A_{2})_{G_{2}}\) in \(SGp\) is none other than \(((U,A_{1}\times A_{2})_{G_{1}\times G_{2}},(p_{i},\Pi_{i}))\), where for each \(i=1,2\), \(\Pi_{i}:A_{1}\times A_{2}\to A_{i}\) is the \(i\)-th projection function on the parameter sets and \(p_{i}:G_{1}\times G_{2}\to G_{i}\) is the \(i\)-th projection homomorphism on groups._
Proof.: Assume that \((H,B)_{K}\) is an object and each \((g_{i},q_{i}):(H,B)_{K}\rightarrow(F_{i},A_{i})_{G_{i}}\), \(i=1,2\), is a morphism in \(SGp\). To prove the uniqueness of the product of \((F_{1},A_{1})_{G_{1}}\) and \((F_{2},A_{2})_{G_{2}}\), we need to show that there exists a unique soft group homomorphism \((\gamma,\theta):(H,B)_{K}\rightarrow(U,A_{1}\times A_{2})_{G_{1}\times G_{2}}\), where we define \(\theta=(q_{1},q_{2})\) and \(\gamma=(g_{1},g_{2})\). To this end, we will verify that the diagram below is commutative:
For any element \(b\) of \(B\), we have
\[(\hat{\gamma}\circ H)(b) =\hat{\gamma}(H(b))\] \[=\hat{g_{1}}(H(b))\times\hat{g_{2}}(H(b))\] \[=F_{1}(q_{1}(b))\times F_{2}(q_{2}(b))\] \[=U(q_{1}(b),q_{2}(b))\] \[=(U\circ(q_{1},q_{2}))(b)\] \[=(U\circ\theta)(b),\]
so \((\gamma,\theta)\) is a soft group homomorphism. We illustrated the morphisms in the diagram below for the purpose of a better explanation of the subject.
Now we show that for each \(i=1,2\) the relation \((p_{i},\Pi_{i})\circ(\gamma,\theta)=(g_{i},q_{i})\) holds. For any \(k\in K\), we have
\[(p_{i}\circ\gamma)(k)=p_{i}(\gamma(k))=p_{i}(g_{1}(k),g_{2}(k))=g_{i}(k),\]
and so \(p_{i}\circ\gamma=g_{i}\). Similarly, we can prove the relation \(\Pi_{i}\circ\theta=q_{i}\).
Finally, let \((\gamma^{{}^{\prime}},\theta^{{}^{\prime}}):(H,B)_{K}\rightarrow(U,A_{1}\times A_{2})_{G_{1}\times G_{2}}\) be another morphism satisfying the condition \((p_{i},\Pi_{i})\circ(\gamma^{{}^{\prime}},\theta^{{}^{\prime}})=(g_{i},q_{i})\). Since

\[(p_{i}\circ\gamma^{{}^{\prime}})(k)=g_{i}(k)=(p_{i}\circ\gamma)(k)\]

for each \(k\in K\), we have \(p_{i}\circ\gamma^{{}^{\prime}}=p_{i}\circ\gamma\) for \(i=1,2\). Hence \(\gamma^{{}^{\prime}}(k)\) and \(\gamma(k)\) have the same components under both projections \(p_{1}\) and \(p_{2}\) for every \(k\in K\), and we conclude that \(\gamma^{{}^{\prime}}=\gamma\). In a similar manner, one can see that \(\theta^{{}^{\prime}}=\theta\). Consequently, we obtain \((\gamma^{{}^{\prime}},\theta^{{}^{\prime}})=(\gamma,\theta)\), which means \((\gamma,\theta)\) is unique. Thus we complete the proof.
## 5 Example
In this section, we will give a soft group homomorphism with the soft kernel based on the structure of the hyperoctahedral group, which is a finite real reflection group.
We assume that \([m,n]:=\{m,m+1,\cdots,n\}\) for any \(m,n\in\mathbb{Z}\) such that \(m\leq n\). Let \((W_{n},R_{n})\) be a Weyl group of type \(B_{n}\), which is also called a _hyperoctahedral group_, where \(R_{n}=\{v_{1},r_{1},\cdots,r_{n-1}\}\) is the canonical set of generators of \(W_{n}\). Any element \(w\in W_{n}\) acts as a signed permutation on the set \(I_{n}=[-n,n]\backslash\{0\}\) such that \(w(-i)=-w(i)\) for each \(i\in I_{n}\). The group \(W_{n}\) has a Dynkin diagram with respect to the set of generators \(R_{n}=\{v_{1},r_{1},\cdots,r_{n-1}\}\) as follows:
\(B_{n}:\)\(v_{1}\)\(r_{1}\)\(r_{2}\)\(r_{n-2}\)\(r_{n-1}\)
The subgroup \(W_{K}\) of \(W_{n}\) generated by \(K\) is said to be a _standard parabolic subgroup_ for any subset \(K\) of \(R_{n}\) and a subgroup of \(W_{n}\) conjugate to \(W_{K}\) for some \(K\subset R_{n}\) is called a _parabolic subgroup_. Let \(v_{i}:=r_{i-1}v_{i-1}r_{i-1}\) for each \(2\leq i\leq n\). It is well-known that \(W_{n}=S_{n}\rtimes\mathcal{V}_{n}\), where \(S_{n}\) is the symmetric group generated by \(\{r_{1},\cdots,r_{n-1}\}\) and \(\mathcal{V}_{n}\) is a normal subgroup of \(W_{n}\) generated by reflections in \(V_{n}=\{v_{1},\cdots,v_{n}\}\). That is, \(W_{n}\) is a split group extension of \(\mathcal{V}_{n}\) by \(S_{n}\) and clearly the cardinality of the group \(W_{n}\) is \(2^{n}n!\). Note here that \(r_{i}\ \ (i=1,\cdots,n-1)\) is identified with
\[r_{i}(j)=\begin{cases}i+1,&\text{if }j=i;\\ i,&\text{if }j=i+1;\\ j,&\text{otherwise.}\end{cases}\]
and \(v_{i}\ \ (i=1,\cdots,n)\) is defined by
\[v_{i}(j)=\begin{cases}j,&\text{if }j\neq i;\\ -i,&\text{if }j=i.\end{cases}\]
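The signed-permutation description of the generators is easy to experiment with on a computer. The short Python sketch below is purely our own illustration (the tuple encoding of elements and all function names are assumptions); it encodes \(w\in W_{n}\) by its values \((w(1),\dots,w(n))\), verifies the relation \(r_{i}v_{i}r_{i}=v_{i+1}\), and checks that \(|W_{2}|=2^{2}\cdot 2!=8\):

```python
from itertools import permutations, product

def apply(w, j):
    """Evaluate a signed permutation w, stored as (w(1), ..., w(n)), at j in I_n."""
    return w[j - 1] if j > 0 else -w[-j - 1]

def compose(u, w):
    """(u o w)(j) = u(w(j))."""
    return tuple(apply(u, apply(w, j)) for j in range(1, len(w) + 1))

def r(i, n):
    """Simple transposition r_i swapping i and i+1."""
    w = list(range(1, n + 1))
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def v(i, n):
    """Sign change v_i sending i to -i."""
    w = list(range(1, n + 1))
    w[i - 1] = -i
    return tuple(w)

n = 4
assert all(compose(r(i, n), compose(v(i, n), r(i, n))) == v(i + 1, n)
           for i in range(1, n))                     # r_i v_i r_i = v_{i+1}

# |W_2| = 2^2 * 2! = 8: all sign choices on all permutations of {1, 2}.
W2 = {tuple(s * x for s, x in zip(signs, perm))
      for perm in permutations((1, 2)) for signs in product((1, -1), repeat=2)}
print(len(W2))                                       # 8
```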
The presentation of \(W_{n}\) is given in the following form (see [7]):
\[W_{n}= \langle r_{1},\cdots,r_{n-1},v_{1},\cdots,v_{n}:r_{i}^{2}=(r_{i}r_{i+1})^{3}=(r_{i}r_{j})^{2}=e,|i-j|>1;\] \[v_{i}^{2}=(v_{1}r_{1})^{4}=e,v_{i}v_{j}=v_{j}v_{i},r_{i}v_{i}r_{i}=v_{i+1},r_{i}v_{j}=v_{j}r_{i},\ j\neq i,i+1\rangle\]
where \(e\) denotes the identity element of \(W_{n}\). For more detailed information about the Weyl group of type \(B_{n}\), one can apply to [7].
A _signed composition_ of \(n\) can be thought of as an expression of \(n\) as an ordered sequence of nonzero integers. More precisely, a signed composition of \(n\) is a finite sequence \(A=(a_{1},\cdots,a_{k})\) of nonzero integers satisfying \(\sum_{i=1}^{k}|a_{i}|=n\)[8]. Put \(|A|=\sum_{i=1}^{k}|a_{i}|\). We will denote the set of all signed compositions of \(n\) by \(\mathcal{SC}(n)\).
A _bi-partition_ of \(n\) is a pair \(\mu=(\mu^{+};\mu^{-})\) where \(\mu^{+}\) and \(\mu^{-}\) are partitions such that \(|\mu|=|\mu^{+}|+|\mu^{-}|=n\). It is well-known from [8] that \(\boldsymbol{\Lambda}:\mathcal{SC}(n)\rightarrow\mathcal{BP}(n),\boldsymbol{ \Lambda}(A)=(\boldsymbol{\Lambda}^{+}(A);\boldsymbol{\Lambda}^{-}(A))\) is a surjective map, where \(\boldsymbol{\Lambda}^{+}(A)\) (resp. \(\boldsymbol{\Lambda}^{-}(A)\)) is rearrangement of the positive components (resp. absolute value of negative components) of \(A\) in decreasing order. For \(\mu=(\mu^{+};\mu^{-})\in\mathcal{BP}(n)\), \(\hat{\mu}:=\mu^{+}\sqcup-\mu^{-}\) is a unique signed composition obtained by concatenating the sequence of components of \(\mu^{+}\) to that of \(-\mu^{-}\) and \(W_{\hat{\mu}}\) is a reflection subgroup of \(W_{n}\) (see [8]).
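The maps \(\boldsymbol{\Lambda}\) and \(\mu\mapsto\hat{\mu}\) are simple enough to sketch in Python (our own illustration with assumed function names, not part of [8]), which also makes the identity \(\boldsymbol{\Lambda}(\hat{\mu})=\mu\) easy to check on examples:

```python
def Lam(A):
    """Signed composition -> bi-partition: positive parts and absolute values of
    negative parts, each rearranged in decreasing order."""
    plus  = tuple(sorted((a for a in A if a > 0), reverse=True))
    minus = tuple(sorted((-a for a in A if a < 0), reverse=True))
    return plus, minus

def hat(mu):
    """Bi-partition (mu_plus, mu_minus) -> signed composition: mu_plus concatenated with -mu_minus."""
    mu_plus, mu_minus = mu
    return mu_plus + tuple(-m for m in mu_minus)

A = (2, -1, 3, -2)                   # a signed composition of 8
mu = Lam(A)
print(mu)                            # ((3, 2), (2, 1))
print(hat(mu), Lam(hat(mu)) == mu)   # (3, 2, -2, -1) True
```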
We will denote by \(\mathcal{BP}(n)\) the set of all bi-partitions of \(n\). In [8], Bonnafe and Hohlweg assigned each signed composition of \(n\) to a reflection subgroup of \(W_{n}\) in the following way: The reflection subgroup \(W_{A}\) of \(W_{n}\) with respect to \(A=(a_{1},\cdots,a_{k})\in\mathcal{SC}(n)\) is generated by the reflection subset \(R_{A}\), which is
defined by
\[R_{A}= \{r_{p}\in S_{n}\ :\ |a_{1}|+\cdots+|a_{i-1}|+1\leq p<|a_{1}|+\cdots+|a_{i}|\}\] \[\cup\{v_{|a_{1}|+\cdots+|a_{j-1}|+1}\in V_{n}\ |\ a_{j}>0\}\subset R_{n}^{{}^{\prime}}\]
where \(R_{n}^{\prime}=\{r_{1},\cdots,r_{n-1},v_{1},v_{2},\cdots,v_{n}\}\).
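As a concrete illustration of this construction, the indices of the generators appearing in \(R_{A}\) can be computed directly from the displayed formula. The following Python sketch is our own (the function name and output format are assumptions):

```python
from itertools import accumulate

def R_A_indices(A):
    """Return (indices p of the r_p's, indices of the v's) in R_A for a signed composition A."""
    sums = [0] + list(accumulate(abs(a) for a in A))        # partial sums |a_1| + ... + |a_i|
    r_idx = [p for i in range(1, len(A) + 1)
               for p in range(sums[i - 1] + 1, sums[i])]    # |a_1|+..+|a_{i-1}|+1 <= p < |a_1|+..+|a_i|
    v_idx = [sums[j - 1] + 1 for j, a in enumerate(A, start=1) if a > 0]
    return r_idx, v_idx

print(R_A_indices((2, -1, 3, -2)))   # ([1, 4, 5, 7], [1, 4])
```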
Let \(\mathcal{P}(W_{n})\) denote the power set of \(W_{n}\). If we pick the parameter set as \(\mathcal{BP}(n)\) and accordingly define the map \(G:\mathcal{BP}(n)\rightarrow\mathcal{P}(W_{n})\), \(G(\mu)=W_{\hat{\mu}}\), then the pair \((G,\mathcal{BP}(n))_{W_{n}}\) is a soft group.
Now we give another soft group example. If we take \(\mathcal{SC}(n)\) as the parameter set and define the map \(F\) as \(F(A)=W_{\hat{\mu}}\), where we can write \(\hat{\mu}:=\boldsymbol{\Lambda}^{+}(A)\sqcup-\boldsymbol{\Lambda}^{-}(A)\in \mathcal{SC}(n)\) for every \(A\in\mathcal{SC}(n)\), then the pair \((F,\mathcal{SC}(n))_{W_{n}}\) is a soft group due to the fact that \(W_{\hat{\mu}}\) is a subgroup of \(W_{n}\).
The pair \((f,\boldsymbol{\Lambda}):(F,\mathcal{SC}(n))_{W_{n}}\rightarrow(G,\mathcal{BP}(n))_{W_{n}}\) is a soft group homomorphism with the soft kernel. That is, the following diagram is commutative:
where \(f:W_{n}\to W_{n}\) is the identity isomorphism and \(\hat{f}\) is the identity function on \(\mathcal{P}(W_{n})\). In fact, let \(A\in\mathcal{SC}(n)\) and let \(\boldsymbol{\Lambda}(A):=\mu\). Thus we can write \(\hat{\mu}=\boldsymbol{\Lambda}^{+}(A)\sqcup-\boldsymbol{\Lambda}^{-}(A)\) and so it is clear that \(\boldsymbol{\Lambda}(\hat{\mu})=\mu\) for every \(\mu\in\mathcal{BP}(n)\). Therefore, for every \(A\in\mathcal{SC}(n)\), we get
\[(G\circ\boldsymbol{\Lambda})(A) =W_{\hat{\mu}}\] \[=\hat{f}(W_{\hat{\mu}})\] \[=\hat{f}\circ F(A).\]
Hence the above diagram is commutative. The soft kernel of this soft group homomorphism is the soft group \((F^{\prime},A^{\prime})_{W_{n}}\) with \(A^{\prime}=\{(-1,-1,\cdots,-1)\}\), since \(F((-1,-1,\cdots,-1))=\{e\}=Kerf\) is the trivial subgroup of \(W_{n}\).
|
2309.15436 | Lattice QCD calculation of the invisible decay $J/ψ\rightarrow
γν\barν$ | In this work, we present the first lattice QCD study on the invisible decay
$J/\psi \rightarrow \gamma\nu\bar{\nu}$. The calculation is accomplished using
$N_f=2$ twisted mass fermion ensembles. The excited-state effects are observed
and eliminated using a multi-state fit. The impact of finite-volume effects is
also examined and confirmed to be well-controlled. After a continuous
extrapolation under three lattice spacings, we obtain the branching fraction as
$\operatorname{Br}[J/\psi \rightarrow \gamma\nu\bar{\nu}]=1.00(9)(7)\times
10^{-10}$, where the first error is the statistical error and the second is an
estimate of the systematics. The exact theoretical prediction can be used to
remove the only invisible contamination from the standard model background in
searching for the possible dark matter by the channel $J/\psi \rightarrow
\gamma+\textrm{invisible}$ | Yu Meng | 2023-09-27T06:53:42Z | http://arxiv.org/abs/2309.15436v1 | # Lattice QCD calculation of the invisible decay \(J/\psi\to\gamma\nu\bar{\nu}\)
###### Abstract:
In this work, we present the first lattice QCD study on the invisible decay \(J/\psi\to\gamma\nu\bar{\nu}\). The calculation is accomplished using \(N_{f}=2\) twisted mass fermion ensembles. The excited-state effects are observed and eliminated using a multi-state fit. The impact of finite-volume effects is also examined and confirmed to be well-controlled. After a continuous extrapolation under three lattice spacings, we obtain the branching fraction as \(\text{Br}[J/\psi\to\gamma\nu\bar{\nu}]=1.00(9)(7)\times 10^{-10}\), where the first error is the statistical error and the second is an estimate of the systematics. The exact theoretical prediction can be used to remove the only invisible contamination from the standard model background in searching for the possible dark matter by the channel \(J/\psi\to\gamma+\text{invisible}\).
## 1 Introduction
Searching for dark matter is one of the major goals of contemporary astronomy and particle physics [1, 2]. In recent decades, abundant experimental observations have hinted at the existence of dark matter, which triggered significant theoretical efforts to understand its nature and search for new physics beyond the Standard Model. Among various experimental detections, the heavy quarkonium experiments provide an ideal environment to study the possible dark matter associated with heavy quarks. In contrast to the low-energy dark matter nucleon scattering experiments, the decay of heavy quarkonium into a single photon and invisible particles can probe arbitrarily small dark matter masses. Therefore, it is widely used to search for light sterile neutrino or sub-GeV dark matter.
The CLEO [3], BaBar [4], Belle [5], and BESIII [6] experiments have performed searches for \(J/\psi\) or \(\Upsilon\) radiative decays into invisible particles, and no signal was observed. The latest upper limits on the branching fraction of \(J/\psi\to\gamma+\text{invisible}\) are reported to range from \(8.3\times 10^{-8}\) to \(1.8\times 10^{-6}\) by the BESIII experiment using \((2708.1\pm 14.5)\times 10^{6}\)\(\psi(3686)\) events collected by the detector [7]. In this analysis, the invisible particle is interpreted as an axion-like particle (ALP), and the most stringent constraints on the ALP-photon coupling are presented. Not long before, the BESIII experiment also searched for a CP-odd light Higgs boson (\(A^{0}\)) in \(J/\psi\to\gamma A^{0}\)[8]. In all these searches, the standard model decay \(J/\psi\to\gamma\nu\bar{\nu}\) is involved, since the neutrinos are also invisible particles in the standard model. In Ref. [9], the author analyzes the process \(J/\psi\to\gamma\nu\bar{\nu}\) based on certain phenomenological assumptions and estimates the branching fraction as \(\text{Br}(J/\psi\to\gamma\nu\bar{\nu})=0.7\times 10^{-10}\), thereby leaving substantial room for new physics in the process. At present, several future experiments that are under planning or construction, such as the Super Tau Charm Facility [10], Belle II [11], and LHCb [12], have great potential to significantly improve the upper limit on the branching fraction of \(J/\psi\to\gamma+\text{invisible}\).
At the present stage, a genuine non-perturbative calculation can not only provide a model-independent comparison with previous phenomenological studies but also offer potential theoretical assistance for experiments in the search for dark matter and new physics beyond the standard model. In this paper, we present the first lattice calculation of the invisible decay \(J/\psi\to\gamma\nu\bar{\nu}\). The aim of the work is to non-perturbatively determine the branching fraction with the various systematic effects well under control.
Figure 1: The diagram for the decay \(J/\psi\to\gamma\nu\bar{\nu}\), where the shaded region denotes a weak neutral current.
## 2 Approach to the decay width on the lattice

### Foundation
We start our discussion from the amplitude of \(J/\psi\to\gamma\nu\bar{\nu}\), the lowest-order contribution of which is expressed by
\[{\cal M}=H_{\mu\nu\alpha}(q,p)\epsilon^{\alpha}_{J/\psi}(p)(-ie\epsilon^{\nu*}(q))(-\frac{i}{2}g_{Z})^{2}\times\bar{u}(q_{1})\frac{\gamma^{\mu}}{2}(1-\gamma_{5})v(q_{2})\frac{-i}{(k^{2}-m_{Z}^{2})} \tag{1}\]
where the nonperturbative hadronic interaction between the \(J/\psi\), photon and \(Z\) boson is encoded in a hadronic function \(H_{\mu\nu\alpha}(q,p)\),
\[H_{\mu\nu\alpha}(q,p)=\int\,d^{4}x\,\mathrm{e}^{iq\cdot x}{\cal H}_{\mu\nu\alpha}(x,p) \tag{2}\]
where the hadronic function \({\cal H}_{\mu\nu\alpha}(x,p)\) is defined as
\[{\cal H}_{\mu\nu\alpha}(x,p)=\langle 0|T\{J_{\mu}^{\rm em}(x)J_{\nu}^{Z}(0)\}|J/ \psi(p)_{\alpha}\rangle \tag{3}\]
with \(J/\psi\) four-momentum \(p=(m_{J/\psi},\vec{0})\), photon \(q=(|\vec{q}|,\vec{q})\) and the neutrinos \(q_{i}=(|\vec{q}_{i}|,\vec{q}_{i}),i=1,2\). Both the photon and the neutrinos satisfy the on-shell conditions and are viewed as massless. The electromagnetic and weak currents are defined as \(J_{\mu}^{\rm em}=\sum_{q}e_{q}\,\bar{q}\,\gamma_{\mu}q\) (\(e_{q}=2/3,-1/3,-1/3,2/3\) for \(q=u,d,s,c\)), \(J_{\nu}^{Z}=\sum_{q}\bar{q}\gamma_{\nu}(g_{V}^{q}-g_{A}^{q}\gamma_{5})q\), \(g_{V}^{q}=T_{3}^{q}-2e_{q}\sin^{2}\theta_{W}\) and \(g_{A}^{q}=T_{3}^{q}\), where \(T_{3}^{q}\) is the third component of the weak isospin of the fermion. In the case of the charm quark, we know \(g_{A}^{c}=1/2\) and \(g_{V}^{c}=1/2-4/3\sin^{2}\theta_{W}\). The \(\epsilon^{\alpha}_{J/\psi}(p)\) is the polarization vector of \(J/\psi\) and \(\epsilon^{\nu}(q)\) that of the photon. The \(e\) is the coupling constant of the electromagnetic interaction, and \(g_{Z}\) denotes the coupling of the \(Z\) boson to the fermions. The \(Z\) boson mass is \(m_{Z}\) and its four-momentum is given by \(k=q_{1}+q_{2}\).
For the virtual \(Z\) boson, \(k^{2}\ll m_{Z}^{2}\), it is natural to make a replacement for the \(Z\) boson propagator
\[\frac{1}{k^{2}-m_{Z}^{2}}\to-\frac{1}{m_{Z}^{2}} \tag{4}\]
Also considering the following notations,
\[\frac{G_{F}}{\sqrt{2}}=\frac{g_{W}^{2}}{8m_{W}^{2}},g_{Z}=\frac{g_{W}}{\cos \theta_{W}},\cos\theta_{W}=\frac{m_{W}}{m_{Z}} \tag{5}\]
The amplitude in Eq. (1) thereby reduces to
\[{\cal M}=-e\frac{G_{F}}{\sqrt{2}}H_{\mu\nu\alpha}(q,p)\epsilon^{\alpha}_{J/ \psi}(p)\epsilon^{\nu*}(q)\times\bar{u}(q_{1})\gamma^{\mu}(1-\gamma_{5})v(q_{ 2}) \tag{6}\]
With consideration of the gauge symmetry and parity, the hadronic function \(H_{\mu\nu\alpha}(q,p)\) can be parameterized as [9]
\[H_{\mu\nu\alpha}(q,p)\equiv\epsilon_{\mu\nu\alpha\beta}q_{\beta}F_{\gamma\nu \bar{\nu}} \tag{7}\]
The direct calculation on the decay width of \(J/\psi\to\gamma\nu\bar{\nu}\) in the rest frame of \(J/\psi\), by employing Eq. (6) and (7), leads to
\[\Gamma(J/\psi\to\gamma\nu\bar{\nu}) = \frac{1}{2m_{J/\psi}}\int\frac{d^{3}\vec{q}}{(2\pi)^{3}2|\vec{q}|} \int\frac{d^{3}\vec{q}_{1}}{(2\pi)^{3}2|\vec{q}_{1}|}\int\frac{d^{3}\vec{q}_{2} }{(2\pi)^{3}2|\vec{q}_{2}|} \tag{8}\] \[\times (2\pi)^{4}\delta^{4}(p-q-q_{1}-q_{2})\times\frac{1}{3}|\mathcal{M }|^{2}\times 3\] \[= \frac{\alpha G_{F}^{2}}{3\pi^{2}}\int_{0}^{\frac{m_{J/\psi}}{2}}| \vec{q}|^{3}(m_{J/\psi}-|\vec{q}|)|F_{\gamma\nu\bar{\nu}}|^{2}d|\vec{q}|\]
where \(\alpha\equiv e^{2}/4\pi\). The factor 1/3 denotes the average over the three polarizations of \(J/\psi\) in its rest frame, and the factor 3 accounts for the three flavors of neutrinos.
### Relationship of hadronic function in Minkowski and Euclidean space
In this section, we present the relation between the hadronic functions in Minkowski and Euclidean spacetime, which can be established by inserting a complete set of intermediate states into the respective hadronic functions.
In the Minkowski spacetime, the hadronic function has the following decomposition
\[H_{\mu\nu\alpha}(q,p) = i\sum_{n,\vec{q}}\frac{1}{E_{\gamma}-E_{n}+i\epsilon}\langle 0|J_{ \mu}^{\rm em}(0)|n(\vec{q})\rangle\langle n(\vec{q})|J_{\nu}^{Z}(0)|J/\psi(p)_ {\alpha}\rangle\] \[- i\sum_{n^{\prime},\vec{q}}\frac{1}{E_{\gamma}+E_{n^{\prime}}-m_ {J/\psi}-i\epsilon}\langle 0|J_{\nu}^{Z}(0)|n^{\prime}(-\vec{q})\rangle \langle n^{\prime}(-\vec{q})|J_{\mu}^{\rm em}(0)|J/\psi(p)_{\alpha}\rangle\]
where the first line corresponds to the time ordering \(t>0\) and the second line to \(t<0\) in Eq. (2). The intermediate states \(|n\rangle\) and \(|n^{\prime}\rangle\) represent all possible states with the allowed quantum numbers. As far as the connected contribution is concerned in this work, the low-lying states are given by \(|n\rangle=|J/\psi\rangle\) and \(|n^{\prime}\rangle=|\eta_{c}\rangle\), respectively.
In the Euclidean spacetime, the hadronic function in Eq. (2) is replaced by \(H_{\mu\nu\alpha}^{E}(q,p)\), which is obtained by making a naive Wick rotation \(t\to-it\)
\[H_{\mu\nu\alpha}^{E}(q,p) = -i\int_{-T/2}^{T/2}dt\int d^{3}\vec{x}{\rm e}^{E_{\gamma}t-i\vec {q}\cdot\vec{x}}\mathcal{H}_{\mu\nu\alpha}(x,p)\]
with the Euclidean momenta \(q=(iE_{\gamma},\vec{q}),p=(im_{J/\psi},0)\). As before, after inserting a complete set of intermediate states into the Euclidean hadronic function above, we obtain
\[H_{\mu\nu\alpha}^{E}(q,p) = i\sum_{n,\vec{q}}\frac{1-{\rm e}^{-(E_{n}-E_{\gamma})T/2}}{E_{ \gamma}-E_{n}+i\epsilon}\langle 0|J_{\mu}^{\rm em}(0)|n(\vec{q})\rangle \langle n(\vec{q})|J_{\nu}^{Z}(0)|J/\psi(p)_{\alpha}\rangle\] \[- i\sum_{n^{\prime},\vec{q}}\frac{1-{\rm e}^{-(E_{\gamma}+E_{n^{ \prime}}-m_{J/\psi})T/2}}{E_{\gamma}+E_{n^{\prime}}-m_{J/\psi}-i\epsilon} \langle 0|J_{\nu}^{Z}(0)|n^{\prime}(-\vec{q})\rangle\langle n^{\prime}(- \vec{q})|J_{\mu}^{\rm em}(0)|J/\psi(p)_{\alpha}\rangle\]
where the finite time integral \([-T/2,T/2]\) is introduced to define the Euclidean hadronic function.
Whether the Minkowski hadronic function can be obtained from the Euclidean hadronic function by a naive Wick rotation usually depends on whether all the \(T\)-dependent terms converge in the limit \(T\to\infty\). If they do, the Wick rotation will leave the hadronic function unchanged and the lattice calculation produces the physical results without particular difficulties. In this study, it requires that the conditions
\[E_{n}-E_{\gamma}>0 \tag{12}\]
\[E_{\gamma}+E_{n^{\prime}}-m_{J/\psi}>0 \tag{13}\]
must be satisfied for \(E_{\gamma}\in[0,m_{J/\psi}/2]\).
For the time ordering \(t>0\), where the weak current is inserted before the electromagnetic current, the low-lying state is the \(J/\psi\) particle with momentum \(\vec{q}\) and the condition (12) is satisfied readily. However, the situation is quite different for the time ordering \(t<0\), where the electromagnetic current is inserted before the weak current. In this case, the low-lying state is the \(\eta_{c}\) particle, whose mass is slightly less than that of the initial \(J/\psi\) state, resulting in a violation of condition (13) for very small \(E_{\gamma}\), for example, \(E_{\gamma}=0\). For all ensembles used in this work, we find there exists only one momentum, \(\vec{q}=0\), for the intermediate state \(|\eta_{c}(\vec{q})\rangle\) that violates the condition (13), leading to an exponentially growing factor \(e^{-(E_{\gamma}+E_{n^{\prime}}-m_{J/\psi})T/2}\) as \(T\) increases. One can check this numerically using the discrete energy levels of \(\eta_{c}\) summarized in Table 2. Moreover, for \(\vec{q}=0\) we have
\[\langle 0|J_{\nu}^{Z}(0)|\eta_{c}(\vec{0})\rangle\langle\eta_{c}(\vec{0})|J_{ \mu}^{\rm em}(0)|J/\psi(\vec{0})_{\alpha}\rangle=0 \tag{14}\]
which still protects the Euclidean hadronic function from the exponentially growing factor \(e^{-(m_{\eta_{c}}-m_{J/\psi})T/2}\). In other words, the contributions of all the intermediate states with discrete momenta \(\vec{q}=2\pi\vec{n}/L\) become independent of the \(T\)-dependent factors as \(T\to\infty\). We conclude that for the time ordering \(t<0\), the condition (13) is also satisfied in our calculations. Thus, we have proved that one can extract the Minkowski hadronic function from the Euclidean hadronic function directly with a naive Wick rotation, and the \(i\epsilon\) in Eq. (9) and Eq. (11) are unnecessary.
### Extraction of the hadronic function from lattice data
In the above section, we have established the direct connection between the Minkowski hadronic function and the Euclidean hadronic function. In the following, we will provide the details of constructing the Euclidean hadronic function using the lattice data.
The hadronic function \({\cal H}_{\mu\nu\alpha}(x,p)\) defined in Eq.(3) can be extracted from a three-point function \(C^{(3)}_{\mu\nu\alpha}(x;\Delta t)\)
\[C^{(3)}_{\mu\nu\alpha}(x;\Delta t)=\left\{\begin{array}{ll}\langle J_{\mu}^ {\rm em}(x)J_{\nu}^{Z}(0)\phi_{J/\psi,\alpha}^{\dagger}(-\Delta t)\rangle,&t \geq 0\\ \langle J_{\mu}^{Z}(0)J_{\nu}^{\rm em}(x)\phi_{J/\psi,\alpha}^{\dagger}(t- \Delta t)\rangle,&t<0\end{array}\right. \tag{15}\]
where \(\phi_{J/\psi,\alpha}\) is the \(J/\psi\) interpolating operator. A sufficiently large \(\Delta t\) should be chosen to guarantee \(J/\psi\) ground-state dominance. For a finite \(\Delta t\), the hadronic function has a \(\Delta t\) dependence; we thereby denote the hadronic function \({\cal H}_{\mu\nu\alpha}(x,p)\) as \({\cal H}_{\mu\nu\alpha}(x,\Delta t)\), where the initial momentum \(p\) is omitted since our calculation is limited to the rest frame. So we have
\[{\cal H}_{\mu\nu\alpha}(x,\Delta t)=\left\{\begin{array}{ll}\frac{2m_{J/ \psi}}{Z_{0}}{\rm e}^{m_{J/\psi}\Delta t}C^{(3)}_{\mu\nu\alpha}(x;\Delta t),&t \geq 0\\ \frac{2m_{J/\psi}}{Z_{0}}{\rm e}^{m_{J/\psi}(\Delta t-t)}C^{(3)}_{\mu\nu\alpha }(x;\Delta t),&t<0\end{array}\right. \tag{16}\]
with \(Z_{0}=\langle J/\psi|\phi^{\dagger}_{J/\psi}|0\rangle\) the overlap amplitude for the \(J/\psi\) ground state. Both \(Z_{0}\) and \(m_{J/\psi}\) can be calculated from the two-point function \(C^{(2)}\left(t\right)=\langle\phi_{J/\psi}(t)\phi^{\dagger}_{J/\psi}(0)\rangle\), which has the following expression
\[C^{(2)}\left(t\right)=\sum_{i=0,1}\frac{Z_{i}^{2}}{2E_{i}}\left(\mathrm{e}^{-E _{i}t}+\mathrm{e}^{-E_{i}\left(T-t\right)}\right) \tag{17}\]
We adopt a two-state fit form for the two-point function \(C^{(2)}(t)\) to extract \(Z_{i},E_{i}\ (i=0,1)\), with \(E_{0}=m_{J/\psi}\) the ground-state energy, \(E_{1}\) the energy of the first excited state and \(Z_{1}\) the overlap amplitude for the first excited state. As was pointed out in our previous paper, when the precision reaches a few percent in our calculation, the excited-state effects are statistically significant unless \(t\gtrsim 1.6\) fm as far as \(C^{(2)}\left(t\right)\) is concerned. Such systematic effects also affect the three-point function \(C^{(3)}_{\mu\nu\alpha}\), leading to an obvious \(\Delta t\) dependence. In a realistic lattice calculation, a series of \(\Delta t\) values are utilized to perform the extrapolation \(\Delta t\rightarrow\infty\).
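To illustrate the type of fit involved (a schematic sketch of ours, not the analysis code of the paper; the synthetic data, toy parameter values and function names are assumptions), a two-state correlator of the form of Eq. (17) can be fitted as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

T = 64  # temporal extent in lattice units

def two_state(t, Z0, Z1, E0, E1):
    """Two-state model of Eq. (17): ground state plus first excited state."""
    c = lambda Z, E: Z**2 / (2 * E) * (np.exp(-E * t) + np.exp(-E * (T - t)))
    return c(Z0, E0) + c(Z1, E1)

# Synthetic "data" with made-up parameters and 1% noise (illustration only).
rng = np.random.default_rng(1)
t = np.arange(2, 30)
truth = (1.0, 0.6, 1.05, 1.45)                      # Z0, Z1, aE0, aE1 (toy values)
data = two_state(t, *truth) * (1 + 0.01 * rng.standard_normal(t.size))

popt, _ = curve_fit(two_state, t, data, p0=(0.8, 0.5, 1.0, 1.6))
print(popt)   # fitted (Z0, Z1, E0, E1), close to the toy truth values
```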
### Form factor and decay width
To compute \(F_{\gamma\nu\bar{\nu}}\), the traditional way is to choose a series of lattice momenta \(\vec{q}=2\pi\vec{n}/L\) with \(\vec{n}=[001]\), \([011]\), \([111]\), \(\cdots\), and the phase-space integral is finally completed by interpolating or fitting this discrete \(F_{\gamma\nu\bar{\nu}}(|\vec{q}|)\), leading to a model-dependent systematic effect. In this work, we proceed in another way, which is widely called the scalar function method. The method has been widely applied to various processes [13, 14, 15, 16, 17]. The key point of the method is to construct the appropriate scalar function to extract the relevant form factors at the required momenta. The decay width, which is related to the form factors directly by the phase-space integral, can then be calculated using the Monte-Carlo method.
According to the parameterization of the hadronic function \(H_{\mu\nu\alpha}(q,p)\) in Eq. (7), we construct the scalar function \(\mathcal{I}\) by contracting both sides with \(\epsilon_{\mu\nu\alpha\beta}p_{\beta}\). After averaging over the direction of \(\vec{q}\), we arrive at
\[\mathcal{I}\left(E_{\gamma},\Delta t\right)=im_{J/\psi}\int e^{E_{\gamma}t}dt\int d^{3}\vec{x}\,j_{0}(E_{\gamma}|\vec{x}|)\epsilon_{\mu\nu\alpha 0}\mathcal{H}_{\mu\nu\alpha}(x,\Delta t) \tag{18}\]
where \(E_{\gamma}\equiv|\vec{q}|\). Then the form factor is extracted through
\[F_{\gamma\nu\bar{\nu}}(E_{\gamma},\Delta t)=-\frac{1}{6m_{J/ \psi}E_{\gamma}}\mathcal{I}\left(E_{\gamma},\Delta t\right) \tag{19}\]
Using the form factor \(F_{\gamma\nu\bar{\nu}}(E_{\gamma},\Delta t)\) as input, the decay width of \(J/\psi\rightarrow\gamma\nu\bar{\nu}\) can be obtained by the Monte-Carlo phase-integral in the region \(E_{\gamma}\in[0,m_{J/\psi}/2]\)
\[\Gamma_{\gamma\nu\bar{\nu}}(\Delta t)=\frac{\alpha G_{F}^{2}}{3\pi^{2}}\frac{m_{J/\psi}}{2N_{MC}}\sum_{i=1}^{N_{MC}}\left(E_{\gamma}^{3}(m_{J/\psi}-E_{\gamma})|F_{\gamma\nu\bar{\nu}}(E_{\gamma},\Delta t)|^{2}\right)_{i} \tag{20}\]
where \(N_{MC}\) is the number of Monte-Carlo simulations, which is chosen to guarantee the Monte-Carlo error is much less than the statistical error.
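As a rough numerical illustration of Eq. (20) (a toy sketch of ours, not the production code; the constant form factor and all numbers are assumptions made purely to test the estimator), one can check the Monte-Carlo average against the exact phase-space integral:

```python
import numpy as np

m_jpsi = 3.0969        # GeV
F0 = 1.0               # toy, energy-independent form factor (illustration only)
N_MC = 200

rng = np.random.default_rng(0)
E = rng.uniform(0.0, m_jpsi / 2, N_MC)                 # sampled photon energies E_gamma

# Monte-Carlo estimate of  int_0^{m/2} E^3 (m - E) |F|^2 dE  as in Eq. (20)
mc = (m_jpsi / 2) * np.mean(E**3 * (m_jpsi - E) * F0**2)

# Exact result for a constant form factor: 3 m^5 / 320
exact = 3 * m_jpsi**5 / 320
print(mc, exact)       # the two agree within the Monte-Carlo uncertainty
```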
To further reduce the lattice discretization effect, we define a dimensionless quantity \(R_{f}\equiv\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\), where \(f_{J/\psi}\) is the decay constant of \(J/\psi\). The \(\Delta t\) dependence can be parameterized using a relatively simple two-state form
\[R_{f}(\Delta t)=R_{f}+\zeta\cdot\mathrm{e}^{-(E_{1}-E_{0})\Delta t} \tag{21}\]
with two unknown parameters \(R_{f}\) and \(\zeta\). After the continuous extrapolations of the dimensionless \(R_{f}\) and the decay constant \(f_{J/\psi}\), we obtain the physical results \(R_{f}^{\mathrm{Cont.Limit}}\) and \(f_{J/\psi}^{\mathrm{Cont.Limit}}\). The physical decay width can therefore be obtained by multiplying \(R_{f}^{\mathrm{Cont.Limit}}\) by \(f_{J/\psi}^{\mathrm{Cont.Limit}}\). Finally, the branching fraction is given as follows
\[\mathrm{Br}[J/\psi\to\gamma\nu\bar{\nu}]=R_{f}^{\mathrm{Cont.Limit}}\times \frac{f_{J/\psi}^{\mathrm{Cont.Limit}}}{\Gamma_{J/\psi}} \tag{22}\]
where \(\Gamma_{J/\psi}=92.6\) keV is the \(J/\psi\) decay width from the Particle Data Group.
## 3 Numerical setup
We use three two-flavor twisted mass gauge ensembles generated by the Extended Twisted Mass Collaboration (ETMC) [19, 20] with lattice spacing \(a\simeq 0.0667,0.085,0.098\) fm, respectively. For convenience, we name them a67, a85, and a98 in this work. The ensemble parameters are shown in Table. 1. The valence charm quark mass is tuned by setting the lattice result of \(J/\psi\) mass to the physical one. The detailed information on the tuning is referred to Ref. [21].
In this work, we calculate the three-point correlation function \(C_{\mu\nu\sigma}^{(3)}(\vec{x},t)\) using a \(Z_{4}\)-stochastic wall-source \(J/\psi\) interpolating operator \(\phi_{J/\psi,\,\alpha}=\bar{c}\gamma_{\alpha}c\). For the time ordering \(t\geq 0\), we place the point-source propagator on \(J_{\nu}^{Z}\) and treat the electromagnetic current \(J_{\mu}^{\mathrm{em}}\) as the sink. For the time ordering \(t<0\), after considering the space-time translation invariance of the correlation function, i.e. \(\langle J_{\mu}^{Z}(0)J_{\nu}^{\mathrm{em}}(x)\phi_{J/\psi,\,\alpha}^{\dagger}(t-\Delta t)\rangle=\langle J_{\mu}^{Z}(-\vec{x},-t)J_{\nu}^{\mathrm{em}}(0)\phi_{J/\psi,\,\alpha}^{\dagger}(-\Delta t)\rangle\), we place the point-source propagator on \(J_{\nu}^{\mathrm{em}}\) and treat the weak current \(J_{\mu}^{Z}\) as the sink. The wall-source propagator used here is able to reduce the uncertainty of the mass spectrum by nearly half. All the propagators are produced on all time slices and averaged to increase the statistics, based on time translation invariance. We also apply APE [22] and Gaussian smearing [23] to the \(J/\psi\) field to efficiently reduce the excited-state effects.
| Ensemble | \(a\) (fm) | \(L^{3}\times T\) | \(N_{\mathrm{conf}}\times T\) | \(m_{\pi}\) (MeV) | \(t\) |
| --- | --- | --- | --- | --- | --- |
| a67 | 0.0667(20) | \(32^{3}\times 64\) | \(197\times 64\) | 300 | 12–18 |
| a85 | 0.085(2) | \(24^{3}\times 48\) | \(200\times 48\) | 315 | 10–14 |
| a98 | 0.098(3) | \(24^{3}\times 48\) | \(236\times 48\) | 365 | 9–13 |
Table 1: Parameters of the gauge ensembles used in this work. From left to right, we list the ensemble name, the lattice spacing \(a\), the spatial and temporal lattice sizes \(L\) and \(T\), the number of measurements of the correlation function for each ensemble \(N_{\mathrm{conf}}\times T\) with \(N_{\mathrm{conf}}\) the number of configurations used, the pion mass \(m_{\pi}\) and the range of the time separation \(t\) between the initial hadron and the electromagnetic current. Here, \(L\), \(T\) and \(t\) are given in lattice units.
To compute the \(f_{J/\psi}\), we calculate the two-point function \(C^{(2)}_{ii}(t)=\langle\mathcal{O}_{i}(t)\mathcal{O}_{i}^{\dagger}(0)\rangle\) using a point source \(J/\psi\) interpolating operator \(\mathcal{O}_{i}=Z_{A}\bar{c}\gamma_{i}c\). The overlap amplitude \(Z_{0i}=\langle 0|\bar{c}\gamma_{i}c(0)|J/\psi(\bar{0},\lambda)\rangle\) can be extracted from a simple single-state fit
\[C^{(2)}_{ii}(t)=\frac{Z_{A}^{2}Z_{0i}^{2}}{2m_{J/\psi}}\left( \mathrm{e}^{-m_{J/\psi}t}+\mathrm{e}^{-m_{J/\psi}(T-t)}\right) \tag{23}\]
then the \(J/\psi\) decay constant is obtained immediately by \(f_{J/\psi}=Z_{A}Z_{0i}/m_{J/\psi}\).
In our calculations, we choose the local vector current \(J_{\nu}^{\mathrm{em}}(x)=Z_{V}e_{c}\bar{c}\gamma_{\nu}c\) and weak current \(J_{\mu}^{Z}=\bar{c}\gamma_{\mu}(Z_{V}g_{V}^{c}-Z_{A}g_{A}^{c}\gamma_{5})c\), where the renormalization constants \(Z_{V}\) and \(Z_{A}\) are introduced. The detailed determination of \(Z_{V}\) has been presented in our previous paper [21]. In this study, we simply quote the values directly, which are 0.6047(19), 0.6257(21), and 0.6516(15) for \(a=0.098,\,0.085,\,0.0667\) fm, respectively. The values of \(Z_{A}\) are taken from Ref. [24], where they are calculated in the RI-MOM scheme, and are given as 0.746(11), 0.746(06) and 0.772(06) for \(a=0.098,\,0.085,\,0.0667\) fm, respectively.
## 4 Numerical results
### Check of condition (13)
In Sec. 2.1, we have argued that \(\delta E(\vec{p})=|\vec{p}|+E_{\eta_{c}}(\vec{p})-m_{J/\psi}>0\) is valid for any non-zero lattice momentum \(\vec{p}=2\pi\vec{n}/L\), so that the condition (13) is satisfied in our work. Using a point-source propagator, we extract a series of discrete energy levels of \(\eta_{c}\) from the two-point function calculated with the interpolating operator \(\mathcal{O}_{\eta_{c}}=\bar{c}\gamma_{5}c\). The numerical values of \(E_{\eta_{c}}(\vec{p})\) and \(\delta E(\vec{p})\) are summarized in Table 2. It is readily seen that \(\delta E(\vec{p})>0\) for \(|\vec{n}|^{2}\neq 0\), which guarantees condition (13).
### \(f_{J/\psi}\)
We present the lattice results of the decay constant \(f_{J/\psi}\) in different lattice spacings in Fig. 2. The continuous extrapolation which is linear in \(a^{2}\) is performed due to the so-called automatic \(\mathcal{O}(a)\)
improvement for the twisted mass configuration. After the continuous extrapolation, we obtain
\[f_{J/\psi}^{\rm Cont.Limit}=406(26)\ {\rm MeV} \tag{24}\]
Our lattice result is consistent with the experimental result \(f_{J/\psi}^{\rm exp}=406.5(3.7)\ {\rm MeV}\) but with a larger statistical error. The experimental value is obtained using the experimental average of \(\Gamma_{e^{+}e^{-}}\) and \(\alpha_{QED}(m_{J/\psi})\) through
\[\Gamma_{e^{+}e^{-}}=\frac{4\pi}{3}\alpha_{QED}^{2}(M_{J/\psi}^{2})e_{c}^{2} \frac{f_{J/\psi}^{2}}{M_{J/\psi}} \tag{25}\]
where \(\alpha_{QED}(M_{J/\psi}^{2})\) is evaluated at the scale of \(M_{J/\psi}=3096.9\ {\rm MeV}\). Note that the latest lattice QCD calculation from HPQCD [25] gives a value \(f_{J/\psi,QCD}=409.6(1.6)\ {\rm MeV}\) with a much smaller statistical error than this work.
### Finite-volume effects
The decay width is calculated by a Monte-Carlo phase-space integral as shown in Eq. (20), where \(N_{MC}=200\) is chosen and checked to guarantee that the phase-space-integral error is much smaller than the statistical error. In our calculations, the integration energy \(E_{\gamma}\in[0,m_{J/\psi}/2]\) is picked randomly. The non-lattice values (\(E_{\gamma}\neq 2\pi|\vec{n}|/L\)) will inevitably introduce systematic effects. These effects are essentially finite-volume effects, since all the random values \(E_{\gamma}\) become lattice ones as the volume \(L\) goes to infinity.
To examine the finite-volume effects, we introduce a spatial integral truncation parameter \(R\) in Eq. (18). As the hadronic function \({\cal H}_{\mu\nu\alpha}(x)\) is dominated by the \(\eta_{c}\) state at large \(|\vec{x}|\), the size of the integrand is exponentially suppressed when \(|\vec{x}|\) becomes large. In Fig. 3 the ratio \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) is shown as a function of \(R\). It is clearly seen that there exists a plateau for \(R\gtrsim 0.8\ {\rm fm}\), indicating that the hadronic function \({\cal H}_{\mu\nu\alpha}(x)\) at \(|\vec{x}|\gtrsim 0.8\) fm has a negligible contribution to \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\). All the ensembles have a lattice size \(L>2\) fm, which is sufficiently large to accommodate the hadron. We thus conclude that finite-volume effects are well under control in our calculation.

Figure 2: Lattice results of \(f_{J/\psi}\) as a function of lattice spacing. The errors of the lattice spacing are included in the fitting and presented by the horizontal error bars. The red circles denote the lattice results from ensembles a67, a85, and a98, from left to right. The black triangle is the result in the continuous limit \(a^{2}\to 0\) and the blue circle is obtained using the experimental average of \(\Gamma_{e^{+}e^{-}}\) and \(\alpha_{QED}(m_{J/\psi}^{2})=1/134.02\).
### Decay width
The lattice results of \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) as a function of \(t\) with different separations \(\Delta t\) are shown in Fig. 4. We find that for all the separations \(\Delta t\) and all ensembles used in this work, a temporal truncation \(t\simeq 1.2\) fm is a conservative choice for the ground-state saturation. With this choice, the results for \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) as a function of \(\Delta t\) are shown in Fig. 5. It shows that \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) has an obvious \(\Delta t\) dependence, indicating non-negligible excited-state effects associated with the \(\phi_{J/\psi}^{\dagger}\) operator, as we have pointed out before. Using a two-state fit described by Eq. (21), we can extract the ground-state contribution to the ratio at \(\Delta t\to\infty\). The results are listed in Table 3.
In Fig. 6, the lattice results for \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) at different lattice spacings are shown together with an extrapolation that is linear in \(a^{2}\). We expect this linear behavior since the twisted mass configuration has the so-called automatic \(O(a)\) improvement. It is also seen that the fitting curves describe the lattice data well. After the continuous extrapolation, we obtain \(R_{f}^{\rm Cont.Limit}=2.29(14)\times 10^{-14}\). For a convenient comparison with the experimental branching fraction in the future, we rescale \(R_{f}^{\rm Cont.Limit}\) to the physical branching fraction by multiplying by the \(J/\psi\) decay constant \(f_{J/\psi}^{\rm Cont.Limit}\) and dividing by the total decay width \(\Gamma_{J/\psi}=92.6\) keV.
Figure 3: For ensemble a67, \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) with \(t\simeq 1.2\) fm and \(\Delta t\simeq 1.2\) fm as a function of the spatial range truncation \(R\).
Nevertheless, Ref. [19] claims that the ensemble a98 might not be optimally tuned and could possibly contain some \(\mathcal{O}(a)\) discretization errors. To examine this effect, we also perform our continuum extrapolation without the coarsest lattice, a98. We then get the result \(1.07(13)\times 10^{-10}\), which is consistent with the value \(1.00(9)\times 10^{-10}\), but with a larger error. The consistency suggests that there is no residual \(\mathcal{O}(a)\) effect on ensemble a98. This conclusion has also been demonstrated in our recent works on charmonium radiative decays [16, 21] and in other lattice studies [19, 26, 27]. In this paper, we quote the result with a98 included as the final report and take the difference between these two central values as our estimate of the systematic error. Our final prediction from the continuum extrapolation is consistent with the results of the previous section.
Figure 4: The lattice results of \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) for ensembles a67, a85 and a98, shown as a function of \(t\) with various choices of \(\Delta t\). The vertical dashed line denotes a conservative choice of \(t\simeq 1.2\) fm, where the ground-state saturation is realized. The statistical error of \(Z_{A}\) is not included here.

Figure 5: The lattice results of \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) with the cut \(t\simeq 1.2\) fm in Fig. 4, shown as a function of \(\Delta t\) together with a fit to the form (21).
With this rescaling, the branching fraction of \(J/\psi\to\gamma\nu\bar{\nu}\) is
\[\mathrm{Br}[J/\psi\to\gamma\nu\bar{\nu}]=1.00(9)(7)\times 10^{-10} \tag{26}\]
where the first error is a statistical error obtained with the spacing error included in the extrapolation and the second is an estimate for the systematic error.
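As a quick numerical cross-check of Eq. (22) (a back-of-the-envelope script of ours, using the central values of \(R_{f}^{\rm Cont.Limit}\) and \(f_{J/\psi}^{\rm Cont.Limit}\) quoted above as inputs), the rescaling indeed reproduces the quoted branching fraction:

```python
R_f = 2.29e-14          # continuum-limit ratio Gamma_{gamma nu nubar} / f_{J/psi}
f_jpsi = 406e6          # continuum-limit decay constant in eV (406 MeV)
Gamma_total = 92.6e3    # total J/psi width in eV (92.6 keV)

Br = R_f * f_jpsi / Gamma_total
print(f"{Br:.2e}")      # ~1.0e-10, matching Eq. (26)
```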
We remark that the relevant phenomenological study in the standard model gives a prediction \(\mathrm{Br}[J/\psi\to\gamma\nu\bar{\nu}]=0.7\times 10^{-10}\)[9], which is of the same order of magnitude as our result. Our calculation is performed using three different lattice spacings for the continuous extrapolation, and thus the lattice discretization effect is well controlled. We have also used multiple \(\Delta t\) to control the excited-state effects through a multi-state fit. The neglected disconnected diagrams are believed to give only a small contribution in the charmonium system [25, 28, 29, 30] due to the Okubo-Zweig-Iizuka (OZI) suppression.
## 5 Conclusion
In this paper, we present a lattice QCD calculation of the invisible decay \(J/\psi\to\gamma\nu\bar{\nu}\) for the first time. Our calculation is accomplished using three \(N_{f}=2\) twisted mass fermion ensembles. The excited-state effects are observed and eliminated using a multi-state fit. After a controlled continuous extrapolation, we obtain the first lattice QCD prediction for the branching fraction of \(J/\psi\to\gamma\nu\bar{\nu}\) as \(\mathrm{Br}[J/\psi\to\gamma\nu\bar{\nu}]=1.00(9)(7)\times 10^{-10}\), where the first error is the statistical error, which already takes into account the \(a^{2}\) error in the continuous extrapolation, and the second is an estimate of the systematics. The method can also be applied to other processes which involve leptonic or radiative particles in the final states, for example, \(\pi^{0}\to 2\gamma\)[31], \(J/\psi\to 3\gamma\)[32] and \(K_{L}\to\mu^{+}\mu^{-}\)[33].
Figure 6: Lattice values of \(\Gamma_{\gamma\nu\bar{\nu}}/f_{J/\psi}\) as a function of lattice spacing, together with a continuous extrapolation linear in \(a^{2}\). The errors of the lattice spacing are included in the fitting and presented by the horizontal error bars. The red circles denote the lattice results from ensembles a67, a85, and a98, from left to right. The statistical error of \(Z_{A}\) is included by error propagation.
Our first-principles calculation provides a precise prediction for the decay \(J/\psi\to\gamma\nu\bar{\nu}\). It also confirms the previous phenomenological conclusion that the branching fraction of \(J/\psi\to\gamma\nu\bar{\nu}\) is about \(10^{-10}\)[9]. If future experiments can achieve a precision of \(10^{-10}\), the search for new physics scenarios beyond the standard model via the channel \(J/\psi\to\gamma+\text{invisible}\) will need to take into account the exact contribution of \(J/\psi\to\gamma\nu\bar{\nu}\) from the standard model background.
###### Acknowledgments.
We thank the ETM Collaboration for sharing the gauge configurations with us. We gratefully acknowledge the helpful discussions with Dao-Neng Gao. Y.M. acknowledges support by NSFC of China under Grant No. 12047505 and No. 12305094. The main calculation was carried out on the Tianhe-1A supercomputer at the Tianjin National Supercomputing Center and partly supported by the SongShan supercomputer at the National Supercomputing Center in Zhengzhou.
|
2303.18149 | Can AI Chatbots Pass the Fundamentals of Engineering (FE) and Principles
and Practice of Engineering (PE) Structural Exams? | The engineering community has recently witnessed the emergence of chatbot
technology with the release of OpenAI ChatGPT-4 and Google Bard. While these
chatbots have been reported to perform well and even pass various standardized
tests, including medical and law exams, this forum paper explores whether these
chatbots can also pass the Fundamentals of Engineering (FE) and Principles and
Practice of Engineering (PE) exams. A diverse range of civil and environmental
engineering questions and scenarios are used to evaluate the chatbots'
performance, as commonly present in the FE and PE exams. The chatbots'
responses were analyzed based on their relevance, accuracy, and clarity and
then compared against the recommendations of the National Council of Examiners
for Engineering and Surveying (NCEES). Our report shows that ChatGPT-4 and
Bard, respectively scored 70.9% and 39.2% in the FE exam and 46.2% and 41% in
the PE exam. It is evident that the current version of ChatGPT-4 could
potentially pass the FE exam. While future editions are much more likely to
pass both exams, this study also highlights the potential of using chatbots as
teaching assistants and guiding engineers. | M. Z. Naser, Brandon Ross, Jennier Ogle, Venkatesh Kodur, Rami Hawileh, Jamal Abdalla, Huu-Tai Thai | 2023-03-31T15:37:17Z | http://arxiv.org/abs/2303.18149v2 | Can AI Chatbots Pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) Structural Exams?
###### Abstract
The engineering community has recently witnessed the emergence of chatbot technology with the release of OpenAI _ChatGPT-4_ and Google _Bard_. While these chatbots have been reported to perform well and even pass various standardized tests, including medical and law exams, this forum paper explores whether these chatbots can also pass the Fundamentals of Engineering (FE) and Principles and Practice of Engineering (PE) exams. A diverse range of civil and environmental engineering questions and scenarios are used to evaluate the chatbots' performance, as commonly present in the FE and PE exams. The chatbots' responses were analyzed based on their relevance, accuracy, and clarity and then compared against the recommendations of the National Council of Examiners for Engineering and Surveying (NCEES). Our report shows that _ChatGPT-4_ and _Bard_, respectively scored 70.9% and 39.2% in the FE exam and 46.2% and 41% in the PE exam. It is evident that the current version of _ChatGPT-4_ could potentially pass the FE exam. While future editions are much more likely to pass both exams, this study also highlights the potential of using chatbots as teaching assistants and guiding engineers.
## What is a chatbot?
A chatbot is a computer program designed to simulate conversation with humans [1]. _ChatGPT-4_ and _Bard_ were created and recently released by OpenAI [2] and Google [3], respectively. When asked to define what a chatbot is, _ChatGPT-4_ and _Bard_ answered:
A chatbot is an artificial intelligence (AI) software program designed to simulate human-like conversations with users, usually through text or voice interactions. Chatbots can understand and process natural language inputs, such as text messages or spoken words, and generate appropriate responses based on their underlying algorithms and knowledge base. They are typically used to provide customer support, answer frequently asked questions, assist with various tasks, or offer personalized recommendations.
Chatbots can be built using different natural language processing (NLP) techniques and machine learning algorithms, which allow them to understand and interpret the user's intent and context. Over time, chatbots can learn and improve their responses by analyzing user interactions and incorporating new knowledge into their systems. They are commonly integrated into messaging platforms, mobile applications, websites, and virtual assistants, providing users with a convenient and efficient way to access information and services.
And according to _Bard_,
A chatbot is a computer program that simulates human conversation. It can be used to answer questions, provide information, or even complete tasks. Chatbots are often used in customer service, where they can answer questions about products or services without the need for a human representative. They can also be used in marketing, where they can generate leads or provide information about products or services.
Chatbots are typically programmed using a natural language processing (NLP) system. This allows them to understand and respond to human language. NLP systems are trained on large amounts of data, such as text |
2309.14728 | Engineering mixing properties of fluids by spatial modulations | We propose a method to change the effective interaction between two fluids by
modulation of their local density distributions with external periodic
potentials, whereby the mixing properties can be controlled. This method is
applied to a mixture of dilute bosonic gases, and binodal and spinodal curves
emerge in the phase diagram. Spinodal decomposition into a mixed-bubble state
becomes possible, in which one of the coexisting phases has a finite mixing
ratio. A metastable mixture is also realized, which undergoes phase separation
via nucleation. | Abid Ali, Hiroki Saito | 2023-09-26T07:38:39Z | http://arxiv.org/abs/2309.14728v1 | # Engineering mixing properties of fluids by spatial modulations
###### Abstract
We propose a method to change the effective interaction between two fluids by modulation of their local density distributions with external periodic potentials, whereby the mixing properties can be controlled. This method is applied to a mixture of dilute bosonic gases, and binodal and spinodal curves emerge in the phase diagram. Spinodal decomposition into a mixed-bubble state becomes possible, in which one of the coexisting phases has a finite mixing ratio. A metastable mixture is also realized, which undergoes phase separation via nucleation.
A binary mixture becomes thermodynamically unstable and separates into two stable phases, when a control parameter, such as temperature, is quenched across the critical point. Such spontaneous phase separation of mixtures is known as spinodal decomposition [1; 2; 3; 4]. On the other hand, if the binary mixture is prepared in a metastable state, finite perturbation is required for nucleation and growth to proceed to phase separation [5; 6; 7]. Spinodal decomposition and nucleation are the two major mechanisms responsible for phase separation in multicomponent systems. The boundary between the spinodal and nucleation regions in the phase diagram is called a spinodal curve, and that between the nucleation and stable regions is called a binodal curve. These three regions appear when the free energy has both concave and convex shapes as a function of the mixing ratio.
The separated and mixed fluids have different free energies, since the energy and entropy are changed by mixing. Here we focus on the mixing energy, which is the energy difference between separated and mixed fluids. The mixing energy is determined by the interaction between the constituent particles of the two fluids, which is generally difficult to control. The purpose of this Letter is to alter the mixing energy by a simple method -- modulation of the local density distributions by external periodic potentials, whereby mixing properties of fluids are controlled. Let us consider a mixing energy \(E_{\rm mix}(n_{1}(\mathbf{r}),n_{2}(\mathbf{r}))\), which is dependent on the density distributions \(n_{1}(\mathbf{r})\) and \(n_{2}(\mathbf{r})\) of components 1 and 2. If we apply external potentials that locally modulate the density distributions, the local overlap between \(n_{1}(\mathbf{r})\) and \(n_{2}(\mathbf{r})\) is modulated. On a scale much larger than the modulation wavelength, the effective mixing energy is given by the spatial average \(\langle E_{\rm mix}(n_{1}(\mathbf{r}),n_{2}(\mathbf{r}))\rangle_{\mathbf{r}}\). Therefore, the global mixing energy can be changed, which alters the global mixing properties.
This method is applied to a binary mixture of Bose-Einstein condensates (BECs) of ultracold gases [8; 9; 10; 11; 12; 13; 14; 15], which is a clean and highly controllable system. In this system, external periodic potentials can be easily generated and precisely controlled using optical lattices [16], making the system suitable for the present purpose. In a mixture of dilute BECs, in which simple mean-field theory is applicable, the mixing property for a homogeneous system is trivial, i.e., the mixture is either stable or unstable against phase separation, irrespective of the mixing ratio, and there are no binodal and spinodal curves in the phase diagram (with respect to the mixing ratio). It was recently predicted that a beyond-mean-field effect [17] can modify the energy curve and an interesting separated phase (mixed-bubble state) is possible [18; 19], in which one of coexisting phases has a finite mixing ratio. However, the beyond-mean-field effect is a higher-order effect of the density and emerges in the high-density regime. Experimentally, atomic loss due to three-body recombination is significant for such a density, especially near Feshbach resonance, which limits the lifetime of the system [20; 21; 22].
Here we show that binodal and spinodal physics emerge in a binary mixture of dilute BECs simply by application of a component-dependent periodic external potential, which modulates the mixing energy depending on the mixing ratio. As a result, spinodal decomposition into the mixed-bubble state becomes possible without beyond-mean-field effects. Moreover, the mixture can be brought to a metastable state, in which a finite perturbation is required to cross the energy barrier against phase separation, which leads to nucleation and growth toward the separated phase.
We consider a binary mixture of dilute Bose gases at zero temperature, which can be described by the macroscopic wave functions \(\Psi_{1}\) and \(\Psi_{2}\) in the mean-field approximation. The energy of the system can be written as [23; 24]
\[E = \int d\mathbf{r}\Biggl{[}\sum_{j=1}^{2}\Biggl{(}-\frac{\hbar^{2}}{2 m}\Psi_{j}^{*}\nabla^{2}\Psi_{j}+V_{j}|\Psi_{j}|^{2}+\frac{g_{jj}}{2}|\Psi_{j}|^{4} \Biggr{)} \tag{1}\] \[+g_{12}|\Psi_{1}|^{2}|\Psi_{2}|^{2}\Biggr{]},\]
where \(m\) is the atomic mass. The wave function \(\Psi_{j}(\mathbf{r},t)\) satisfies \(\int d\mathbf{r}|\Psi_{j}(\mathbf{r},t)|^{2}=N_{j}\), where \(N_{j}\) is the number of atoms for component \(j\). The interaction coefficients are defined as \(g_{jj^{\prime}}=4\pi\hbar^{2}a_{jj^{\prime}}/m\), where \(a_{jj^{\prime}}\) is the \(s\)-wave scattering length between components \(j\) and \(j^{\prime}\). In the present work, we consider the repulsive atomic
interactions with positive scattering lengths, \(g_{ij}>0\). We consider a situation in which a periodic potential \(V_{j}(\mathbf{r})=U_{j}\cos^{2}(kx)\) is applied to the system, where \(U_{1}>0\) and \(U_{2}<0\) (see Fig. 1(a)). Such a component-dependent periodic potential can be produced by laser beams with a selected wave number \(k\) and polarization [25; 26; 27; 28; 29; 30; 31; 32].
For a homogeneous system without an optical lattice, the energy density is given by \(\varepsilon=(g_{11}n_{1}^{2}+g_{22}n_{2}^{2})/2+g_{12}n_{1}n_{2}\), which has a constant curvature \(g_{11}g_{22}-g_{12}^{2}\) with respect to the uniform densities \(n_{1}\) and \(n_{2}\). Therefore, there are only two ways to minimize \(\omega\equiv\varepsilon-\mu_{1}n_{1}-\mu_{2}n_{2}\) for chemical potentials \(\mu_{j}\). For a positive curvature, there can be a single minimum with nonzero \(n_{1}\) and \(n_{2}\), which corresponds to the uniformly mixed state. For a negative curvature and appropriate \(\mu_{j}\), \(\omega\) can be simultaneously minimized at the two points \(n_{1}=0\) (with \(n_{2}\neq 0\)) and \(n_{2}=0\) (with \(n_{1}\neq 0\)), which corresponds to the separated state. Thus, the phase diagram is trivial: the ground state is either the uniformly mixed state or the totally separated state, which is determined only by the sign of the curvature \(g_{11}g_{22}-g_{12}^{2}\), and is independent of the mixing ratio \(N_{1}/N_{2}\). In the presence of component-dependent periodic external potentials, as shown in Fig. 1(a), the density distributions \(n_{1}(\mathbf{r})\) and \(n_{2}(\mathbf{r})\) are modulated (see Fig. 1(d)), and their local overlap can be decreased. As a result, the global mixing energy of the system is changed, and the phase diagram will be altered.
To present this clearly, we employ a simple variational method. The wave function is approximated as
\[\Psi_{j}(\mathbf{r})=\sqrt{n_{j}[1+a_{j}\cos(2kx)]}, \tag{2}\]
where \(a_{j}\) is the real variational parameter satisfying \(|a_{j}|<1\). Substituting the variational wave function \(\Psi_{j}(\mathbf{r})\) into the energy of the system in Eq. (1) gives
\[\begin{split}\varepsilon\equiv\frac{E}{V}=&\sum_{j=1}^{2}\Big{[}\frac{n_{j}}{2}\big{(}1-\sqrt{1-a_{j}^{2}}\big{)}+\frac{U_{j}}{4}n_{j}a_{j}\Big{]}\\ &+\frac{g}{2}\Big{[}n_{1}^{2}\big{(}1+\frac{a_{1}^{2}}{2}\big{)}+n_{2}^{2}\big{(}1+\frac{a_{2}^{2}}{2}\big{)}\Big{]}\\ &+g_{12}n_{1}n_{2}\big{(}1+\frac{a_{1}a_{2}}{2}\big{)},\end{split} \tag{3}\]
where \(g_{11}=g_{22}\equiv g\) is assumed, \(V\) is the volume of the system, and the length, energy, and density are normalized by \(k^{-1}\), \(\hbar^{2}k^{2}/m\), and \((N_{1}+N_{2})/V\), respectively. It is also assumed that \(|g_{12}-g|\ll g\), which maintains the total density \(|\Psi_{1}|^{2}+|\Psi_{2}|^{2}\) to be almost uniform. In this case, the densities can be parametrized by the composition \(C\) as \(n_{1}=C\) and \(n_{2}=1-C\) with \(0\leq C\leq 1\). In the following, we also assume \(U_{1}=-U_{2}\equiv U\). These assumptions are only to reduce the number of parameters and are not crucial for the main results obtained later.
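As an illustration of how the energy curves in Fig. 1 below can be generated, the following is a minimal sketch that evaluates Eq. (3) and minimizes it over the modulation amplitudes \(a_{1},a_{2}\) by a brute-force scan. The grid resolutions and the omission of the constant potential offset (as in Eq. (3)) are simplifying assumptions; this is not the authors' implementation.

```julia
# Variational energy of Eq. (3) for n1 = C, n2 = 1 - C, U1 = +U, U2 = -U.
function ε_var(a1, a2; C, U, g = 15.0, g12 = 15.3)
    n1, n2 = C, 1 - C
    kinetic   = n1/2 * (1 - sqrt(1 - a1^2)) + n2/2 * (1 - sqrt(1 - a2^2))
    potential = U/4 * (n1 * a1 - n2 * a2)
    intra     = g/2 * (n1^2 * (1 + a1^2/2) + n2^2 * (1 + a2^2/2))
    inter     = g12 * n1 * n2 * (1 + a1 * a2 / 2)
    return kinetic + potential + intra + inter
end

# Energy curve ε(C) for a given potential strength U, minimized over (a1, a2)
# by a simple grid scan over |a_j| < 1.
function energy_curve(U; Cs = 0:0.02:1, amps = -0.99:0.01:0.99)
    [minimum(ε_var(a1, a2; C, U) for a1 in amps, a2 in amps) for C in Cs]
end

curveI  = energy_curve(0.9)   # concave curve, cf. curve I in Fig. 1(c)
curveIV = energy_curve(1.2)   # concave–convex curve, cf. curve IV in Fig. 1(c)
```

A tangential-line construction on the resulting curve then identifies the separated phases discussed below.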
Figures 1(b) and 1(c) show the variational energy \(\varepsilon\) as a function of the potential strength \(U\) and composition \(C\), where \(\varepsilon\) has been minimized with respect to the variational parameters \(a_{j}\). For a small value of \(U\), the energy curve is concave (curve I), since the interaction parameters satisfy \(g_{11}g_{22}-g_{12}^{2}<0\). Therefore, \(C=0\) and \(1\) minimize the energy, which indicates that the mixture is energetically unstable against phase separation. As the value of \(U\) is increased, the energy curve is modified, and the metastable state appears for \(U=1\) (curve II). For \(U\gtrsim 1.1\), the energy around \(C=0.5\) decreases and
Figure 1: (a) Schematic illustration of external periodic potentials \(V_{1}=U\cos^{2}(kx)\) for component \(1\) and \(V_{2}=-U\cos^{2}(kx)\) for component \(2\). (b) Variational energy \(\varepsilon\) in Eq. (3) as a function of potential strength \(U\) and composition \(C\), where variational parameters \(a_{1}\) and \(a_{2}\) are selected to minimize \(\varepsilon\). The curves I, II, III, and IV correspond to \(U=0.9\), \(U=1\), \(U=1.1\), and \(U=1.2\), respectively, which are shown in (c) as a function of \(C\). The density distributions \(n_{1}\) and \(n_{2}\) are shown in (d), where panels A, B, and C correspond to the points marked in (b). The tangential line and circles on curves II and IV in (c) correspond to the mixed-bubble state and metastable state presented in Figs. 2 and 3, respectively. The interaction parameters are \(g_{11}=g_{22}\equiv g=15\) and \(g_{12}=15.3\).
goes below those for \(C=0\) and \(1\) (curves III and IV). It should be noted that such a concave-convex shape of \(\varepsilon\) is due to the nonlinear dependence of Eq. (3) on \(C\) and \(U\) through the optimization of \(a_{j}\), i.e., the local density modulation depends on \(C\) and \(U\), which yields the non-trivial mixing properties. This effect therefore cannot be described by the tight-binding model, which has been used in previous studies on a two-component BEC in an optical lattice [33; 34; 35; 36; 37; 38; 39; 40; 41; 42].
From the energy curves in Fig. 1(c), the energetic stability of the state against phase separation can be understood in a diagrammatic manner [43]. Let us consider a point on an energy curve \((C_{0},\varepsilon(C_{0}))\), and consider a situation in which the entire system is occupied by this state. Although this state is alternately modulated in the \(x\) direction, as shown in Fig. 1(d), with coarse-graining on a scale much larger than the modulation wavelength, the two components are uniformly mixed on average, and we refer to this state as a "globally mixed state". Suppose that this phase with \(C_{0}\) separates into two phases with \(C_{+}(>C_{0})\) and \(C_{-}(<C_{0})\). It can be shown that the energy of the separated state is given by \(\varepsilon_{\rm sep}=[\varepsilon(C_{-})(C_{+}-C_{0})+\varepsilon(C_{+})(C_ {0}-C_{-})]/(C_{+}-C_{-})\) (See Supplemental Material), which corresponds to the intersection point between the vertical line \(C=C_{0}\) and the line connecting the two points \((C_{+},\varepsilon(C_{+}))\) and \((C_{-},\varepsilon(C_{-}))\). When this energy \(\varepsilon_{\rm sep}\) is larger (smaller) than \(\varepsilon(C_{0})\), the globally mixed state with \(C_{0}\) is energetically stable (unstable) against phase separation into two phases with \(C_{\pm}\). Thus, within the region of \(\partial^{2}\varepsilon/\partial C^{2}<0\), the globally mixed state is always unstable against phase separation. If \(\partial^{2}\varepsilon/\partial C^{2}>0\) and there exist \(C_{\pm}\) such that \(\varepsilon_{\rm sep}<\varepsilon(C_{0})\), then the globally mixed state is metastable.
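The tangent construction above can also be stated compactly in code. The sketch below applies the \(\varepsilon_{\rm sep}\) formula to an arbitrary energy curve; the curve used here is a toy concave-convex function chosen only for illustration, not Eq. (3).

```julia
# Toy energy curve with a concave–convex shape (for illustration only).
ε(C) = C^2 * (1 - C)^2 - 0.05 * C

# Energy of a phase-separated state with compositions Cm < C0 < Cp (Cm = C₋, Cp = C₊):
# ε_sep = [ε(C₋)(C₊ - C0) + ε(C₊)(C0 - C₋)] / (C₊ - C₋).
ε_sep(C0, Cm, Cp) = (ε(Cm) * (Cp - C0) + ε(Cp) * (C0 - Cm)) / (Cp - Cm)

# The globally mixed state at C0 is unstable against this particular split
# whenever the separated state has lower energy.
unstable(C0, Cm, Cp) = ε_sep(C0, Cm, Cp) < ε(C0)

unstable(0.5, 0.3, 0.7)   # true for the toy curve, which is concave around C = 0.5
```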
For \(U=1.2\), the energy curve with respect to \(C\) acquires a concave-convex shape, as shown by energy curve IV in Fig. 1(c). In this case, the globally mixed state for, e.g., \(C=0.1\) is unstable against phase separation. From the above consideration, the most stable (lowest-energy) separated pair of phases is given by the tangential line shown in Fig. 1(c), which gives \(C_{-}=0\) and \(C_{+}\simeq 0.426\) (two circles on the line). This indicates that if the globally mixed state with \(C=0.1\) is prepared, it separates into two phases; one phase is occupied by only component \(2\) (\(C_{-}=0\)) and the other phase is occupied by both components (\(C_{+}\simeq 0.426\)). This separated state is referred to as a mixed-bubble state, which was first predicted in a system with beyond-mean-field effects [18]. Here, the mixed-bubble state emerged even in a dilute system, in which simple mean-field theory is applicable.
To confirm the results of the variational analysis, we numerically solve the coupled Gross-Pitaevskii (GP) equation,
\[i\frac{\partial\Psi_{j}}{\partial t}=\left(-\frac{\nabla^{2}}{2}+V_{j}+g| \Psi_{j}|^{2}+g_{12}|\Psi_{j^{\prime}}|^{2}\right)\Psi_{j}, \tag{4}\]
where \((j,j^{\prime})=(1,2)\) and \((2,1)\). Note that the results in Fig. 1 are not dependent on the dimensionality, and here we consider a two-dimensional system. The system is discretized into a mesh with \(dx=dy=2\pi/64\) and the time step is typically \(dt=0.001\). The GP equation is integrated using the pseudospectral method [44] with the periodic boundary condition.
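A minimal sketch of the split-step (pseudospectral) imaginary-time evolution used to obtain ground states such as the one in Fig. 2(a) is given below. The mesh spacing, time step, and interaction parameters follow the values quoted in the text, while the box size, initialization, and normalization convention are simplifying assumptions; this is not the authors' production code.

```julia
using FFTW

# Grid (box size is an assumption; the mesh spacing matches dx = dy = 2π/64).
const Nx, Ny = 256, 256
const dx = 2π / 64
kgrid(n, d) = [2π / (n * d) * (i - 1 <= n ÷ 2 ? i - 1 : i - 1 - n) for i in 1:n]
kx, ky = kgrid(Nx, dx), kgrid(Ny, dx)
k2 = [kxi^2 + kyj^2 for kxi in kx, kyj in ky]

g, g12, U, dt = 15.0, 15.3, 1.2, 0.001
V1 = [U * cos(xi)^2 for xi in dx .* (0:Nx-1), _ in 1:Ny]   # V_j = ±U cos²(kx), with k = 1
V2 = -V1

# Keep the mean density of component j fixed at n_j (composition C = n_1).
renorm!(ψ, n) = (ψ .*= sqrt(n * length(ψ) / sum(abs2, ψ)); ψ)

function imag_time_step!(ψ1, ψ2; n1 = 0.1, n2 = 0.9)
    for ψ in (ψ1, ψ2)                               # half kinetic step, T = k²/2
        ψ .= ifft(exp.(-dt / 4 .* k2) .* fft(ψ))
    end
    W1 = V1 .+ g .* abs2.(ψ1) .+ g12 .* abs2.(ψ2)   # effective potentials
    W2 = V2 .+ g .* abs2.(ψ2) .+ g12 .* abs2.(ψ1)
    ψ1 .*= exp.(-dt .* W1)
    ψ2 .*= exp.(-dt .* W2)
    for ψ in (ψ1, ψ2)                               # second half kinetic step
        ψ .= ifft(exp.(-dt / 4 .* k2) .* fft(ψ))
    end
    renorm!(ψ1, n1); renorm!(ψ2, n2)
end

ψ1 = Complex.(rand(Nx, Ny)); ψ2 = Complex.(rand(Nx, Ny))
for _ in 1:5_000                                    # relax towards the ground state
    imag_time_step!(ψ1, ψ2)
end
```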
First, we solve the imaginary-time evolution of Eq. (4) to obtain the ground state, in which \(i\) on the left-hand side is replaced by \(-1\). Figure 2(a) shows the density distributions, \(|\Psi_{1}|^{2}\) and \(|\Psi_{2}|^{2}\), of the ground state for \(U=1.2\) and \(C=0.1\). As expected from the variational results, the ground state is the mixed-bubble state, which contains a single bubble of a mixed phase surrounded by component \(2\). The bubble in Fig. 2(a) is slightly elongated in the \(y\) direction, which implies that the interfacial tension between the two phases is anisotropic due to the periodic potential in the \(x\) direction. Figure 2(b) shows the real-time evolution, in which the initial state is the ground state for \(g=g_{12}=15\) and \(C=0.1\) with small random noise. This state is a globally mixed state, as shown in the leftmost panels of Fig. 2(b). At \(t=0\), \(g_{12}\) is suddenly changed to \(15.3\) (the same condition as in Fig. 2(a)), and the mixed bubbles are dynamically formed by spinodal decomposition.
For \(U=1\), the energy curve acquires the shape shown in Fig. 1(c) (curve II). Around \(C=0.5\), the energy curve is convex and therefore the globally mixed state is
Figure 2: Mixed-bubble state for \(C=0.1\), \(g=15\), \(g_{12}=15.3\), and \(U=1.2\), which corresponds to the energy curve IV in Fig. 1(c). (a) Ground state obtained by solving the GP equation. (b) Dynamics of the mixed-bubble formation. The initial state is the ground state for \(g=g_{12}=15\). At \(t=0\), \(g_{12}\) is suddenly changed to \(15.3\). The size of each panel is \((32\pi)^{2}\).
stable against a small change in \(C\) (\(|C_{\pm}-C_{0}|\ll 1\)). However, the true ground state is the totally separated phases with \(C=0\) and \(C=1\), and the globally mixed state with \(C\simeq 0.5\) is a metastable state. Figures 3(a) and 3(b) show the dynamics that start from the metastable state with \(C=0.5\), where a local perturbation potential \(Ae^{-(x^{2}+y^{2})}\) is added at \(t=0\) to trigger the nucleation. Figure 3(a) shows the case of \(A=1\); the phase separation is triggered around the center by the perturbation potential, and the concentric phase separation extends outward, which is the phase separation by nucleation. In the case of a smaller perturbation (\(A=0.05\)), as shown in Fig. 3(b), the density distributions around the center are slightly modified by the perturbation potential, which is insufficient to trigger the phase separation. This corroborates the existence of an energy barrier against nucleation.
Figure 4(a) depicts the stability of the globally mixed state with composition \(C\) for the potential strength \(U\). For the region in which the energy curve \(\varepsilon(C)\) is concave, \(\partial^{2}\varepsilon/\partial C^{2}<0\), the globally mixed state is unstable against phase separation, and the inflection points (circles in Fig. 4(b) for \(U=1.2\)) trace the spinodal curve in Fig. 4(a). For \(U\lesssim 0.94\), \(\varepsilon(C)\) is concave everywhere and there are no inflection points. The tangential lines, as shown in Fig. 4(b), give the mixed-bubble states, and a globally mixed state for \(C\) between the square and triangle in Fig. 4(b) has higher energy than the mixed-bubble state. Therefore, the region between the circle and triangle is metastable. For \(U\simeq 1.1\), the states with \(C=0\), \(C=0.5\), and \(C=1\) become degenerate (curve III in Fig. 1(c)), and these three phases (occupied only by component 1 or 2, or equally mixed) can coexist.
Experimentally, the phenomena presented here can be realized by a two-component BEC with scattering lengths that satisfy the immiscible condition \(a_{12}^{2}-a_{11}a_{22}>0\). (See Supplemental Material for an example of an experimental system.) A box-like potential rather than a harmonic potential is suitable to avoid the complexity that arises from inhomogeneous density. A quasi-two-dimensional (or one-dimensional) system is convenient to observe the spatial density pattern, and also to suppress the total number of atoms; however, the dimensionality is not crucial for the present phenomena.
In conclusion, we have proposed a method to control the mixing properties of two fluids. The mixing energy can be changed by modulating the densities on a small scale using component-dependent external potentials, which alters the global mixing properties. This method was applied to a two-component BEC of dilute gases. Although this system originally has a trivial mixing property, the energy curve acquires a concave-convex shape with respect to the composition \(C\) by the present method, and spinodal and binodal physics emerge. As a result, the mixed-bubble state (Fig. 2), which has only been predicted for a system with large quantum
Figure 3: Dynamics for \(C=0.5\), \(g=15\), \(g_{12}=15.3\), and \(U=1\). The energy curve for these parameters is shown in Fig. 1(c) (curve II). The initial state is the metastable state with \(C=0.5\) (circle in Fig. 1(c)). At \(t=0\), a local perturbation potential \(Ae^{-(x^{2}+y^{2})}\) is added with (a) \(A=1\) and (b) \(A=0.05\). The size of each panel is \((64\pi)^{2}\) with the origin at the center.
Figure 4: (a) Stability of the globally mixed state with the composition \(C\) for the potential strength \(U\), \(g=15\), and \(g_{12}=15.3\). The spinodal (binodal) curve divides the unstable (stable) and metastable regions. The parameters used in Figs. 2 and 3 are marked by the open circles. (b) Energy curve \(\varepsilon(C)\) for \(U=1.2\). The inflection points (filled circles) trace the spinodal curve in (a). The tangent points (squares and triangles) for the solid lines give the mixed bubble states. The points marked by the triangles trace the binodal curve in (a).
fluctuation, becomes possible for a simple dilute system. The modification of the energy curve also results in a metastable state that undergoes phase separation via nucleation due to a finite local perturbation (Fig. 3). The present method is not restricted to quantum fluids and may also be applied to classical immiscible fluids, such as oil and water.
This work was supported by JSPS KAKENHI Grant No. JP23K03276.
|
2309.03532 | A few misfits can Change the World | Rising inequality is a critical concern for societies worldwide, to the
extent that emerging high-growth economies such as China have identified common
prosperity as a central goal. However, the mechanisms by which digital
disruptions contribute to inequality and the efficacy of existing remedies such
as taxation, must be better understood. This is particularly true for the
implications of the complex process of technological adoption that requires
extensive social validation beyond weak ties and, how to trigger it in the
hyperconnected world of the 21st century.
This study aims to shed light on the implications of market evolutionary
mechanism from the lenses of technological adoption as a social process. Our
findings underscore the pivotal importance of connectivity in this process
while also revealing the limited effectiveness of taxation as a counterbalance
for inequality. Our research reveals that widespread cultural change is not a
prerequisite for technological disruption. The injection of a small cohort of
entrepreneurs - a few misfits - can expedite technology adoption even in
conservative, moderately connected societies and, change the world. | Esteve Almirall, Steve Willmott, Ulises Cortés | 2023-09-07T07:35:02Z | http://arxiv.org/abs/2309.03532v3 | # A few misfits can Change the World
###### Abstract
Rising inequality is a critical concern for societies worldwide, to the extent that emerging high-growth economies such as China have identified _common prosperity_ as a central goal. However, the mechanisms by which digital disruptions contribute to inequality, and the efficacy of existing remedies such as taxation, must be better understood. This is particularly true for the implications of the complex process of technological adoption, which requires extensive social validation beyond weak ties, and how to trigger it in the hyperconnected world of the 21st century.
This study aims to shed light on the implications of the market's evolutionary mechanism through the lens of technological adoption as a social process. Our findings underscore the pivotal importance of connectivity in this process while also revealing the limited effectiveness of taxation as a counterbalance for inequality. Our research reveals that widespread cultural change is not a prerequisite for technological disruption. The injection of a small cohort of entrepreneurs - a few misfits - can expedite technology adoption even in conservative, moderately connected societies and change the world.
Inequality, Technological Disruption, Economic Growth, Agent-Based Model
## Introduction
The increasing disparities in economic wealth and opportunities (Piketty 2019; Acemoglu and Robinson 2012; Milanovic 2016; Atkinson 2015; Stiglitz 2012) represent a pressing challenge for contemporary societies. This challenge is accentuated by the ascendance of digital multinationals, whose extensive influence transcends traditional boundaries and fosters globalization. Concurrently, the ongoing digital transformation threatens to exacerbate these disparities. The question of whether inequality is an avoidable outcome of digital disruptions and, if so, what remedies exist becomes increasingly pertinent. As new disruptions such as generative A.I. loom on the horizon, understanding the underlying mechanisms assumes even greater significance.
This manuscript explores the processes through which digital disruptions engender social inequality via technological adoption. In particular, we model technological adoption more realistically than typical contagion models do, requiring extensive social validation rather than simple contact (Centola 2007).
Inequality's roots are undoubtedly complex, resulting from a myriad of factors, including power conflicts among varying societal factions, inherent benefits such as oil resources, economies of agglomeration, and several others. Yet, there has been a long-standing pursuit to identify internal factors and broad principles that could elucidate the origins of inequality. The initial effort to do so was made by Pareto in 1897, who suggested that inequality follows a ubiquitous power law applicable across all time periods and countries. This hypothesis, however, faced contention (Shirras 1935), and subsequently, Mandelbrot proposed a modified Pareto law (1960), which was primarily applicable to high-income brackets. Over time, numerous other distributions have been proposed (Kakwani 1980).
The foundational reasoning for these proposals diverges into two distinct schools of thought. One school promotes socio-economic justifications (Levy 1987), while the other interprets it as a stochastic or random process. For instance, Gibrat (1931; Montroll and Shlesinger 1983) asserted that inequality is a product of a multiplicative random process. In contrast, Kalecki (1945) postulated that the variance expands over time, and Levy and Solomon (1996) proposed a lower income cut-off point, thereby stabilizing the distribution into a power law.
However, some posit that the core issue lies in the dynamics of wealth creation and market structure, tailored for an economic landscape starkly different from our current digital era (Piketty 2019; Acemoglu and Robinson 2012; Milanovic 2016).
In the past, markets typically followed a Gaussian distribution, where the majority of businesses experienced average success, with a few outliers at the extremes of success and failure. Traditional enterprises, such as bakeries, restaurants, and conventional grocery stores, typically exhibit this type of distribution. The reason lies in inherent constraints - the best bakery in the world can only cater to nearby residents, with expansion possible only by launching another bakery, thus resulting in linear growth. These limitations exist not only in the business model but also in the operational model. Physical products usually demonstrate decreasing returns to scale, and while the threshold for these returns has expanded with the advent of mega-factories like Tesla's, they remain present.
On the contrary, digital markets are free from these traditional boundaries. If one can develop the world's leading search engine, it becomes instantly accessible to everyone, everywhere (Shapiro & Varian 1999). Moreover, digital products do not display decreasing returns; rather, they often exhibit increasing returns (Arthur 1996), facilitated, in part, by the persistent relevance of Moore's law (Moore 1965).
These conditions have cultivated a distinctive culture among digital startups, characterized by high growth multipliers substantially exceeding those of traditional firms. Whereas S&P 500 companies demonstrate an average growth rate of approximately 10%-15%, startups are expected to achieve minimum growth rates ranging from 40% to nearly 200%, resulting in earning multipliers of 20, 50, or even 150. This dynamic has engendered a shift in market portrayal from Gaussian to power law distributions, alongside a culture of rapid growth, pivoting, and the 'J-curve' phenomenon (Kaplan and Schoar 2005; Phalippou and Gottschalg 2009; Robinson and Sensoy 2016).
Inequality is multifactorial and often emerges from power dynamics, differential access to scarce resources, or even sheer serendipity. This paper aims to examine inequality from an evolutionary standpoint as a result of technological disruptions in the context of increasing globalization. We aim to validate the following postulates:
H1.- Disparities in growth trajectories between digital-centric and traditional enterprises are huge drivers of inequality. Evolutionary amplification heavily compounds this difference.

H2.- Progressive taxation is not an effective counterbalance to the inequality caused by such digital disruptions unless taken to unrealistic extremes.

H3.- High levels of network connectivity are essential for technological disruptions.

H4.- A minimal entrepreneurial seeding can catalyze extensive technological adoption.
In our exploration we initially concentrate on exploring how markets, functioning as evolutionary mechanisms, amplify existing growth disparities and assess the capability of progressive taxation as a counterbalance against inequality. Subsequently, we shift our focus to technological adoption, considering it as a socially-mediated process where inequality emerges not as an exogenous factor but as a property of the system, contingent upon its level of connectivity.
As our inquiry progresses, we find that full adoption of a technological disruption largely hinges on societal validation (Centola 2007), thus being dependent on a community's proclivity for technology adoption and risk-taking. Remarkably, our analysis shows that inducing wide-scale adoption does not require wholesale cultural shifts; rather, the introduction of a small cohort of risk-prone entrepreneurs - 'a few misfits' - can suffice to catalyze broad technological uptake.
By distilling these insights, this study aims to enrich the current discourse surrounding digital adoption, technological transformation, societal inequality, and agent-based modeling of technology diffusion. Additionally, we hope that our findings will provide nuanced perspectives that can inform more effective policymaking, therefore, better policies.
## Markets as an evolutionary mechanism
To address our first hypothesis, we will first use a highly stylized simulation model to explore the effects of diverse growth rates in an evolutionary context. By isolating this specific aspect of the problem, our objective is to discern the mechanisms through which, and the extent to which, diversity in growth rates becomes amplified, and its impact on inequality.
In this initial model, our primary objective is to model the role played by the evolutionary dynamics of markets. As such, the allocation of companies to the digital or traditional type is predetermined and exogenous to the model's design. The simulation encompasses a series of agents denoted as \(A=\{a_{1},\ldots,a_{n}\}\). These agents epitomize individual firms, each endowed with a specific valuation that evolves as time progresses. Notably, these firms are classified into one of two growth trajectories: traditional or digital companies.
A distinguishing factor between these two company types is their respective growth rates. Traditional companies manifest a growth rate characterized by a Gaussian distribution with a mean of 0.15 (or 15%) and a standard deviation (\(\sigma\) ) of 0.1. This rate is analogous to the enduring growth rate of the SP500. In contrast, digital companies are modeled to mirror the growth patterns of digital startups, commencing with a robust growth rate of 150%, which then depreciates over a decade to align with the growth rates exhibited by traditional companies.
It is imperative to note that once a company's valuation sinks below a single unit, it is deemed bankrupt, paving the way for its replacement by a new entity.
### Agent Behavior
Agents are initialized with a minimal valuation. Their classification - whether traditional or digital - is determined through a random selection governed by a predetermined probability in the interval [0, 1]. Here, a probability of 0 implies all agents are traditional, whereas a probability of 1 denotes all agents as digital.
During each iteration of the simulation:
1. Agents are activated in a stochastic manner, and their valuation either appreciates or depreciates based on their designated type.
2. Traditional agents experience fluctuations in their valuation derived from profits or losses, which are determined by a percentage drawn from a normal distribution with a mean of 0.15 and a standard deviation (\(\sigma\)) spanning from 0.1 to 1.
3. For digital agents, their growth coefficient is drawn randomly from a normal distribution with a mean of 1.5 and a \(\sigma\) of one, consistent with Table 1.
4. Digital companies experience a consistent decline in their growth rate which is depreciated at every simulation iteration. Their eventual growth rate is discerned by deducting twice the growth rate of traditional companies from the digital companies' rate.
Should an agent's valuation be entirely depleted, it is considered _defunct_ and subsequently substituted by a _fresh_ agent. This new agent type (traditional or digital) is allocated through through random selection, guided by the specific probability set for that simulation run. Consequently, the total number of agents remains unaltered.
Agents must pay taxes on their accrued profits. The tax model is versatile: it can be specified as a uniform rate or adopt a progressive structure with varying rates
set per quantile. These quantiles are determined based on the valuation distribution of the entire agent population during each simulation step.
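A standalone sketch of one simulation period is given below, covering stochastic activation, the two growth regimes, the per-quantile tax, and bankruptcy replacement. The paper's implementation uses Agents.jl; the struct, helper names, and the linear depreciation schedule for digital growth used here are illustrative assumptions intended to be consistent with Table 1.

```julia
using Random, Statistics

mutable struct Firm
    valuation::Float64
    digital::Bool
    age::Int
end

new_firm(p_digital) = Firm(10.0, rand() < p_digital, 0)

# Traditional growth ~ N(0.15, σ_trad); digital growth starts near N(1.5, 1) and is
# depreciated linearly over T_dep periods towards twice the traditional mean rate
# (one plausible reading of the depreciation rule in Table 1).
function growth_rate(f::Firm; σ_trad = 0.1, T_dep = 10)
    f.digital || return 0.15 + σ_trad * randn()
    base, floor_rate = 1.5 + randn(), 2 * 0.15
    frac = min(f.age, T_dep) / T_dep
    return (1 - frac) * base + frac * floor_rate
end

# Progressive tax rate selected by the firm's position in the valuation quartiles.
function tax_rate(v, cuts, rates)
    i = findfirst(q -> v <= q, cuts)
    return isnothing(i) ? rates[end] : rates[i]
end

function step!(firms; p_digital = 0.3, rates = (0.10, 0.15, 0.25, 0.30))
    cuts = quantile(getfield.(firms, :valuation), [0.25, 0.5, 0.75])
    for i in randperm(length(firms))                          # stochastic activation
        f = firms[i]
        gain = f.valuation * growth_rate(f)
        gain > 0 && (gain *= 1 - tax_rate(f.valuation, cuts, rates))
        f.valuation += gain
        f.age += 1
        f.valuation < 1 && (firms[i] = new_firm(p_digital))   # bankruptcy & replacement
    end
end

firms = [new_firm(0.3) for _ in 1:10_000]
foreach(_ -> step!(firms), 1:10)                              # ten periods
```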
## Computational experiments
Our computational experiments involved simulating 10,000 agents across ten periods, repeated over a thousand independent runs. The results we discuss are the averaged outcomes of these runs.
All simulations were coded in Julia 1.9, harnessing the functionality of the agents.jl library (Datseris, Vahdati, and DuBois, 2021).
For a detailed breakdown of the parameters governing our simulations, readers are directed to Table 1.
## Results
In our initial experiments, we see that a low standard deviation (0.1) in the percentage growth (profit and loss) across traditional organizations results in a distribution of gains that resembles a Gaussian distribution, as might be anticipated. As the standard deviation (\(\sigma\)) increases, the distribution shifts. It may come as a surprise that, even with only traditional companies, the distribution becomes more reminiscent of a power law, increasing inequality.
Therefore, we already see that greater diversity in gains, combined with the market's selection mechanism, drives this transformation and steadily increases inequality.
\begin{table}
\begin{tabular}{l l}
**PARAMETER** & **VALUE** \\ \hline
Number of agents & 10,000 \\
Number of Periods & 10 \\
Number of experiments & 1,000 \\
Taxes & 15\% flat \\
 & 10\%, 15\%, 25\%, 30\% per quartile \\
 & 15\%, 25\%, 50\%, 70\% per quartile \\
Profits/losses per period & a random number from a normal distribution of \(\mu=0.15\) and \(\sigma=\{0.1..1\}\) for traditional companies and \(\mu=1.5\) and \(\sigma=1\) for digital companies \\
Depreciation for digital companies & 10 periods at a rate of 1/10 per period; final growth rate = growth rate of digital companies – two times the growth rate of traditional companies \\
Base growth rate of digital companies after depreciation & Two times the one of normal companies \\
Initial valuation & Ten units \\
Percentage of digital companies & 0..1 \\
\end{tabular}
\end{table}
Table 1: Parameters of the simulation
The amplifying effect of markets' evolutionary nature on existing disparities is vividly illustrated in Figure 2. Here, it becomes evident that within a limited number of iterations, a modest Gini coefficient escalates to significant levels.
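The Gini coefficient used throughout these results can be computed directly from the agents' valuations; the following is a small self-contained sketch (not the paper's code, which relies on Agents.jl).

```julia
# Gini coefficient of a vector of nonnegative valuations (standard formula for sorted data).
function gini(v::AbstractVector{<:Real})
    s = sort(v)
    n = length(s)
    cum = cumsum(s)
    return (n + 1 - 2 * sum(cum) / cum[end]) / n
end

gini(fill(1.0, 4))               # 0.0  — perfect equality
gini([0.0, 0.0, 0.0, 100.0])     # 0.75 — one agent holds all the wealth
```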
Figure 1: Agents’ distribution for a growth-rate \(\sigma\) (gains/losses) of 0.1, 0.5, and 1
Figure 2: Increase of inequality as diversity in the distribution of gains increases
For our second round of experiments, we explore this mix of traditional and digital companies, ranging from a very small percentage of digital companies (1%) to 30% and 100%.
In our experiments, we have thus far considered a static framework encompassing two distinct categories of companies: traditional businesses, with a growth rate randomly drawn from a normal distribution \(N\)(0.15, 0.1), and digital businesses, where the growth rate is randomly sampled from \(N\)(1.5, 1), corresponding to the average growth of SP500 companies and the common expected multiplier for digital startups in accelerators. However, it is imperative to recognize that this static portrayal will undergo transformations as digital elements progressively infiltrate traditional enterprises. To capture this evolution, we adjusted the standard deviation \(\sigma\) for traditional companies by randomly sampling it from the positive segment of a normal distribution ranging from _N(0, 0.1)_ to _N(0, 1)_. Consequently, each company now exhibits a distinct \(\sigma\) value for its growth rate. In contrast, the mean \(\mu\) of these distributions (from which the standard deviation of the growth rate is derived) progresses in tandem with their digital metamorphosis.
During the initial phase, when the standard deviation of growth is extracted from _N(0, 0.1)_, our analysis reveals that inequality reaches its zenith around a composition of 30% digital companies and subsequently declines, albeit gradually.
Nevertheless, as the digital transformation progresses, so too does the inequality. Our observations indicate that the inequality contribution from traditional companies, which increasingly adopt digital characteristics, rises in proportion to the standard deviation. An
Figure 3: Inequality in a mix of digital and traditional companies (0.01, 0.3, 1)
unmistakable inverted U-curve emerges in this process. Specifically, for cases where \(\sigma\) is randomly chosen from _N(0, 0.5)_ or less, digital firms are the primary contributors to this inequality.
Once again, the data shows how the evolutionary characteristics of markets exacerbate modest levels of inequality to heightened proportions, thereby supporting our first hypothesis (H1).
### Can progressive taxation solve inequality?
Progressive taxation is the conventional tool that societies utilize to mitigate inequality. In our study, we sought to evaluate the effectiveness of this tool concerning two sources of inequality: one stemming from an increased standard deviation in the percentage of gains from traditional companies and the other from the dichotomy between two different types of organizations in terms of growth, namely traditional and digital.
To gauge the impact of progressive taxation, we established two taxation scales, moderate and high, and applied them to the inter-quantile space of the gains experienced by agents in the preceding period (for the first period, we set quantiles at 10, 20, and 30). We used standard quantiles of 0.25, 0.5, and 0.75 in the process.
In the context of traditional companies, our observations reveal that progressive taxation effectively curtails inequality, which rises only up to a standard deviation of gains of \(\sigma=0.5\) and remains stable beyond that point. The only discernible difference between the two taxation scales is the resultant level of inequality, which diminishes as taxation rates escalate.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
**quantiles** & **0–0.25** & **0.25–0.5** & **0.5–0.75** & **0.75–1** \\ \hline \hline
Taxation – moderate & 10\% & 15\% & 25\% & 30\% \\ \hline
Taxation – high & 15\% & 25\% & 50\% & 70\% \\ \hline
Taxation – extra high & 10\% & 15\% & 30\% & 90\% \\ \hline
\end{tabular}
\end{table}
Table 2: Taxation rates moderate and high
Figure 4: Gini for a mix of digital and traditional organizations
This finding is hardly surprising, as progressive taxation has evolved as an instrument to combat inequality in societies where traditional companies, constrained by decreasing returns on scale, have been the norm.
In contrast, when examining a blend of traditional and digital organizations with different types of growth, we can identify notable distinctions.
With a moderate taxation rate, the impact on inequality is somewhat subdued. While it manages to decelerate the advancement of inequality when the percentage of digital companies is minuscule, and slightly diminishes the overall inequality in all scenarios, the curve's shape and progression stay largely consistent.
However, when the taxation rate is high, the effects become more pronounced. High taxation succeeds in tempering the progression of inequality, flattening it to levels that are analogous to those seen in traditional companies. Yet, when taxation in the top quartile becomes extremely and unrealistically high in an attempt to prevent inequality, we observe that the evolutionary nature of markets triumphs, letting only the top survive and abruptly and unexpectedly increasing the level of inequality as the number of digital high-risk, high-growth agents increases.
We can further extend our analysis by examining the impact of taxation, particularly focusing on the escalated standard deviations in the distributions of growth across individual agents.
Figure 5: Gini across different \(\sigma\) for taxes 10%,15%,25%, 30% and 15%, 25%, 50%, 70%
Figure 6: Gini for a mix of traditional and digital companies for taxations of 10%,15%,25%, 30%, 15%, 25%, 50%, 70% and 10%, 15%, 30%, 90%.
These simulations reveal the interplay between the escalation in inequality, driven by increasing values of the standard deviation of growth, and the dual influences of the evolutionary dynamics of markets and progressive taxation. While these effects resonate with our previous observations, there is an added layer of complexity due to the amplification of inequality resulting from the elevated standard deviation of growth.
In both scenarios, whether assessing the effects of digital transformation or progressive taxation, the emergence of the inverted U-shaped curve remains consistent, punctuated by a tipping point of inequality. This tipping point represents a critical juncture in the model, highlighting a transitional phase where various factors may weigh differently on the resulting inequality dynamics. It emphasizes the subtle balance and interdependence of factors in the complex system of digital transformation, potentially offering pivotal insights for policy interventions and strategic considerations.
Figure 7: _Gini for a mix of traditional and digital companies and different \(\sigma\), for taxations of 10%, 15%, 25%, 30%; 15%, 25%, 50%, 70%; and 10%, 15%, 30%, 90%_
Progressive taxation serves as an effective tool for curbing inequality in societies primarily composed of traditional companies, where the magnifying effect of a selection process that values moderate individual contributions is itself progressive and moderate. However, when two distinct types of growth are introduced--particularly when one type is marked by the high individual growth of certain agents--only truly elevated levels of taxation make a noticeable difference. Only when taxation levels in the last tax bracket reach 70% or 90% do we observe a significant impact, and such rates would be hard to implement in any single state, let alone globally.
This observation aligns with the historical context in which progressive taxation emerged. Originally developed in a world where the distribution of companies leaned towards a normal distribution due to diminishing returns on scale, the progressive taxation system was well-suited to that environment. However, the advent of digital companies has inaugurated a new chapter in growth dynamics, challenging the efficacy of progressive taxation.
Consequently, our findings do not negate our second hypothesis. However, they do underscore the need for policy measures aimed at combating inequality stemming from digital transformations to consider the potential of widespread digital technology adoption. By doing so, it is possible to attenuate the diversity in growth trajectories and, at least partially, address the fundamental causes of pronounced disparities.
### Tech Disruption, a Process of Social Adoption where Inequality is an emergent property
Up to this point, we concentrated on understanding the consequences of market dynamics amplifying inequality and the use of progressive taxation as a moderator. However, technological adoption emerges from a complex social process demanding societal validation. Organizations are willing to undertake the risk of technology adoption based largely on observed advantages among a significant number of peers and competitors. This societal mechanism for adopting technology is increasingly influenced by enhanced connectivity, a byproduct of globalization. We simulate this phenomenon using a very small cohort of digital agents that act as catalysts for technological adoption within increasingly interconnected networks.
The foundational structure of our model remains largely consistent. It incorporates a variety of agents categorized into two distinct groups: the conventional agents whose growth per cycle is derived from a normal distribution (mean 0.15, \(\sigma\) 0.1), and the digital organizations with growth rates also stemming from a normal distribution, albeit with a mean of 1.5 and \(\sigma\) of 1. Notably, in our initial setup, the digital organizations account for a mere 1% of the total--a foundational seed.
These agents operate within a Newman-Watts-Strogatz network continuum, transitioning from a regular to a small-world setup and ultimately to a random configuration, modulated by the parameter \(\beta\). One primary objective is to determine the starkly contrasting behaviors the model exhibits based on the underlying network topology, with inequality manifesting as an emergent property of the model. Yet, it is crucial to understand that technology adoption is not a simplistic, linear process. It is inherently social, steered by societal confirmation. In every iteration, agents assess their network peers with higher valuations. If a significant number of them, defined by a threshold, have integrated digital technologies, the agent in question follows suit; otherwise, they remain unchanged.
Moreover, an evolutionary dynamic is at play. Agents that deplete their capital are phased out and replaced by new entrants.
### Agent Behavior
Once more, agents commence with a minimal valuation. This round, however, positions them within nodes of a Newman-Watts-Strogatz network, with a \(\beta\) value spanning from 0.001 to 1. This spectrum encompasses topologies from regular to small-world, extending to random configurations. A mere seed, constituting 0.01% of the agents and designated as digital, is randomly dispersed across the network.
During each simulation iteration the following steps are followed:
1. **Activation**: Agents are stochastically activated, with their valuations either increasing or shrinking based on their assigned type.
2. **Financial Fluctuations**: Analogous to the previous model, agents witness gains or losses adhering to established rules. Positive gains incur tax liabilities, as previously delineated.
3. **Neighborhood Assessment**: Agents gauge their financial standing within their proximate network of distance one. Should their valuation lag behind their neighbors' average, they contemplate embarking on a digital transformation journey.
4. **Transformation Decision**: If an agent is mulling over a digital overhaul and the count of neighboring digital agents meets or exceeds a set threshold, the agent transitions to a digital one. While their digital status is revamped, their age remains unchanged, and thus, the accompanying age-related depreciation applies.
As with the prior model, should an agent's valuation run dry, it is rendered "defunct." A replacement agent is subsequently birthed, adhering to the original initialization parameters, ensuring a consistent agent count throughout.
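The network construction and the social-validation adoption rule can be sketched as follows. The generator below builds a Newman-Watts-type graph by adding shortcuts to a ring lattice, and the adoption test mirrors the threshold rule described above; the seed size, helper names, and exact shortcut scheme are illustrative assumptions rather than the authors' implementation.

```julia
using Graphs, Random, Statistics

# Ring lattice with k nearest neighbours plus random shortcuts added with probability β
# per lattice edge (a simple Newman–Watts-style construction).
function newman_watts(n, k, β; rng = Random.default_rng())
    g = SimpleGraph(n)
    for v in 1:n, j in 1:k ÷ 2
        add_edge!(g, v, mod1(v + j, n))
    end
    for _ in 1:ne(g)
        rand(rng) < β && add_edge!(g, rand(rng, 1:n), rand(rng, 1:n))
    end
    return g
end

# An agent below its neighbourhood's average valuation adopts digital technology
# when at least `threshold` richer neighbours are already digital.
function consider_adoption!(digital, valuation, g, v; threshold = 5)
    nb = neighbors(g, v)
    isempty(nb) && return nothing
    valuation[v] >= mean(valuation[nb]) && return nothing
    richer_digital = count(u -> valuation[u] > valuation[v] && digital[u], nb)
    richer_digital >= threshold && (digital[v] = true)
    return nothing
end

n = 1_000
g = newman_watts(n, 100, 0.2)
valuation = 10.0 .+ rand(n)                               # illustrative valuations
digital = falses(n); digital[randperm(n)[1:10]] .= true   # small seed of digital agents
for v in randperm(n)                                      # one adoption sweep
    consider_adoption!(digital, valuation, g, v)
end
```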
### Computational experiments
In this simulation scenario, our model involves a pool of 1,000 agents. The results we present are derived from the average outcomes of 1,000 individual experiments. Given the diffusion process in play, we extended the number of periods to 100, making appropriate adjustments to the depreciation rate. Each agent is connected to 100 neighbors within a Newman-Watts-Strogatz network, modulated by varying values of \(\beta\)--a parameter pivotal in shaping the network topology.
The entire simulation framework has been programmed using Julia 1.9. This foundation is further bolstered by the integration of the agents.jl library (Datseris, Vahdati, and DuBois, 2021) in conjunction with the graphs.jl library to facilitate network functionalities.
For readers seeking an in-depth understanding of the simulation parameters, the subsequent table provides a comprehensive overview.
\begin{tabular}{l l}
**PARAMETER** & **VALUE** \\ \hline
Number of agents & 1,000 \\
Number of experiments & 1,000 \\
Number of Periods & 100 \\
Taxes & 15\% flat \\
 & 10\%, 15\%, 25\%, 30\% per quartile \\
 & 15\%, 25\%, 50\%, 70\% per quartile \\
Profits/losses per period & a random number from a normal distribution of \(\mu=0.15\) and \(\sigma=0.1\) for traditional companies and \(\mu=1.5\) and \(\sigma=1\) for digital companies \\
Depreciation for digital companies & 30 periods at a rate of 1/30 per period \\
Base growth rate of digital companies after depreciation & The difference between digital companies' growth and two times the one of traditional companies \\
Initial valuation & Ten units \\
Initial percentage of digital companies & 0.01 (100 digital organizations) \\
Evolutionary mechanism & Agents iterate randomly for the established number of periods. At each period they age 1 and, in the case of digital agents, depreciate their growth rate. They earn gains/losses according to their different growth rates. If they exhaust their capital, they die and are replaced by a new agent \\
Network where agents are situated & Newman-Watts-Strogatz with \(\beta\)=\{0.001, 0.01, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.7, 1\} \\
Number of neighbors & 100 \\
Mechanism of adoption & Agents examine all neighbors at distance 1 with a higher valuation than themselves; if among them the digital ones are equal to or greater than the aspiration level threshold, they adopt digital technologies \\
Aspiration level threshold & the closest integer of a random number from a normal distribution with \(\mu=5\) and \(\sigma=0.1\); also \(\mu=\{2, 7\}\) and \(\sigma=\{0, 0.5, 1, 1.5, 2, 2.5\}\) are considered \\
\end{tabular}
Table 3: Parameters of the dynamic model
## Results
The extent and nature of connectivity play a decisive role in determining patterns of technology adoption and, subsequently, emergent inequality, as can be observed in the results of our experiments presented in Fig. 8. Specifically, the connectivity within a network influences how technologies spread across agents, affecting the adoption rate and the resulting inequality. The results correspond to a threshold drawn from a normal distribution with a mean of five agents and a standard deviation of 0.1. It is important to note that these findings stabilize after 30-50 iterations, even though the results presented here are based on 100 iterations.
Technology adoption is subdued when connectivity remains limited and agents predominantly cluster within their inherent communities. The resulting economic landscape is one where inequality is tempered, mainly due to the restricted diffusion of technological advantages within insular clusters. Essentially, these close-knit communities exhibit homogeneity in growth rates, thereby keeping disparity in check.
However, as connectivity evolves, transitioning from limited connections to the more intertwined structure of small worlds, there is an observable uptick in the rate of digital adoption. As agents become more interlinked, information and trends disseminate more rapidly. This increasing interconnectivity facilitates the spread of digital technology and, in tandem, amplifies inequality. The expanded connections lead to a more diverse distribution of growth rates, with agents that adopt digital technologies pulling ahead of their peers.
The real tipping point arrives when networks veer towards randomness, specifically when the parameter \(\beta\) surpasses 0.2. At this juncture, the adoption rate not only increases but engulfs the entirety of the network, causing a widespread shift. The technology percolates across all agents, irrespective of their initial stance or community. Such random networks break down the barriers of insular clusters, making it conducive for sweeping technological adoption and, consequently, pronounced inequality.
In essence, the structure and level of connectivity - average path length - within a network serve as determinants for technology adoption, with cascading effects on emergent economic inequalities. Therefore, we support our third hypothesis.
### Impact of progressive taxation
Once more, our focus shifts to assessing the repercussions of progressive taxation within this model. We adopt an identical scale to the one utilized in the preceding model, as detailed in Table 2.
Initial observations indicate marginal differences between a flat tax rate of 15% and a progressive scheme, starting at 10% in the first quartile and peaking at 30% in the last. These variations manifest minimally in terms of inequality and the proliferation of digital technologies. A discernible deviation emerges only when network topology exhibits clustering with considerable path lengths. Nevertheless, inequality levels harmonize as network connectivity intensifies and average path lengths diminish.
A more pronounced distinction becomes apparent with an escalated tax regimen: an initiation of 15% in the first quartile, culminating at an aggressive 70% in the final quartile. Under this scheme:
1. The uptake of digital technologies remains impervious to the heightened tax slabs, mirroring previous adoption patterns.
2. The Gini coefficient, indicative of inequality, evidences a pronounced decline, especially when \(\beta\) is below 0.2.
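To make the tax treatment and the inequality measure concrete, the following sketch (in the same Julia setting as above) implements quartile-based schedules and the Gini coefficient. Only the rates come from the paper; assigning an agent's quartile by ranking current valuations, and the helper names, are our assumptions.

```julia
using Statistics

const MODERATE   = (0.10, 0.15, 0.25, 0.30)   # per-quartile rates, first scheme
const AGGRESSIVE = (0.15, 0.25, 0.50, 0.70)   # per-quartile rates, escalated scheme

# Quartile of agent i, obtained by ranking current valuations (1 = poorest).
function quartile(vals::Vector{Float64}, i::Int)
    r = count(<(vals[i]), vals) / length(vals)   # fraction of strictly poorer agents
    return 1 + min(3, floor(Int, 4r))
end

progressive_tax(gain, vals, i, rates) = gain > 0 ? rates[quartile(vals, i)] * gain : 0.0

# Gini coefficient of the valuation distribution (0 = perfect equality).
function gini(vals::Vector{Float64})
    x = sort(vals)
    n = length(x)
    return sum((2i - n - 1) * x[i] for i in 1:n) / (n * sum(x))
end
```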
Figure 8: Dynamic model
The stark reduction in inequality, concurrent with widespread digital technology adoption, defies intuitive expectations. While critics might argue that a top-tier tax rate of 70% could be politically unfeasible in democratic contexts, the underlying implication is significant: it appears feasible to curtail inequality without stifling the adoption of groundbreaking technologies. These results show that taxation does have an impact, but a limited one, which adds nuance to our second hypothesis while pointing in the same direction.
### How a Few Misfits can Change the World
Drucker's well-noted assertion that "Culture eats strategy for breakfast" underscores the significance of cultural underpinnings in sociotechnical processes, including technology adoption. In our simulations, cultural dynamics are parameterized by the aspiration level threshold. So far, our models employ a broadly consistent threshold, displaying minor deviations among the agent population, specifically with a standard deviation of 0.1. This exemplifies a homogeneous cultural milieu with minimal diversity.
While such homogeneity might have been prevalent in earlier epochs, contemporary societies of the 21st century exhibit pronounced diversity. Nevertheless, certain societies manifest heightened entrepreneurial tendencies - not necessarily because the majority embodies these characteristics, but due to a fraction of audacious entrepreneurs who embrace greater risks.
The policy importance of this cannot be overstated. While effecting widespread cultural change may be beyond the scope of any policy intervention, introducing a small cohort of entrepreneurs - a few misfits - and connecting them to global technological hubs is achievable. In our simulations, we model these types of interventions by expanding \(\sigma\), leading to a broader distribution of aspiration-level thresholds. Within the constraints of the model, these entrepreneurial agents are offset by a corresponding number of more risk-averse agents, accounting for the worst-case scenario of a counter-reaction, and thereby maintaining the overall population average.
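In code, this intervention changes only how the per-agent adoption threshold is drawn. A small sketch in the same Julia setting follows; the clamping at 1 is our assumption, added so that every threshold remains meaningful.

```julia
using Distributions

# Per-agent adoption thresholds: same mean, wider spread as σ grows.
draw_thresholds(n::Int; μ = 5.0, σ = 0.1) = [max(1, round(Int, rand(Normal(μ, σ)))) for _ in 1:n]

homogeneous = draw_thresholds(1_000; σ = 0.1)   # near-uniform culture
diverse     = draw_thresholds(1_000; σ = 2.5)   # a few misfits, offset by a few sceptics
```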
Figure 9: Impact of progressive taxation on the dynamic model
Figure 10 shows the effect of this cultural shift on our agent-based community. Agents transition from a highly homogeneous aspiration level threshold to a more diverse one, specifically moving from a standard deviation of 0 to 2.5. With a minimal standard deviation, adoption of the technological shock is scant, representing more conservative societies. However, as the distribution flattens, adoption intensifies, resembling patterns seen with lower thresholds. Therefore, we can support our fourth hypothesis.
A pivotal distinction exists between merely reducing the threshold and amplifying the standard deviation. The latter augments both inequality and adoption across all network topologies, leading to a more uniform adoption curve.
The average path length of the network influences the intricate interplay of adoption and ensuing inequality. However, the diversity in adoption thresholds introduces intriguing dynamics.
This counterintuitive result holds profound implications for innovation policy formulation. While engineering a comprehensive societal cultural shift is challenging, our findings suggest an alternative: introducing a small number of change agents which, despite the potential conservative backlash, appears sufficient to galvanize widespread technological adoption.
Figure 10: Gini coefficient & diffusion for an aspiration threshold of 9 with \(\sigma\) = {0, 1.5, 2, 2.5}
## Conclusions
In this paper, we have concentrated on the dual mechanisms through which technological disruptions give rise to inequality. First, we consider the evolutionary dynamics of markets, in which survival of the fittest prevails, often resulting in a polarized population comprising established market leaders and emerging--hence smaller--organizations. Second, we delve into the phenomena of technological diffusion and disruption, conceptualizing them as social adoption processes.
We then examined historical remedies aimed at facilitating wealth redistribution, specifically the role of progressive taxation, assessing its effectiveness in today's hyperconnected global landscape. This environment is dominated by high-growth, high-risk companies characterized by near-zero marginal costs and exceptional scalability. Our analysis reveals the limited impact of progressive taxation, which would need to be significantly escalated as disparities in growth trajectories widen. Such a move is not only challenging to justify ex-ante in democratic societies but also fraught with unintended consequences that could negatively affect the broader economy.
In relation to the influence of market evolutionary dynamics in shaping agent distributions, particularly in contexts of significant growth trajectory disparities, we have elaborated on how these dynamics magnify existing disparities, leading to Pareto-like distributions. Our findings show that just a few iterations are sufficient to significantly alter the market landscape, exacerbating existing inequalities.
These outcomes imply that while progressive taxation may serve as a viable mitigating measure, addressing the root issue--namely the disparities in growth trajectories--is a more effective solution.
Digital disruptions don't emerge ex-ante but evolve through a diffusion process deeply rooted in social construction, where social validation and ensuing bandwagon effects play a pivotal role. Our model reveals that this process is critically influenced by two key factors: system connectivity and an entrepreneurial culture willing to embrace new technologies.
Concerning connectivity, our stable model demonstrates that entering the "small world" zone of the network, characterized by low path lengths, is essential for triggering cascading diffusion. Only then do we attain saturation points that invariably lead to heightened inequality. Our findings indicate that complex contagions, akin to those involved in technology adoption, require network topologies that are conducive to diffusion and adoption. Mere weak ties or simple links are insufficient; wide bridges connecting disparate communities must be established for new technologies to permeate these groups effectively.
While connectivity is a crucial factor, it alone is insufficient for the diffusion of new technologies. A cultural willingness to adopt such technologies is equally indispensable. Changing the prevailing culture, particularly shifting the mean level of risk acceptance, has proven to be a daunting challenge for many societies. However, our research suggests that such
sweeping cultural change may not be necessary. Instead, flattening the distribution by introducing a small cohort of entrepreneurs (a few misfits) willing to take risks can generate the necessary social validation that facilitates widespread adoption.
This type of policy intervention is well within the grasp of many societies, achievable through acceleration programs, business schools, and initiatives geared towards innovation and entrepreneurship, often manifested in the form of specialized 'factories' or 'programs.' While such interventions may not fully compensate for limited network connectivity, they can significantly accelerate technology adoption within societies, leading to bandwagon effects in technology uptake.
In summary, while loading the economy with high levels of taxation on largely diverse growth trajectories may mitigate inequality, it comes at the substantial cost of dampening overall economic growth and offers limited long-term efficacy. Tackling the root issue--the disparity in these growth trajectories--by accelerating technology adoption provides a more enduring and sustainable impact. Our research shows that targeted interventions can be transformative; introducing just a few risk-takers - a few misfits - into the system can indeed change the world. |
2310.20288 | The Cohen--Lyndon property in non-metric small-cancellation | We show that the Cohen--Lyndon property holds for classical $C(6)$
small-cancellation quotients, thus generalising the analogous result from the
$C'(\frac{1}{6})$ setting, and answering a 1966 question of Lyndon. | Macarena Arenas | 2023-10-31T08:59:48Z | http://arxiv.org/abs/2310.20288v1 | # The Cohen-Lyndon property in non-metric small-cancellation
###### Abstract.
We show that the Cohen-Lyndon property holds for classical \(C(6)\) small-cancellation quotients, thus generalising the analogous result from the \(C^{\prime}(\frac{1}{6})\) setting, and answering a 1966 question of Lyndon.
Key words and phrases: Small-cancellation, Non-positive curvature, Cohen-Lyndon property.

2010 Mathematics Subject Classification: 20F06, 20F67, 20F65.

The author was supported by a Cambridge Trust & Newnham College Scholarship, and by the Denman Baynes Junior Research Fellowship at Clare College, Cambridge.
in Definition 2.5. The smallest numbers for which the \(C^{\prime}(\frac{1}{n})\) and \(C(n)\) conditions produce a useful theory are \(n=\frac{1}{6},n=6\), and \(n=7\), but there are significant differences between the properties that they are known to satisfy in each case, and between the methods that can be used to approach them.
The \(C^{\prime}(\frac{1}{n})\) condition implies the \(C(n)\) condition, but in general, the converse is not true: the "purely non-metric" \(C(n)\) condition does not imply the \(C^{\prime}(\frac{1}{n^{\prime}})\) condition for any choices of \(n\geq 2\) and \(n^{\prime}\geq 2\). As mentioned earlier in this introduction, while the \(C^{\prime}(\frac{1}{6})\) and \(C(7)\) conditions imply hyperbolicity in the finitely presented case, one can easily produce examples of finitely presented \(C(6)\) groups which are not hyperbolic. Another striking difference between \(C(6)\) groups and their metric \(C^{\prime}(\frac{1}{6})\) cousins is that the \(C^{\prime}(\frac{1}{6})\) condition implies cocompact cubulability, while the question of whether there is any \(n\) for which all \(C(n)\) groups are cubulated is still open in general [20].
The Cohen-Lyndon property can be defined for any pair \((G,\mathcal{H})\), or equivalently, for any quotient \(G/\langle\!\langle\mathcal{H}\rangle\!\rangle\), where \(G\) is an arbitrary group and \(\mathcal{H}\) is a collection of subgroups of \(G\). Informally, it encodes when the subgroups in the collection \(\mathcal{H}\) and their conjugates are "as independent as possible". It was first shown to hold for one-relator quotients and \(C^{\prime}(\frac{1}{6})\) small-cancellation quotients of free groups in [1], and was later extended to the setting of free products of locally indicable groups in [1], and to one-relator free products of high-exponent in [1]. More recently, a version of the Cohen-Lyndon property for Dehn fillings was obtained in [21].
The Cohen-Lyndon property is useful for many reasons. Since it constitutes an explicit description of the normal closure of a collection of subgroups \(\{H_{i}<G\}\), it provides concrete information about the quotient \(G^{\prime}=G/\langle\!\langle\cup_{i\in I}H_{i}\rangle\!\rangle\) - for instance, it immediately yields lower bounds for the cohomological dimension of \(G^{\prime}\) in terms of the cohomological dimensions of \(G\) and of the \(H_{i}\in\mathcal{H}\), and, when \(G\) is a free group, it implies that the relation module \(\langle\!\langle\mathcal{H}\rangle\!\rangle/\langle\!\langle\mathcal{H} \rangle\!\rangle^{\prime}\) is a sum of cyclic modules [1, 2].
We derive the Cohen-Lyndon property from a homotopical statement that is slightly stronger when the quotient in question has torsion - namely that the Cayley graph associated to the \(C(6)\) presentation is homotopy equivalent to a wedge of cycles representing the relators and their translates, ranging over left transversals as in the statement of Theorem 1.1. See Theorem 3.14 for the precise formulation. This strengthening of the group-theoretical Cohen-Lyndon property is, to the best of our knowledge, a new result even in the \(C^{\prime}(\frac{1}{n})\) case.
Thus, this text serves a trifold purpose: it extends the results in [1] to the setting of non-metric small-cancellation, it introduces a topological viewpoint to proving that the Cohen-Lyndon property holds - a viewpoint which, we hope, can be used in other settings -, and it functions as a 'model case' for the proof of the main theorem in [1], which illuminates the connection between the homotopical
form of the Cohen-Lyndon property and asphericity for sufficiently good cubical small-cancellation quotients of cubulated groups.
_Remark 1.2_.: The results in the present work extend almost verbatim to the more general setting of graphical \(C(6)\) quotients (see [10]), by replacing the bouquet \(B_{n}\) with the defining graph in the graphical presentation. To avoid (excessive) technicalities, we have chosen not to structure our exposition around that version of the theory. In [1], we prove versions of these results for sufficiently good cubical small-cancellation quotients (in the sense of [21]); we note that the graphical and classical versions of the theory are special cases of the cubical version.
### Structure and strategy:
In Section 2 we present the necessary background regarding the Cohen-Lyndon property and small-cancellation theory. In Section 3 we describe an ordering on the cycles of the Cayley graph that takes into account the structure of a certain simplicial complex associated to the presentation. A key property of this ordering is that it respects (graph-theoretical) distance with respect to a fixed "origin"; this is proven in Lemma 3.11. The main technical result is Lemma 3.12, which inductively "rebuilds" the Cayley graph in a manner consistent with the ordering. These results, together with a couple more lemmas, are then assembled to deduce Theorem 3.14, and hence Theorem 1.1.
### Acknowledgements
I am grateful to Henry Wilton for many suggestions, to Daniel Wise for pointing out a reference, and to Sami Douba for providing stylistic guidance.
## 2. Background
### Small-cancellation notions
We adopt a topological, rather than combinatorial, viewpoint for defining classical small-cancellation theory. This is mostly a matter of convenience - the topological viewpoint is more suitable for the method of our proof, and generalises more naturally to other versions of the theory. We emphasize that in the classical setting, both viewpoints are equivalent.
Let \(P=\langle S|R\rangle\) be a presentation for a group \(G\), and let \(\mathcal{X}(P)\) denote its presentation complex. This is a 2-complex that has a single vertex, an edge for each \(s\in S\), and a 2-cell for each \(r\in R\), so that \(\pi_{1}\mathcal{X}(P)=G\). The Cayley graph \(Cay(G,S)\) is the 1-skeleton of the universal cover \(\widetilde{\mathcal{X}(P)}:=\widetilde{\mathcal{X}}(P)\). The definitions below are stated in terms of arbitrary 2-complexes, but the reader may take \(X=\widetilde{\mathcal{X}}(P)\) for the remainder of this section.
A map \(f:X\longrightarrow Y\) between 2-complexes is _combinatorial_ if it maps open cells homeomorphically to open cells. A complex is _combinatorial_ if all attaching maps are combinatorial (possibly after subdividing).
**Definition 2.1** (Pieces).: Let \(X\) be a combinatorial 2-complex. A non-trivial combinatorial path \(p\to X\) is a _piece_ if there are 2-cells \(C_{1},C_{2}\) such that
\(p\to X\) factors as \(p\to\partial C_{1}\to X\) and \(p\to\partial C_{2}\to X\), but there does not exist a homeomorphism \(\partial C_{1}\to\partial C_{2}\) such that the following diagram commutes
**Definition 2.2** (Disc diagram).: A _disc diagram_\(D\) is a compact contractible combinatorial 2-complex, together with an embedding \(D\hookrightarrow S^{2}\) that induces a cell structure on \(S^{2}\). Viewing the sphere as the 1-point compactification of \(\mathbb{R}^{2}\), this cellular structure consists of the 2-cells of \(D\) together with an additional 2-cell containing the point at infinity. The _boundary path_\(\partial D\) is the attaching map of the 2-cell at infinity. A _disc diagram in a complex X_ is a combinatorial map \(D\to X\). The _area_ of a disc diagram \(D\) is the number of 2-cells in \(D\).
A disc diagram \(D\) might map to \(X\) in an 'ineffective' way: it might, for instance, be quite far from an immersion. Sometimes it is possible to replace a given diagram with a simpler diagram \(D^{\prime}\) having the same boundary path as \(D\), as we now explain.
**Definition 2.3**.: A _cancellable pair_ in \(D\) is a pair of 2-cells \(C_{1},C_{2}\) meeting along a path \(e\) such that the following diagram commutes:
A cancellable pair leads to a new disc diagram by removing \(e\cup Int(C_{1})\cup Int(C_{2})\) and then glueing together the paths \(\partial C_{1}-e\) and \(\partial C_{2}-e\). This procedure results in a diagram \(D^{\prime}\) with \(\mathsf{Area}(D^{\prime})=\mathsf{Area}(D)-2\) and \(\partial D^{\prime}=\partial D\). A diagram is _reduced_ if it has no cancellable pairs.
**Definition 2.4** (Annular diagrams and collared diagrams).: An _annular diagram_\(A\) is a compact combinatorial 2-complex homotopy equivalent to \(S^{1}\), together with an embedding \(A\hookrightarrow S^{2}\), which induces a cellular structure on \(S^{2}\). The _boundary paths_\(\partial_{in}A\) and \(\partial_{out}A\) of \(A\) are the attaching maps of the two 2-cells in this cellulation of \(S^{2}\) that do not correspond to cells of \(A\). An _annular diagram in a complex_\(X\) is a combinatorial map \(A\to X\). A disc diagram \(D\to X\) is _collared_ by an annular diagram \(A\to X\) if \(\partial D=\partial_{in}A\).
An _arc_ in a diagram is a path whose internal vertices have valence 2 and whose initial and terminal vertices have valence \(\geq 3\). A _boundary arc_ is an arc that lies entirely in \(\partial D\).
**Definition 2.5** (Small-cancellation conditions).: A complex \(X\) satisfies the \(C(n)\) condition if for every reduced disc diagram \(D\to X\), the boundary path of each \(2\)-cell in \(D\) either contains a non-trivial boundary arc, or is the concatenation of at least \(n\) pieces. The complex \(X\) satisfies the \(C^{\prime}(\frac{1}{n})\) condition if for each \(2\)-cell \(R\to X\) and each piece \(p\to X\) which factors as \(p\to R\to X\), we have \(|p|<\frac{1}{n}|\partial R|\).
When a complex \(X\) satisfies sufficiently good small-cancellation conditions, then it is possible to classify the reduced disc diagrams \(D\to X\) in terms of a few simple behaviours exhibited by their cells.
**Definition 2.6** (Ladders, shells, and spurs).: A disc diagram \(L\) is a _ladder_ if it is the union of a sequence of closed \(1\)-cells and \(2\)-cells \(C_{1},\ldots,C_{n}\), such that for \(1<j<n\), there are exactly two components in \(L-C_{j}\), and exactly one component in \(L-C_{1}\) and \(L-C_{n}\). Moreover, if \(C_{i}\) is a \(1\)-cell then it is not contained in any other \(C_{j}\).
A _shell_ of \(D\) is a \(2\)-cell \(C\to D\) whose boundary path \(\partial C\to D\) is a concatenation \(qp_{1}\cdots p_{k}\) for some \(k\leq 3\) where \(q\) is a boundary arc in \(D\) and \(p_{1},\ldots,p_{k}\) are non-trivial pieces in the interior of \(D\). The arc \(q\) is the _outerpath_ of \(C\) and the concatenation \(p_{1}\cdots p_{k}\) is the _innerpath_ of \(C\). A _spur_ is a vertex of degree \(1\) on \(\partial D\).
We now state the two fundamental results of small-cancellation theory. Proofs can be found in [10, 11], for instance.
**Theorem 2.7** (Greendlinger's Lemma).: _Let \(X\) be a \(C(6)\) complex and \(D\to X\) be a minimal area disc diagram, then either_
1. \(D\) _is a single cell,_
2. \(D\) _is a ladder,_
3. \(D\) _has at least three shells and/or spurs._
**Theorem 2.8** (The Ladder Theorem).: _Let \(X\) be a \(C(6)\) complex and \(D\to X\) be a minimal area disc diagram. If \(D\) has exactly \(2\) shells or spurs, then \(D\) is a ladder._
Figure 1. An annular diagram collaring a disc diagram in a \(C(6)\) complex.
## 3. Main Theorem
We start with the following remark, which we state and prove in a way that is tailored exactly to our applications, and which is known to experts in other similar settings (see for instance [20, 5.6+5.7]):
**Lemma 3.1**.: _Let \(X\) be a \(C(6)\) small-cancellation complex. Then the intersection between any two \(2\)-cells of \(X\) is either empty or contractible._
Proof.: Assume that \(C_{1}\cap C_{2}\neq\emptyset\). To show that \(C_{1}\cap C_{2}\) is connected, let \(a,b\) be vertices of \(C_{1}\cap C_{2}\) and let \(\alpha\to C_{1},\beta\to C_{2}\) be paths with endpoints \(a,b\). Furthermore, choose \(\alpha,\beta\) to be geodesic and so that the disc diagram \(D\) bounded by \(\alpha\beta^{-1}\) has minimal area. Consider the disc diagram \(D_{+}\) obtained by attaching \(C_{1}\) and \(C_{2}\) to \(D\) along \(\alpha\) and \(\beta\). Then \(D_{+}\) is a ladder by Theorem 2.8. Now, since \(D\) was assumed to be minimal area, then Greendlinger's Lemma implies that \(D\) is either a single \(0\)-cell, a ladder, or has at least three shells or spurs.
If \(D\) has a shell \(C\) this contradicts the \(C(6)\) condition, as its outerpath is then the concatenation of at most \(2\) pieces (the intersections of \(C\) with \(C_{1}\) and \(C_{2}\)). Thus, \(D\) is either a single vertex, in which case \(a=b\), or \(D\) has exactly two spurs (in the ladder case) or at least three spurs (in the general case). If \(D\) has \(\geq 3\) spurs, then at least one of these lies either on \(\alpha\) or \(\beta\), contradicting that these paths are geodesic. Thus \(D\) has exactly \(2\) spurs, so is a degenerate ladder, and \(\alpha=\beta\).
It is now clear that \(C_{1}\cap C_{2}\) is simply-connected (and thus contractible). Indeed, we have shown that \(C_{1}\cap C_{2}\) is a piece, and the \(C(6)\) condition states that no essential cycle in \(X\) is the concatenation of \(<6\) pieces, so in particular a single piece cannot be essential.
We return to the setting of group presentations. A presentation \(P\) is a _\(C(n)\) small-cancellation presentation_ if \(\mathcal{X}(P)\) satisfies the \(C(n)\) condition.
Mainly to establish the notation that will be used later on, we review the construction of the Cayley complex associated to a group presentation. Let \(P=\langle s_{1},\ldots,s_{n}\mid r_{1},\ldots,r_{k}\rangle\) and let \(B_{n}\) denote a bouquet of \(n\) loops labelled by the generators \(s_{1},\ldots,s_{n}\). Recall that the _Cayley complex_\(\widetilde{\mathcal{X}}(P)\) for \(P\) is the universal cover of the complex obtained by coning-off the cycles \(c_{1}\to B_{n},\ldots,c_{k}\to B_{n}\) corresponding to the relations \(r_{1},\ldots,r_{k}\) in \(B_{n}\). The subgroup \(ker(F_{n}\to G(P))=\langle\langle r_{1},\ldots,r_{k}\rangle\rangle\) is associated to a regular covering space \(\hat{B}_{n}\to B_{n}\), and \(\widetilde{\mathcal{X}}(P)\) is obtained from \(\hat{B}_{n}\) by coning-off the set \(\{g\tilde{c_{i}}\}_{i\in I,g\langle r_{i}\rangle\in F_{n}/\langle r_{i}\rangle}\). In other words, \(\hat{B}_{n}\) is the Cayley graph of \(G(P)\).
Before stating our main result, we make a standard observation, which we prove for the sake of completeness.
**Lemma 3.2**.: _Let \(P=\langle s_{1},\ldots,s_{n}\mid r_{1},\ldots,r_{k}\rangle\) be a \(C(6)\) small-cancellation presentation. Then each lift \(g\tilde{c_{i}}\to\widetilde{\mathcal{X}}(P)\) of \(c_{i}\to\mathcal{X}(P)\) embeds in \(\widetilde{\mathcal{X}}(P)\)._
Proof.: Assume \(g\tilde{c_{i}}\to\widetilde{\mathcal{X}}(P)\) is not an embedding, and let \(\sigma\) be a non-closed subpath of \(c_{i}\) whose induced lift \(\tilde{\sigma}\to\widetilde{\mathcal{X}}(P)\) factors through \(g\tilde{c_{i}}\to\widetilde{\mathcal{X}}(P)\) and bounds a disc diagram \(D\), which we can assume to be reduced. Let \(D^{\prime}\) be a disc diagram with boundary path \(\sigma\beta^{-1}\), and finally let \(D^{\prime\prime}=D\cup D^{\prime}\). Since \(D^{\prime}\) consists of a single 2-cell \(C\), \(D^{\prime\prime}\) is still reduced, and since the endpoints of \(\sigma\) are assumed to be distinct, Theorem 2.7 implies that \(D^{\prime\prime}\) is either a ladder or has at least 3 spurs. Both cases lead to a contradiction, since the only shell on \(\partial D^{\prime\prime}\) is \(C\).
_Notation 3.3_.: In view of Lemma 3.2, we may drop the "\(\sim\)" from the notation and write \(gc_{i}\) to denote a translate \(g\tilde{c_{i}}\) of a cycle \(\widetilde{c_{i}}\) in \(\hat{B}_{n}\).
The rest of this paper is dedicated to proving that the Cohen-Lyndon property holds for \(C(6)\) quotients of free groups. This is Theorem 1.1 from the introduction, which we restate below.
Recall that a subset \(T\subset G\) is a _left transversal for \(H\) in \(G\)_ if and only if every left coset of \(H\) contains exactly one element of \(T\). Let \(S=\{s_{1},\ldots,s_{n}\}\) and let \(r_{i}\) be a word on \(S\cup S^{-1}\). We use the notation \(N(\langle r_{i}\rangle)\) to denote the normaliser of \(\langle r_{i}\rangle\) in the free group \(F(S)\).
**Theorem 3.4**.: _Let \(P=\langle s_{1},\ldots,s_{n}\mid r_{1},\ldots,r_{k}\rangle\) be a \(C(6)\) small-cancellation presentation. Then \((\langle s_{1},\ldots,s_{n}\rangle,\{\langle r_{i}\rangle\}_{i})\) has the Cohen-Lyndon property. That is, there exist left transversals \(T_{i}\) of \(N(\langle r_{i}\rangle)\langle\!\langle r_{1},\ldots,r_{k}\rangle\!\rangle\) in \(F(S)\) such that_
\[\langle\!\langle r_{1},\ldots,r_{k}\rangle\!\rangle=*_{i\in I,t\in T_{i}} \langle r_{i}\rangle^{t}.\]
We now introduce the objects and constructions derived from the presentation \(P=\langle s_{1},\ldots,s_{n}|r_{1},\ldots,r_{k}\rangle\) that will be used in the proof of this theorem.
**Definition 3.5** (The structure graph).: Define the _structure graph_\(\Lambda\) of the presentation complex \(\mathcal{X}(P)\) as follows. The vertex set \(V(\Lambda):=V\) of \(\Lambda\) has two types of vertices:
1. The first type corresponds to translates \(gc_{i}\) of the cycles \(c_{1},\ldots,c_{k}\) in \(\hat{B}_{n}\). The set of vertices of this type will be denoted \(V_{T}\).
2. To describe the second type, we start by defining the _untethered hull_ of \(\hat{B}_{n}\). This is the subcomplex \(\mathcal{F}\) of \(\hat{B}_{n}\) consisting of all 1-cells of \(\hat{B}_{n}\) that do not lie in any piece. In general, \(\mathcal{F}\) is not connected. An _untethered component_ is a connected component \(\mathcal{F}_{\iota}\) of \(\mathcal{F}\). The second type of vertices of \(\Lambda\) corresponds to the untethered components of \(\mathcal{F}\). The set of vertices of this type will be denoted \(V_{U}\).
Let \(X_{v}\) denote the subcomplex of \(\hat{B}_{n}\) corresponding to \(v\in V\). Let
\[\mathcal{U}=\{X_{v}:v\in V\}.\]
Note that \(\mathcal{U}\) is a topological cover of \(\hat{B}_{n}\).
The edges of \(\Lambda\) are also of two types: they either correspond to non-empty intersections \(gc_{i}\cap g^{\prime}c_{j}\), or \(gc_{i}\cap\mathcal{F}_{\iota}\) where \(g,g^{\prime}\) range over the left cosets of the \(\langle r_{i}\rangle\)'s, \(i,j\in I\), and \(\iota\) ranges over the connected components of \(\mathcal{F}\). Note that each intersection \(gc_{i}\cap g^{\prime}c_{j}\) is a piece \(p\), so an edge of \(\Lambda\) corresponds to a piece in \(\hat{B}_{n}\), or to an intersection between a \(gc_{i}\) and an untethered component of \(\mathcal{F}\). Note also that \(\Lambda\) is a simplicial graph. Indeed, \(\Lambda\) has no bigons by construction, and no loops by Lemma 3.2.
_Remark 3.6_.: Since no essential path in \(\hat{B}_{n}\) is the concatenation of less than \(6\) pieces, then each \(\mathcal{F}_{\iota}\) is a tree, and hence contractible. Lemma 3.1 shows that each piece \(gc_{i}\cap g^{\prime}c_{j}\) is also contractible.
There is a Helly property for the elements of \(\mathcal{U}\). It can be proven using induction and Greendlinger's Lemma, and can also be found in [1, 6.11]:
**Lemma 3.7**.: _Let \(\mathcal{X}(P)\) be the presentation complex associated to a \(C(6)\) small-cancellation presentation \(P\), let \(\Lambda\) be its structure graph, and let \(V^{\prime}\subset V(\Lambda)\). If each pairwise intersection \(X_{v}\cap X^{\prime}_{v}\) with \(v,v^{\prime}\in V^{\prime}\) is non-empty, then the total intersection \(\bigcap_{v\in V^{\prime}}X_{v}\) is non-empty._
The following lemma provides finer information about the total intersections in Lemma 3.7:
**Lemma 3.8**.: _Let \(\mathcal{X}(P)\) be the presentation complex associated to a \(C(6)\) small-cancellation presentation \(P\) and let \(\Lambda\) be its structure graph. For all \(I\subset V(\Lambda)\) with \(|I|\geq 2\), the intersection \(\bigcap_{v\in I}X_{v}\) is non-empty if and only if it is connected (and hence contractible)._
Proof.: We proceed by induction on \(|I|\). The base case is \(|I|=2\) and follows from Lemma 3.1. Assume that the claim holds for intersections \(\bigcap_{v\in I}X_{v}\neq\emptyset\) where \(|I|\leq k\), and consider \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\). If this intersection is empty, then there is nothing to show. So we may assume that \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\neq\emptyset\), and thus contains a vertex \(b\) of \(\hat{B}_{n}\). The induction hypothesis implies that each intersection \(\bigcap_{v\in J}X_{v_{j}}\) with \(J\subsetneq I\) and \(I=J\cup\{v_{j}\}\) is contractible, and is in fact a tree as each \(X_{v}\) is \(1\)-dimensional. As \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\subset\bigcap_{v\in I}X_{v}\), it suffices to show that \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\) is connected. To this end, let \(x,y\) be vertices in \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\).
Consider paths \(\beta\to X_{v_{k+1}}\) and \(\alpha\to\bigcap_{v\in I}X_{v}\) with endpoints \(x,y\), so that \(\beta\alpha^{-1}\) bounds a disc diagram \(D\) in \(\widetilde{\mathcal{X}}(P)\). Moreover, amongst all pairs of paths with endpoints \(x,y\) as above, choose \(\beta\) and \(\alpha\) to have minimal length. Consider the disc diagram \(D^{+}=D\cup_{\beta}X_{v_{k+1}}\cup_{\alpha}X_{v}\), where \(v\in I\). We claim that \(D^{+}\) is a degenerate disc diagram, that is, it contains no \(2\)-cells, so \(\alpha=\beta\) and \(x\) and \(y\) lie in the same connected component of \(\bigcup_{v\in I\cup\{v_{k+1}\}}X_{v}\). To prove this assertion, note that the choices of \(\alpha\) and \(\beta\) above imply that \(\partial D\) cannot contain any spurs, as such spurs could be removed to shorten \(\alpha\) and/or \(\beta\). Thus \(D\) must contain a shell \(S\), and the
outerpath of \(S\) is either a subpath of \(\alpha\), a subpath of \(\beta\), or contains either \(x\) or \(y\). In the first and second cases, \(\partial S\cap\partial D\) is a single piece, and the innerpath of \(S\) is the concatenation of at most \(3\) pieces, so \(\partial S\) is the concatenation of at most \(4\) pieces, contradicting the \(C(6)\) condition. In the third case, similarly, \(\partial S\cap\partial D\) is the concatenation of at most \(2\) pieces, so the innerpath of \(S\) is the concatenation of at most \(3\) pieces, and \(\partial S\) is the concatenation of at most \(5\) pieces, again contradicting the \(C(6)\) condition.
Thus, as asserted, \(D\) must be a degenerate disc diagram, so \(\bigcap_{v\in I\cup\{v_{k+1}\}}X_{v}\) is connected and contractible and the induction is complete.
As noted in Definition 3.5, the set \(\mathcal{U}=\{X_{v}:v\in V\}\) provides a topological cover of \(\hat{B}_{n}\). The structure graph \(\Lambda\), being simplicial by construction and by Lemma 3.2, is the \(1\)-skeleton of the geometric realisation of the nerve complex of \(\mathcal{U}\).
**Definition 3.9** (The nerve of \(\mathcal{U}\)).: The _nerve complex_\(\mathbf{N}(\mathcal{U})\) of a topological covering \(\mathcal{U}\) is the abstract simplicial complex
\[\mathbf{N}(\mathcal{U})=\{V^{\prime}\subset V:\bigcap_{v\in V^{\prime}}X_{v} \neq\emptyset,|V^{\prime}|<\infty\}.\]
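For a toy illustration (not drawn from \(\hat{B}_{n}\)): if \(\mathcal{U}=\{X_{1},X_{2},X_{3}\}\) with \(X_{1}\cap X_{2}\neq\emptyset\), \(X_{2}\cap X_{3}\neq\emptyset\) and \(X_{1}\cap X_{3}=\emptyset\), then

\[\mathbf{N}(\mathcal{U})=\{\{1\},\{2\},\{3\},\{1,2\},\{2,3\}\},\]

whose geometric realisation is a path with two edges; its \(1\)-skeleton records exactly the intersection pattern captured by \(\Lambda\).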
It is natural to use \(\mathbf{N}(\mathcal{U})\) to organise the data of \(\mathcal{U}\). Concretely, we do this by defining an ordering on the vertices of \(\mathbf{N}(\mathcal{U})\); this ordering is key to the proof of Theorem 3.4.
**Definition 3.10** (The ordering on \(\mathcal{U}\)).: Choose \(v_{0}\in V(\Lambda)\) corresponding to a \(gc_{i}\). We define a total ordering \(\leq\) on \(\mathcal{U}\), that is, an injective function \(\varphi:V\to\mathbb{N}\). To do this, we first define an ordering on the simplices of \(\mathbf{N}(\mathcal{U})\) inductively as follows.
Start by setting \(\varphi(v_{0})=0\), and define
\[A_{0}=\{u\in V\ :\{u,v_{0}\}\in\mathbf{N}(\mathcal{U})\}\cup\{v_{0}\}.\]
Choose \(u_{1}\in A_{0}\), let \(\varphi(u_{1})=1\), and let
\[A_{01}=\{u\in V:\{v_{0},u_{1},u\}\in\mathbf{N}(\mathcal{U})\}\cup\{v_{0},u_{1}\}.\]
Inductively, assume that \(\varphi\) has been defined for a subset of cardinality \(k\), so \(\varphi(v_{0})=0,\ldots,\varphi(v_{k})=k\). For each non-empty simplex \(\{v_{i},\ldots,v_{\ell}\}\) of \(\mathbf{N}(\mathcal{U})\) where \(\varphi(v_{i}),\ldots,\varphi(v_{\ell})\) are already defined, let
\[A_{v_{i}\ldots v_{\ell}}=\{u\in V:\{v_{i},\ldots,v_{\ell}\}\cup\{u\}\in \mathbf{N}(\mathcal{U})\}\cup\{v_{i}\ldots v_{\ell}\}.\]
We view each simplex \(\{v_{i},\ldots,v_{\ell}\}\) as an ordered tuple \((v_{i},\ldots,v_{\ell})\) where \(\varphi(v_{i})<\varphi(v_{i+1})<\ldots<\varphi(v_{\ell})\), and order the simplices using the _Lusin-Sierpinski order1_, which is defined as follows. For a pair of simplices, set \(\{v_{i},\ldots,v_{\ell}\}<\{w_{i^{\prime}},\ldots,w_{\ell^{\prime}}\}\) if either
Footnote 1: Also known as the _Kleene–Brouwer order_. Perhaps it would be more suitable to call it the _long-lex order_, in analogy with the _short-lex order_, which is used frequently in geometric group theory.
1. there exists \(j\leq\min\{\ell,\ell^{\prime}\}\) with \(v_{\iota}=w_{\iota}\) for all \(\iota<j\), and \(v_{j}<w_{j}\), or
2. \(\ell>\ell^{\prime}\) and \(v_{j^{\prime}}=w_{j^{\prime}}\) for all \(j^{\prime}\leq\ell^{\prime}\).
Now consider a least simplex \(\{v_{i}\dots v_{\ell}\}\) such that there exists \(u\in A_{v_{i}\dots v_{\ell}}\) whose image is not yet defined and choose such a vertex \(u\) arbitrarily. Set \(\varphi(u)=k+1\). In Figure 2 we illustrate the ordering for a simple example.
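To illustrate the comparison rule with tuples already ordered by \(\varphi\): suppose \(\varphi(v_{0})=0\), \(\varphi(u_{1})=1\), \(\varphi(u_{2})=2\), and that the subsets below are simplices of \(\mathbf{N}(\mathcal{U})\). Then \(\{v_{0},u_{1},u_{2}\}<\{v_{0},u_{1}\}\) by condition (2), since the longer tuple agrees with the shorter one on every shared coordinate, while \(\{v_{0},u_{1}\}<\{v_{0},u_{2}\}\) by condition (1), the first disagreement occurring at the second coordinate. In particular, extensions of a simplex are considered before its later siblings.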
A priori, \(\varphi\) is only defined for a subset \(V^{\prime}\subset V\); we show in Lemma 3.11 that in fact \(V^{\prime}=V\).
For vertices \(v,v^{\prime}\in V\), let \(\mathsf{d}(v,v^{\prime})\) denote the usual graph metric, where all edges are regarded as having length \(1\), so \(\mathsf{d}(v,v^{\prime})\) is the least number of edges in a path connecting \(v\) and \(v^{\prime}\).
**Lemma 3.11**.: _Let \(v,v^{\prime}\in V\). If \(\varphi(v)<\varphi(v^{\prime})\), then \(\mathsf{d}(v,v_{0})\leq\mathsf{d}(v^{\prime},v_{0})\). In particular, the function \(\varphi:V\to\mathbb{N}\) is well-defined._
Proof.: We prove the lemma by induction on \(N=\varphi(v^{\prime})\). If \(N=1\) there is nothing to show, so assume that the result holds for all pairs of vertices with image \(<N_{0}\).
Let \(v,v^{\prime}\in V\) and assume \(\varphi(v)<\varphi(v^{\prime})\leq N_{0}\). Note that if \(\varphi(v)\) and \(\varphi(v^{\prime})\) are not consecutive integers, then there exists \(u\in V\) with \(\varphi(v)<\varphi(u)=\varphi(v^{\prime})-1\), and by the induction hypothesis \(\mathsf{d}(v,v_{0})\leq\mathsf{d}(u,v_{0})\), so it would suffice in any case to show that \(\mathsf{d}(u,v_{0})\leq\mathsf{d}(v^{\prime},v_{0})\) to prove the lemma. Thus, we assume that \(\varphi(v)\) and \(\varphi(v^{\prime})\) are consecutive integers. By definition, there is a simplex \(\sigma\) which is least in the Lusin-Sierpinski order described above and such that \(\{v^{\prime}\}\cup\sigma\) is a simplex.
We first note that for any vertex \(u\) adjacent to \(v^{\prime}\) and for any vertex \(w^{\prime}\) in \(\sigma\) with \(\varphi(w^{\prime})<N_{0}\), if \(\mathsf{d}(w^{\prime},v_{0})>\mathsf{d}(u,v_{0})\), then by the induction hypothesis \(\varphi(w^{\prime})>\varphi(u)\). In particular, this holds for the least vertex \(w^{\prime}\) of \(\sigma\), so \(\mathsf{d}(u,v_{0})\geq\mathsf{d}(w,v_{0})\) for every vertex \(u\) adjacent to \(v^{\prime}\), or otherwise this would contradict that \(\sigma\) is the least simplex adjacent to \(v^{\prime}\). Thus, the distance \(\mathsf{d}(v^{\prime},v_{0})\) is realised by a path passing
Figure 2. The ordering in Definition 3.10 for a portion of a hexagonal grid.
through \(\sigma\), and in particular, passing through its least vertex \(w^{\prime}\). There are now several cases to consider:
1. If \(v\) is a vertex of \(\sigma\), then \(v\) is adjacent to \(v^{\prime}\), and by the discussion on the previous paragraph, \(\mathsf{d}(v,v_{0})\leq\mathsf{d}(v^{\prime},v_{0})\),
2. if \(v\) is not a vertex of \(\sigma\), then either 1. \(\{v\}\cup\sigma\) is also a simplex, in which case, as claimed: \[\mathsf{d}(v,v_{0})=\mathsf{d}(v,w^{\prime})+\mathsf{d}(w^{\prime},v_{0})=1+ \mathsf{d}(w^{\prime},v_{0})=\mathsf{d}(v^{\prime},w^{\prime})+\mathsf{d}(w^{ \prime},v_{0})=\mathsf{d}(v^{\prime},v_{0}),\] 2. or \(\varphi(w^{\prime})<\varphi(v)\). Since \(\varphi(v)<\varphi(v^{\prime})\), the least vertex, say \(w\), adjacent to \(v\) must satisfy that \(\varphi(w)<\varphi(w^{\prime})\) and in particular that \(\varphi(w)<N_{0}\). Thus \(\mathsf{d}(w,v_{0})\leq\mathsf{d}(w^{\prime},v_{0})\) by the induction hypothesis applied to \(w^{\prime}\) and \(w\), so \[\mathsf{d}(v,v_{0})=1+\mathsf{d}(w,v_{0})\leq 1+\mathsf{d}(w^{\prime},v_{0})= \mathsf{d}(v^{\prime},v_{0}).\]
And the induction is complete.
Write \(V=\{v_{j}\}_{j\in\mathbb{N}}\), where the indexing agrees with the ordering in Definition 3.10, and recall that \(X_{v_{j}}\) denotes the subcomplex of \(\hat{B}_{n}\) corresponding to \(v_{j}\in V\). The ordering on \(V\) induces an ordering on \(\mathcal{U}\), so \(\varphi(v)<\varphi(v^{\prime})\) if and only if \(X_{v}<X_{v^{\prime}}\). Thus, for ease of notation, in what follows we refer to the ordering of \(V\) and the ordering of \(\mathcal{U}\) interchangeably.
The next lemma is our main technical result.
**Lemma 3.12**.: _Let \(\mathcal{X}(P)\) be the presentation complex associated to a \(C(6)\) small-cancellation presentation \(P\), let \(V=\{v_{j}\}_{j\in\mathbb{N}}\) be the vertices of its structure graph \(\Lambda\), and for each \(v_{j}\in V\), let \(X_{v_{j}}\) be the corresponding element of \(\mathcal{U}\). For each \(k\in\mathbb{N}\), there is a homotopy equivalence_
\[\bigcup_{j\leq k}X_{v_{j}}\sim\bigvee_{j\leq k}X_{v_{j}}.\]
Proof.: The proof is by induction on \(k\). When \(k=0\) there is nothing to show, and the case of \(k=1\) is a special case of the fact that, for all pairs \(v,v^{\prime}\in V\) such that \(X_{v}\cap X_{v^{\prime}}\neq\emptyset\), the homotopy equivalence \(X_{v}\cup X_{v^{\prime}}\sim X_{v}\lor X_{v^{\prime}}\) holds because \(X_{v}\cap X_{v^{\prime}}\) is either a piece or a single vertex.
Assume that \(\bigcup_{j\leq k}X_{v_{j}}\sim\bigvee_{j\leq k}X_{v_{j}}\) for all \(k<k_{0}\). To show the homotopy equivalence \(\bigcup_{j\leq k_{0}}X_{v_{j}}\sim\bigvee_{j\leq k_{0}}X_{v_{j}}\), it suffices to show that \(\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\) is contractible.
Suppose towards a contradiction that \(\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\) is not contractible. Then, since this intersection is \(1\)-dimensional, it is either disconnected, or there is a non-nullhomotopic loop \(\sigma\to\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\). We first note that the second case can be reduced to the first, and is thus precluded by the induction hypothesis.
**Non-simply-connected intersection:** If \(\sigma\to\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\) is essential, then \(\sigma\to\bigcup_{j<k_{0}}X_{v_{j}}\) is also essential, and the image of \(\sigma\) in \(\bigcup_{j<k_{0}}X_{v_{j}}\) is covered by
a collection of arcs in the \(X_{v_{j}}\)'s. By the Seifert Van-Kampen Theorem, the image of \(\sigma\) cannot be covered by two contractible sets with connected intersection, so in particular, viewing \(\sigma\) as a concatenation of two arcs \(\tau^{\prime}\) and \(\tau^{\prime\prime}\) where \(\tau^{\prime}\) traverses a single \(X_{v_{j_{0}}}\) with \(1\leq j_{0}<k_{0}\) and \(\tau^{\prime\prime}\) traverses \(\bigcup_{j<k_{0},j\neq j_{0}}X_{v_{j}}\), it follows that the intersection \(\bigcup_{j<k_{0},j\neq j_{0}}X_{v_{j}}\cap X_{v_{j_{0}}}\) must be disconnected, contradicting the induction hypothesis since \(j_{0}<k_{0}\).
**Disconnected intersection:** Assume now that there exist vertices \(x,y\) lying in distinct connected components of \(\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\), and let \(X_{v},X_{v^{\prime}}\) be the corresponding elements of \(\mathcal{U}\) containing \(x\) and \(y\). Note that \(x\) and \(y\) are connected by a path \(\tau\) in \(\bigcup_{j\leq k}X_{v_{j}}\), since by the induction hypothesis, this union is path-connected. Let \(I_{\tau}\) be such that \(\bigcup_{I_{\tau}}X_{v_{j}}\) are the elements of \(\mathcal{U}\) traversed by \(\tau\), so \(\bigcup_{I_{\tau}}X_{v_{j}}\cup X_{v_{k_{0}}}\) is a union of boundaries of \(2\)-cells in \(\widetilde{\mathcal{X}}(P)\) defining an annular diagram \(A_{\tau}\) whose boundary paths are cycles in \(\bigcup_{I_{\tau}}X_{v_{j}}\cup X_{v_{k_{0}}}\), and \(A_{\tau}\) collars a reduced disc diagram \(D_{\tau}\) in \(\widetilde{\mathcal{X}}(P)\), as in Figure 1. Amongst all possible paths satisfying the above, choose \(\tau\) so that \(|I_{\tau}|\) is the least possible, and choose \(D_{\tau}\) to have the least number of cells amongst all disc diagrams collared by \(A_{\tau}\).
If \(\mathsf{Area}(D_{\tau})=0\), then \(D_{\tau}\) is a tree. Note that if \(D_{\tau}\) has branching, then removing an edge (or several edges corresponding to a piece) from \(D_{\tau}\) corresponds to shortening \(\tau\) by pushing it away from a pair \(X_{v_{j}},X_{v_{j^{\prime}}}\) and towards \(X_{v_{k_{0}}}\), as in Figure 3; such a reduction contradicts the choices in the previous paragraph, so we may assume that \(D_{\tau}\) is a possibly degenerate subpath of \(X_{v_{k_{0}}}\) with \(\partial D_{\tau}=\tau\tau^{-1}\). Since \(\tau\subset\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\), this contradicts the hypothesis that \(x,y\) lie in distinct connected components of \(\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\).
Hence, \(\mathsf{Area}(D_{\tau})\geq 1\). We may assume, by performing the same reductions to \(\tau\) as in the \(\mathsf{Area}(D_{\tau})=0\) case, that \(\partial D_{\tau}\) has no spurs, so by Greendlinger's Lemma \(D_{\tau}\) must have at least one shell; we claim that \(\partial S<X_{v_{k_{0}}}\) for every shell \(S\) in \(D_{\tau}\). Assuming the claim (see Claim 3.13, and proven below), we now explain how to finish the proof of Lemma 3.12. Let \(S\) be a shell in \(D_{\tau}\). Since \(S\) is a shell, it intersects at least \(3\) consecutive cells \(C_{1},C_{2},C_{3}\) of \(A_{\tau}\) as in the centre of Figure 5, and so the path \(\tau^{\prime}\) obtained from \(\tau\) by pushing across \(C_{2}\) traverses at most as many cells as \(\tau\), but bounds a disc diagram \(D_{\tau^{\prime}}\) with \(\mathsf{Area}(D_{\tau^{\prime}})<\mathsf{Area}(D_{\tau})\), contradicting our initial choices.
Thus, assuming the claim, we arrive at a contradiction in all cases, so \(\bigcup_{j<k_{0}}X_{v_{j}}\cap X_{v_{k_{0}}}\) must be contractible and the induction is complete.
**Claim 3.13**.: _Let \(\mathcal{X}(P)\) be the presentation complex associated to a \(C(6)\) small-cancellation presentation \(P\), let \(V=\{v_{j}\}_{j\in\mathbb{N}}\) be the vertices of its structure graph \(\Lambda\), and for each \(v_{j}\in V\), let \(X_{v_{j}}\) be the corresponding element of \(\mathcal{U}\). Let \(A_{\beta}\) be an annular diagram in \(\widetilde{\mathcal{X}(P)}\) collaring a reduced disc diagram \(D_{\beta}\) so that \(\partial D_{\beta}=D_{\beta}\cap A_{\beta}=\beta\), and finally, let \(X_{v_{k_{0}}}\) be the maximal element in the ordering of
_Definition 3.10_ _corresponding to the boundary of a \(2\)-cell in \(A_{\beta}\). Then \(\partial S<X_{v_{k_{0}}}\) for every shell \(S\) in \(D_{\beta}\)._
_Proof of Claim._ We prove the claim by induction on \(\mathsf{Area}(D_{\beta})\). For the base case, assume \(\mathsf{Area}(D_{\beta})=1\), so \(D_{\beta}=S\) is a single cell, and in fact, a shell. Consider the union \(\bigcup_{j<k_{0}}X_{v_{j}}\). Since \(X_{v_{k_{0}}}\) is the next element in the ordering, then \(X_{v_{k_{0}}}\) lies in a simplex \(\sigma\) that contains a simplex \(\sigma^{\prime}\) which is least in the Lusin-Sierpinski ordering, and there is a vertex in \(\sigma-\sigma^{\prime}\), namely \(v_{k_{0}}\), that is not in \(\bigcup_{j<k_{0}}X_{v_{j}}\). Let \(X_{v_{j_{1}}}<\ldots<X_{v_{j_{m}}}<X_{v_{k_{0}}}\) be the boundaries of the \(2\)-cells in \(A_{\beta}\). If all of the vertices \(v_{j_{1}},\ldots,v_{j_{m}}\) lie in \(\sigma\), then, by Lemma 3.7 and Lemma 3.8, \(\bigcap_{v\in\sigma}X_{v}\) is contractible, so a boundary component of \(A_{\beta}\), say \(\partial_{out}A_{\beta}\), bounds a reduced disc diagram \(E\) in \(\widetilde{\mathcal{X}(P)}\) such that the boundaries of the \(2\)-cells in \(E\) lie in \(\bigcup_{v\in\sigma}X_{v}\), and either \(\partial S=\partial E\), implying that \(E\) is a single cell2 and \(\partial S<X_{v_{k_{0}}}\), or the small-cancellation condition is violated. To see that the first assertion is true, note that if \(\partial S=\partial E\), then \(E\) must contain a shell \(S^{\prime}\) by Greendlinger's Lemma, but the outerpath of \(S^{\prime}\) is its intersection with \(S\), and therefore it is a single piece,
Figure 4. Some annuli collaring disc diagrams in the example from Figure 2, and exhibiting the behaviour explained in Claim 3.13.
Figure 3. Possible reductions in the last part of the proof of Lemma 3.12 when \(\mathsf{Area}(D_{\tau})=0\) and \(D_{\tau}\) has branching.
contradicting that the outerpath of a shell must be the concatenation of at least 3 pieces. To see that the second assertion holds, consider the union \(E\cup A_{\beta}\cup S\), where \(E\) and \(S\) are glued to \(A_{\beta}\) along their boundary paths; this union is topologically a sphere. If \(\partial S\neq\partial E\), then in particular \(S\neq E\). Note that \(E\) may be assumed to be non-trivial and to have no spurs, since any spurs could be pushed out of \(E\) without changing the cellular structure of \(E\cup A_{\beta}\cup S\), so \(E\) has a shell \(S^{\prime}\). Let \(C\) be a 2-cell of \(A_{\beta}\) in \(E\cup A_{\beta}\cup S\) intersecting \(\partial E\) in a subpath of \(\partial S^{\prime}\), and let \(C^{\prime},C^{\prime\prime}\) be the 2-cells of \(A_{\beta}\) adjacent to \(C\), then the boundary path \(\partial C\) is a concatenation of exactly 4 pieces - namely subpaths of \(E,C^{\prime},D_{\beta}\), and \(C^{\prime\prime}\). This contradicts the \(C(6)\) condition.
We may thus assume that there exists a cell \(C\) of \(A_{\beta}\) with \(\partial C=X_{v_{j_{i}}}\) for some \(1\leq i\leq m\) such that \(v_{j_{i}}\) does not lie on \(\sigma\). Suppose \(j_{i}\) is the lowest index corresponding to such a \(v_{j_{i}}\). If \(\partial S>X_{v_{k_{0}}}\), then since \(S\) is adjacent to \(X_{v_{j_{i}}}\) and \(X_{v_{j_{i}}}<X_{v_{k_{0}}}\), then \(\sigma\) must contain a vertex \(v_{n}\) with \(X_{v_{n}}<X_{v_{j_{i}}}\). But, by hypothesis, \(X_{v_{j_{i}}}<\ldots<X_{v_{j_{m}}}<X_{v_{k_{0}}}\), so there exists a simplex \(\theta\) satisfying that \(\theta\cup\{v_{\ell}\}\) is a simplex for each \(\ell\in\{n,j_{i},\ldots,j_{m}\}\). Let \(\theta^{+}:=\theta\cup\{v_{n},v_{j_{i}},\ldots,v_{j_{m}}\}\). As before, Lemma 3.1, Lemma 3.7, and Lemma 3.8 imply that \(\bigcap_{v\in\sigma\cup\theta^{+}}X_{v}\) is contractible, so \(A_{\beta}\) bounds a disc diagram \(E\) in \(\widetilde{\mathcal{X}(P)}\) such that the boundaries of the 2-cells in \(E\) lie in \(\bigcup_{v\in\sigma\cup\theta^{+}}X_{v}\), and arguing as in the previous paragraph, either \(\partial S=\partial E\), implying that \(\partial S<X_{v_{k_{0}}}\) or the small-cancellation condition is again violated. Thus, the base case of the induction is complete.
Assuming the claim when \(\mathsf{Area}(D_{\beta})<N\), let \(D_{\beta}\) be a disc diagram as hypothesised and with \(\mathsf{Area}(D_{\beta})=N\). By Greendlinger's Lemma, \(D_{\beta}\) is either a single cell, a ladder, or has at least 3 shells or spurs. Since we may assume that \(N>1\), and any spurs on \(\partial D_{\beta}\) can be removed to reduce the number of cells in \(D_{\beta}\), we may assume that \(D_{\beta}\) has at least 2 shells. Let \(S_{1},\ldots,S_{n}\) be the shells of \(D_{\beta}\). Consider the diagram \(D_{\beta^{\prime}}\) obtained from \(D_{\beta}\) by removing \(S_{1}\), so \(D_{\beta^{\prime}}\) is collared by the annular diagram \(A_{\beta^{\prime}}\) obtained from \(A_{\beta}\) by removing the cells in the outerpath of \(\partial S\) and adding \(S_{1}\). Every shell of \(D_{\beta}-S_{1}\) is still a shell of \(D_{\beta^{\prime}}\), so by the induction hypothesis, \(\partial S_{i}<X_{v_{M^{\prime}}}\) for each shell \(S_{i}\) with \(1<i\leq n\), where \(X_{v_{M^{\prime}}}\) is the maximal element in the ordering corresponding to the boundary of a 2-cell in \(D_{\beta^{\prime}}\).
It may be, a priori, that \(X_{v_{M^{\prime}}}=\partial S_{1}\); we explain now why this isn't the case. Repeating the same construction as above to obtain a disc diagram \(D_{\beta^{\prime\prime}}\) collared by an annular diagram \(A_{\beta^{\prime\prime}}\), but this time removing a shell \(S_{j}\neq S_{1}\), the induction hypothesis again implies that \(\partial S_{i}<X_{v_{M^{\prime\prime}}}\) for each shell \(S_{i}\) with \(1\leq i\leq n\) and \(i\neq j\) and where \(X_{v_{M^{\prime\prime}}}\) is defined analogously to \(X_{v_{M^{\prime}}}\), i.e., it is the maximal element in the ordering corresponding to the boundary of a 2-cell in \(D_{\beta^{\prime\prime}}\). If \(X_{v_{M^{\prime\prime}}}=S_{j}\), then \(\partial S_{j}<\partial S_{1}\) and \(\partial S_{j}>\partial S_{1}\), which is impossible. Thus, either \(X_{v_{M^{\prime}}}\neq\partial S_{1}\) or \(X_{v_{M^{\prime\prime}}}\neq\partial S_{j}\), and since \(X_{v_{M}}\) is the boundary of a 2-cell of either \(A_{\beta^{\prime}}\) and \(A_{\beta^{\prime\prime}}\)
as it can only be excluded when removing one of the shells -, then \(\partial S_{i}<X_{v_{k_{0}}}\) for each shell \(S_{i}\) with \(1\leq i\leq n\), and the induction is complete.
As explained in the introduction, Theorem 3.4 is a corollary of the following:
**Theorem 3.14**.: _There is a homotopy equivalence_
\[\hat{B}_{n}\sim\bigvee_{i\in I,g\langle r_{i}\rangle\in F_{n}/\langle r_{i} \rangle}gc_{i}. \tag{1}\]
Proof.: Since no essential path in \(\hat{B}_{n}\) is the concatenation of less than \(6\) pieces, then each \(\mathcal{F}_{\iota}\) is a tree, and hence contractible. Lemma 3.1 shows that each piece \(gc_{i}\cap g^{\prime}c_{j}\) is also contractible.
Maintaining the notation used in the previous lemmas, we write \(V=\{v_{j}\}_{j\in\mathbb{N}}\), where the indexing agrees with the ordering in Definition 3.10. The main step of the proof is Lemma 3.12, which asserts that for each \(k\in\mathbb{N}\), there is a homotopy equivalence
\[\bigcup_{j\leq k}X_{v_{j}}\sim\bigvee_{j\leq k}X_{v_{j}}.\]
Note that \(\hat{B}_{n}\) is equal to the infinite directed union \(\hat{B}_{n}=\bigcup_{j\leq k,k\to\infty}X_{v_{j}}\) and that
\[\bigcup_{j\leq k,k\to\infty}X_{v_{j}}\sim\bigvee_{j\leq k,k\to\infty}X_{v_{j}}\sim\bigvee_{j\leq k,v_{j}\in V_{T},k\to\infty}X_{v_{j}}\]
as each \(X_{v}\) with \(v\in V_{U}\) is contractible. Thus, the proof is complete.
We therefore conclude:
Proof of Theorem 3.4.: Taking the fundamental group on both sides of the homotopy equivalence in (1), we obtain the desired isomorphism. Indeed:
\[\langle\!\langle r_{1},\ldots,r_{k}\rangle\!\rangle=\pi_{1}\hat{B}_{n}\cong\pi _{1}\bigvee_{i\in I,g\langle r_{i}\rangle\in F_{n}/\langle r_{i}\rangle}gc_{ i}=*_{v\in V_{T},g\langle r_{i}\rangle\in F_{n}/\langle r_{i}\rangle}\langle r _{i}\rangle^{g}.\qed\]
Figure 5. The reductions used in the final part of the proof of Lemma 3.12 (replace \(\beta\) with \(\tau\) in the labelling) and in the inductive step of the proof of Claim 3.13.
_Remark 3.15_.: When \(\mathcal{X}(P)\) is aspherical (i.e., when the \(r_{1},\ldots,r_{k}\) are not proper powers [13]), the isomorphism \(\langle\!\langle r_{1},\ldots,r_{k}\rangle\!\rangle\cong*_{i\in I,g\in\pi_{1} \mathcal{X}(P)}\langle r_{i}\rangle^{g}\) is equivalent to the homotopy equivalence \(\hat{B}_{n}\sim\bigvee_{i\in I,g\in\pi_{1}\mathcal{X}(P)}gc_{i}\). In general, the isomorphism above might not imply a homotopy equivalence between the corresponding spaces, so Theorem 3.14 is stronger than Theorem 3.4.
|
2309.09779 | On Random Tree Structures, Their Entropy, and Compression | Measuring the complexity of tree structures can be beneficial in areas that
use tree data structures for storage, communication, and processing purposes.
This complexity can then be used to compress tree data structures to their
information-theoretic limit. Additionally, the lack of models for random
generation of trees is very much felt in mathematical modeling of trees and
graphs. In this paper, a number of existing tree generation models such as
simply generated trees are discussed, and their information content is analysed
by means of information theory and Shannon's entropy. Subsequently, a new model
for generating trees based on practical appearances of trees is introduced, and
an upper bound for its entropy is calculated. This model is based on selecting
a random tree from possible spanning trees of graphs, which is what happens
often in practice. Moving on to tree compression, we find approaches to
universal tree compression of the discussed models. These approaches first
transform a tree into a sequence of symbols, and then apply a dictionary-based
compression method. Conditions for the universality of these methods are then
studied and analysed. | Amirmohammad Farzaneh, Mihai-Alin Badiu, Justin P. Coon | 2023-09-18T13:58:57Z | http://arxiv.org/abs/2309.09779v1 | # On Random Tree Structures, Their Entropy, and Compression
###### Abstract
Measuring the complexity of tree structures can be beneficial in areas that use tree data structures for storage, communication, and processing purposes. This complexity can then be used to compress tree data structures to their information-theoretic limit. Additionally, the lack of models for random generation of trees is very much felt in mathematical modeling of trees and graphs. In this paper, a number of existing tree generation models such as simply generated trees are discussed, and their information content is analysed by means of information theory and Shannon's entropy. Subsequently, a new model for generating trees based on practical appearances of trees is introduced, and an upper bound for its entropy is calculated. This model is based on selecting a random tree from possible spanning trees of graphs, which is what happens often in practice. Moving on to tree compression, we find approaches to universal tree compression of the discussed models. These approaches first transform a tree into a sequence of symbols, and then apply a dictionary-based compression method. Conditions for the universality of these methods are then studied and analysed.
Entropy, Trees, Simply Generated Trees, Random tree models, Tree coding, Tree compression
## I Introduction
Trees are widely used in different areas of science. One of their biggest application areas is network science, where they are used to model different structures, patterns, and behaviours. Some networks are formed specifically as a tree, such as the design used in the ZigBee specification [1]. Additionally, trees are often encountered in practice as subsets of complex networks. An application of this is routing tables in networks, which are essentially tree structures [2, 3]. Other notable fields in which trees are used for modelling data include phylogenetic trees [4], parse trees in Natural Language Processing [5], and Barnes-Hut trees in astrophysics [6].
There are numerous studies that focus on the extraction of information content, often called the complexity, of graphical data structures. This is mainly because of the complex nature of graphical data structures, especially as their number of nodes increases. This exponential growth in complexity demands a formal way of quantifying the amount of information content in graphical data structures. This knowledge can later on be used for any application that involves the storage, transmission, or processing of these data structures. When it comes to data transmission, Shannon's entropy [7] has been the standard metric for complexity ever since its introduction back in 1948. Some notable studies on quantifying the complexity of graphical data structures in terms of Shannon's entropy include the calculation of the entropy of Erdos-Renyi structures [8], and random geometric graphs [9]. Compared to the numerous studies on this area for graphs, measuring the complexity of tree structures has barely been studied before. Some tree models for which entropy has been studied before include random binary trees [10], and specific cases of plane trees [11]. This shows that most of the studied models either lack generality, or lack the ability to simulate trees observed in real networks. Even though trees can be seen as a subset of graphs, their unique characteristics and features can be utilized to extract more accurate bounds and results, which can prove to be more useful. Consequently, one of the main approaches of this paper is to study the information content and complexity of tree data structures using Shannon's entropy.
Despite their vast applications, there are very few models for random generation of trees. There are numerous models for creating random graphs, such as the Erdos-Renyi model [12], Barabasi-Albert model [13], Stochastic Block Model [14], and Random Geometric Graphs [15]. However, this variety cannot be seen in random models for trees. The existing models for trees are very limited, and most of them rely solely on a uniform distribution among the possible trees. For example, a random generation of a Prufer sequence [16] can result in a random tree. Other models focus only on specific types of trees, such as binary trees [17]. One of the most detailed studies on random trees can be found in [18], where several random tree models are introduced and analysed. The trees created using most of the models introduced in [18], such as Polya trees [19] and Galton-Watson trees [20], can grow indefinitely, which means that they are only useful in limited application scenarios. For instance, recursive trees are useful in the analysis of the spread of epidemics [21], family trees of preserved copies of ancient manuscripts [22], and chain letters or pyramid games [23], all of which have the possibility of growing indefinitely. One of the simplest, yet most powerful, models that have been introduced so far for random tree generation is called Simply Generated Trees (or unconditioned Galton-Watson trees). This model is simple, yet effective at capturing the dynamics of trees. It has been shown that Simply Generated Trees can act as a generation model for many different types of random trees that are seen in practice [24]. One example of this is modelling branching processes [25]. In this paper, we start by focusing on this model, and extract its information-theoretic content in terms of Shannon's entropy. This can be beneficial in analysing the
situations in which Simply Generated Trees are used to model the generation of trees. Another way of randomly generating trees, which has not been worked on in detail in the past, is the extraction of a random tree from an already existing underlying network. This is a scenario that happens a lot in practice. For instance, as stated earlier, the tree corresponding to the routing table of a node in a network is simply a spanning tree of the original network. Therefore, it will be beneficial to look at this method of developing random trees, and analyse it using information theory. In this paper, we formalize this method of creating random trees, and then find an upper bound to its entropy.
Ultimately, we focus on compressing trees to their theoretical limit, which is the entropy of the source. Tree compression can be used in all applications that involve the use of trees. For instance, applications of tree compression in syntax-directed compression of program files and pixel trees are studied in [26]. The storage space and bandwidth required for communicating trees grow rapidly with the size of the tree if traditional methods such as adjacency matrices or adjacency lists are used for coding trees. Tree coding methods such as the Prufer code, the Neville codes, and the Deo and Micikevicius code have a fixed codeword length for the same trees even if they are generated from different distributions [27], and therefore are not entropy-optimal. Additionally, survey methods that do study information-theoretic optimality of the introduced coding methods often do so by comparing the average codeword length with the entropy of the uniform source only [26]. Recently, powerful tree compression algorithms such as tree compression with top trees [28] and tree structure compression using RePair [29] have been introduced. However, the performance of these algorithms is not analysed using information theory, which always leaves the question of whether the structures could be compressed more efficiently. For these reasons, we seek compression algorithms for trees that are both easy to implement and shown to have near-optimal performance with respect to information-theoretic measures. Dictionary-based compression methods are among the most preferred lossless compression algorithms for different sources and data types [30]. They are widely used in applications such as text compression [31] and image compression [32]. Because of the advantages of dictionary-based compression methods, we want to be able to use them to optimally compress tree structures as well. However, the issue is that a tree structure is not made out of a consecutive series of symbols. For this reason, we first consider a family of transformations on trees, which we call tree traversals. We give examples of possible ways that tree traversals can be done, and then move on to applying dictionary-based compression algorithms to sequences generated using tree traversals. We then discuss under what conditions this method of tree compression guarantees optimality.
The paper starts with the study of the complexity of different tree sources. We start with the uniform source, and then move on to the entropy of Simply Generated Trees. Afterwards, a model for generating random trees using an underlying random graph is introduced, which we call the Spanning Tree Model. An upper bound to the entropy of a special case of the spanning tree model is also calculated. We then move on to the subject of tree compression by introducing tree traversals and examples of them such as Pit-Climbing and Tunnel-Digging. Subsequently, we apply the combination of tree traversals and dictionary-based compression algorithms to Simply Generated Trees and the spanning tree model, and show that this combination can optimally compress trees generated from these sources. The paper ends with a conclusion, and potential future directions of research are mentioned.
## II Entropy of Tree Structures
In this section, we focus on quantifying the entropy of tree structures. If we look at trees as a random variable from a pool of possible trees, we need a probability distribution on these trees to be able to calculate Shannon's entropy. This necessitates the existence of a random model for creating the tree structures at hand. Having a random model would entail having a probability distribution on the trees, and would model the dynamics of creating trees in real-life scenarios. Unfortunately, the issue is that there does not exist an adequate variety of random models for creating tree structures. Existing models are very limited in terms of the scenarios they can simulate. For instance, there is no parameter to set the number of nodes in Galton-Watson trees, and the number of nodes in the tree can be anywhere from one to infinity. We will start this section by introducing some concepts and terminologies that we use throughout the paper. We then continue with a very simple method for creating random trees, which is to choose a tree uniformly among all possible trees with the same number of nodes. Additionally, one of the few existing models for creating random trees, called Simply Generated Trees, is studied. We will then introduce another model for creating a random tree. This model is based on creating the underlying graph first, and then choosing one of its spanning trees as the output of the random generator. We call this model the spanning tree model, and then study its entropy.
### _Terminologies_
We define a random tree source using the following parameters.
* \(T\): set of all possible trees that can be generated by the source. The size of this set can be finite or infinite.
* \(p_{T}(t)\): A probability distribution on the trees in \(T\). We may only use the notation \(p(t)\) if it is clear which tree source we are talking about. Generally, this distribution can be time-dependent or time-independent.
In this paper, the term entropy always refers to Shannon's entropy, and is calculated in base two (bits).
### _Entropy of the uniform source_
In this section, we calculate the entropy of a uniform source for various types of tree structures. By a uniform source we mean that having \(T\), the probability of observing any \(t\in T\) is simply \(1/|T|\). This way, the entropy of the source will simply be \(\log_{2}|T|\) bits.
We first start with unlabeled unordered rooted trees. The sequence of the number of unlabeled unordered rooted trees with \(n\) nodes is listed in the On-Line Encyclopedia of Integer Sequences [33, A000081]. Table I lists the number of trees of this kind with up to three nodes. It is known that the asymptotic limit of this sequence is \(cd^{n}n^{-3/2}\) [34], where \(c\) and \(d\) are constants that can be found at [33, A187770] and [33, A051491], respectively. Consequently, the asymptotic uniform entropy for this model, \(H_{T}\), can be calculated using the following equation. This equation shows that the growth rate of the entropy of a uniformly distributed unlabeled unordered rooted tree source is asymptotically linear.
\[\begin{split} H_{T}&\sim n\log_{2}d-1.5\log_{2}n +\log_{2}c\\ &\approx 1.5635n-1.5\log_{2}n-1.1846\end{split} \tag{1}\]
To study the same for unlabeled ordered rooted trees, we refer to the fact that their number matches the sequence of Catalan numbers [35, Ch. 8]. In other words, the \(n\)th Catalan number equals the number of possible ordered rooted trees that can be built on \(n\) nodes. The following equality is well-known about Catalan numbers [36].
\[\begin{split} C_{n}&=\frac{1}{n+1}\binom{2n}{n}\\ &=\frac{4^{n}}{\sqrt{\pi n^{3}}}\left(1+O\left(\frac{1}{n}\right) \right)\end{split} \tag{2}\]
Based on (2), the uniform entropy of the source can be calculated using the following equation.
\[\begin{split} H_{T}&=\log_{2}C_{n}\\ &=\log_{2}\frac{1}{n+1}\binom{2n}{n}\\ &=\log_{2}\frac{4^{n}}{\sqrt{\pi n^{3}}}\left(1+O\left(\frac{1}{n }\right)\right)\end{split} \tag{3}\]
Additionally, the asymptotic behaviour of this entropy can also be analysed using (3), which would provide us with the following result.
\[H_{T}\sim 2n \tag{4}\]
An interesting thing to note is that [10] achieves the same result for the uniform entropy, but for rooted full binary trees with \(2n-1\) nodes. This suggests the existence of a one-to-one mapping between ordered rooted trees with \(n\) nodes, and rooted full binary trees with \(2n-1\) nodes. It is already known that a transformation called child-sibling representation maps ordered rooted trees to binary trees [37]. Even though this transformation does not create full binary trees and is not a bijection, we can create a bijection between ordered rooted trees and rooted full binary trees using a similar idea. We call this transformation the double-node transformation, as every node in the original rooted tree, except for the root, will be mapped into two nodes in the binary tree. This is why the \(n\) nodes of the original tree will be mapped into \(2n-1\) nodes in the binary tree. This transformation is a bijection as every rooted ordered tree can be mapped into a unique rooted full binary tree and vice-versa. The steps of this transformation are explained below, and Fig. 1 shows an example of this transformation.
**Double-node transformation**
**Input:**: a rooted ordered tree with \(n\) nodes
**Output:**: a rooted full binary tree with \(2n-1\) nodes
1. The root of the input is mapped to the root of the output. As the root does not have any siblings, it is only considered a child node.
2. The nodes of the input are traversed using BFS.
3. Every observed node will be transformed into two nodes, which will both be attached to the same node in the binary tree. We call the left one the child node, and the right one the sibling node.
4. If the observed node is the first child of its parent, its corresponding child and sibling nodes will be attached to the child node of its parent. If not, its child and sibling nodes will be attached to the sibling node of its closest sibling to its left.
5. The mapping continues until all the nodes in the input tree have been traversed.
It can be seen that the double-node transformation provides a bijection between rooted ordered trees with \(n\) nodes and rooted full binary trees with \(2n-1\) nodes. Therefore, the uniform entropy results for these two families of trees are the same.
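For concreteness, the following is a minimal sketch of the double-node transformation, assuming the input tree is stored as a dictionary of ordered child lists; the output labels ('child', v) and ('sibling', v) are illustrative choices of this sketch, not notation from the text.

```python
from collections import deque

def double_node_transform(children, root):
    """Map a rooted ordered tree (dict: node -> ordered list of children) to a
    rooted full binary tree with 2n-1 nodes, following the five steps above."""
    out = {('child', root): []}                  # step 1: the root becomes a lone child node
    queue = deque([root])                        # step 2: traverse the input with BFS
    while queue:
        u = queue.popleft()
        kids = children.get(u, [])
        for idx, v in enumerate(kids):
            # step 4: the first child hangs below its parent's child node,
            # later children hang below the sibling node of their left sibling
            attach = ('child', u) if idx == 0 else ('sibling', kids[idx - 1])
            # step 3: every non-root node becomes a child node and a sibling node
            out[attach] = [('child', v), ('sibling', v)]
            out[('child', v)] = []
            out[('sibling', v)] = []
            queue.append(v)
    return out

# A 4-node input yields 2*4-1 = 7 output nodes, each with 0 or 2 children.
binary = double_node_transform({0: [1, 2], 1: [3], 2: [], 3: []}, 0)
assert len(binary) == 7 and all(len(k) in (0, 2) for k in binary.values())
```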
Finally we study labeled rooted and unrooted trees. The number of labeled unrooted trees is shown to be \(n^{n-2}\)[38, p. 26], which implies that the uniform entropy can be calculated using the following equation.
\[H_{T}=(n-2)\log_{2}n \tag{5}\]
Additionally, for any given labeled unrooted tree with \(n\) nodes, we can pick any of its \(n\) nodes as the root. Therefore, the number of labeled rooted trees is \(n^{n-1}\), and their uniform entropy can be calculated using the following equation.
\[H_{T}=(n-1)\log_{2}n \tag{6}\]
Fig. 1: Example of Double-node transformation
### _Entropy of Simply Generated Trees_
Simply Generated Trees (SGT) are a popular family of random trees. They were first introduced in [39], and since then have been used to model random trees. The reason for the popularity of this model lies in its simplicity, and the fact that it is powerful enough to model many scenarios. They are also used to generate more complex tree generation models. In this section, we will calculate the entropy of this family of random tree models.
Simply Generated Trees are generated based on a probability distribution on the number of children of each node. They are rooted and ordered trees. To build an SGT, we need a distribution on the set of whole numbers \(\{0,1,2,\ldots\}\). We call this distribution the children distribution of the SGT, and show it with \(p_{C}(c)\). The children distribution is essentially a distribution on the number of children that each node can have, independently from others. The only condition that we impose on the children distribution is for \(p_{C}(0)\) to be nonzero. For example, we can use a geometric distribution, binomial distribution, or generally any discrete distribution on whole numbers that satisfies \(p_{C}(0)\neq 0\). After this distribution is chosen, we are ready to generate the random tree. The following steps show how an SGT is created using its underlying children distribution.
1. Create the root of the tree. This will be level 0.
2. Create the children of the root, based on a number acquired from the children distribution.
3. For all \(i>0\), go through the nodes on level \(i\), and choose the number of children for each of them, based on the children distribution.
4. The algorithm is terminated once the number of children for all the nodes is chosen, and there is no more node to explore (Note that once a node is decided to have zero children, the branch corresponding to that node will be terminated, and the node will become a leaf node).
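The steps above translate directly into code. The sketch below is a minimal illustration, assuming the children distribution is supplied as a sampling function; the `max_nodes` cap is our own safeguard, since (as discussed below) the generation need not terminate outside the subcritical phase.

```python
import random

def generate_sgt(sample_children, max_nodes=10_000):
    """Generate a Simply Generated Tree level by level (steps 1-4 above).
    sample_children() draws one value from the children distribution p_C.
    Returns a dict mapping each node id to its ordered list of child ids,
    with node 0 as the root, or None if the safety cap is exceeded."""
    children = {0: []}
    frontier = [0]                       # nodes of the current level
    next_id = 1
    while frontier:
        next_frontier = []
        for node in frontier:            # draw the number of children of every node on this level
            c = sample_children()
            kids = list(range(next_id, next_id + c))
            next_id += c
            children[node] = kids
            for k in kids:
                children[k] = []
            next_frontier.extend(kids)
            if next_id > max_nodes:
                return None              # give up: the tree has grown past the cap
        frontier = next_frontier
    return children

# Example: each node independently has one child with probability 0.3
tree = generate_sgt(lambda: 1 if random.random() < 0.3 else 0)
```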
Note that the resulting tree will have ordered branches, as the number of children for each node is chosen sequentially. Generating a tree this way will result in a tree whose probability is equal to the product of the probability of the number of children of all of its nodes. The following example illustrates SGTs that can be made using a specific children distribution.
**Example II.1**.: _Assume that we have a children distribution for which \(p_{C}(0)=p\), and \(p_{C}(1)=1-p\). In the trees generated using this distribution, nodes can therefore have no child, or have only one child. Table II illustrates the possible trees that can be made using this model, alongside their probabilities._
We now want to use information theory to measure the information content of SGTs. We will first start by studying the possible number of nodes in an SGT. It can easily be seen that in case \(p_{C}(0)\neq 1\), the number of nodes in an SGT can theoretically be unlimited. This is because as soon as the probability of having more than zero children is nonzero, the branches can keep growing infinitely. However, we can still study the average number of nodes in SGTs.
Assume that we have an SGT model, with a children distribution \(p_{C}(c)\), where \(p_{C}(0)\neq 1\). Additionally, assume \(T\) to be the set of all possible trees that can be generated by this model. We are interested in calculating the average number of nodes of trees in \(T\). Based on the average number of children, Simply Generated Trees can be categorized into subcritical, critical, and supercritical. This categorization can be done by calculating the expected number of children \(\mathbb{E}[c]\), which we show with \(\bar{C}\), based on the children distribution. The model is subcritical, critical, or supercritical if and only if \(\bar{C}<1\), \(\bar{C}=1\), or \(\bar{C}>1\), respectively [24]. It will be shown that the average number of nodes for an SGT model is limited only in the subcritical phase.
Let \(n\) be the random variable that shows the number of nodes in a tree, and let \(n_{t}\) show the number of nodes in a particular tree \(t\). We can use the following equation to calculate the average number of nodes in the trees created using this model.
\[\mathbb{E}[n]=\sum_{t\in T}n_{t}p(t) \tag{7}\]
For each tree \(t\), \(n_{t}\) can be written as the sum of the nodes in different levels, with level 0 being the root node. If \(n_{i,t}\) shows the number of nodes in the \(i\)th level of tree \(t\), (7) can be rewritten as
\[\mathbb{E}[n] =\sum_{t\in T}p(t)\sum_{i}n_{i,t} \tag{8a}\] \[=\sum_{i}\sum_{t\in T}p(t)n_{i,t}. \tag{8b}\]
\begin{table}
\begin{tabular}{|c|c|} \hline Tree & Probability \\ \hline Single node & \(p\) \\ \hline Line with 2 nodes & \(p(1-p)\) \\ \hline Line with 3 nodes & \(p(1-p)^{2}\) \\ \hline Line with \(n\) nodes & \(p(1-p)^{n-1}\) \\ \hline \end{tabular}
\end{table} TABLE II: Example of Simply Generated Trees and their probabilities
\begin{table}
\begin{tabular}{|c|c|} \hline \(\mathbf{n}\) & **Count** \\ \hline 1 & 1 \\ \hline 2 & 1 \\ \hline 3 & 2 \\ \hline \end{tabular}
\end{table} TABLE I: Number of possible unlabeled unordered rooted trees with up to three nodes
Based on (8b), the average number of nodes can be rewritten as the sum of the average number of nodes in each level among all possible trees. Therefore, we need to calculate the average number of nodes for each level individually. We will show the random variable for the number of nodes in level \(i\) with \(n_{i}\). Firstly, as all the trees will have only one node in level 0, we have \(\mathbb{E}[n_{0}]=1\). Additionally, if a tree \(t\) has \(n_{i-1,t}\) nodes in level \(i-1\), its expected number of nodes in level \(i\) will simply be \(n_{i-1,t}\bar{C}\). This is because the number of children of individual nodes is independent. Therefore, if we show the probability of a generated tree having \(j\) nodes in level \(i-1\) with \(p_{i-1}(j)\), we can write the following recursive equation for \(i>0\).
\[\mathbb{E}[n_{i}] =\sum_{j=0}^{\infty}p_{i-1}(j)j\bar{C} \tag{9a}\] \[=\bar{C}\sum_{j=0}^{\infty}p_{i-1}(j)j \tag{9b}\] \[=\bar{C}\mathbb{E}[n_{i-1}] \tag{9c}\]
Based on (9c) and the fact that \(\mathbb{E}[n_{0}]=1\), we have
\[\mathbb{E}[n_{i}]=\bar{C}^{i}. \tag{10}\]
Combining (8b) and (10), we get the following equation for the average number of nodes in the SGT model.
\[\mathbb{E}[n]=\sum_{i=0}^{\infty}\bar{C}^{i} \tag{11}\]
As (11) is a geometric series, we know that it will only converge when we have \(|\bar{C}|<1\). This means that the SGT model needs to be in the subcritical phase for the average number of nodes of the model to converge. From now on, we will assume the SGT models that we work with to be in the subcritical phase, so that the corresponding trees have a finite average node number.
Now, we move on to quantifying the entropy of an SGT model based on its children distribution. Assume the corresponding children distribution of the SGT model to have an entropy of \(H_{C}\). In other words, we have
\[H_{C}=-\sum_{i=0}^{\infty}p_{C}(i)\log_{2}p_{C}(i) \tag{12}\]
We want to represent the entropy of the SGT model based on \(H_{C}\). If we show the entropy of the SGT model with \(H_{T}\), we can write
\[H_{T}=-\sum_{t\in T}p(t)\log_{2}p(t). \tag{13}\]
To simplify (13), we note that we can write \(p(t)\) as the product of the probabilities of the number of children of each node in \(t\). We use this to write \(\log_{2}p(t)\) in the form of a sum. We use \(n_{t}\) to show the number of nodes in \(t\), and \(c_{t,i}\) to show the number of children of node \(i\) in \(t\).
\[H_{T}=-\sum_{t\in T}p(t)\sum_{i=1}^{n_{t}}\log_{2}p(c_{t,i}) \tag{14}\]
To simplify (14), we open it up based on the number of children. We use \(n_{t,i}\) to show the number of nodes in \(t\) that have \(i\) children. This way we can write
\[H_{T}=\sum_{i=0}^{\infty}H_{T,i}, \tag{15}\]
where
\[H_{T,i}=-\log_{2}p(i)\sum_{t\in T}p(t)n_{t,i}. \tag{16}\]
To calculate \(H_{T,i}\), we note that \(\sum_{t\in T}p(t)n_{t,i}\) essentially represents the average number of nodes that have \(i\) children in a random tree. To calculate this sum, we perform the same trick as when we calculated the average number of nodes in a tree: we open up the sum over the levels of the tree. It can easily be seen that the average number of nodes that have \(i\) children in a specific level of the tree is simply \(p(i)\) times the average number of nodes on that level. Therefore, we can use (10) and write
\[\sum_{t\in T}p(t)n_{t,i}=p(i)\sum_{\ell=0}^{\infty}\bar{C}^{\ell}=\frac{p(i)}{1-\bar{C}}. \tag{17}\]
Inserting (17) into (16), we get the following equation.
\[H_{T,i}=\frac{-p(i)\log_{2}p(i)}{1-\bar{C}} \tag{18}\]
Finally, to calculate \(H_{T}\), we insert (18) into (15), and we get the following equation.
\[H_{T} =-\sum_{i=0}^{\infty}\frac{p(i)\log_{2}p(i)}{1-\bar{C}} \tag{19a}\] \[=\frac{H_{C}}{1-\bar{C}} \tag{19b}\]
(19b) gives us the entropy of the SGT model. It can be seen that the entropy can easily be calculated from the entropy of the underlying children distribution. Note that in order for this entropy to be computable, the sum in (17) needs to converge, and therefore the SGT needs to be in the subcritical phase. Moreover, this entropy can be seen as the average number of nodes (11) times \(H_{C}\), which makes sense intuitively. This is because the number of children of each node is independent from the others, and adds an entropy of \(H_{C}\) to the ensemble. This is a very interesting result, as it relates the entropy of the underlying children distribution to the tree model using a simple equation.
It can be seen in (19b) that both the entropy of the children distribution and the average number of children have an effect on the entropy of the trees. An increase in the entropy of the children distribution or the average number of children per node results in an increase in the entropy of the trees. Additionally, as \(0\leq\bar{C}<1\) in the subcritical phase, it can be concluded that \(H_{T}\geq H_{C}\), with equality holding if and only if the only possible tree that the SGT model can create is a tree with only one node.
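As a small numerical sanity check of (19b), the snippet below (reusing the `generate_sgt` sketch from above) compares the closed form \(H_{C}/(1-\bar{C})\) with a Monte Carlo estimate of \(\mathbb{E}[-\log_{2}p(t)]\) for a geometric children distribution; the particular distribution and sample sizes are illustrative choices.

```python
import math, random

q = 0.4                                    # children distribution p_C(c) = (1-q) q^c
p_C = lambda c: (1 - q) * q ** c
C_bar = q / (1 - q)                        # mean number of children (2/3 < 1, subcritical)
H_C = -sum(p_C(c) * math.log2(p_C(c)) for c in range(200))
print("closed form (19b):", H_C / (1 - C_bar))

def sample_children():
    c = 0
    while random.random() < q:             # geometric sampler: add a child with probability q
        c += 1
    return c

total, trials = 0.0, 0
for _ in range(20_000):                    # Monte Carlo estimate of E[-log2 p(t)]
    t = generate_sgt(sample_children)
    if t is None:
        continue
    total += -sum(math.log2(p_C(len(kids))) for kids in t.values())
    trials += 1
print("Monte Carlo estimate:", total / trials)
```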
### _Conditioned Galton-Watson Trees_
In this section, we will attempt to quantify the entropy of conditioned Galton-Watson trees. Conditioned Galton-Watson trees are simply Galton-Watson trees that are conditioned on their number of nodes. In other words, we condition the trees generated using the Galton-Watson model such that \(|t|=n\). This will change the probability distribution of the original Galton-Watson trees.
Consider the random variables \(\{X_{1},X_{2},\ldots,X_{n}\}\) to be \(n\) i.i.d random variables sampled from the children distribution. For this sequence of random variables to be able to represent a tree with \(n\) nodes, we need to have
\[\sum_{i=1}^{n}X_{i}=n-1.\]
Therefore, the entropy that we are looking for can be formed using the following conditional entropy.
\[H_{n}=H(X_{1},\ldots,X_{n}|\sum_{i=1}^{n}X_{i}=n-1) \tag{20}\]
Calculating the entropy using (20) can prove to be challenging. Therefore, we will provide estimates to it using upper bound. We can start by using a zero-order upper bound as below.
\[\begin{split} H(X_{1},\ldots,X_{n}|\sum_{i=1}^{n}X_{i}=n-1)& \leq H(X_{1},\ldots,X_{n})\\ &\leq\sum_{i=1}^{n}H(X_{i})\\ &=nH_{C}\end{split} \tag{21}\]
Eq. (21) provides us with a simple upper bound. If we want to increase the accuracy of the upper bound, we can move up to higher order models. The equation below describes the first order model.
\[\begin{split}& H(X_{1},\ldots,X_{n}|\sum_{i=1}^{n}X_{i}=n-1)\\ &\leq H(X_{1}|\sum_{i=1}^{n}X_{i}=n-1)+\sum_{i=2}^{n}H(X_{i}|X_{1},\sum_{i=1}^{n}X_{i}=n-1)\\ &\leq H_{C,n-1}+(n-1)\mathbb{E}_{C,n-1}\left[H_{C,n-1-X_{1}}\right],\end{split} \tag{22}\]
where \(H_{C,i}\) and \(\mathbb{E}_{C,i}\) show the entropy of the children distribution conditioned on the number of children being limited to \(i\) and the expectation over this conditional distribution, respectively. The method used in (22) can provide us with a tighter upper bound, but has more computational complexity. We can also keep increasing the order in the same manner to get more accurate bounds at the cost of more computational complexity.
### _The Spanning Tree Model_
Unlike graphs, the models for generating random trees are very limited. The variety observed in random graph models can not be seen in trees. Different distributions are fit to real-life networks in order to create mathematical models that can capture the properties of these networks. For instance, the Watts-Strogatz model [40] exhibits a high clustering coefficient, which is consistent with many real-life networks. However, random models that are able to exhibit the same properties of real-life tree data structures do not exist. Because of this reason, we were motivated to define a new random tree generation model, which is also based on practice. As a result, we introduce and study the spanning tree model in this section.
In practical applications, the trees we work with are often a spanning tree of a network. An example of this can be seen in network routing. Routing tables are usually used to store the shortest paths from any node in a network to any other one. It can be shown that the routing table for a node in a network is essentially representing a rooted tree, with the root being the origin node. It can also be seen that if the network forms a connected graph, this rooted tree is a spanning tree of the underlying network. Therefore, selecting a random spanning tree of the underlying network can be a practical model for generating random trees. The combination of random graph generation with random spanning tree selectors for creating random trees is a novel approach. One of the key points of this combination is that both fields have been studied extensively and powerful tools exist in both of them. We have already discussed the variety of different models that exist for random graph generation. In addition to that, there are powerful ways of randomly selecting a spanning tree of a single graph [41, 42, 43]. The introduced model is very flexible in the sense that any existing random graph generation model can be combined with any existing random spanning tree selector in order to fit the real-life scenario that we want to simulate. In this section, we analyse the entropy of the trees that are created using this method.
Assume that we have a random graph source \(G\). The first step is to create a graph \(g\), according to the distribution of \(G\). \(g\) will then have a number of spanning trees (which can also be zero). We then choose one of the spanning trees of \(g\) as the generated tree. This can generally be done according to an arbitrary distribution, which can also be dependent on \(g\). We use the term spanning tree model to refer to this method of generating a random tree. We use \(H_{G}\) to show the entropy of the random graph generator used in the model, and \(H_{T}\) to show the entropy of the trees generated using the spanning tree model. Fig. 2 shows the steps of the spanning tree model.
Fig. 2: Steps of the Spanning Tree Model
The goal in this section is to find \(H_{T}\), assuming that \(H_{G}\) is known or can be easily calculated. We can write the following equation to find \(H_{T}\).
\[H_{T}=H_{G}+H(T|G)-H(G|T) \tag{23}\]
If we assume that the output trees are chosen uniformly from the spanning trees of the graph, then we can write the following equation for calculating \(H(T|G)\).
\[H(T|G) =\sum_{g\in G}p_{G}(g)H(T|G=g) \tag{24a}\] \[=\sum_{g\in G}p_{G}(g)\log_{2}s_{g}, \tag{24b}\]
where the sum is taken among all the possible graphs that can be generated using the random graph generator, \(p_{G}\) shows the probability distribution of the graph source, and \(s_{g}\) shows the number of spanning trees of graph \(g\). Notice that the summation needs to be taken among connected graphs of the source, as they need to have at least one spanning tree for \(\log_{2}s_{g}\) to be defined.
(23) and (24b) show that to calculate the entropy of the spanning tree model, we need knowledge about the underlying distribution for the network topology, as well as the number of spanning trees that exists for each graph. Unfortunately, the number of spanning trees of a graph, often called its tree-number, does not have a closed form representation in terms of its number of nodes and edges. There are methods such as Kirchhoff's theorem [44, Theorem. 13.1], that provide us with a way for calculating the tree-number of graphs, but they require knowledge about the graph's adjacency information. The spanning tree entropy also depends on the model that is used to generate the underlying graph topology, which is not always known. Because of these limitations, we will consider a specific class of random graphs, those created using the famous Erdos-Renyi model [12].
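The sketch below shows one way this instance of the spanning tree model can be put into code; the Aldous-Broder random walk is our choice of uniform spanning tree sampler for this illustration (the model itself allows any sampler).

```python
import random

def er_graph(n, p):
    """Adjacency sets of an Erdos-Renyi G(n, p) graph on nodes 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

def uniform_spanning_tree(adj):
    """Aldous-Broder: the first-entrance edges of a random walk on a connected
    graph form a uniformly random spanning tree; returns None if disconnected."""
    if not is_connected(adj):
        return None
    nodes = list(adj)
    cur = random.choice(nodes)
    visited, tree = {cur}, set()
    while len(visited) < len(nodes):
        nxt = random.choice(list(adj[cur]))
        if nxt not in visited:                 # first visit to nxt fixes its tree edge
            visited.add(nxt)
            tree.add((min(cur, nxt), max(cur, nxt)))
        cur = nxt
    return tree

# Spanning tree model: draw a graph, then (if one exists) one of its spanning trees
spanning_tree = uniform_spanning_tree(er_graph(100, 0.05))
```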
We now move on to study the entropy of the spanning trees of graphs that are created using the Erdos-Renyi model. In an Erdos-Renyi graph, each edge will be present with a probability of \(p\), independent of other edges. As the Erdos-Renyi model does not guarantee connectivity, there will be some graphs that do not even have a spanning tree. Additionally, it is known that there is no closed-form formula to calculate the number of spanning trees of a graph given its number of nodes and edges. Because of these reasons, we will find an upper bound to the entropy of the spanning trees of Erdos-Renyi graphs rather than its actual value.
We consider Erdos-Renyi graphs with \(n\) nodes, with parameter \(p\). It can easily be shown that the entropy of the graphs created using this model can be calculated using the following equation.
\[H_{G}=\binom{n}{2}H(p), \tag{25}\]
where
\[H(p)=-p\log_{2}p-(1-p)\log_{2}(1-p). \tag{26}\]
Additionally, we assume that once an ER graph is created, one of its spanning trees is chosen uniformly. If the graph does not have any spanning trees, then simply no tree is chosen. The next step is to therefore calculate an upper bound to \(H(T|G)\) based on this. Note that as the spanning tree is chosen uniformly after the graph is created, we can write
\[H(T|G)=\mathbb{E}\left[\log_{2}s(g)\right]. \tag{27}\]
As logarithm is a concave function, we can apply Jensen's inequality [45] to (27) and obtain the following inequality.
\[H(T|G)\leq\log_{2}\mathbb{E}\left[s(g)\right]. \tag{28}\]
Now, we note that based on Cayley's formula, there are \(n^{n-2}\) possible labelled trees on \(n\) nodes, each of which is a possible spanning tree of the underlying \(n\)-node graph. As the underlying graph is an ER graph, the probability that any given one of these trees is present in the graph, i.e., that all of its \(n-1\) edges are present, is simply \(p^{n-1}\). Therefore, we have
\[\mathbb{E}\left[s(g)\right]=p^{n-1}n^{n-2}. \tag{29}\]
By inserting (29) into (28), we get the following upper bound on \(H(T|G)\).
\[\begin{split} H(T|G)&\leq\log_{2}p^{n-1}n^{n-2}\\ &=(n-1)\log_{2}p+(n-2)\log_{2}n\end{split} \tag{30}\]
We now move on to calculate the term \(H(G|T)\) in (23). Notice that given a spanning tree of an ER graph with \(n\) nodes, we will know the status of \(n-1\) edges out of the possible \(\binom{n}{2}\) edges of the graph. Therefore, given a spanning tree of the graph, the remaining entropy is simply the entropy of the remaining \(\binom{n}{2}-(n-1)\) edges of an ER graph. Consequently, we can write the following equation.
\[\begin{split} H(G|T)&=\sum_{t}p_{T}(t)H(G|T=t)\\ &=\sum_{t}p_{T}(t)\left(\binom{n}{2}-(n-1)\right)H(p)\\ &=\left(\binom{n}{2}-(n-1)\right)H(p)\sum_{t}p_{T}(t)\\ &=\left(\binom{n}{2}-(n-1)\right)H(p),\end{split} \tag{31}\]
where the sum is taken over all possible spanning trees on the \(n\) nodes of the graph, and \(p_{T}\) shows the probability distribution of the trees.
Ultimately, we insert the results from (25), (30), and (31) into (23) to get the total upper bound on the entropy of the spanning trees of the ER model. This will provide us with the following equation after simplification.
\[H_{T}\leq(n-1)\left(H(p)+\log_{2}(np)\right)-\log_{2}n \tag{32}\]
Eq. (32) gives us an upper bound on the entropy of trees created using the ER Spanning Tree model. Fig. 3 illustrates this upper bound, and compares it with the entropy of the graph and the maximum entropy for trees. The maximum entropy is calculated using the fact that a uniform distribution maximises the entropy, and there exist \(n^{n-2}\) possible labelled trees on \(n\) nodes. The simulation is run for ER graphs with 100 nodes, and the entropy is plotted as a function of the ER parameter \(p\).
It can be seen that for larger values of \(p\), the estimated upper bound for the entropy is larger than the maximum entropy. However, (32) is providing us with a tighter upper bound when used for lower values of \(p\). The exact boundary for which our upper bound is providing a better bound compared to the maximum possible entropy can be calculated by solving the following equation.
\[(n-2)\log_{2}n=(n-1)\left(H(p)+\log_{2}(np)\right)-\log_{2}n \tag{33}\]
It can easily be checked that the value of \(p\) that satisfies (33) is \(p=0.5\). Therefore, our upper bound is working well for values of \(p\) less than \(0.5\).
Additionally, we performed a simulation for the regime of giant components for ER graphs. According to [46, Ch. 11.5], an ER graph has a connected component that grows with \(n\) (giant component) when \(p>1/(n-1)\). This is a regime which is of particular importance in this study, as it is more likely to have a connected graph in this regime. Therefore, we simulated the entropy upper bounds per node (divided by \(n\)) by setting \(p\) to the giant component regime threshold (\(1/(n-1)\)), and sweeping over \(n\). Fig. 4 illustrates the results. It can be seen that as \(n\) grows large, our estimate provides a much tighter upper bound on the entropy of the generated trees. By inserting \(p=1/(n-1)\) into (32), it can be seen that the entropy upper bound at the giant component regime threshold is \((n-2)\log_{2}n-(n-2)\log_{2}(n-2)=(n-2)\log_{2}\frac{n}{n-2}\).
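A short numerical check of this behaviour, assuming only the bound (32) and the maximum entropy \((n-2)\log_{2}n\), can reproduce the comparisons behind Figs. 3 and 4:

```python
import math

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bound_32(n, p):                            # right-hand side of (32)
    return (n - 1) * (binary_entropy(p) + math.log2(n * p)) - math.log2(n)

n = 100
max_entropy = (n - 2) * math.log2(n)           # uniform distribution over the n^(n-2) labelled trees
for p in (0.05, 0.1, 0.3, 0.5, 0.7):
    print(p, round(bound_32(n, p), 1), round(max_entropy, 1))   # the two coincide at p = 0.5

# Giant-component threshold p = 1/(n-1): the per-node bound vanishes as n grows
for n in (50, 200, 1000):
    print(n, bound_32(n, 1 / (n - 1)) / n)
```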
## III Universal Compression of Tree Structures
In this section, we will introduce universal compression algorithms for the models discussed in the previous section. Before doing so, we need to define what we mean by universality and optimality in the context of these methods.
**Optimality:** In the previous section, we calculated the entropies of different random tree models. It is shown by Shannon [7] that the entropy of a random variable provides a lower bound on the average code length that we can use to compress that random variable. Therefore, we are looking for compression algorithms whose average codeword length is close enough to the entropy of the random tree source at hand.
**Universality:** As seen in the previous section, each of the introduced models for generating random trees has specific parameters. For instance, Simply Generated Trees have their respective children distribution as their parameter. By looking for universal compression algorithms for specific families of random trees, we are essentially looking for compression algorithms that perform optimally regardless of the model parameter. For example, if we develop a universal compression algorithm for Simply Generated Trees, we want it to be able to optimally compress any SGT, regardless of the respective children distribution.
The idea of all existing universal compression algorithms is that, as the parameter of the distribution is unknown, the distribution needs to be somehow learned from the data. For instance, the renowned family of Lempel-Ziv [47] compression algorithms uses dictionaries to store and learn the most common patterns that can happen for the random variable at hand. However, this demands having a sequence of the random variable, so that the most common patterns can be learned. In this paper, however, we are interested in compressing single large trees, rather than a sequence of them. In this case, optimality translates into the average codeword length over all possible trees being close enough to the entropy of the source. Based on this, we give the following definitions for the optimality of the compression algorithm for different tree sources.
**Sources with a fixed number of nodes:** If we use \(E_{L,n}\) to show the expected codeword length for trees with \(n\) nodes and \(H_{n}\) to show the entropy of the tree source for trees with \(n\) nodes, our goal is for the compressor to satisfy
\[\lim_{n\rightarrow\infty}\frac{E_{L,n}}{H_{n}}=1, \tag{34}\]
to ensure that our compression algorithm is asymptotically optimal on single trees when \(n\) and consequently the need for compression grow.
**Sources with no bound on their number of nodes:** If we use \(L(t)\) to show the codeword length of the compressed tree, our goal is for the compressor to satisfy the following condition.
\[\mathbb{E}[L(t)]=H(T)+C, \tag{35}\]
where the expectation is taken over all possible trees in source \(T\), and \(C\) is a constant with respect to \(|T|\).
Fig. 3: Entropy upper bound for ER spanning trees for graphs with 100 nodes as a function of the ER parameter \(p\)
Fig. 4: Entropy per node upper bound and maximum entropy per node for ER spanning trees as a function of node number \(n\) in the giant component regime threshold
In order to achieve the conditions in (34) or (35) based on the model, and still have a universal compression algorithm, our main approach will be to decompose each single tree into a sequence of other random variables, and then apply existing universal compression algorithms to those sequences.
Generally speaking, there are numerous universal compression algorithms. The most famous of these algorithms are those designed by Lempel and Ziv in two papers published in 1977 and 1978, which are known as LZ77 [47] and LZ78 [48], respectively. These algorithms are proven to be asymptotically optimal for stationary and ergodic stochastic processes [49, Ch. 13]. Therefore, if a stochastic process of trees can be considered a stationary and ergodic process, then Lempel-Ziv algorithms can simply be applied to it to get an optimal compression. Another powerful variation of this family of compression algorithms is the Lempel-Ziv-Welch (LZW) algorithm [50], which also satisfies the same conditions. In the following sections, instead of using the term universal compression algorithm, we simply mention LZ. It must be noted by the reader that LZ can simply be replaced by any other universal compression algorithm that is able to optimally compress stationary and ergodic stochastic processes.
We start by introducing two tree traversal algorithms and their combination, which are shown to be effective for compressing trees from a uniform distribution. We then introduce a universal compression algorithm for Simply Generated Trees, and then move on to Erdos-Renyi random spanning trees. We conclude this section by having a brief discussion on general universal tree compression algorithms.
### _Compressing uniform tree sources_
In this section, we start by proposing two simple, yet effective, tree coding algorithms called Pit-Climbing (PC) and Tunnel-Digging (TD). We will then move on to introduce TreeExplorer, which is a combination of PC and TD. The methods presented in this section are based on our previous work published in [3].
#### III-A1 Pit-climbing algorithm
In this section, we introduce a novel tree structure coding algorithm that we call _pit-climbing_. We use this term because of the analogy between the proposed method, and a climber that has been trapped in a pit and wants to climb up.
**Ternary pit-climbing algorithm (TPC):** We start traversing the tree from the leftmost leaf. We log our tree traversal using three symbols: \(\uparrow\), \(\Uparrow\) and \(\downarrow\). Anytime that we are at a leaf, we take the only possible path, which is upwards. If we take an upward path at any point from an edge, we consider that edge and the subtree below it as deleted (or filled-in) from the original tree so that we do not explore it again. Additionally, we log this upward movement in our code. If we have moved to a node that we have never been to before, we log a \(\uparrow\) in the code. Otherwise, if we move upwards to a node that we have seen before, we log it with a \(\Uparrow\). When we reach a node that is not a leaf, we look at the leaves of the rooted subtree whose root is the node we are currently at. We then take the path downwards that falls into the leftmost leaf of that subtree. We log this entire fall with a single \(\downarrow\). We continue exploring the tree and logging the code in the same manner until we reach the root of the tree and there is no other edge to fall into.
We will clarify TPC with the following example.
**Example III.1**.: _Assume that we are given the rooted tree structure of Fig. 4, where the red node shows the root. The starting point of the algorithm is indicated, and the arrows show the path that PC takes. The orange, green, and blue arrows are used to show \(\downarrow\), \(\uparrow\), and \(\Uparrow\), respectively._
In source coding, usually a binary code is preferred over a ternary code, as most of our systems for storage and communication are binary-based. To transform TPC codes into binary, we look back at the definition of the symbols. We observe that we can never have consecutive \(\downarrow\)s. This is because whenever we fall, we fall down to a leaf, so we can never fall twice or more. We make use of this fact, and assign 0 to \(\downarrow\), and 00 to \(\Uparrow\). We also use 1 to represent \(\uparrow\). We call this new binary code for rooted tree structures simply _pit-climbing_ (PC). Even though this method of coding does not provide us with an instantaneous code, we claim that PC creates uniquely decodable codes. Theorem 1 proves this statement.
**Theorem 1**.: _Codes generated by pit-climbing are uniquely decodable._
Proof.: We use induction on the depth of the tree. Firstly, the code for a tree with a single node is uniquely decodable (\(\Downarrow\)). Next, assume that we know PC codes for all trees with a depth of \(k\) or less are uniquely decodable. For a tree with a depth of \(k+1\), we look at the subtrees of the children of the root. Based on the induction, we know that the PC code for all these subtrees are uniquely decodable. The PC code for the original tree is the concatenation of the PC codes of the subtrees, with a connector of \(\uparrow\downarrow\equiv 10\) for the first two subtrees, and \(\Uparrow\downarrow\equiv 000\) for all other subtrees. In case the root only has one child, the final code will be the code of the subtree rooted at the child, plus an additional \(\uparrow\equiv 1\). Therefore, in all of the cases, the PC code of the tree with a depth of \(k+1\) can be uniquely decoded.
Furthermore, we would like to investigate the length of the codewords generated by PC.
**Theorem 2**.: _The PC codeword length for a rooted tree structure with \(n\) nodes and \(l\) leaves is \(n+2l-3\) bits._
Proof.: Firstly, notice that each \(\downarrow\) in the TPC code corresponds to a leaf, as we always fall into every leaf except for the one we start the algorithm from exactly once. Therefore, we have \(l-1\) \(\downarrow\)s in the TPC code, which translates into \(l-1\) bits in the PC code. Additionally, we climb up each edge of the tree exactly once. Therefore, the number of \(\uparrow\)s and \(\Uparrow\)s in the TPC code is equal to the number of edges, which is \(n-1\). However, every \(\Uparrow\) will translate into 2 bits in the PC code. The number of \(\Uparrow\)s is equal to the number of \(\downarrow\)s, as anytime we fall from a node we will have to climb back up to it at some point. Therefore, the number of \(\Uparrow\)s is also \(l-1\), and we will have \(l-1\) additional bits when translating the TPC code into a PC code. Consequently, the total number of bits in the PC code will be \(l-1+n-1+l-1=n+2l-3\).
Fig. 4: Running TPC on a sample tree
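The traversal is straightforward to implement. The sketch below is written directly from the description above (one reading of it), not from the authors' implementation, and again assumes trees are stored as dictionaries of ordered child lists.

```python
def tpc_encode(children, root):
    """Ternary pit-climbing code as a list over {'UP', 'UUP', 'DOWN'}, standing
    for the single up-arrow, double up-arrow and down-arrow symbols."""
    parent = {c: u for u, kids in children.items() for c in kids}
    remaining = {v: list(kids) for v, kids in children.items()}

    def leftmost_leaf(v):                      # follow leftmost remaining edges down to a leaf
        while remaining.get(v):
            v = remaining[v][0]
        return v

    code, visited = [], set()
    cur = leftmost_leaf(root)
    visited.add(cur)
    while True:
        if not remaining.get(cur):             # a leaf of what is left of the tree: climb
            if cur == root:
                break
            p = parent[cur]
            remaining[p].remove(cur)           # climbing an edge deletes the explored subtree
            code.append('UP' if p not in visited else 'UUP')
            visited.add(p)
            cur = p
        else:                                  # otherwise fall, logging a single DOWN
            code.append('DOWN')
            cur = leftmost_leaf(cur)
            visited.add(cur)
    return code

def pc_encode(children, root):
    """Binary pit-climbing code: DOWN -> 0, UUP -> 00, UP -> 1."""
    table = {'DOWN': '0', 'UUP': '00', 'UP': '1'}
    return ''.join(table[s] for s in tpc_encode(children, root))

# A root with two leaf children (n = 3, l = 2) gives n + 2l - 3 = 4 bits
assert len(pc_encode({0: [1, 2], 1: [], 2: []}, 0)) == 4
```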
#### III-A2 Tunnel-digging algorithm
Based on Theorem 2, the PC code length will increase with the number of leaves. However, the number of rooted trees with \(l\) leaves does not necessarily increase with \(l\). Hence, there is no justified reason for having longer codes for trees with more leaves. As a result, another algorithm called _tunnel-digging_ is developed to tackle this problem. The PC algorithm is based on traversing a tree along its edges, up and down. Whereas, in TD, the aim is to traverse the tree in a horizontal manner. The name _tunnel-digging_ comes from seeing the traversal method of this algorithm as digging tunnels between nodes on the same depth.
**Ternary tunnel-digging algorithm (TTD):** We start with the leftmost child of the root, and start moving right to nodes with the same depth in the order of the nodes. For each node that we encounter, we log a \(\leftarrow\) if it is a leaf, and a \(\rightarrow\) otherwise. If at any point we have to move between two nodes that are not siblings, we use a \(\Rightarrow\) to show the transition (digging a tunnel!). Additionally, if at any point there are no more nodes on the right to move to, we move to the leftmost node on the level below, and we mark this transition again with a \(\Rightarrow\). We continue until all the leaves of the tree are logged in the code.
The following example illustrates running TTD on a sample rooted tree structure.
**Example III.2**.: _Assume that we are given the rooted tree structure of Fig. 4, where the red node shows the root. The starting point of the algorithm is indicated, and the arrows show the path that TTD takes. The blue, green, and orange arrows are used to show \(\leftarrow\), \(\rightarrow\), and \(\Rightarrow\), respectively._
To transform the TTD code into a binary code which will be called tunnel-digging (TD), we again use the properties of the TTD symbols. We notice that we can never have two consecutive \(\Rightarrow\)s, as there is always at least one node in between the dug tunnels. Therefore, we use \(0\) to represent \(\Rightarrow\) and \(00\) to represent \(\rightarrow\). Additionally, we use \(1\) to show \(\leftarrow\). Notice that we use a shorter code for leaf nodes, as the number of leaf nodes is expected to be higher than the number of non-leaf nodes when TD is used to code the tree. The proof that the code is uniquely decodable can be done in the same manner as Theorem 1 by replacing \(\downarrow\), \(\uparrow\), and \(\Uparrow\) with \(\rightarrow\), \(\leftarrow\), and \(\Rightarrow\), respectively. The following Theorem calculates the code length of tunnel-digging.
**Theorem 3**.: _The TD codeword length for a rooted tree structure with \(n\) nodes and \(l\) leaves is \(3n-2l-3\) bits._
Proof.: The number of \(\rightarrow\)s and \(\leftarrow\)s used in the code is exactly equal to the total number of nodes minus the root. However, we use two bits for each \(\rightarrow\), which shows non-leaf nodes. Therefore, the \(\rightarrow\)s and \(\leftarrow\)s use \(2(n-1)-l\) bits in total. Additionally, for every node that has at least one child (except for the root), we will have a \(\Rightarrow\) in the code. This would be equal to \(n-1-l\) bits. Thus, we will have \(3n-2l-3\) bits in total.
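A matching sketch of the tunnel-digging traversal, under the same representation assumptions as the pit-climbing sketch, is given below.

```python
from collections import deque

def ttd_encode(children, root):
    """Ternary tunnel-digging code over {'LEAF', 'INNER', 'TUNNEL'} (left arrow,
    right arrow, double right arrow): a level-order walk over all non-root nodes,
    with a TUNNEL logged between consecutive nodes that are not siblings."""
    parent = {c: u for u, kids in children.items() for c in kids}
    order, queue = [], deque([root])
    while queue:                       # breadth-first, left to right within each level
        u = queue.popleft()
        for v in children.get(u, []):
            order.append(v)
            queue.append(v)
    code, prev = [], None
    for v in order:
        if prev is not None and parent[v] != parent[prev]:
            code.append('TUNNEL')      # moving between non-siblings, or down a level
        code.append('LEAF' if not children.get(v) else 'INNER')
        prev = v
    return code

def td_encode(children, root):
    """Binary tunnel-digging code: TUNNEL -> 0, INNER -> 00, LEAF -> 1."""
    table = {'TUNNEL': '0', 'INNER': '00', 'LEAF': '1'}
    return ''.join(table[s] for s in ttd_encode(children, root))

# A path with 3 nodes (n = 3, l = 1) gives 3n - 2l - 3 = 4 bits
assert len(td_encode({0: [1], 1: [2], 2: []}, 0)) == 4
```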
#### III-A3 TreeExplorer
To see in which scenarios TD performs better than PC we can write
\[3n-2l-3<n+2l-3\Rightarrow l>n/2. \tag{36}\]
Based on Eq. (36), PC works better when the number of leaves is less than \(n/2\), and TD works better otherwise. They exhibit the same performance when the tree has exactly \(n/2\) leaves. Based on this, we propose the following coding technique for rooted tree structures.
**TreeExplorer:** Firstly, the number of leaves of the rooted tree structure is counted (\(l\)). If \(l<n/2\), the structure is coded with PC. The code is then prefixed with a 0 to specify that it has been coded using PC. Otherwise, the structure is coded using TD, and the code is prefixed with a 1.
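As a minimal sketch of this selection rule (reusing the `pc_encode` and `td_encode` functions from the sketches above, and assuming every node of the tree appears as a key of the child-list dictionary):

```python
def tree_explorer_encode(children, root):
    """Prefix '0' + PC code when fewer than half of the nodes are leaves,
    otherwise prefix '1' + TD code."""
    n = len(children)
    l = sum(1 for kids in children.values() if not kids)   # number of leaves
    if l < n / 2:
        return '0' + pc_encode(children, root)
    return '1' + td_encode(children, root)
```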
It can easily be shown that the codes created using TreeExplorer are uniquely decodable. This is because the first bit of the code uniquely determines the coding method, and we have already proven that both PC and TD are uniquely decodable. The following theorem shows the upper bound for the average code length of TreeExplorer.
**Theorem 4**.: _For any probability distribution on rooted tree structures with \(n\) nodes, the average code length of TreeExplorer is less than \(2n-2\)._
Proof.: Assume that the probability of the number of leaves (\(l\)) being less than \(n/2\) is \(q\). If \(L\) is the length of the code produced using TreeExplorer, we can write
\[\mathbb{E}[L] =q\mathop{\mathbb{E}}_{l<n/2}[n+2l-2]+(1-q)\mathop{\mathbb{E}}_{l \geq n/2}[3n-2l-2]\] \[<q(2n-2)+(1-q)(2n-2)\] \[=2n-2.\]
#### III-A4 Comparison with entropy
We compare the average code length of TreeExplorer with the entropy of the uniform source, which was calculated in section II-B. The result is plotted in Fig. 4. It can be seen that the performance of TreeExplorer is very close to the entropy of the source, which is the optimal compression limit.
Fig. 4: Running TTD on a sample tree
#### III-A5 Comparison with Adjacency list
The adjacency list representation is one of the most widely used methods for storing trees. This method uses \(2n\lceil\log_{2}n\rceil\) bits to represent a tree. Fig. 4 compares the performance of adjacency list with TreeExplorer. Notice that for coding a labeled tree using TreeExplorer, an additional \(n\lceil\log_{2}n\rceil\) bits are needed to list the node labels in the order of their appearance.
#### III-A6 The Newick format
The Newick format has been the standard for representing phylogenetic trees since its introduction back in 1986 [51]. In this method, trees are represented using parentheses and commas. This format starts from the root of the tree, and lists the children and subtrees of the root in a nested manner. We will not go into further details on how this method works. However, because of the similarities between this format and TreeExplorer, we compare the performance of these two methods.
Our calculations show an additional \(n+1\) bits and \(3n-4l+1\) bits in the Newick format compared to when TreeExplorer uses TD and PC, respectively. Fig. 4 compares the average codeword lengths of TreeExplorer and the Newick format for rooted trees of up to 50 nodes. It can be seen that the figure also confirms that TreeExplorer provides us with a shorter codeword length.
### _Universal compression of SGTs_
In accordance with what we discussed in the previous section, we will be looking for a way to decompose SGTs into a sequence of random variables that can be seen as a stationary and ergodic stochastic process. This will then allow us to simply run a universal compression algorithm on this sequence, and study its performance. We propose the following definition for an SGT sequence, which provides a method to create a sequence from a given SGT.
**Definition III.1** (SGT sequence).: _For a given SGT, its unique SGT sequence is defined as follows. We start from the root, and add the number of its children as the first random variable in the sequence. We then move on to its leftmost child, and add its number of children to the sequence. We continue traversing the tree in a breadth-first manner [52, Ch. 22.2], and keep adding the number of children of each node that we see to the sequence. When we are finished exploring the tree and have added the number of children of all the nodes, the SGT sequence is done._
Fig. 4 shows an SGT alongside its SGT sequence. It can easily be shown that an SGT sequence creates a one-to-one mapping between the trees and the sequences, and the tree is fully recoverable given the SGT sequence. We use the notation \(f_{SGT}(t)\) to show the SGT sequence of a tree \(t\). Additionally, note that the tree traversal can generally be done in any way desirable. In this definition, we have used a Breadth-First Search (BFS). However, any other traversal algorithm such as Depth-First Search [52, Ch. 22.3] can also be used. This is
of course as long as the same method is used when translating the sequence back into a tree.
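As a concrete illustration of Definition III.1, the sketch below (our own illustration, not the authors' code; representing each node as the list of its children is an assumption made for the example) builds the SGT sequence with a BFS traversal and reconstructs the tree from it.

```python
from collections import deque

def sgt_sequence(root):
    """Encode a rooted tree (each node is a list of its children) as its SGT
    sequence: the number of children of each node in BFS order."""
    seq, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        seq.append(len(node))
        queue.extend(node)
    return seq

def tree_from_sgt_sequence(seq):
    """Invert sgt_sequence: hand out the child counts to nodes in the same
    BFS order to rebuild the tree."""
    root, queue = [], deque()
    queue.append(root)
    for count in seq:
        node = queue.popleft()
        for _ in range(count):
            child = []
            node.append(child)
            queue.append(child)
    return root

# Example: a root with two children, the first of which has one child.
t = [[[]], []]
s = sgt_sequence(t)              # [2, 1, 0, 0]
assert tree_from_sgt_sequence(s) == t
```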
Based on the definition of SGTs, the number of children of each node is chosen in an i.i.d. manner from the same children distribution. Therefore, the SGT sequence of a tree is a sequence of \(n\) i.i.d. random variables, with \(n\) being the number of nodes of the tree. As i.i.d. sequences are both stationary and ergodic, it can be concluded that LZ can be used to compress SGT sequences.
First, let us consider a sequence of independently generated SGTs \(\{t_{1},t_{2},\ldots,t_{n}\}\). If we create a new sequence \(\{f_{SGT}(t_{1}),f_{SGT}(t_{2}),\ldots,f_{SGT}(t_{n})\}\), it can be seen that the tree sequence can be fully recovered given the SGT sequence. This is because each SGT is uniquely coded using its respective SGT sequence, and the boundaries between consecutive SGTs are always known, as the leaves are always marked by having zero children in an SGT. Therefore, the sequence \(\{f_{SGT}(t_{1}),f_{SGT}(t_{2}),\ldots,f_{SGT}(t_{n})\}\) is a sequence of i.i.d. random variables, and can be compressed optimally using LZ as \(n\rightarrow\infty\). Hence, the proposed compression algorithm is both universal and optimal for a sequence of SGTs from the same family.
After having considered sequences of trees, we now move on to study the performance of the proposed compression algorithm on single SGTs. For a single tree \(t\), the length of \(f_{SGT}(t)\) will also be finite. Therefore, there will be an inevitable redundancy in the LZ code of \(f_{SGT}(t)\). We will try to quantify this redundancy in the remainder of this section. To this end, we will use the upper bounds on the redundancy of LZ algorithms, calculated by Savari [53]. We will focus on the LZW algorithm for now, but the analysis for other members of the LZ family is very similar. Let \(R(t)\) denote the redundancy per symbol when compressing \(f_{SGT}(t)\) using LZW, \(\mathcal{R}\) the average redundancy per symbol over all possible trees from the SGT model at hand, and \(I(t)\) the information in \(t\) as per Shannon's definition of information [7]. Reference [54] provides an upper bound on the redundancy of a finite sequence compressed using LZW, which we use to get the following bound on the redundancy for one single tree \(t\) with \(n\) nodes (and therefore \(n\) symbols in its SGT sequence).
\[R(t)\leq\frac{I(t)}{n\ln n}\log_{2}\left(\frac{C\log_{2}e}{H_{C}}\right)+ \mathcal{O}(\frac{1}{\ln n}) \tag{37}\]
To find an upper bound on \(\mathcal{R}\), we need to take an average of (37) over all possible trees in the SGT model. To this end, we first note that for \(t\) with \(n\) nodes, we have \(I(t)\leq\log_{2}n\). If we use \(K\) to show \(\frac{1}{\ln 2}\log_{2}\left(\frac{C\log_{2}e}{H_{C}}\right)\), we can write the following upper bound on \(R(t)\) based on (37).
\[R(t)\leq\frac{K}{n}+\mathcal{O}(\frac{1}{\ln n}) \tag{38}\]
We now need to take the average of (38) over all possible numbers of nodes in the SGT model. For this purpose, we need to calculate the probability of a tree having exactly \(n\) nodes. This is challenging, and might not be possible in closed form. Therefore, we again find an upper bound for it. If we use \(p_{0}\) to denote \(P_{C}(c=0)\), and \(N\) as the random variable for the number of nodes in the trees generated by the SGT model, we can write the following equation.
\[p(N=n)\leq(1-p_{0})^{n} \tag{39}\]
The upper bound in (39) comes from the fact that, in order to reach \(n\) nodes, we need nodes of the tree to keep having at least one child, each such event occurring with probability \(1-p_{0}\). Additionally, we know that for \(n=1\), the probability is exactly \(p_{0}\), and \(\log_{2}C\) bits are used to code it in LZW. We also use the lower bound \(\ln x\geq 1-1/x\) in order to make the \(\mathcal{O}(1/\ln x)\) term calculable. Having this in mind and combining (38) and (39) gives us the following upper bound on \(\mathcal{R}\).
\[\mathcal{R} \leq p_{0}\log_{2}C+\sum_{i=2}^{\infty}p(N=i)\left(\frac{K}{i}+ \mathcal{O}(\frac{1}{\ln i})\right) \tag{40a}\] \[\leq p_{0}\log_{2}C+\sum_{i=2}^{\infty}(1-p_{0})^{i}\left(\frac{K }{i}+\frac{1}{1-1/i}\right)\] (40b) \[=K(p_{0}-\log_{2}p_{0}-1)-\log_{2}p_{0}+\frac{1}{p_{0}}\] (40c) \[+p_{0}(1+\log_{2}p_{0}+\log_{2}C)-2\]
(40c) provides us with an upper bound on the average redundancy of using the proposed compression algorithm on a single SGT. It can be seen that this redundancy depends on a number of factors, such as \(p_{0}\), \(H_{C}\), and the possible number of children in the SGT model. As this method only compresses single trees, we cannot expect the redundancy to be zero, as zero redundancy in universal compression algorithms only happens asymptotically as the number of samples tends to infinity. However, the derived upper bound gives an acceptable redundancy for many SGT models. Specifically, it can be seen that the redundancy matches the conditions of optimality defined by (35).
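For concreteness, the bound in (40c) is easy to evaluate numerically; the sketch below (added here for illustration, with an example distribution that is not from the paper) computes it, reading \(C\) as the size of the children alphabet and \(H_{C}\) as the entropy of \(P_{C}\) — our reading of the notation in the text.

```python
import math

def redundancy_bound(pc):
    """Evaluate the upper bound (40c) on the average per-symbol redundancy.

    `pc` maps each possible number of children to its probability; p0 = pc[0],
    C is taken as the alphabet size and H_C as the entropy of the distribution.
    """
    p0 = pc[0]
    C = len(pc)
    H_C = -sum(p * math.log2(p) for p in pc.values() if p > 0)
    K = math.log2(C * math.log2(math.e) / H_C) / math.log(2)
    return (K * (p0 - math.log2(p0) - 1) - math.log2(p0) + 1 / p0
            + p0 * (1 + math.log2(p0) + math.log2(C)) - 2)

# Example: each node has 0, 1 or 2 children with equal probability.
print(redundancy_bound({0: 1/3, 1: 1/3, 2: 1/3}))
```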
It was seen that the way the number of children of the nodes is listed in this compression algorithm is based on a BFS traversal of the tree nodes. However, notice that as long as it is known to both the encoder and the decoder how the tree was traversed, it does not matter which tree traversal method is used. This is because the numbers of children of the nodes are i.i.d. random variables, and therefore their order does not affect the optimality of the LZ algorithm. For instance, Depth-First Search, Pit-Climbing, Tunnel-Digging, or any other tree traversal algorithm that explores all the nodes of the tree could have also been used and we would still get the same results.
Fig. 4: An example SGT and its corresponding SGT sequence
### _Universal compression of the Erdos-Renyi Spanning Tree Model_
In this section, we will examine a similar approach to the previous section in order to compress trees generated using the Erdos-Renyi (ER) Spanning Tree Model. Our first aim is therefore to see if we can represent each tree using a sequence of random variables that satisfy stationarity and ergodicity, while using the properties of the way these trees have been generated.
Based on their definition, ER spanning trees are labeled. Additionally, the number of nodes in the graph and the tree must be predetermined (\(n\)). We propose the following approach for coding ER spanning trees. We divide our coding process into two steps: extracting certain bits from the adjacency matrix of the tree, and then compressing the extracted sequence of bits. We first start by describing the bit extraction process.
**Bit extraction:** We start by looking at the adjacency matrix of the tree, starting with the row corresponding to the connections of node 1. This row consists of \(n-1\) bits \((a_{1,2},a_{1,3},\ldots,a_{1,n})\), where each bit represents the existence of a connection between node 1 and the other nodes in the tree. We take all these bits during the extraction process. After this, we know all the connections of node 1 in the tree. Let us denote the number of these connections by the random variable \(C_{1}\). Each pair among these \(C_{1}\) edges removes the possibility of having one other edge in the tree. For instance, if node 1 is connected to both nodes \(i\) and \(j\), there cannot be a connection between \(i\) and \(j\) in the tree. Therefore, having the connections of node 1 removes the need for including \(\binom{C_{1}}{2}\) bits in the adjacency matrix of the tree, and we will know exactly which ones. After having described the connections of node 1, we go to the nodes to which node 1 is connected in the order of their labels. We continue by writing down the rows of the adjacency matrix corresponding to these nodes, without the connections that have been covered before or whose state is known due to the described edge elimination process. This way, the second node requires fewer bits than the first, since the impossibility of this node being connected to the other neighbours of node 1 is already known. We can carry on this way and explain the remaining connections of each node, until all the nodes in the tree are covered. Note that bit extraction can be applied to any simple graph, and not just ER graphs. We denote the process of bit extraction from a tree \(t\) by \(f(t)\).
After having extracted certain bits of the adjacency matrix using \(f\), we simply feed them into a universal compression algorithm, such as the Lempel-Ziv-Welch algorithm [50]. Fig. 4 summarizes our proposed compression technique.
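The following Python sketch (our illustration, not the authors' code; node labels start from 0 rather than 1) implements the bit-extraction step for a tree given by its adjacency matrix. The decoder can mirror the same traversal and eliminations as edges are revealed, so no extra information is needed.

```python
from collections import deque
from itertools import combinations

def extract_bits(A):
    """Sketch of the bit-extraction step f(t), for a tree given by its
    symmetric 0/1 adjacency matrix A (nodes labeled 0..n-1).

    Rows are visited in BFS order from node 0; entries already emitted, or
    ruled out because both endpoints neighbour an already-processed node,
    are skipped.
    """
    n = len(A)
    known = {}                       # unordered pair -> 0/1 once determined
    bits, order, seen = [], deque([0]), {0}
    while order:
        u = order.popleft()
        for v in range(n):           # emit every still-undetermined entry of row u
            if v != u and (min(u, v), max(u, v)) not in known:
                known[(min(u, v), max(u, v))] = A[u][v]
                bits.append(A[u][v])
        # u's neighbours are now all known; an edge between any two of them
        # would close a cycle through u, so those pairs are eliminated.
        nbrs = [v for v in range(n) if v != u and known[(min(u, v), max(u, v))]]
        for x, y in combinations(nbrs, 2):
            known.setdefault((x, y), 0)
        for v in nbrs:               # continue the traversal in label order
            if v not in seen:
                seen.add(v)
                order.append(v)
    return bits

# Star centred on node 0: only the 3 bits of node 0's row need to be extracted.
A = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
print(extract_bits(A))   # [1, 1, 1]
```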
We will now show that the redundancy of the proposed compression algorithm tends to zero as the trees grow large. We will perform the calculations for the case where LZ78 is used as the universal compression. However, the result is similar for other compression algorithms in the LZ family. To this end, we state and prove the following theorem.
**Theorem 5**.: _The redundancy of the proposed compression algorithm for trees created using the ER spanning tree model tends to zero as the number of nodes of the tree \(n\) grows large. The algorithm is also universal in the sense that it does not depend on the value of the ER parameter \(p\)._
Proof.: It is shown in [55] that for a binary sequence of length \(l\), the redundancy of LZ78, which we show with \(\mathscr{R}\), satisfies the following inequality.
\[\mathscr{R}\leq\frac{\ln\ln l}{\ln l}+\mathcal{O}\left(\frac{1}{\ln l}\right) \tag{41}\]
Therefore, if we use \(L(t)\) to show the length of \(f(t)\), we can use the following inequality to find the average redundancy of the proposed compression algorithm.
\[\mathbb{E}\left[\mathscr{R}\right]\leq\mathbb{E}\left[\frac{\ln\ln L(t)}{\ln L (t)}\right]+\mathbb{E}\left[\mathcal{O}\left(\frac{1}{\ln L(t)}\right)\right] \tag{42}\]
As it can be easily shown that \(\ln\ln x/\ln x\) is a concave function, we can use Jensen's inequality [45] for the first term in (42) and write
\[\mathbb{E}\left[\frac{\ln\ln L(t)}{\ln L(t)}\right]\leq\frac{\ln\ln\mathbb{E }[L(t)]}{\ln\mathbb{E}[L(t)]}. \tag{43}\]
Based on (43), we need to calculate \(\mathbb{E}[L(t)]\). Note that the bit extraction process induces an order on the nodes of the tree based on the traversal order. Looking closely, this is simply a Breadth-First Search traversal of the tree, by choosing node 1 as the root. Let us denote the number of connections left to be described for the \(i\)th node in this sequence by \(A_{i}\). Firstly, we know that \(A_{1}=n-1\). Going to each new node, the need for describing the connections to all the nodes that came before it and all their neighbours is removed. Since, at each step, we do not have any prior information about the connections to the remaining nodes, each bit can be considered an independent Bernoulli process just like in the original graph. Therefore, for \(i>1\) the expected value of unknown connections is \(n-1\), minus the \(i-1\) nodes that have come before and their expected number of edges, which is simply \(p\) times their expected number of connections. However, we must take into account that this way we are double counting all the prior nodes except for node 1, and this needs a correction term. For these reasons, we can write the following recursive equations for the expected values of \(A_{i}\)s.
\[\begin{cases}\mathbb{E}[A_{1}]=n-1\\ \mathbb{E}[A_{i}]=n-i-p\sum_{j=1}^{i-1}\mathbb{E}[A_{j}]+(i-2)p,&i>1\end{cases} \tag{44}\]
Eq. (44) will result in the following recursive equation for \(\mathbb{E}[A_{i}]\) for \(i>1\).
\[\begin{cases}\mathbb{E}[A_{1}]=n-1\\ \mathbb{E}[A_{i}]=(1-p)\mathbb{E}[A_{i-1}]-1+p,&i>1\end{cases} \tag{45}\]
Fig. 4: Proposed algorithm for compressing ER Spanning Trees
Solving (45) gives us the following solution.
\[\mathbb{E}\left[A_{i}\right]=\frac{\left(p-1\right)^{2}-\left(\left(n-2\right)p+ 1\right)\left(1-p\right)^{i}}{p(p-1)} \tag{46}\]
Eq. (46) gives us the following result for the expected number of bits to code.
\[\sum_{i=1}^{n}\mathbb{E}[A_{i}]=\frac{np^{2}-(1-p)^{n}-p\left(n(1-p)^{n}-2(1-p)^{n}+2\right)+1}{p^{2}} \tag{47}\]
If we use \(h(n)\) to show the term calculated in (47), we will only need to code a maximum of \(h(n)\) bits from the adjacency matrix of the tree on average. We can write the following equation by inserting (47) into (43).
\[\mathbb{E}[\frac{\ln\ln L(t)}{\ln L(t)}]\leq\frac{\ln\ln h(n)}{\ln h(n)} \tag{48}\]
For the \(\mathbb{E}\left[\mathcal{O}\left(\frac{1}{\ln L(t)}\right)\right]\) term in (42), we can simply replace it with a coefficient of \(\mathbb{E}[\frac{\ln\ln L(t)}{\ln L(t)}]\) and the inequality will still hold. Therefore, we will have the following upper bound on the average redundancy of the compression algorithm.
\[\mathbb{E}[\mathscr{R}]\leq K\frac{\ln\ln h(n)}{\ln h(n)}, \tag{49}\]
where \(K\) is a constant. It can easily be seen that
\[\lim_{n\rightarrow\infty}\frac{h(n)}{n}=1. \tag{50}\]
Therefore, we can write
\[\lim_{n\rightarrow\infty}\mathbb{E}[\mathscr{R}]=\lim_{n\rightarrow\infty}K \frac{\ln\ln h(n)}{\ln h(n)}=0. \tag{51}\]
Eq. (51) shows that the redundancy tends towards zero as \(n\) grows large. It can also be seen that it tends towards zero regardless of the value of \(p\) and the way that the random spanning trees were chosen. Therefore, it can be said that the proposed method is universal in the sense that it does not depend on the ER parameter or the random tree selection process.
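As a quick numerical check on the algebra in the proof, the closed forms (46) and (47) can be compared against direct iteration of the recursion (45); the sketch below (illustrative only, with arbitrary example values of \(n\) and \(p\)) does this.

```python
def iterate_recursion(n, p):
    """Iterate equation (45) for E[A_i], starting from E[A_1] = n - 1."""
    E = [n - 1.0]
    for _ in range(2, n + 1):
        E.append((1 - p) * E[-1] - 1 + p)
    return E

def closed_form_Ai(n, p, i):
    # Equation (46)
    return ((p - 1) ** 2 - ((n - 2) * p + 1) * (1 - p) ** i) / (p * (p - 1))

def h(n, p):
    # Equation (47): the expected total number of extracted bits
    return (n * p ** 2 - (1 - p) ** n
            - p * (n * (1 - p) ** n - 2 * (1 - p) ** n + 2) + 1) / p ** 2

n, p = 50, 0.3
E = iterate_recursion(n, p)
print(max(abs(E[i - 1] - closed_form_Ai(n, p, i)) for i in range(1, n + 1)))  # ~1e-13
print(abs(sum(E) - h(n, p)))                                                  # ~1e-12
```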
An interesting observation is that the proposed compression algorithm can be further generalized by generalizing the bit extraction process. For instance, the process does not necessarily need to start from node 1 as mentioned in the bit extraction procedure. Additionally, it can be seen that other traversal methods such as a DFS traversal would have worked too. The reason for that is the fact that as long as we know the ordering on how the nodes have been traversed, we can reconstruct the respective parts of the adjacency matrix and recover the graph. This gives us the freedom to choose a traversal method that suits the needs of our application better. This can help us in designing a query-preserving tree coding algorithm. In other words, the tree traversal algorithm which determines the ordering of the nodes can be chosen in a way that is able to provide answers to queries of a particular scenario. If we look closely at the proof of Theorem 5, it can be seen that the only importance of the bit extraction process is that it induces an order on the nodes, and then explores the rows of the adjacency matrix according to that order. Therefore, any traversal on the tree that induces an ordering on all the nodes of the tree can be used in the bit extraction process and we would still get the same results on the optimality of the algorithm. This includes both the traversal algorithm itself, and the choice for the node to start with. Some examples of other algorithms that can be used are any variations of the DFS (Preorder, Inorder, and Postorder) [52, Ch. 6], and the Best-First Search algorithm [56, p. 48].
## IV Conclusion
In this paper, we took a novel approach towards the entropy and compression of tree data structures. We started by looking at different random tree sources and analysing their complexity in terms of Shannon entropy. Uniform tree sources and Simply Generated Trees were studies first as existing models of random tree generation. We then moved on to introduce a new random tree generation algorithm that we call the spanning tree model. It was discussed that this model can simulate many of the scenarios that happen in practice. After the entropy of the general model was formulated, we introduced a subcategory of this model whose underlying network is generated using the ER model. This made us able to quantify the entropy of the the source in terms of the model parameters. Ultimately, having the entropy of each of the studied models, we moved on to the compression domain. Universal compression algorithms were introduced for all of the studied models, and it was proven that the redundancy of these algorithms tends to zero as these trees grow large. Future directions of research can include considering other random graph generators for the spanning tree model, and finding a more general tree compression algorithm that goes beyond the ER model.
## Acknowledgment
This work was supported by EPSRC grant number EP/T02612X/1. For the purpose of open access, the authors have applied a creative commons attribution (CC BY) licence (where permitted by UKRI, 'open government licence' or 'creative commons attribution no-derivatives (CC BY-ND) licence' may be stated instead) to any author accepted manuscript version arising. We also thank Moogsoft inc. for their support in this research project.
|
2309.16659 | The Eccentric Kozai-Lidov Mechanism as the Cause of Exocomet Transits of
KIC 8462852 | KIC 8462852 is a star in the Kepler field that exhibits almost unique
behaviour. The deep, irregular and aperiodic dips in its light curve have been
interpreted as the breakup of a large exocomet on a highly eccentric orbit
whose post-disruption material obscures the star. It is hypothesised that a
nearby M-dwarf, recently confirmed to be bound to the system, could be exciting
planetesimals in a source belt to high eccentricities if its orbit is highly
misaligned with the belt: an effect known as the 'Eccentric Kozai-Lidov
Mechanism'. To quantify how often this effect is expected to occur, this paper
presents a Monte Carlo model of wide binary stars with embedded, misaligned
planetesimal belts. These belts collisionally erode over time until they are
excited to high eccentricities on secular timescales by a companion star if its
orbit is sufficiently misaligned. The large planetesimals then produce an
observable dimming signature in the light curve for a set period of time which
may or may not overlap with similar events. The model finds that, for dimming
events that persist for 100 yr, the most likely companion stars are located at
$10^2 - 10^4$ au, the most likely belts are at $10^2-10^3$ au and the system
age is most likely to be $10^2 - 10^3$ Myr. However, the probability of
observing one or more stars exhibiting this phenomenon in the Kepler field is
$1.3 \times 10^{-3}$, such that it is unlikely this mechanism is driving the
observations of KIC 8462852. | Steven D. Young, Mark C. Wyatt | 2023-09-28T17:57:52Z | http://arxiv.org/abs/2309.16659v1 | # The Eccentric Kozai-Lidov Mechanism as the Cause of Exocomet Transits of KIC 8462852
###### Abstract
KIC 8462852 is a star in the Kepler field that exhibits almost unique behaviour. The deep, irregular and aperiodic dips in its light curve have been interpreted as the breakup of a large exocomet on a highly eccentric orbit whose post-disruption material obscures the star. It is hypothesised that a nearby M-dwarf, recently confirmed to be bound to the system, could be exciting planetesimals in a source belt to high eccentricities if its orbit is highly misaligned with the belt: an effect known as the 'Eccentric Kozai-Lidov Mechanism'. To quantify how often this effect is expected to occur, this paper presents a Monte Carlo model of wide binary stars with embedded, misaligned planetesimal belts. These belts collisionally erode over time until they are excited to high eccentricities on secular timescales by a companion star if its orbit is sufficiently misaligned. The large planetesimals then produce an observable dimming signature in the light curve for a set period of time which may or may not overlap with similar events. The model finds that, for dimming events that persist for 100 yr, the most likely companion stars are located at \(10^{2}-10^{4}\) au, the most likely belts are at \(10^{2}-10^{3}\) au and the system age is most likely to be \(10^{2}-10^{3}\) Myr. However, the probability of observing one or more stars exhibiting this phenomenon in the Kepler field is \(1.3\times 10^{-3}\), such that it is unlikely this mechanism is driving the observations of KIC 8462852.
keywords: Planets and satellites: dynamical evolution and stability - Comets: general - Kuiper belt: general
## 1 Introduction
Transits, whereby bodies in other systems are observed to pass in front of their host stars, have been used to great effect to explore the wealth of extrasolar planetary systems in the Galaxy (Borucki et al., 2010). The Kepler space telescope has used this technique to find over 2,600 exoplanets, some in the habitable zone, and characterise their radii and masses, discovering some of the most well known and dynamically interesting systems such as Kepler-223 (Mills et al., 2016). Planets are not the only objects to have been detected around other stars. Transits due to smaller bodies have also been found with Rappaport et al. (2018) finding evidence of comets around F stars using Kepler observations of their asymmetric transits. A dust cloud released from bodies forming a tail of debris can explain both the levels and asymmetry of the transits, and enables a mass estimate of the parent bodies.
One of these 'dipper' stars that has so far evaded explanation, however, is the main sequence F star KIC 8462852, also known as 'Boyajian's star' or 'Tabby's star'. Boyajian et al. (2016) found, using the Kepler light curves, that the star experienced irregularly shaped transits with depths up to 20%; these transits were aperiodic and lasted between 5 and 80 days. In addition to this, a level of secular dimming was detected but the exact amount is in dispute depending on the interpretation of archival data from photographic plates (Montet and Simon, 2016; Schaefer, 2016). Boyajian et al. (2016) considered many possibilities for the cause of the transits but came to the conclusion that the most consistent with the data was the passage of a family of exocomets transiting at about 0.5 au. These could result from the breakup of a single body greater than 100 km in size with a minimum mass of \(10^{-6}M_{\oplus}\). It has since been shown that a family of comets moving on similar orbits can reproduce the observed transits with about 700 objects with 10 km radii needed (Bodman and Quillen, 2016). An alternate hypothesis was put forward by Wright and Sigurdsson (2016) where the transits are caused by an artificial mega-structure, known as a 'Dyson sphere' or a 'Dyson swarm', though this requires the presence of extraterrestrial intelligence in the system.
Wyatt et al. (2018) extended the comet hypothesis by showing that the secular dimming could be caused by material distributed along a single elliptical orbit. Though they make no assumptions about the origin of this material, it fits well with the exocomet hypothesis where one large (\(>100\) km) body breaks up and the resultant material is spread around the progenitor's elliptical orbit. The constraints derived from the secular dimming give a transit distance between 0.05 and 0.6 au. The parent body for these comets would likely have come from a reservoir of debris left over from planet formation, like our own Kuiper belt, and was perturbed onto its current orbit. While most belts observed in other planetary systems typically exist at 10s to 100s of au from their host star, the lack of detection of an infrared excess around KIC 8462852 does not rule
out a cold belt at these distances (Thompson et al., 2016). Given that these exocomets are inferred to transit at between 0.05 and 0.6 au from the host star, the planetesimals causing these transits must have very high eccentricities (\(\sim 0.99\)), leaving the question: how did the parent body end up on such a highly elliptical orbit? One hypothesis originally proposed by Boyajian et al. (2016) is that the parent body could have evolved under the action of the Kozai-Lidov mechanism.
The Kozai-Lidov mechanism is a dynamical process first formulated by Kozai (1962) and Lidov (1962). It is a three body effect that occurs when the orbital planes of two bodies orbiting the same host star are highly misaligned. The two bodies then undergo oscillations in inclination and eccentricity as they exert a gravitational torque on each other. Kozai (1962) examined this effect in the context of the perturbation of Jupiter on an inclined comet. That study neglected the effect of Jupiter's eccentricity and found that the oscillations take place for mutual inclinations \(i\) in the range \(\cos^{2}(i)<3/5\), and derived a well defined relationship between the initial mutual inclination and the maximum eccentricity of the comet. In this case with a perturber on a circular orbit, the maximum eccentricity can only be appreciably large for initial mutual inclinations close to 90 degrees. Including the effects of a perturber's eccentricity leads to much more complicated behaviour; studies have shown that in this case extremely high eccentricities can be reached and the orbital plane of the perturbed body can flip from prograde to retrograde (Lithwick and Naoz, 2011). This behaviour can occur at high inclination and low eccentricity (HiLe) or low inclination and high eccentricity (LiHe) (Naoz, 2016) and is often chaotic (Li et al., 2014). Though eccentricities very close to 1 can theoretically be achieved, in reality the effect of General Relativity and/or tides becomes dominant once the body gets close enough to the host star (Naoz et al., 2013). The action of these effects is to cause a precession in the longitude of pericentre of the body's orbit which competes with that induced by the Kozai-Lidov mechanism, shutting it off if its perturbation is stronger. The dissipative effect of tides could then also act to circularise the orbit at a low pericentre and increase the timescale for the Kozai-Lidov evolution, essentially decoupling the bodies from each other. Indeed, this has been proposed as a formation mechanism of both hot Jupiters and close Kuiper belt binaries (Perets and Naoz, 2009; Naoz et al., 2010, 2012).
For the planetesimals in a belt around KIC 8462852 to undergo eccentricity oscillations from this mechanism, a perturber is needed. This could be an unseen planet in the system; however, it would have to have become significantly inclined to the planetesimal belt at some point in its life. Planets form out of the protoplanetary disc that evolves into a debris disc once the gas has dispersed, thus it is expected that debris discs and planets should be aligned and there are many systems where this is the case including our own Solar system. However, there are planetary systems where the planets have large mutual inclinations with respect to each other, such as \(\pi\) Men (Xuan and Wyatt, 2020). These are thought to form from dynamical instabilities where planets undergo close encounters and scatter each other to high inclinations. Thus, it is possible for there to exist systems with high mutual inclinations between planets and debris discs (as is actually seen in HD 106906 (Nguyen et al., 2021)), though close encounters that lead to inclinations high enough for the Kozai-Lidov mechanism may be highly unlikely. A more promising candidate for a misaligned perturber is a 0.4 \(M_{\odot}\) M dwarf seen with a small on-sky separation from KIC 8462852 in Keck AO images (Boyajian et al., 2016). It was hypothesised to be bound as it has a similar Gaia distance estimate to KIC 8462852 of about 450 parsecs (Gaia Collaboration et al., 2016). Follow-up observations by Pearce et al. (2021) show that the two stars have the same proper motion and are in fact bound with a projected separation of 878 \(\pm\) 8 au. Wide binaries such as this could potentially form through one of two pathways. The first is core fragmentation whereby the collapsing cloud of gas that the stars form from fragments into two large cores that form two stars (Goodwin et al., 2004; Fisher, 2004; Offner et al., 2010). The other mechanism is dynamical capture where stellar encounters within the birth cluster result in pairs of stars that formed separately becoming bound, whilst other stars are ejected, though this method is too inefficient to account for all binary stars (Kroupa and Burkert, 2001). Either way, it could have a random inclination to any planetesimal belt around KIC 8462852 and could potentially be highly inclined (Hale, 1994), causing Kozai-Lidov oscillations of small bodies which could explain the observations.
This paper aims to test how often the action of the Kozai-Lidov mechanism on a belt of planetesimals due to a wide binary companion can excite the largest planetesimals to high eccentricities. The derived occurrence rate can then be compared to the one potential detection in the Kepler field to see if the Kozai-Lidov mechanism is a likely explanation for the phenomenon. In section 2 the parameter space of the Kozai-Lidov mechanism for an eccentric perturber is explored to investigate what orientation a general planetesimal belt has to start with to reach low pericentres and the fraction of objects that reach them. This is examined through integrating the secular equations of motion and comparing the results with N-body simulations. Section 3 outlines a Monte Carlo model of binary systems in the Kepler field which is used to find the fraction of the systems that undergo Kozai-Lidov oscillations and for what fraction of their main sequence lifetimes they produce observable signatures. Section 4 details the results of this model for sensible system parameters, examining the most likely location of belts and companions in these systems. Section 5 illustrates the dependence of the results on the unknown parameters of the model and the choice of initial distributions as well as providing a discussion on the caveats of the model and section 6 presents our conclusions.
## 2 Parameter space exploration of the eccentric Kozai-Lidov mechanism
If we are to create a Monte Carlo model of the action of the Kozai mechanism on stars and their planetary systems in the Kepler field it is first necessary to examine how belts of planetesimals behave in the presence of an inclined companion star. Once this behaviour has been discerned, it can then be fed into the Monte Carlo model to produce a probability that the Kozai mechanism is causing the variability in the lightcurve of KIC 8462852. Specifically, the inclinations between the belt and companion star that allow low pericentres to be reached and the fraction of objects in such an inclined belt that reach a low enough 'threshold' pericentre to cause observations are needed for the Monte Carlo model.
This work is restricted to the action of wide binary companion stars on planetesimal belts: specifically we are considering the Kozai-Lidov mechanism in the case of an external massive perturber and an internal massless perturbed object which does not exert a torque on the perturber. There are four main variables in this problem which are all orbital elements of the perturbed object as the orbital elements of the perturber do not change with time. In our context these are a planetesimal and a companion star respectively
and hereafter referred to as such. The variables of the planetesimal's orbit are: the mutual inclination with respect to the companion star (\(i\)), the eccentricity (\(e\)), the longitude of ascending node as measured with respect to the plane of the binary (\(\Omega\)) and the longitude of pericentre (\(\omega\)). The basic setup of the problem is illustrated in figure 1. Whilst these are the only variables in the problem, there are also other, constant, parameters of the system that are important. For example, the masses of the two stars contribute to the _timescale_ of the effect, but not its amplitude. Likewise, the semi-major axes of the two orbits and the eccentricity of the companion star affect the timescale to first order, though it has been shown that they have second order effects on the amplitude of motion (Naoz et al., 2013).
There are two ways that the variation of these orbital elements can be explored: the secular equations of motion can be integrated numerically, or N-body integrations can be used to numerically integrate Newton's second law. The latter will be more accurate but also take a prohibitive amount of time and so the full exploration of parameter space will be undertaken with the secular equations and the results compared to N-body integrations.
### The Secular Equations
The Kozai-Lidov mechanism is a subset of the hierarchical three body problem. One comparatively massless planetesimal (\(m_{\rm pl}\)) orbits a massive host star \(M_{\rm*}\) which is also in a binary orbit with a companion star of mass \(M_{\rm c}\). The system is hierarchical because \(a_{\rm pl}\ll a_{\rm c}\). The Hamiltonian for the massless planetesimal \(m_{\rm pl}\) is approximately given by
\[H^{TP}\approx\frac{3}{8}\frac{GM_{\rm*}m_{\rm pl}}{a_{\rm c}}\left(\frac{a_{ \rm pl}}{a_{\rm c}}\right)^{2}\frac{1}{(1-e_{\rm c}^{2})^{3/2}}(F_{\rm quad}+ \epsilon F_{\rm oct}), \tag{1}\]
where
\[\epsilon=\frac{a_{\rm pl}}{a_{\rm c}}\frac{e_{\rm c}}{1-e_{\rm c}^{2}} \tag{2}\]
and \(F_{\rm quad}\) and \(F_{\rm oct}\) are the quadrupole and octupole contributions respectively. These are functions of the orbital elements and are listed in appendix A1. For all the following integrations of the secular equations we use a renormalised Hamiltonian which removes the prefactors in equation 1. This simply results in a renormalised time parameter \(\tau\) which is related to the true time t by
\[t=\frac{8M_{\rm*}a_{\rm c}^{3}(1-e_{\rm c}^{2})^{3/2}}{3m_{\rm c}a_{\rm pl}^{ 3}\Omega_{\rm*}}\tau, \tag{3}\]
where \(\Omega_{\rm*}\) is the angular velocity of \(m_{\rm pl}\) about \(M_{\rm*}\).
The Hamiltonian in equation 1 has been averaged over the longitudes of both the planetesimal and companion, expanded in the ratio \(a_{\rm pl}/a_{\rm c}\) and truncated after the octupole term. This approximation is equivalent to smearing the objects out over their orbits to form a wire whose density at some point is inversely proportional to the orbital velocity at that location; these wires then exert a gravitational torque on each other proportional to their mass (so the massless planetesimal 'wire' does not exert a torque on the companion). If the Hamiltonian is cut off at first order, such that only the quadrupole term is left, then the system is integrable: this is referred to as the standard Kozai-Lidov mechanism (hereafter referred to as the 'SKM'). This also arises if the companion is on a circular orbit such that \(e_{\rm c}\approx\epsilon\approx 0\). In the case of the SKM, the orbit of the planetesimal exhibits coupled oscillations in its inclination and eccentricity, becoming more eccentric and less inclined to the perturber before reversing, as illustrated in figure 2. The timescale for these oscillations to occur is given by (Liu et al., 2015)
\[t_{\rm quad}=5.3\left(\frac{a_{\rm pl}}{20{\rm au}}\right)^{-3/2} \left(\frac{M_{\rm*}}{1.43M_{\odot}}\right)^{1/2}\left(\frac{M_{\rm c}}{0.4M_{ \odot}}\right)^{-1}\left(\frac{a_{\rm c}}{1000{\rm au}}\right)^{3} \tag{4}\] \[(1-e_{\rm c}^{2})^{3/2}{\rm Myr}.\]
From this it can be seen that \(t_{\rm quad}\gg t_{\rm orb}\) as the system is hierarchical. Also, the closer the planetesimal is to the companion star (i.e. the smaller the ratio \(\frac{a_{\rm c}}{a_{\rm pl}}\)), the faster the oscillations occur. The reason for the coupling between eccentricity and inclination in this case is because the component of the planetesimal's angular momentum that is parallel to the companion's angular momentum \(J_{z}\propto\cos(i)\sqrt{1-e^{2}}\) is conserved (Kozai, 1962; Lidov, 1962).
If the perturber has an appreciable eccentricity then \(\epsilon\neq 0\) and the octupole terms in the expansion of the Hamiltonian become important; these terms can significantly change the overall dynamical behaviour of the system. Thus, the quantity \(\epsilon\) acts as the 'strength' of the octupole contribution and for these effects to be significant without the secular approximation breaking down it must lie in the range \(10^{-3}-10^{-1}\) (Naoz, 2016). This case is known as the eccentric Kozai-Lidov mechanism (hereafter referred to as the EKM). The timescale for these 'octupole order' effects is given by
\[t_{\rm oct}=\frac{t_{\rm quad}}{\epsilon^{1/2}}. \tag{5}\]
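These timescales are straightforward to evaluate; the sketch below (added here for illustration) codes equations (2), (4) and (5) directly, using the nominal values quoted in equation (4) together with an arbitrary companion eccentricity of our own choosing.

```python
def octupole_strength(a_pl, a_c, e_c):
    """Equation (2)."""
    return (a_pl / a_c) * e_c / (1 - e_c ** 2)

def t_quad_myr(a_pl, M_star, M_c, a_c, e_c):
    """Quadrupole (Kozai-Lidov) timescale of equation (4), in Myr.
    Semi-major axes in au, masses in solar masses."""
    return (5.3 * (a_pl / 20.0) ** -1.5 * (M_star / 1.43) ** 0.5
            * (M_c / 0.4) ** -1 * (a_c / 1000.0) ** 3 * (1 - e_c ** 2) ** 1.5)

def t_oct_myr(a_pl, M_star, M_c, a_c, e_c):
    """Octupole timescale of equation (5): t_quad / sqrt(epsilon)."""
    return (t_quad_myr(a_pl, M_star, M_c, a_c, e_c)
            / octupole_strength(a_pl, a_c, e_c) ** 0.5)

print(t_quad_myr(20, 1.43, 0.4, 1000, 0.3), t_oct_myr(20, 1.43, 0.4, 1000, 0.3))
```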
Using Hamilton's equations we can find the rate of change of the planetesimal's orbital elements with time which can then be integrated numerically and thus perform a parameter space exploration. These equations are listed in appendix A for both the SKM (\(\epsilon=0\)) and the EKM (\(\epsilon\neq 0\)).
### Parameter Space Exploration
The numerical integrator used for this analysis is the LSODA package (Hindmarsh, 2019; Petzold, 1983). It handles stiff and non-stiff differential equations using the BDF and Adams methods respectively, automatically detecting which is needed at each timestep. The timestep it uses is variable and is set to keep the relative and absolute error tolerances below a threshold value. For all of the following work, the error tolerance is set to \(10^{-11}\) in order to adequately capture the high eccentricities achieved (\(1-e\ll 1\)).
The general behaviour of a particle undergoing the eccentric Kozai mechanism for a specific set of initial conditions is shown in figure 3. The left panel shows the evolution of \(\cos(i)\) for the parameters noted in the caption. The inclination oscillates on a comparatively short timescale given roughly by \(t_{\rm quad}\) and is equivalent to evolution in the standard Kozai mechanism. This behaviour is modulated by the longer term orbital flips that happen on the comparatively longer octupole timescale \(t_{\rm oct}\). The right panel shows the eccentricity, plotted as \(1-e\), restricted to where the eccentricity is closest to 1 for clarity. It highlights the extreme eccentricities reached in this situation with the maximum being when \(1-e\sim 10^{-7}\), though it should be noted that other physical processes would prevent such a
high eccentricity from ever being reached (such as GR precession, sublimation, collision with the star). This set of initial conditions was used by Lithwick & Naoz (2011) and these results can be compared with figures 4 and 6 from their work.
For the Monte Carlo model outlined in section 3, we will need to know the inclinations between belts and companion stars that allow planetesimals in the belt to reach high eccentricities. As we will be dealing with eccentricities very close to one, we will instead examine the 'scaled pericentre' parameter defined to be
\[q^{\prime}=1-e_{\rm pl}=\frac{q_{\rm pl}}{a_{\rm pl}}, \tag{6}\]
which is the true pericentre of a planetesimal orbit scaled by the semi-major axis. The minimum scaled pericentre \(q^{\prime}\) that a planetesimal reaches will depend on its initial orbital elements: \(i_{0}\), \(\Omega_{0}\), \(\omega_{0}\) and \(e_{0}\). When considering belts of planetesimals, however, all objects in a belt will share the same initial inclination \(i_{0}\) and longitude of ascending node \(\Omega_{0}\) relative to a distant perturber as illustrated in figure 1: it is these two parameters that define the belt. Within the belt the planetesimals will have a distribution of initial eccentricities \(e_{0}\) and longitudes of pericentre \(\omega_{0}\). Therefore, in the
Figure 1: An example of the type of planetary system that might undergo Kozai-Lidov evolution. A host star (yellow) is orbited by a planetesimal (brown) in a belt of particles (light grey). Also in orbit around this system is a wide stellar companion (red) which has been drawn closer to the belt than expected in a hierarchical system for clarity. The inclination \(i\), longitude of pericentre \(\omega\) and the longitude of ascending node \(\Omega\) of the planetesimal’s orbit is shown to clarify their geometrical significance. The orange line represents the line of nodes, where the disc intersects the orbital plane of the companion, and the green line represents the semi-major axis of the planetesimal’s orbit which is in the plane of the disc. The black lines represent a 3D coordinate basis aligned with the major, minor and perpendicular axes of the elliptical orbit of the companion.
Figure 2: Evolution of the inclination (blue) and eccentricity (orange) of a test particle in the SKM scenario (\(e_{2}=\epsilon=0\)). The initial values of the orbital elements of the perturbed object are \(i_{0}=80^{\circ}\), \(e_{0}=0.05\), \(\omega_{0}=180^{\circ}\) and \(\Omega_{0}=0\).
context of examining how close to their host stars particles in a disc would be seen to get, it is necessary to find \(\min(q^{\prime}(i_{0},\Omega_{0};\epsilon))\). This is the minimum possible scaled pericentre that can be achieved by one of the particles in a belt defined by \(i_{0}\) and \(\Omega_{0}\) and is shown in figure 4. The value for each disc represents the minimum scaled pericentre found when doing 100 integrations with randomly distributed values of \(\omega_{0}\) and initial eccentricities taken from a Rayleigh distribution with peak 0.03. The Rayleigh distribution of eccentricities is motivated by observations of objects in the classical Kuiper belt and from debris disc scale heights (assuming \(e\sim I\)) (Sai et al., 2015; Han et al., 2022) as well as N-body simulations of mutual planetesimal scattering (Ida and Makino, 1992), though in our model a companion star is perturbing the disc so the characteristic eccentricity could be higher (Mustill and Wyatt, 2009). Each integration ran for a time of \(\tau=500\) and the orbital elements were recorded at \(10^{7}\) equally spaced intervals; the process was repeated for three values of \(\epsilon=[10^{-3},10^{-2},10^{-1}]\)
Figure 4 shows that there is only a weak dependence of \(\min(q(i_{0},\Omega_{0};\epsilon))\) on \(\Omega_{0}\) over the probed values of \(\epsilon\), whereas there is a strong dependence on \(i_{0}\). As \(\epsilon\) increases, the range of \(i_{0}\) over which it is possible to get very low scaled pericentres increases from a small window around \(90^{\circ}\) to a window that extends all the way down to \(45^{\circ}\) which matches with the simulations undertaken previously by O'Connor et al. (2021). Figure 4 shows that it is important to consider the EKM effects when modelling planetesimal belts in wide binaries as it widens the range of initial inclinations at which planetesimals can achieve low pericentres compared to the SKM case. In the SKM, to achieve a scaled pericentre \(q^{\prime}_{\rm crit}\), planetesimals must have initial inclinations greater than \(i_{\rm crit}\) where
\[\cos(l_{\rm crit})=\pm\sqrt{\frac{3}{5}(1-(1-q^{\prime}_{\rm crit})^{2})}. \tag{7}\]
This leads to a 'window' in initial inclination around \(90^{\circ}\) within which a planetesimal will reach scaled pericentres \(q^{\prime}<q^{\prime}_{\rm crit}\) and is given by
\[\Delta i_{0}\approx\frac{180}{\pi}\sqrt{\frac{24q^{\prime}_{\rm crit}}{5}}, \tag{8}\]
where \(\Delta i_{0}\) is in degrees and \(q^{\prime}_{\rm crit}\ll 1\) is assumed. Hence, to achieve \(q^{\prime}_{\rm crit}<10^{-4}\) in the EKM case (assuming \(\epsilon=10^{-1}\)) a planetesimal must have \(i_{0}\geq 45^{\circ}\) as can be seen from figure 4, but in the SKM case, using equation 8, a planetesimal must have \(i_{0}\geq 88.75^{\circ}\).
### Comparison with N-body Simulations
The validity of these results is examined with N-body integrations. The IAS15 integrator in rebound was used for the comparison (Rein and Spiegel, 2015; Rein and Liu, 2012). It uses a 15th order modified Runge-Kutta method and Gauss-Radau spacing and has a variable timestep to make sure the motion at pericentre is adequately captured when the orbit is highly eccentric and the particle is moving very fast. Rein and Spiegel (2015) show that it copes well with the extreme eccentricities achieved in the EKM up to \(e\sim 1-10^{-10}\) whilst maintaining an energy error of \(\sim\frac{10^{-16}}{1-e_{\rm max}}\).
The comparison is made with the results from integrating the secular equations and the results are plotted in figure 5. Simulations were run in which particles had various values of \(i_{0}\) representing belts of different inclinations. Due to the lack of dependence of the scaled pericentre on \(\Omega_{0}\) shown in figure 4, this was set to \(\Omega_{0}=0\) for these simulations. For each N-body integration the values of \(\omega_{0}\) and \(e_{0}\) were chosen such that they corresponded to those that gave the
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\epsilon\) & \(a_{1}\) / au & \(a_{2}\) / au & \(e_{2}\) \\ \hline
0.001 & 88.5 & 885 & 0.01 \\
0.01 & 87.6 & 885 & 0.1 \\
0.1 & 18.68 & 885 & 0.9 \\ \hline
\end{tabular}
\end{table} Table 1: The parameters used in the N-body integrations to achieve different octupole strengths.
Figure 3: Evolution of a test particle in the eccentric Kozai-Lidov mechanism with \(\epsilon=0.01\). Initial conditions were: \(I_{0}=72.5^{\circ}\), \(e_{0}=0.192\), \(\omega_{0}=0\), \(\Omega_{0}=\pi\). The left panel shows the time evolution of the cosine of inclination. The right panel shows the time evolution of the eccentricity, zooming in on where it gets very close to 1. Different ranges of normalised time (\(\tau\)) are used in each plot to highlight where the eccentricity is predicted to reach very extreme values (\(1-e\approx 10^{-7}\)).
lowest scaled pericentre in the secular integrations. The maximum eccentricity is then found for each simulation and compared to the same result found by integrating the secular equations. Each N-body integration was performed three times with different particle semi-major axes and a different eccentricity of the perturber. This is done so that the evolution can be followed for \(\epsilon\) values of 0.001, 0.01 and 0.1. The parameter values used to produce each octupole strength are listed in table 1 and were chosen to make sure that the test particle would not be captured by the companion due to a close approach or experience other forms of orbital evolution such as resonance (Naoz & Silk, 2014).
Figure 5 shows that the results obtained when solving the secular equations agree very well with those from the N-body simulation for the case \(\epsilon=0.001\), but that there is some disagreement with the other octupole strengths. This is probably due to the chaotic nature of the problem and the difficulty of lining up the parameter space in \(\omega_{0}\) and \(e_{0}\) exactly between secular integrations and N-body simulations. However, we note that the general behaviour, a severe drop in scaled pericentre, is still observed for a window of inclinations around \(90^{\circ}\). In fact, our work will only be interested in using scaled pericentres down to values of \(10^{-4}\) and to this level the N-body simulations and secular integrations show good agreement.
### Fraction of Belt Mass Excited to High Eccentricities
Arguably the most important parameter space exploration needed for the Monte Carlo model is the fraction of planetesimals in a belt that will reach low enough scaled pericentres to cause the transits seen in KIC 8462852 as a function of the mutual inclination between belt and companion. This is because, in the model, belts will have a wide variety of inclinations relative to their companion stars and it is therefore important to know not only whether or not it is possible for planetesimals to reach small pericentres, but also _how many_ of them reach these as it is not initially clear from the equations governing the secular evolution, and so we investigate it here.
Katz et al. (2011) provide a theoretical equation that relates the inclination \(i_{\rm crit}\) above which planetesimals reach 'small' scaled pericentres (the level of which is undefined) to the value of the octupole strength \(\epsilon\). From this one might theoretically assume that the fraction of objects in a belt reaching a threshold value of the scaled pericentre is a step function with its transition at \(i_{\rm crit}\), though this is not initially obvious. In order to investigate whether this is the case, we integrate the secular equations for 1000 particles with randomly distributed values of \(\Omega_{0}\) and \(\omega_{0}\) and a Rayleigh distribution of eccentricities centred on 0.03. This was done for a set of inclinations that are equally spaced in \(\log_{10}(90-{\rm i})\) and different values of \(\epsilon\). The fraction of orbits reaching a scaled pericentre less than \(10^{-2}\) (i.e., \(F(q^{\prime}<10^{-2})\)) is plotted in figure 6. The behaviour is roughly equivalent to a step function where, above some \(i_{\rm crit}\), all objects in a belt will reach the required threshold scaled pericentre \(q^{\prime}_{\rm crit}=10^{-2}\) and we fit the data with a \(\tan^{-1}\) formula of the form
\[F(q^{\prime}<10^{-2})=\frac{1}{2}-\frac{\tan^{-1}\left(\frac{i_{\rm mid}-i}{\sigma}\right)}{\pi}, \tag{9}\]
where \(i_{\rm mid}\) and \(\sigma\) are the parameters of the fit and \(i_{\rm mid}\) is the inclination at which 50% of all planetesimals in the belt reach scaled pericentres less than \(10^{-2}\) (i.e. \(F(q^{\prime}<10^{-2})=0.5\)). The fits and their comparison to the data are shown for a select sample of \(\epsilon\) values in figure 7.
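The fits themselves are routine; the sketch below (illustrative only, using synthetic placeholder data rather than the fractions measured from the secular integrations) shows how equation (9) can be fitted with scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

def step_model(i, i_mid, sigma):
    """Equation (9): fraction of orbits reaching q' < 1e-2 versus inclination."""
    return 0.5 - np.arctan((i_mid - i) / sigma) / np.pi

# Synthetic stand-in for the measured fractions.
incl = np.linspace(60.0, 90.0, 16)
frac = step_model(incl, 75.0, 1.5) + np.random.default_rng(0).normal(0.0, 0.01, incl.size)

popt, _ = curve_fit(step_model, incl, frac, p0=[75.0, 2.0])
print(popt)   # recovered [i_mid, sigma]
```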
The values of \(i_{\rm mid}\) for our fits are plotted as a function of \(\epsilon\) in figure 8. Comparing with the theoretical prediction from Katz et al. (2011) (blue curve) shows that the equation provides the correct functional form for the dependence on \(\epsilon\). However, the theoretical prediction is systematically offset towards higher inclinations which is due to the fact that this equation is not associated with a specific threshold value of the scaled pericentre, only that it is 'quite small'. It is expected that, by decreasing \(q^{\prime}_{\rm crit}\) by orders of magnitude, this systematic offset would be reduced. Figure 8 also shows the value of the critical inclination needed to reach a scaled pericentre of \(10^{-2}\), when solely considering the SKM case (orange dot-dashed line). This shows that, for \(\epsilon<10^{-3}\), the behaviour tends towards the standard Kozai-Lidov mechanism where the initial inclination needed to reach a maximum eccentricity of \(e_{\rm max}\) is given simply by equation 7.
Plotted in black in figure 8 is a fit to the values of \(i_{\rm mid}(\epsilon)\). A quadratic form is fitted, capped at the value expected from the standard Kozai-Lidov mechanism, with a best fit found to be
\[i_{\rm mid}=A_{1}\epsilon^{2}+B_{1}\epsilon+C_{1}, \tag{10}\]
where \(A_{1}=3237.4\), \(B_{1}=-723.5\), \(C_{1}=84.8\) and \(i_{\rm mid}\) is in degrees.
In addition to fitting a functional form for the fraction of planetesimals in a belt that reach a scaled pericentre of \(10^{-2}\), it is necessary to examine how many planetesimals reach other, smaller scaled pericentres. This is because belt objects in the Monte Carlo model will be required to reach a _physical_ pericentre to produce an observational
Figure 4: Minimum scaled pericentre as a function of initial inclination \(i_{0}\) and longitude of ascending node \(\Omega_{0}\). The colour bar shows the value of \(1-e_{\rm max}\). To eliminate the dependence on the angles, for each value of \(I_{0}\) and \(\Omega_{0}\) the maximum eccentricity was calculated using 100 randomly chosen values of \(\omega_{0}\) and a Rayleigh distribution of \(\epsilon_{0}\) centred on 0.03. From the 100 results the maximum achievable eccentricity was taken and plotted.
signature like Boyajian's star and as these belts will be at different radii this will translate into different scaled pericentres for each belt (see equation 6). Figure 9 illustrates the best fit values of \(i_{\rm mid}\) for simulations where particles were required to reach scaled pericentres of \(10^{-2}\), \(10^{-3}\) and \(10^{-4}\). The coefficients for the quadratic fit for the \(10^{-3}\) and \(10^{-4}\) cases are: \(A_{1}=4711.3\), \(B_{1}=-928.9\), \(C_{1}=90.0\) and \(A_{1}=4443.9\), \(B_{1}=-916.2\), \(C_{1}=90.7\) respectively.
### Summary
In order to run a Monte Carlo model of planetesimal belts in misaligned wide binary systems it is necessary to know how the belts behave in these environments. This section has shown that, due to the EKM, large eccentricities can be reached by belt particles if the misalignment between belt and companion star is large enough. It has also shown that, when this is the case, a large fraction of belt particles reach these small scaled pericentres and has produced equations for the fraction that reach \(q_{\rm crit}\) as a function of inclination and the octupole strength \(\epsilon\).
Figure 5: Comparison with N-body simulations of the maximum eccentricity for different initial inclinations. The lines are the results from integrating the secular equations and the points are found using the N-body simulations. All secular and N-body data is calculated using \(\Omega=0\) and the values of \(e_{0}\) and \(\omega_{0}\) for the N-body points match the corresponding values used in the integration of the secular equations. All data are floored at \(1-e_{\rm max}=10^{-7}\) as this is the error tolerance of the integrator.
Figure 8: The values of \(i_{\rm mid}\) from the tan\({}^{-1}\) fits to the curves in figure 6 as a function of \(\epsilon\) (scatter points). The theoretical prediction from Katz et al. (2011) is also included for comparison (blue curve). The inclination expected when solely considering the SKM is plotted as the dot-dashed orange line. The quadratic fit to the data capped at the SKM value is shown as the black line.
Figure 6: Fraction of planetesimal orbits that reach a scaled pericentre of at least \(10^{-2}\) as a function of \(i\) and \(\epsilon\). Each point corresponds to 1000 integrations of orbits with uniformly distributed values of \(\omega_{0}\) and \(\Omega_{0}\) and a Rayleigh distribution of \(e_{0}\) centred on 0.03.
Figure 7: The fraction of randomly distributed orbits reaching a scaled pericentre less than \(10^{-2}\) as a function of initial inclination for a select few values of \(\epsilon\) and the fitted tan\({}^{-1}\) functions as a comparison.
## 3 Monte Carlo model
### General Setup
The purpose of the Monte Carlo model is to find the expected occurrence rate of Boyajian-like stars \(\langle N_{\rm exp}\rangle\) which are defined to be those that will have had planetary material undergo Kozai-Lidov oscillations and migrate close to the star such that they are currently producing a visible signature in the form of deep, irregular, aperiodic exocomet transits. Comparing this occurrence rate to the one system in the Kepler field will yield a probability that the 'Kozai-Lidov induced eccentric exocomet' hypothesis is correct.
In the model, \(10^{8}\) stellar systems with planetesimal belts are generated, some fraction of which are binaries, whose values of the belt semi-major axis \(a_{\rm b}\), companion semi-major axis \(a_{\rm c}\), companion eccentricity \(e_{\rm c}\), host star mass \(M_{*}\) and companion star mass \(M_{\rm c}\) are drawn from distributions such that the population will accurately reflect the Kepler field. Some fraction \(f_{\rm reject}\) of them are rejected and cut from the sample as the EKM is prohibited from acting due to one of several physical reasons outlined in section 3.6. Every system in the model is assumed to have a belt of planetesimals around each component of the binary with semi-major axis \(a_{\rm b}\) and width \(\Delta a=\frac{1}{2}a_{\rm b}\). The objects in the belt are assumed to undergo a collisional cascade by which larger objects collide and fragment into smaller objects and the very smallest are blown out of the system by radiation pressure. The orbits of large planetesimals in the belt are assumed to evolve due to secular interactions with the binary companion and so can, depending on the inclination of their orbit relative to the binary, migrate to small pericentres. In order to reach the roughly sub au scales associated with the transits of KIC 8462852, we require particles to achieve a pericentre less than \(10^{-1}\)au and hence a scaled pericentre less than
\[q^{\prime}_{\rm crit}<\frac{0.1}{a_{\rm b}/\rm au}. \tag{11}\]
The presence of planetesimals at these small distances could result in an observational 'signature' like that for KIC 8462852 which is assumed to last for a set amount of time \(t_{\rm dur}\), whose true value is unknown and is therefore a free parameter of the model. The fraction of the system lifetime during which this light curve signature is observable, \(f_{\rm t}\), can be calculated for each system and the mean over all systems in the model \(\overline{f_{\rm t}}\) can then be found. Only some of the randomly oriented planetesimals' orbits will cross the line of sight and hence have the right geometry for their dust clouds to be observationally detectable from Earth; the probability that a planetesimal's orbit causes its enveloping dust cloud of radius \(R_{\rm c}\) to occult the stellar disc as seen from Earth is \(P_{\rm geo}\) and is given by
\[P_{\rm geo}=\frac{R_{*}+R_{\rm c}}{2q}\approx\frac{R_{*}}{2q}, \tag{12}\]
where it is assumed that \(R_{\rm c}\ll R_{*}\) and an average has been taken over all pericentre angles (Winn, 2010).
These quantities combine to form the expected probability for a single star to be seen to undergo this behaviour
\[p=(1-f_{\rm reject})\overline{f_{\rm t}}P_{\rm geo}, \tag{13}\]
such that the expected number of stars in the Kepler field seen to exhibit this phenomenon is
\[\langle N_{\rm exp}\rangle=1-(1-p)^{N_{\rm Kep}}\approx(1-f_{\rm reject})\overline{f_{\rm t}}P_{\rm geo}N_{\rm Kep}, \tag{14}\]
where the last relation holds if \(p\ll 1\).
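For concreteness, a minimal sketch of how equations 12-14 combine is given below; this is illustrative only, not the model code itself, and the input numbers (\(f_{\rm reject}\), \(\overline{f_{\rm t}}\), the pericentre \(q\)) are placeholders standing in for the quantities derived later in the text.

```python
# Illustrative sketch only: combining equations 12-14. The input values
# (f_reject, mean f_t, pericentre q) are placeholders standing in for the
# Monte Carlo outputs described later in the text.

R_SUN_AU = 4.65e-3                        # solar radius in au

def p_geo(r_star_au, q_au):
    """Equation 12 with the dust cloud assumed much smaller than the star."""
    return r_star_au / (2.0 * q_au)

def n_expected(f_reject, f_t_mean, p_geo_mean, n_kep=200_000):
    """Equations 13-14: probability of seeing at least one such star."""
    p = (1.0 - f_reject) * f_t_mean * p_geo_mean
    return 1.0 - (1.0 - p) ** n_kep       # ~ p * n_kep when p << 1

if __name__ == "__main__":
    print(n_expected(f_reject=0.986, f_t_mean=2.7e-4,
                     p_geo_mean=p_geo(R_SUN_AU, q_au=0.6)))
```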
### Finding \(f_{\rm t}\)
The observations of KIC 8462852 are consistent with being caused by the breakup of a large \(m_{\rm crit}\gtrsim 10^{-6}M_{\oplus}\) planetesimal. Therefore, within our model, we are only interested in the number of similarly sized objects in the belt at the time small pericentres are reached, \(N(m>m_{\rm crit};t=t_{\rm oct})\), as they will cause transits of similar depth to KIC 8462852; the rest of the objects in the belt are ignored. The fraction of these objects, \(F(q^{\prime}<q^{\prime}_{\rm crit})\), that reach small enough pericentres is found using the results of section 2.4 (figure 9), where we calculate values of \(F(q^{\prime}<q^{\prime}_{\rm crit})\) by interpolating between the values for \(q^{\prime}_{\rm crit}=10^{-2},10^{-3}\) and \(10^{-4}\). Those that reach \(q^{\prime}_{\rm crit}\) are assumed to produce an observable signature that lasts for \(t_{\rm dur}\) Myr, which is a free parameter. Hence, the total fraction of the main sequence lifetime during which transits could be observed is
\[f_{\rm t}=\frac{F(q^{\prime}<q^{\prime}_{\rm crit})N(m>m_{\rm crit};t=t_{\rm oct })t_{\rm dur}}{t_{\rm MS}}. \tag{15}\]
However, if the system has enough bodies more massive than \(m_{\rm crit}\) then the transits due to different objects will end up overlapping and eventually the transits will saturate. In this case the fraction of the lifetime where transits are observable is instead given by
\[f_{\rm t}=\frac{(t_{\rm oct,lower}-t_{\rm oct,upper})+t_{\rm dur}}{t_{\rm MS}}, \tag{16}\]
where the numerator represents the range of time for which planetary material from any part of the belt will be at small pericentres. This implicitly assumes that all the material that will migrate to small pericentres will do so on the first octupole cycle and will stay there for \(t_{\rm dur}\) until it is removed from the system. For each system in the model both the saturated and unsaturated values of \(f_{\rm t}\) are calculated and the smaller of the two is adopted as the value for that system.
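A minimal sketch of this bookkeeping (the function and variable names here are ours, not the paper's) is:

```python
# Sketch of equations 15 and 16: the unsaturated and saturated estimates of
# f_t are both evaluated for a system and the smaller of the two is adopted.

def f_t_unsaturated(frac_small_q, n_large, t_dur, t_ms):
    """Equation 15: F(q' < q'_crit) * N(m > m_crit; t = t_oct) * t_dur / t_MS."""
    return frac_small_q * n_large * t_dur / t_ms

def f_t_saturated(t_oct_lower, t_oct_upper, t_dur, t_ms):
    """Equation 16: transits overlap, so only the spread of octupole
    timescales across the belt (plus t_dur) sets the transiting duration."""
    return ((t_oct_lower - t_oct_upper) + t_dur) / t_ms

def f_t(frac_small_q, n_large, t_oct_lower, t_oct_upper, t_dur, t_ms):
    return min(f_t_unsaturated(frac_small_q, n_large, t_dur, t_ms),
               f_t_saturated(t_oct_lower, t_oct_upper, t_dur, t_ms))
```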
Figure 9: The best fit values of \(i_{\rm mid}\) from fitting arctan functions to the fractions of planetesimals reaching scaled pericentres less than \(10^{-2}\), \(10^{-3}\) and \(10^{-4}\) respectively. The black curve represents the theoretical result.
As can be seen from equations 15 and 16, in order to calculate \(f_{\rm t}\), it is necessary to know the main sequence lifetime of the system. This is taken from the stellar mass using the homology relation
\[t_{\rm MS}=\begin{cases}10000\ M_{*}^{-9/2}&\text{for $M_{*}<1.5\ M_{\odot}$}\\ 3630\ M_{*}^{-2}&\text{for $M_{*}>1.5\ M_{\odot}$}\end{cases} \tag{17}\]
where \(t_{\rm MS}\) is in Myr and \(M_{*}\) in \(M_{\odot}\). In the saturated case it is necessary to know the octupole timescale for the belt which is given by equation 5 but to illustrate the dependence on the orbital parameters of the problem, we rewrite it in the form given by Liu et al. (2015) and used by Metzger et al. (2017) as
\[t_{\rm oct}=40\left(\frac{M_{*}}{1.43M_{\odot}}\right)\left(\frac{0.4M_{\odot}}{M_{\rm c}}\right)\left(\frac{a_{\rm b}}{20{\rm au}}\right)^{-2}\left(\frac{a_{\rm c}}{1000{\rm au}}\right)^{7/2}\frac{(1-e_{\rm c}^{2})^{2}}{e_{\rm c}^{1/2}}{\rm Myr}. \tag{18}\]
The timescale for planetesimals in a disc at a radius \(a_{\rm b}\) to be excited to small enough pericentres is taken to be the value of \(t_{\rm oct}\) at the central disc radius; however, the upper and lower edges of the disc will have timescales of \(t_{\rm oct,upper}\) and \(t_{\rm oct,lower}\) respectively, which are given by replacing \(a_{\rm b}\) with \(a_{\rm upper}=a_{\rm b}+\Delta a/2\) and \(a_{\rm lower}=a_{\rm b}-\Delta a/2\) respectively in equation 18.
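The two timescales entering this calculation can be sketched as follows (illustrative only; masses in \(M_{\odot}\), semi-major axes in au, output in Myr):

```python
# Sketch of equations 17 and 18.

def t_main_sequence(m_star):
    """Equation 17: broken power-law main sequence lifetime in Myr."""
    return 10000.0 * m_star ** -4.5 if m_star < 1.5 else 3630.0 * m_star ** -2.0

def t_octupole(m_star, m_comp, a_belt, a_comp, e_comp):
    """Equation 18: octupole (EKM) timescale in Myr."""
    return (40.0 * (m_star / 1.43) * (0.4 / m_comp)
            * (a_belt / 20.0) ** -2 * (a_comp / 1000.0) ** 3.5
            * (1.0 - e_comp ** 2) ** 2 / e_comp ** 0.5)
```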
In the unsaturated case it is necessary to know the number of particles greater than a certain mass at the time the belt undergoes the EKM \(N(m>m_{\rm crit};t=t_{\rm oct})\). In order to do this the mass of the belt must be known and this requires a collisional model of the belt.
### Collisional Model
A population model for belts around main sequence sun-like stars that accounts for collisional evolution was developed by Wyatt et al. (2007) and its free parameters were constrained by comparing with the infrared emission detected from nearby stars (Sibthorpe et al., 2018). In this model, it is assumed that all stars are born with a planetesimal belt whose masses \(M_{\rm b}\) are drawn from a \(\log\)-normal distribution centred on \(M_{\rm mid}\) which is a free parameter. These belts orbit a host star of mass \(M_{*}\) at semi-major axis \(a_{\rm b}\) and have a blackbody radius \(R_{\rm bb}\), drawn from a power law distribution with exponent \(\gamma\) within the range \(1<R_{\rm bb}/{\rm au}<1000\), i.e.
\[P(R_{\rm bb})\propto\begin{cases}R_{\rm bb}^{\gamma}&1<R_{\rm bb}/{\rm au}<1000 \\ 0&\text{Otherwise.}\end{cases} \tag{19}\]
In this model these belts are assumed to undergo collisional evolution where large bodies that have been stirred onto crossing orbits will collide and catastrophically disrupt to form smaller bodies. The planetesimals have a diameter D which varies between the maximum size \(D_{\rm c}\) which is set by planet formation processes when the system is born, and the blowout size \(D_{\rm bl}\) at which radiation pressure puts dust grains onto unbound orbits. Planetesimals in the belt are assumed to have a size distribution of the form
\[n(D)=KD^{-\alpha}, \tag{20}\]
where \(\alpha\) is 3.5 in an infinite collisional cascade (Dohnanyi, 1969) and K is a normalisation constant. Assuming that the mass is the only significant time variable quantity, then the disc mass evolves according to
\[M=\frac{M(0)}{1+t/t_{\rm c}(0)}, \tag{21}\]
where \(t_{\rm c}(0)\) is the initial collisional timescale of the largest bodies in the belt. Assuming that particles have a Rayleigh distribution of eccentricities and inclinations with \(\langle e\rangle=\langle I\rangle\), and that the fractional size of an object that will catastrophically destroy a planetesimal satisfies \(X_{\rm c}\ll 1\), Wyatt et al. (2007) find that the mass of a disc at times \(t_{\rm age}\gg t_{\rm c}(0)\) is given by
\[M=1.4\times 10^{-9}r^{13/3}(dr/r)D_{\rm c}Q_{\rm D}^{*\,5/6}\langle e\rangle^{-5/3}M_{*}^{-4/3}t_{\rm age}^{-1}, \tag{22}\]
where \(Q_{\rm D}^{*}\) is the dispersal threshold of a planetesimal, \(\langle e\rangle\) is the peak of the distribution of eccentricities, \(dr\) is the width of the belt, \(D_{\rm c}\) is the maximum size of planetesimal and \(t_{\rm age}\) is the age of the system. This can be expressed more simply as
\[M=M_{*}^{-4/3}t_{\rm age}^{-1}r^{13/3}M_{\rm mid}A/B, \tag{23}\]
where \(A=D_{\rm c}^{1/2}Q_{\rm D}^{*\,5/6}\langle e\rangle^{-5/3}\) and \(B=D_{\rm c}^{-1/2}M_{\rm mid}\). A similar equation that also depends on A and B can be found for the fractional luminosity of these discs (assuming black body emission) and the population model was compared to observations of fractional excesses of nearby systems by Sibthorpe et al. (2018). This enabled the parameters A, B and \(\gamma\) to be well constrained, albeit with some degeneracy, since varying \(B\) changes the initial fractional luminosity distribution that belts are born with and varying \(A\) changes the fractional luminosity distribution at late times. Sibthorpe et al. (2018) find best fit values of \(A=5.5\times 10^{8}{\rm km}^{1/2}{\rm J}^{5/6}{\rm kg}^{-5/6}\), \(B=0.1M_{\oplus}{\rm km}^{-1/2}\) and \(\gamma=-1.7\); these values of A and B are used in equation 23 for the masses of our belts, and the value of \(\gamma\) is the exponent in our power law distribution of belt radii.
Using the model of Wyatt et al. (2007) with the above best fit values, the corresponding total mass in the belt \(M_{\rm bb}\) at the time when the EKM excites planetesimals to small pericentres, \(t_{\rm oct}\), can be found
\[M_{\rm bb}=1.75\times 10^{-7}R_{\rm bb}^{13/3}t_{\rm oct}^{-1}M_{\rm mid}, \tag{24}\]
where \(M_{\rm bb}\) and \(M_{\rm mid}\) are in \(M_{\oplus}\), \(R_{\rm bb}\) is the blackbody radius of the belt in au and \(t_{\rm oct}\) is in Myr. The population model of Sibthorpe et al. (2018) was fitted to the distribution of infrared excesses of nearby stars and hence constrains the distribution of temperatures of discs in the population, which are assumed to emit like a blackbody of the temperature appropriate for their radius. This is why the blackbody radius is used in equation 24 and the belt mass is correct assuming blackbody emission. However, since dust grains emit inefficiently in a manner dependent on their size and composition (Krivov et al., 2006), discs are hotter than expected for their radius, which means the distribution of these disc radii is likely to be different to that of their blackbody radius. Pawellek and Krivov (2015) found that the blackbody radius of a debris disc \(R_{\rm bb}\) derived from fitting SEDs does not exactly match the physical radius from resolved millimetre images \(R_{\rm mm}\), which we identify with \(a_{\rm b}\), but instead differs by a factor \(R_{\rm mm}=\Gamma R_{\rm bb}\) which depends on the luminosity of the disc hosting star. As the belt's radius is increased by a factor of \(\Gamma\), the mass must be increased by a factor of \(\Gamma^{2}\) in order to maintain the same distribution of fractional luminosities. This is equivalent to the argument that the cross-sectional area has decreased by a factor \(Q^{-1}\), where \(Q\) is the absorption efficiency of dust particles averaged over
the dust temperature; this is assumed to be constant and equivalent to \(\Gamma^{-2}\). Thus, the true maximum mass of belts in this model is given by
\[M_{\rm max}=1.75\times 10^{-7}\Gamma^{2}R_{\rm bb}^{13/3}t_{\rm oct}^{-1}M_{\rm mid}. \tag{25}\]
The most recent analysis shows that the best fitting functional form of \(\Gamma\) is given by (Pawellek et al., 2021)
\[\Gamma=2.92\left(\frac{L_{\rm*}}{L_{\odot}}\right)^{-0.13}, \tag{26}\]
and, for this model, we follow the methodology of Pearce et al. (2022) which uses equation 26, capped at a maximum value of 4, to convert \(R_{\rm bb}\) to \(R_{\rm mm}\). However, in order to make use of equation 26, the luminosity of each star in the sample must be known and hence it is assumed that the sample stars follow the power law Mass-Luminosity relation given in Eker et al. (2015) and expanded upon in Eker et al. (2018).
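A hedged sketch of this conversion is given below; the capped \(\Gamma\) follows equation 26, while the luminosity function is a crude stand-in for the Eker et al. relations and is labelled as such.

```python
# Sketch of the blackbody-to-true radius conversion (equation 26, capped at 4).

def gamma_factor(l_star):
    """Ratio of resolved (mm) radius to blackbody radius, capped at 4."""
    return min(2.92 * l_star ** -0.13, 4.0)

def belt_true_radius(r_bb, l_star):
    """a_b = Gamma * R_bb, with radii in au and luminosity in L_sun."""
    return gamma_factor(l_star) * r_bb

def luminosity_placeholder(m_star):
    """Crude L ~ M^4 stand-in; the model uses the Eker et al. (2018) relations."""
    return m_star ** 4
```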
Equation 25 is only valid at \(t\gg t_{\rm coll}\), i.e. at times greater than the collisional lifetime of objects in the belt. At earlier times, the belt has not begun to collisionally deplete and no small dust has been produced and blown out of the system by radiation pressure. Thus, at these early times, belts will retain their initial mass \(M_{\rm init}=M_{\rm mid}\) and so we adopt the following formalism for the mass of belts at a time \(t_{\rm oct}\)
\[M_{\rm b}=\min(M_{\rm max},\,M_{\rm mid}). \tag{27}\]
Using this formalism for the mass of the belt, the number of objects with masses greater than \(m_{\rm crit}\) at \(t_{\rm oct}\), \(N(m>m_{\rm crit};t=t_{\rm oct})\), can be found. Using equation 20 we can write the number of objects per unit belt mass with a mass between \(m\) and \(m+dm\) as
\[\frac{n(m)}{M_{\rm b}}=\frac{1}{6}m_{\rm max}^{-1/6}m^{-11/6}, \tag{28}\]
where \(m_{\rm max}\) is the mass of the largest object, of diameter \(D_{\rm c}\).
\[n_{\rm c}^{\prime}=\frac{N(m>m_{\rm crit})}{M_{\rm b}}=\frac{1}{5}\left[(m_{ \rm max}m_{\rm crit}^{5})^{-1/6}-m_{\rm max}^{-1}\right], \tag{29}\]
where \(m_{\rm crit}\) and \(m_{\rm max}\) are in \(M_{\oplus}\).
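This counting of large bodies can be sketched as below (masses in \(M_{\oplus}\); the function names are ours):

```python
# Sketch of equations 27-29: belt mass capped at M_mid, and the number of
# bodies above m_crit implied by the m^(-11/6) mass distribution.

def belt_mass(m_collisional_max, m_mid):
    """Equation 27; m_collisional_max is M_max from equation 25."""
    return min(m_collisional_max, m_mid)

def n_per_unit_mass(m_crit, m_max):
    """Equation 29: bodies with m > m_crit per unit belt mass.

    Here m_max is the mass of the largest planetesimal (diameter D_c)."""
    return 0.2 * ((m_max * m_crit ** 5) ** (-1.0 / 6.0) - 1.0 / m_max)

def n_large_bodies(m_belt, m_crit, m_max):
    """N(m > m_crit; t = t_oct) = n_c' * M_b."""
    return n_per_unit_mass(m_crit, m_max) * m_belt
```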
### Incorporating the Collisional Model
Now that we have a collisional model for the belt mass we can return to our formalism for \(f_{\rm t}\) and elucidate its dependence on the physical variables of the system and the different regimes it can lie in. Taking the simpler case, in the saturated regime, we can substitute equation 18 into equation 16 replacing \(a_{\rm b}\) with \(a_{\rm b}+\Delta a_{\rm b}/2\) and \(a_{\rm b}-\Delta a_{\rm b}/2\) for \(t_{\rm oct,upper}\) and \(t_{\rm oct,lower}\) respectively. This leads to the following equation for \(f_{\rm t}\)
\[f_{\rm t}=\frac{1280}{9t_{\rm MS}}\left(\frac{M_{\rm*}}{1.43}\right)\left( \frac{0.4}{M_{\rm c}}\right)\left(\frac{a_{\rm c}}{1000}\right)^{7/2}\frac{(1- e_{\rm c}^{2})^{2}}{e_{\rm c}^{1/2}}\left(\frac{a_{\rm b,mid}}{20}\right)^{-2}+ \frac{t_{\rm dur}}{t_{\rm MS}}. \tag{30}\]
In the unsaturated case, assuming the belt mass has not been capped at its upper limit of \(M_{\rm mid}\), we can substitute equations 25 and 29 into equation 15. This yields the following for \(f_{\rm t}\)
\[f_{\rm t}=1.1\times 10^{-10}\frac{n_{\rm c}^{\prime}t_{\rm dur}\Gamma^{-7/3}F(q^{\prime}<q_{\rm crit}^{\prime})}{t_{\rm MS}}\left(\frac{1.43}{M_{*}}\right)\left(\frac{M_{\rm c}}{0.4}\right)\left(\frac{a_{\rm c}}{1000}\right)^{-7/2}\frac{e_{\rm c}^{1/2}}{(1-e_{\rm c}^{2})^{2}}a_{\rm b,mid}^{19/3}. \tag{31}\]
### Input Distributions
Having developed a model that will calculate the expected number of Boyajian-like stars in the Kepler field, it is important that the distributions of the input parameters also match observations to give a realistic output. Section 3.5.1 contains the stellar mass distribution, section 3.5.2 the belt radius distribution, section 3.5.3 the binary semi-major axis distribution and 3.5.4 the binary eccentricity distribution.
#### 3.5.1 Stellar Masses
One important property of stars in the model is their mass, since both the timescale for the EKM interaction and the main sequence lifetime of the system depend on it, and both of these affect \(f_{\rm t}\). Higher mass stars have much shorter lifetimes than lower mass stars, so there will be less opportunity for their discs to undergo Kozai-Lidov oscillations before the stars end their lives, though those that do spend a greater fraction of their lifetime doing so than an equivalent lower mass star. In order to compare our results with the Kepler field we use the observed mass distribution for this set of \(\sim 200,000\) stars. The mass distribution of the Kepler field from which the masses of the primary stars, \(M_{*}\), are drawn is shown in figure 10. For the secondary stars, we instead draw masses, \(M_{\rm c}\), uniformly at random between 0 and \(M_{*}\) for each binary pair. Figure 10 shows the resultant total mass distribution, which is different from that of the Kepler field. Though different, the primary star masses follow the Kepler distribution and the secondary stars are mostly sub-solar M-dwarfs which might not have been resolved or detected by Kepler (as was the case for KIC 8462852). It is possible to make the total distribution of masses identical to the Kepler distribution by picking both primary and secondary masses from it, but this neither produces a uniform distribution of mass ratios nor is consistent with observations of binary stars (Raghavan et al., 2010).
#### 3.5.2 Belt Semi-Major Axis
Planetesimal belts can have a range of radii as can be seen from our own system, with belts at \(\sim 3\) au and \(\sim 30\) au, whilst exoplanetary systems have been found to host belts that are quite massive and can extend to hundreds of au (Matthews et al., 2010) and this range must be incorporated into the model. As an equation for the mass of belts was used from Sibthorpe et al. (2018) which assumed a power law distribution of debris disc radii, the same radius distribution must also be used here for consistency. The power law exponent (equation 19), whose best fit value was found to be -1.7, cannot be altered without also altering the best fit values for A and B in equation 23 in a consistent manner which is beyond the scope of this work.
The best fit value of this exponent is such that there are more belts at small radii than large; this is because many belts at small radii were needed in Sibthorpe et al. (2018) to account for the fact that only 20% of stars had an infrared excess. As every star was assumed to host a belt in this analysis, most of the population had to have close-in belts that would collisionally deplete fast enough such that most stars would have no detectable excess from a belt, and this is reflected in the initial distribution of blackbody radii shown in figure 11.
#### 3.5.3 Wide Binary Semi-Major Axis
Around 50% of solar-like stars in the local galaxy are gravitationally bound to other stars (Duchene and Kraus, 2013; Moe and Di Stefano, 2017; Duquennoy and Mayor, 1991; Raghavan et al., 2010). The most common configuration is a binary pair, whose semi-major axes span a broad range from close (1-10s of au) to wide (100s to 1000s of au), though higher order hierarchical systems such as triples and quadruples also exist. Despite the obvious hindrance of the gravitational pull of a second body, multiple systems seem to be remarkably resilient when it comes to planet formation. Planets have been found both orbiting both stars of a close pair (P-type/circumbinary), e.g. Kepler-16 (Doyle et al., 2011), and around one star of a wide pair (S-type/wide binary planet), e.g. Kepler-444A (Campante et al., 2015). In addition to planets, planet-forming discs have also been detected around binary stars (Kennedy et al., 2012). Therefore, it can be expected that, especially in wide binary systems where the companion star is far away and its perturbation smaller, planetesimal belts will still exist around each star. Indeed, studies have shown that planetesimal belt formation is only suppressed by intermediate binaries (10s to 100s au) (Yelverton et al., 2019).
In the Monte Carlo model it is assumed that 30% of the stars are binaries and, of those that are, a log-normal period distribution centred on \(10^{5}\) days is used as found observationally by Raghavan et al. (2010). This period distribution is combined with the mass distribution described in section 3.5.1 to give the semi-major axis distribution shown in blue in figure 12.
#### 3.5.4 Wide Binary Eccentricity
Wide binaries are thought to form through core fragmentation or dynamical capture and, due to the nature of these formation mechanisms, a wide distribution of eccentricities is expected (Bate et al., 2003). There are currently two competing interpretations of the data on wide binary eccentricities: that they have a thermal distribution where \(P(e)\propto e\) (Tokovinin and Kiyaeva, 2016) or a uniform distribution as argued for by Raghavan et al. (2010). Although surveys of the widest binaries are biased against the highest eccentricities, in order to be consistent with the sourcing of the semi-major axis distribution from Raghavan et al. (2010), we adopt the uniform eccentricity distribution in our model but check that the results do not change significantly when using a thermal distribution.
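The sketch below illustrates how such a population could be drawn; the primary-mass distribution and the width of the log-period distribution are placeholders here (the model uses the observed Kepler masses and the Raghavan et al. 2010 fit respectively).

```python
import numpy as np

# Illustrative sampler for the input distributions of sections 3.5.1-3.5.4.
# Placeholder assumptions: uniform primary masses and sigma(log10 P) = 2.3.

rng = np.random.default_rng(1)

def sample_power_law(n, exponent, lo, hi):
    """Inverse-CDF sampling of P(x) ~ x^exponent on [lo, hi] (exponent != -1)."""
    u = rng.uniform(size=n)
    k = exponent + 1.0
    return (lo ** k + u * (hi ** k - lo ** k)) ** (1.0 / k)

def sample_systems(n, sigma_log_period=2.3):
    m_star = rng.uniform(0.5, 1.5, n)               # placeholder primary masses
    m_comp = rng.uniform(0.0, 1.0, n) * m_star      # secondaries uniform in [0, M_*]
    r_bb = sample_power_law(n, -1.7, 1.0, 1000.0)   # belt blackbody radii in au
    log_p = rng.normal(5.0, sigma_log_period, n)    # log10 of period in days
    period_yr = 10.0 ** log_p / 365.25
    a_comp = ((m_star + m_comp) * period_yr ** 2) ** (1.0 / 3.0)  # Kepler III, au
    e_comp = rng.uniform(0.0, 1.0, n)               # uniform eccentricities
    return m_star, m_comp, r_bb, a_comp, e_comp
```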
### Cuts to Initial Distribution
In order to analyse the Monte Carlo model effectively, it is important to identify and remove systems where our setup is incompatible with a belt of particles undergoing Kozai-Lidov oscillations. These
Figure 11: Histogram of the initial belt radii \(a_{\rm b}\) in the model (blue) and those that remain after imposing the cuts outlined in section 3.6 (orange). The discontinuity in the pre-cut sample is due to the conversion between the blackbody radii and true radii as the conversion factor is capped at a maximum of 4 (see equation 26 and section 3.3).
Figure 12: Histogram of the initial companion semi-major axes \(a_{\rm c}\) in the model (blue) and those that remain after imposing the cuts outlined in section 3.6 (orange).
Figure 10: Mass distribution of the primary stars (blue) and the total sample including secondaries (orange). It is a combination of the distribution of the Kepler field plus lower mass companions that correspond to unobserved M-dwarfs like that of the companion of KIC 8462852. The lowest mass star is \(0.086M_{\odot}\) and the highest mass is \(3.7M_{\odot}\).
systems can then be cut from the model to leave only those that are capable of this behaviour which will allow us to see the most likely locations of belts and companions that are experiencing this effect. There are many reasons why a system might not be able to undergo Kozai-Lidov oscillations and the specific reasons examined here are: the companion star is too close to the belt and causes chaotic motion of disc particles (section 3.6.1), the companion star's orbital period is comparable to the timescale for secular evolution thus invalidating the equations of motion (section 3.6.1), the belt is too close to its host star such that GR effects shut off the Kozai-Lidov mechanism (section 3.6.2), the stars leave the main sequence before objects reach small pericentres (section 3.6.3), and the lack of any companion star at all (section 3.6.4). The combined effect of these cuts is to reject a fraction \(f_{\rm reject}=0.986\) of the initial systems in the model.
#### 3.6.1 Star-Belt Separation
Not all separations between a companion star and a planetesimal belt will lead to Kozai-Lidov oscillations. The mechanism is hierarchical in nature, so systems where the companion star is too close to the belt will not experience this effect. The peak of initial values of \(a_{\rm c}\) as shown in figure 12 is located at \(\sim 10\) au. The distribution of \(a_{\rm b}\), meanwhile, shows that closer-in belts are more common (\(\sim 1\) au). However, there is some overlap of far-out belts with close-in companions and these are not nearly hierarchical enough for the EKM to take effect. This is not to say that particles will not reach very small pericentres through some other mechanism, for example secular chaos or scattering (O'Connor et al., 2022; Yoshikawa, 1990); however, this Monte Carlo model has been set up to specifically examine the EKM due to wide binary companions and thus any system that cannot undergo this phenomenon is excluded. This includes all systems where \(a_{\rm c}<a_{\rm b,upper}\), i.e. where the companion star lies within the belt or the belt lies outside the companion's orbit (e.g. a P-type configuration), and 53% of the systems satisfy this condition. Also excluded is the case where the star is outside the belt but sufficiently close to expose the disc particles to chaotic evolution. The formula for the semi-major axis below which this occurs is given by equation 1 in Holman and Wiegert (1999); this is proportional to \(a_{\rm c}\), with the proportionality factor depending only on the eccentricity of the companion star and the masses of both bodies. 24% of all systems in the model have belts located in the chaotic zone of their companions.
Further to this, the K-L mechanism is a secular effect and this approximation requires that the timescale for the secular effect is greater than the orbital periods of the bodies in the system; this translates to the requirement that the smallest secular timescale \(t_{\rm quad}\) be much larger than the largest orbital period \(t_{\rm orb,comp}\) and for this analysis we cut any system where \(t_{\rm quad}<10t_{\rm orb,comp}\) which corresponds to 18.9% of systems.
The cut on the secular timescales imposes a relation between the variables of the model that will bound the results of later calculations. Using equation 4 for \(t_{\rm quad}\) and \(t_{\rm orb,comp}=10^{-6}\sqrt{\frac{a_{\rm c}^{3}}{M_{\rm t}}}\) Myr, and requiring \(\frac{t_{\rm orb,comp}}{t_{\rm quad}}=10^{-1}\), we get the relationship at the boundary of the cut
\[a_{\rm c}\approx 0.158\left(\frac{M_{\rm c}}{M_{\star}}\right)^{2/3}(1-e_{ \rm c}^{2})^{-1}a_{\rm b}. \tag{32}\]
#### 3.6.2 General Relativity
The effect of General Relativity is to induce a pericentre precession in any planetesimals which increases in strength closer to the host star; if this is stronger than the precession due to the EKM, it will dominate and the EKM will not manifest. The strength of general relativistic effects can be approximated in Newtonian gravity as a perturbation term that falls off with distance as \(r^{-3}\), thus only belts that are sufficiently close to their host stars, and with sufficiently distant companions, will experience this shut off. Hamilton and Rafikov (2021) derive an \(\epsilon_{\rm GR}\) analogous to that for the EKM given by
\[\epsilon_{\rm GR}=B\frac{a_{\rm c}^{3}M_{\star}^{2}}{a_{\rm b}^{4}M_{\rm c}}, \tag{33}\]
where B is \(1\times 10^{-8}\) such that masses are in \(M_{\odot}\) and semi-major axes are in au. We can then impose the cut \(\epsilon_{\rm GR}<1\) such that Kozai-Lidov evolution is not shut off by General Relativity. This cut removes the systems with the closest belts and the furthest companions and 4.8% of the initial sample violates this criterion. Using equation 33 and requiring \(\epsilon_{\rm GR}=1\) at the boundary of the cut, we can obtain the following relation between the parameters of the systems at this boundary
\[a_{\rm c}=\left(\frac{1}{B}\frac{M_{\rm c}}{M_{\star}^{2}}\right)^{1/3}a_{\rm b }^{4/3}. \tag{34}\]
However, this analysis only excludes discs whose precession due to GR is greater than that of the EKM in their initial low eccentricity state, and which hence will not deviate from a belt structure at all. There will be some belts in the model where this is not true and the particles in these belts will begin to evolve to higher eccentricities. However, the pericentre precession due to GR depends on the pericentre distance, \(q\), as well as the semi-major axis, and hence the precession rate due to GR will increase during their evolution and eventually eclipse that of the Kozai mechanism. While the particles in these belts reach high eccentricities, some of them may not meet the threshold eccentricities to start producing strange Boyajian star-like light curves before GR takes over (i.e. they do not reach \(q^{\prime}<q^{\prime}_{\rm crit}\)) and these systems must also be rejected from the sample. To do this we use equation 51 from Liu et al. (2015b), which gives the minimum scaled pericentre achievable due to GR, \(q^{\prime}_{\rm min,GR}\), as
\[\sqrt{q^{\prime}_{\rm min,GR}(2-q^{\prime}_{\rm min,GR})}=\frac{1}{9}\left(4 \epsilon_{\rm GR}+\sqrt{16\epsilon_{\rm GR}^{2}+135\cos^{2}(i_{0})}\right) \tag{35}\]
and those systems which cannot achieve the required scaled pericentre (i.e. \(q^{\prime}_{\rm min,GR}>q^{\prime}_{\rm crit}\)) are removed from the model.
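A sketch of how this GR cut could be applied per system is given below (our notation; masses in \(M_{\odot}\), semi-major axes in au, \(i_{0}\) in radians):

```python
import numpy as np

# Sketch of the GR rejection criteria of this subsection (equations 33 and 35).

B_GR = 1e-8

def eps_gr(m_star, m_comp, a_belt, a_comp):
    """Equation 33: relative strength of GR precession."""
    return B_GR * a_comp ** 3 * m_star ** 2 / (a_belt ** 4 * m_comp)

def q_min_gr(eps, i0):
    """Equation 35: smallest scaled pericentre reachable before GR dominates."""
    rhs = (4.0 * eps + np.sqrt(16.0 * eps ** 2 + 135.0 * np.cos(i0) ** 2)) / 9.0
    # solve q'(2 - q') = rhs^2 for the smaller root
    return 1.0 - np.sqrt(max(1.0 - rhs ** 2, 0.0))

def passes_gr_cut(m_star, m_comp, a_belt, a_comp, i0, q_crit):
    eps = eps_gr(m_star, m_comp, a_belt, a_comp)
    return eps < 1.0 and q_min_gr(eps, i0) < q_crit
```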
#### 3.6.3 System Age
The octupole timescales of the systems initially drawn from our distributions, given by equation 18, span many orders of magnitude. The systems with a calculated \(t_{\rm oct}\) that is implausibly small are removed by the cut that requires the orbital timescale to be much smaller than the secular timescale. The systems with \(t_{\rm oct}\) so large that they would never undergo Kozai-Lidov evolution in the lifetime of the universe also get removed from the model as they fall within the GR cut. These cuts still leave a variety of octupole timescales ranging from \(10^{5}-10^{12}\) years. We exclude systems that do not undergo Kozai-Lidov oscillations before the star turns off the main sequence and
evolves into a white dwarf as we want to compare with observations of main sequence stars in the Kepler field. Thus we require that \(t_{\rm oct}\leq t_{\rm MS}\) and 4.5% of the initial systems violate this criterion. This imposes another relation between the system parameters at the boundary of the cut which can be found by setting \(t_{\rm oct}\) (given by equation 18) equal to \(t_{\rm MS}\) and is given by
\[a_{\rm c}=1000\left(\frac{t_{\rm MS}}{16000}\left(\frac{1.43}{M_{*}}\right) \left(\frac{M_{\rm c}}{0.4}\right)\frac{e_{\rm c}^{1/2}}{(1-e_{\rm c}^{2})^{2} }\right)^{2/7}a_{\rm b}^{4/7}. \tag{36}\]
We also remove all systems whose octupole timescales are smaller than 10 Myr; this is because at earlier times the system is still in its planet formation stage and has a protoplanetary disc. Studies have shown that the action of the Kozai-Lidov mechanism on such a disc causes eccentric gas and dust ring formation (Martin and Lubow, 2022). However, it is unclear if any massive and highly eccentric planetesimals that are uncoupled to the gas would be able to produce a KIC 8462852-like signature given the surrounding gas will have a non-negligible optical depth. As this scenario is uncertain, we exclude it from our analysis. 88% of systems in the model have octupole timescales shorter than 10 Myr and thus violate this cut.
#### 3.6.4 Binarity Fraction
As evidenced by our own solar system, not every star is in a binary pair and hence the fraction of stars that are in binaries needs to be included. Stellar surveys show that the general binarity fraction for FGK stars that dominate the Kepler sample is about 30% (see Duchene and Kraus (2013) and references therein). Imposing this final cut, along with all the previous cuts from sections 3.6.1, 3.6.2 and 3.6.3 leads to 98.6% of all initial systems in the model being removed, leaving only 1.4% of the initial systems to undergo Kozai-Lidov oscillations if they have the correct orientation.
## 4 Results
The main output of the Monte Carlo model is \(\overline{f_{\rm t}}\) which is the mean value of the fraction of the main sequence lifetime that a system spends with large objects at small pericentres causing an observational signature and is found to be \(\overline{f_{\rm t}}=2.7\times 10^{-4}\) for \(t_{\rm dur}=100\) yr. This value is a mean over the entire sample and sections 4.1, 4.2 and 4.3 will elucidate its origin with respect to the main parameters of the model: \(a_{\rm b}\), \(a_{\rm c}\), \(M_{*}\) and \(t_{\rm oct}\). Unless otherwise stated, all calculations and plots assume \(t_{\rm dur}=100\) yr.
### Dependence of \(f_{\rm t}\) on semi-major axes
The two most consequential parameters in the model are \(a_{\rm b}\) and \(a_{\rm c}\). Figure 13 shows the number of systems that survive the cuts of section 3.6 and illustrates the effect of these cuts and the belt and companion parameters that can potentially cause exocomet transits via the EKM. It shows that the majority of the systems have close-in belts with \(a_{\rm b}<10\) au and companion separations in the range \(100\lesssim a_{\rm c}/{\rm au}\lesssim 3000\). As expected, systems with a large belt radius \(a_{\rm b}\sim 100\) au but small companion separation \(a_{\rm c}\sim 100-1000\) au are removed due to the secular timescale \(t_{\rm quad}\) being too similar to the orbital timescale of the companion \(t_{\rm orb}\). As can be seen from equation 32, this translates to a lower bound on \(a_{\rm c}\) of the form \(a_{\rm c}\propto a_{\rm b}\) which is seen sculpting the lower edge of the population in figure 13. Similarly, close-in belts (\(a_{\rm b}\sim 1-10\) au) and distant companions (\(a_{\rm c}\sim 1000-10000\) au) are removed because the precession due to GR is greater than that of the Kozai-Lidov mechanism. This imposes an upper bound on \(a_{\rm c}\) of the form \(a_{\rm c}\propto a_{\rm b}^{4/3}\), which can clearly be seen in figure 13 plotted as the orange bounding line.
In order to understand where the mean value of \(f_{\rm t}\) comes from, it is important to first examine how it depends on the variables of the model. Figure 14 shows how \(f_{\rm t}\) depends on the belt radius \(a_{\rm b}\) for the belts expected to undergo the EKM. The dominant relation seen in the figure is given by \(f_{\rm t}\propto a_{\rm b}^{19/3}\) and arises from equation 31, as most systems are in the unsaturated regime. It shows that the furthest belts spend most of their life transiting, a direct result of the longer collisional lifetime, and hence larger masses, of more distant belts at the time they undergo the EKM. The upper bound of this behaviour (plotted as the upper red line in figure 14) is set merely by the lifetime of the system and the cuts made to the initial population have very little effect.
Figure 15 shows how \(f_{\rm t}\) depends on the companion semi-major axis \(a_{\rm c}\). Naively, it might be expected that the relationship between \(f_{\rm t}\) and \(a_{\rm c}\) would be given by \(f_{\rm t}\propto a_{\rm c}^{-7/2}\) as this is what is given by equation 31, which gave the correct relation between \(f_{\rm t}\) and the belt radius. This relation can indeed be seen bounding the lower region of the parameter space in figure 15 as the negatively sloped line. However, the dominant relation between \(f_{\rm t}\) and \(a_{\rm c}\) is given instead by \(f_{\rm t}\propto a_{\rm c}^{91/12}\), such that \(f_{\rm t}\) increases with companion semi-major axis. This is not expected from equation 31 as more distant companions should take longer to destabilise belts, which would then have lost mass through collisions. This result is instead due to the cut discussed in section 3.6.3, where the EKM timescale must be less than the main sequence lifetime (\(t_{\rm oct}<t_{\rm MS}\)). This leads to the relation \(a_{\rm c}\propto a_{\rm b}^{4/7}\) along the boundary of the cut, as seen in equation 36, which, substituting into equation 31, gives us the relation \(f_{\rm t}\propto a_{\rm c}^{91/12}\) that is seen bounding the upper and lower regions of the
Figure 13: 2D histogram of the belt radii and companion semi-major axes for the systems that survived all the cuts. The orange line shows the boundary of the parameter space due to GR, found by substituting equation 34 into 31 and using \(t_{\rm MS}=10\) Myr, \(e_{\rm c}=0.1\), \(M_{\rm c}=1M_{\odot}\) and \(M_{*}=1M_{\odot}\). The green line shows the boundary of the parameter space due to the timescale for secular quadrupole oscillations being 10 companion orbital timescales, using \(M_{*}=3M_{\odot}\), \(M_{\rm c}=0.1M_{\odot}\) and \(e_{\rm c}=0.1\). The artefact at \(a_{\rm b}=4\) au is due to a majority (but not all) of the systems having true radii that are at the maximum of 4 blackbody radii according to the prescription laid out in section 3.3.
parameter space in figure 15. The second lower bound, which is the most important below \(a_{\rm c}\sim 10^{4}\) au, is due to the requirement that the secular timescale be much longer than the orbital timescales, as laid out in section 3.6.1. As shown by equation 32, this leads to the relation \(a_{\rm b}\propto a_{\rm c}\) along the boundary and, substituting this into equation 31, generates the observed relation \(f_{\rm t}\propto a_{\rm c}^{17/6}\) at the lower edge. Hence, the overall effect of all the cuts made to the initial population is that the fraction of time a system will spend with large objects at small pericentres actually _increases_ with \(a_{\rm c}\) rather than decreasing.
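These quoted slopes follow directly from combining equation 31 with the boundary relations, as the short check below confirms.

```python
from fractions import Fraction

# Check of the boundary slopes quoted above, starting from
# f_t ~ a_b^(19/3) * a_c^(-7/2) (equation 31).

p_b, p_c = Fraction(19, 3), Fraction(-7, 2)

# t_oct = t_MS boundary (equation 36): a_c ~ a_b^(4/7), i.e. a_b ~ a_c^(7/4)
print(p_b * Fraction(7, 4) + p_c)   # 91/12

# secular-timescale boundary (equation 32): a_b ~ a_c
print(p_b + p_c)                    # 17/6
```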
Figures 14 and 15 show that systems with more distant belts (up to \(10^{3}\) au) and more distant companion stars (\(\sim 10000\) au) have the largest values of \(f_{\rm t}\) and hence spend the greatest fraction of their main sequence lifetime in the 'transiting' state. However, this does not account for the rarity of these systems. Indeed, figures 11 and 12 show that most systems have close-in belts (\(\sim 1-10\) au) and close companions (\(\sim 100-1000\) au). These most common systems spend much less of their lifetime in the transiting state and hence skew the mean value of \(f_{\rm t}\) to lower values.
It is important, however, to find the most likely systems to be observed, and the greatest contributors to \(\overline{f_{\rm t}}\). Figure 16 shows \(f_{\rm t}P(a_{\rm c})da_{\rm c}P(a_{\rm b})da_{\rm b}\), which is the local mean of \(f_{\rm t}\) in \(a_{\rm b}\) and \(a_{\rm c}\), multiplied by the probability distributions of those parameters. The distributions used are those of the post-cut population shown in orange in figures 11 and 12. It can be seen that the most likely systems to be seen transiting, and that dominate the contribution to the mean value, are those that have belts in the range 100-1000 au and companions in the range 300-10000 au.
### Dependence of \(f_{\rm t}\) on Stellar Mass
Figure 17 illustrates how \(f_{\rm t}\) depends on the mass of the stars in the system. In an analogous manner to figure 16, it shows \(f_{\rm t}(M_{*})P(M_{*})dM_{*}\), which is the local mean of \(f_{\rm t}\) in stellar mass multiplied by the stellar mass probability distribution. The latter is taken to be the mass distribution of stars observed by Kepler (figure 10) rather than the expected stellar mass function of the Galactic field in order to match the results to the Kepler field. Whilst more massive host stars undergo Kozai-Lidov oscillations more slowly (equation 18) and hence do not have many large objects left by that time, they also have a much shorter lifetime: hence \(f_{\rm t}\) is larger for these systems. The reverse is true for less massive host stars: whilst they have more massive belts at the time of Kozai-Lidov oscillations, they have much longer lifetimes and hence are less likely to be observed with large objects at small pericentres. This increasing trend with stellar mass persists despite the high bias towards solar mass stars in the Kepler field, though the increase levels off after 1 solar mass.
Figure 16: Mean value of \(f_{\rm t}\) multiplied by the probability for a system to be in that bin as a function of belt radius \(a_{\rm b}\) and companion semi-major axis \(a_{\rm c}\), such that the sum of the values at each point gives the mean \(f_{\rm t}\) over all systems in the model. The bin size is \(0.03\) dex\({}^{2}\) and the artefact at \(a_{\rm b}=4\) au is due to a majority (but not all) of the systems having true radii that are at the maximum of 4 blackbody radii according to the prescription laid out in section 3.3.
### Dependence of \(f_{\rm t}\) on system age
Figure 18 shows the dependence of \(f_{\rm t}\) on the octupole timescale of the system \(t_{\rm oct}\), weighted by the probability distribution of octupole timescales. As \(t_{\rm oct}\) represents when systems would first excite large objects to small pericentres, this is roughly equivalent to the age of the system when the observable signatures of cometary transits would become visible in the lightcurves of these stars. It shows that the most likely systems to exhibit this phenomenon are stars that are roughly \(10^{2}-10^{3}\) Myrs old, whilst below \(10^{2}\) Myrs there is a downturn. The downturn below \(10^{2}\) Myrs is only slight, however, before it reaches the stage where systems would still be in the protoplanetary disc phase (\(t_{\rm oct}\sim 10\) Myr), below which systems are cut from the model. Above \(\sim 10^{3}\) Myrs, systems become less likely to be observed in a transiting state and this is due to a combination of factors. From figure 17, more massive stars are more likely to be seen to transit due to their shorter lifetimes, so stars are unlikely to be seen transiting at \(\sim 10\) Gyr ages: all the high mass stars have left the main sequence, while the low mass stars will either have had their transit events earlier on in their lives or, if \(t_{\rm oct}\) is \(\sim 10\) Gyr long, their belts will have been severely depleted.
### Probability of the EKM as the cause of observations
The mean fraction of their lifetime that stars in the Kepler field spend with large planetesimals at scaled pericentres \(q^{\prime}<10^{-2}\) is found to be \(\overline{f_{\rm t}}=2.7\times 10^{-4}\). In order to turn this into an expected number of KIC 8462852-like objects in the Kepler field (\(N_{\rm exp}\)) we first use equation 13 to find the probability an individual star exhibits KIC 8462852-like dips. Using the homology relation \(R_{*}\propto M_{*}^{1/13}\) and \(q=0.6\) au from the observations of KIC 8462852, a value of \(P_{\rm geo}\) for each star can be found which, due to the weak dependence of \(R_{*}\) on \(M_{*}\), varies little from system to system and has a mean value of \(\overline{P}_{\rm geo}=3.8\times 10^{-3}\). Combining \(\overline{f_{\rm t}}\) with \(\overline{P}_{\rm geo}\) and \(f_{\rm reject}\) yields \(p=6.6\times 10^{-9}\) and, as \(p\ll 1\), equation 14 gives the probability of observing one or more stars to undergo these KIC 8462852-like dimming events in the Kepler field as \(\langle N_{\rm exp}\rangle=1.3\times 10^{-3}\).
This can also be framed in a Bayesian sense. If the occurrence rate of stars with a KIC 8462852-like lightcurve P(L) is 1/200,000 from Kepler observations, and the occurrence rate of said stars if their properties are due to comet scattering via the Kozai mechanism P(L|K) is \(\overline{f_{\rm t}}(1-f_{\rm reject})\overline{P}_{\rm geo}=6.6\times 10^{-9}\), then using Bayes' theorem the probability of the Kozai mechanism causing the strange lightcurve observations P(K|L) is:
\[P(K|L)=\frac{P(L|K)P(K)}{P(L)}=1.3\times 10^{-3}, \tag{37}\]
where it is assumed that P(K), the probability that the Kozai mechanism will take effect in the systems, disregarding the considerations already made, is unity.
Figure 19 shows the distribution of non-zero values of \(f_{\rm t}\) in the sample of the \(\sim 1\%\) of systems that were not rejected and shows that the majority of the values of \(f_{\rm t}\) sit below the mean. The fact that the majority of systems spend a very small fraction of their lifetime in the transiting stage is to be expected. This is chiefly because most systems will have belts close to their host stars, around 4 au as shown in figure 11, and companions that are around 1000 au as shown in figure 12. Hence, the octupole strength \(\epsilon\) will be extremely weak and only some of these systems will have a large enough mutual inclination to undergo extreme Kozai-Lidov oscillations. Furthermore, the timescale for these systems to undergo Kozai-Lidov oscillations will be long (equation 18) such that, over this period of time, assuming the stellar system has not left the main sequence and ended its life, the close-in belt will have collisionally ground away, leaving it with a very low mass.
### Importance of the EKM vs. the SKM
Figure 20 shows the relative importance of including the effects of the EKM as opposed to using the simpler case of the SKM as an approximation. It shows the percentage of systems in the model that have an inclination greater than the critical inclination for their system \(i_{\rm crit}\) above which all planetesimals in the belt are excited to low scaled pericentres for both the EKM and SKM cases. For the simpler SKM case, \(i_{\rm crit}\) is calculated using equation 7 and is the same for every system in the Monte Carlo model. Conversely, for the EKM, \(i_{\rm crit}\) is unique to each system and is calculated using the formalism outlined in section 2.4. It shows that there is a difference between the two cases, albeit slight, and that the EKM does increase the number
Figure 17: Average value of \(f_{\rm t}\) as a function of \(M_{*}\), the primary star mass, weighted by the Kepler mass probability density function.
Figure 18: Mean fraction of lifetime spent with large objects at small pericentres as a function of the octupole timescale \(t_{\rm oct}\) and weighted by the probability distribution of \(t_{\rm oct}\). This timescale roughly corresponds to the stellar age at the time when transits would become observable and hence shows what age stars that exhibit cometary lightcurves would be expected to be.
of systems that have high enough inclinations by about \(\sim 3\%\). For the critical scaled pericentre considered in the Monte Carlo model, \(\sim 14\%\) of the systems have a misalignment large enough for the EKM to take effect. The overall percentages in each case depend on the critical scaled pericentre \(q^{\prime}_{\rm crit}=1-e_{\rm crit}\) that planetesimals are required to reach: the smaller the value of \(q^{\prime}_{\rm crit}\) that is needed, the fewer systems that are correctly aligned. For the lowest scaled pericentres, the difference in the percentage of correctly aligned systems between the SKM and EKM cases can span an order of magnitude and hence results will differ significantly depending on which case is used in the modelling. For the EKM, the critical inclination above which most planetesimals are excited to high eccentricities depends on \(\epsilon\) and thus on \(a_{\rm b}\), \(a_{\rm c}\) and \(e_{\rm c}\). Therefore, the percentage of the population that have inclinations above \(i_{\rm crit}\) depends on the distributions of these parameters and hence on the cuts imposed, as these can and do change these distributions as shown in figures 11 and 12.
## 5 Discussion
The likelihood of the Kozai mechanism as the origin of the observations of KIC 8462852 is small but not entirely improbable. The Monte Carlo simulation shows that, for a Kepler-like distribution of stars, the expected observed rate of stars with planetesimals excited to high eccentricities is \(6.6\times 10^{-9}\). This arises because, from figure 16, the most likely systems to be seen transiting are those with belts and binary companions where \(10^{2}{\rm au}\lesssim a_{\rm b}\lesssim 10^{3}{\rm au}\) and \(10^{2}{\rm au}\lesssim a_{\rm c}\lesssim 10^{4}{\rm au}\), which are approximately \(1\%\) of systems. Only \(\sim 14\%\) of these systems have a large enough inclination for the eccentric Kozai mechanism to take effect and, for those that do, they spend, on average, \(0.08\%\) of their main sequence lifetimes in the transient state where large objects are excited to high eccentricities. Not all of these would be observable in the form of dips in their lightcurves, however, as the orbits would need to be correctly aligned with the line of sight from earth and this geometrical transit probability is approximately \(0.8\%\). Taken together, this accounts for the calculated expected rate of \(\sim 10^{-8}\) that is the output of the model. The model also shows that the most likely belts to undergo this behaviour are like those seen in observations of debris disc systems with \(10^{2}<a_{\rm b}/{\rm au}<10^{3}\). Additionally, the companions that are most likely to cause belts to undergo this instability are at intermediate distances for wide binaries: at around 100s-1000s of au. This matches the observed projected separation of the companion star of KIC 8462852, found by Pearce et al. (2021) to be \(878\pm 8\) au.
Care should be taken with this, however, as the measurement by Pearce et al. (2021) is only the projected on-sky separation between KIC 8462852 and the M dwarf and not necessarily the semi-major axis of its orbit. Figure 21 shows the distribution of possible semi-major axes that are consistent with the observed projected separation (Yelverton et al., 2019). The distribution was calculated by computing separations from random orbits with uniformly distributed values of \(i\), \(\Omega\), \(e\) and mean anomaly \(M\). The semi-major axes are derived from the same log-normal period distribution that is used in the Monte Carlo model, which was the best fit to observations of wide binaries (Raghavan et al., 2010). Orbits were considered to have produced a correct separation on a probabilistic basis, with the probability of acceptance depending on the produced separation itself and given by a Gaussian centred on 878 au with a standard deviation of 8 au. Figure 21 shows that the possible semi-major axes of the companion range from 439 to 2000-3000 au. The lower limit arises because orbits with lower \(a\) would not reach a separation of 878 au even with \(e\approx 1\), whilst the tail is due to orbits with larger \(a\) needing more eccentric or edge-on orbits to produce the correct separation. Hence, the distribution of possible semi-major axes of the M-dwarf companion is still consistent with the range of semi-major axes of wide binaries most likely to induce the Kozai instability in planetesimal belts.
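A hedged reconstruction of this exercise is sketched below; the simple Newton orbit solver, the assumed total mass of \(\approx 1.8\,M_{\odot}\) (F star plus M dwarf) and the width of the log-period prior are our assumptions, not values taken from the text.

```python
import numpy as np

# Sketch: draw binary orbits from the assumed priors and keep those whose
# instantaneous projected separation is consistent with 878 +/- 8 au.

rng = np.random.default_rng(0)

def kepler_E(M, e, n_iter=60):
    """Solve M = E - e sin E with Newton's method (adequate for this sketch)."""
    E = np.where(e < 0.8, M, np.pi * np.ones_like(M))
    for _ in range(n_iter):
        E = E - (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def projected_separation(a, e, inc, omega, mean_anom):
    E = kepler_E(mean_anom, e)
    r = a * (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                          np.sqrt(1.0 - e) * np.cos(E / 2.0))
    z = r * np.sin(omega + nu) * np.sin(inc)        # line-of-sight component
    return np.sqrt(r ** 2 - z ** 2)

n = 200_000
log_p = rng.normal(5.0, 2.3, n)                     # assumed log-normal period prior
a = ((10.0 ** log_p / 365.25) ** 2 * 1.8) ** (1.0 / 3.0)   # au, M_tot ~ 1.8 M_sun
e = rng.uniform(0.0, 1.0, n)
inc = np.arccos(rng.uniform(-1.0, 1.0, n))
omega = rng.uniform(0.0, 2.0 * np.pi, n)
mean_anom = rng.uniform(0.0, 2.0 * np.pi, n)

sep = projected_separation(a, e, inc, omega, mean_anom)
accept = rng.uniform(size=n) < np.exp(-0.5 * ((sep - 878.0) / 8.0) ** 2)
print(np.percentile(a[accept], [1, 50, 99]))        # plausible range of a_c in au
```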
### Dependence on Model Parameters and Distributions
The Monte Carlo model that has been built, and hence the results, depends on a certain number of parameters whose true values are unknown. The most important of these is the 'duration of transiting events' \(t_{\rm dur}\), and the dependence of the number of stars in the Kepler field expected to show KIC 8462852-like dips, \(\langle N_{\rm exp}\rangle\), on this parameter
Figure 19: Histogram of all the non-zero values of \(f_{\rm t}\) of the systems that survived all the cuts. For those that did survive, \(8\%\) had non-zero values of \(f_{\rm t}\) and are shown here. The mean value of \(f_{\rm t}\) is shown by the dashed black line and is clearly skewed by the highest values, such that the vast majority of systems have a value of \(f_{\rm t}\) that lies below this value.
Figure 20: Percentage of systems in the Monte Carlo model whose inclinations exceed those prescribed by the SKM (blue points) and the EKM (orange points) to send particles beyond a threshold eccentricity, as a function of that threshold value. Black lines show, for each threshold eccentricity, the difference between the percentage of systems in the model that are sufficiently inclined when using the SKM and the EKM.
is shown in figure 22. It is clear that the longer the transiting events last, the greater the probability of observing a star with a KIC 8462852-like light-curve. However, they are not proportional to each other, as would be expected from equation 31, because that equation only holds for those systems that are in the unsaturated state. As \(t_{\rm dur}\) increases so too does the percentage of saturated systems and, as the value of \(\overline{f_{\rm t}}\) for saturated systems is independent of \(t_{\rm dur}\) when \(t_{\rm dur}\) is small, this increase accounts for the shallower relationship between \(\langle N_{\rm exp}\rangle\) and \(t_{\rm dur}\) that would otherwise be expected.
The value of \(t_{\rm dur}\) does not just affect the expected number of KIC 8462852-like stars; it also affects the most likely parameters of observable systems. For example, figures 23 and 24 show the most likely belt radii and companion semi-major axes to be observed respectively for three different values of \(t_{\rm dur}\). For small values of \(t_{\rm dur}\) (i.e. 1-100 yr) only the most distant belts and companions are expected to be observed. However, if \(t_{\rm dur}\) is increased to an extreme value of 1 Myr, then a large range of belts (10-1000 au) and companions (300-10000 au) are likely to be observed. Similarly, figure 18 shows how the most likely age of observed systems changes with \(t_{\rm dur}\); though the age is less sensitive to this free parameter, the smallest values of \(t_{\rm dur}\) tend to disfavour the oldest systems.
The value of \(t_{\rm dur}\) reflects the lifetime of dust on an eccentric orbit around a central star and hence for how long any optical dips would be observable. The Kreutz family are highly inclined and eccentric sungrazing comets in our own system that are the result of breakups of larger parent bodies, albeit orders of magnitude smaller than the parent body hypothesised for the KIC 8462852 system (Kreutz, 1888). These have been observed for hundreds of years
Figure 21: Probability density function of the possible semi-major axes of the M dwarf companion’s orbit, given its observed separation of (878 \(\pm\) 8) au from KIC 8462852. Assumed priors on the companion’s orbit are: randomly distributed \(\omega\), randomly distributed \(\cos i\), randomly distributed \(e\), and semi-major axes drawn from the lognormal period distribution of Raghavan et al. (2010).
Figure 22: Expected number of KIC 8462852-like objects in the Kepler field as a function of the duration of the observable transit signature caused by each breakup event of a parent body with a mass greater than \(10^{-6}M_{\oplus}\).
Figure 23: Fraction of stellar lifetime spent with large (\(m>10^{-6}M_{\oplus}\)) bodies at small pericentres as a function of belt radius \(a_{\rm b}\), weighted by the probability that a belt would be found there. The probability distribution of belts used is that of the post-cut population (orange histogram in figure 11). The different colour curves represent different values of \(t_{\rm dur}\), the lifetime of observable transits caused by the breakup of massive bodies on sufficiently eccentric orbits, in Myr.
and have orbital periods of \(10^{2}-10^{3}\) years and hence must have lifetimes of many orbital periods (\(\sim 10^{3}\) yr) (Fernandez et al., 2021). Additionally, constraints on the lifetime of large dust releasing bodies can be found using the observations of the depth of optical dips as measured by Boyajian et al. (2016).
We consider a comet of mass \(M_{\rm comet}\), density \(\rho_{\rm comet}\) and radius \(R_{\rm comet}\) at the pericentre of its orbit at distance \(r_{\rm p}\) from the central star and which is emitting dust as a spherically symmetric wind. Mass conservation implies that for a constant mass loss rate \(\dot{M}\)
\[\dot{M}=4\pi r^{2}\rho_{0}\left(\frac{r}{r_{0}}\right)^{-2}u, \tag{38}\]
where r is radial distance from the comet, \(\rho_{0}\) and \(r_{0}\) are the density and radius respectively at some reference position and \(u\) is the speed of the dust.
The depth of the optical dips measured around KIC 8462852 \(\delta\) caused by material of optical depth \(\tau\) covering a fraction \(\Omega_{*}\) of the stellar surface is
\[\delta=\tau\Omega_{*}, \tag{39}\]
for \(\tau\ll 1\) and where optical depth is itself given by the line of sight (z axis) absorption due to material with an opacity \(\kappa\) i.e.
\[\tau=\int\kappa\rho dz. \tag{40}\]
The opacity \(\kappa\) is the ratio of the interaction cross section of a particle to its mass which, assuming a dust size \(s\) and density \(\rho_{\rm d}\), is
\[\kappa=\frac{3}{4s\rho_{\rm d}}. \tag{41}\]
Using equation 40 and considering the star as a point source, if the comet is transiting with impact parameter \(b=0\) and speed \(v\) along the \(x\) axis such that \(x=0\) at \(t=0\), then at \(t=0\), which corresponds to the deepest part of the dip, and assuming the size of the clump is approximately \(r_{\rm p}\),
\[\tau=\frac{3\rho_{0}r_{0}^{2}}{2s\rho_{\rm d}r_{\rm p}}. \tag{42}\]
Using equations 39 and 42, the reference density and radius can be related to the dip depth by
\[\rho_{0}r_{0}^{2}=\frac{2s\rho_{\rm d}r_{\rm p}\delta}{3}, \tag{43}\]
where \(\Omega_{*}=1\) has been used as the star is considered to be a point source in this approximation. Substituting equation 43 into 38 and further assuming that the velocity is approximately the escape velocity of the comet \(u_{\rm esc}=\sqrt{\frac{8}{3}\pi G\rho_{\rm comet}}R_{\rm comet}\) gives an expression for the mass loss rate in terms of the dip depth
\[\dot{M}=\frac{8\pi s\rho_{\rm d}r_{\rm p}\delta}{3}\sqrt{\frac{8}{3}\pi G\rho _{\rm comet}}R_{\rm comet}. \tag{44}\]
Hence, assuming \(\rho_{\rm d}=\rho_{\rm comet}=\rho\) and using \(\delta=0.2\) as observed by Boyajian et al. (2016), the evaporation timescale \(t_{\rm evap}=\frac{M_{\rm comet}}{\dot{M}}\) is
\[t_{\rm evap}=23\left(\frac{R_{\rm comet}}{100{\rm km}}\right)^{2}\left(\frac{ \rho}{2700{\rm kgm}^{-3}}\right)^{-1/2}\left(\frac{s}{1\mu{\rm m}}\right)^{-1} \left(\frac{r_{\rm p}}{0.1{\rm au}}\right)^{-1}{\rm yr}. \tag{45}\]
This estimate is found using the mass loss rate at pericentre using the depth of the deepest dips observed. However, comets on eccentric orbits only experience mass loss for a small portion of their orbits before they move further from the star towards apocentre where the mass loss rate is much lower and consequently it will take a certain number of orbital periods for the comet to fully evaporate. However, the total time the dip from this one body would be observable for is roughly \(t_{\rm evap}\) and even if there are multiple evaporating bodies close in orbital phase then \(t_{\rm dur}\) will still be roughly \(t_{\rm evap}\) or slightly larger.
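Evaluated numerically (SI units; our own implementation of equations 43-44, not the authors' code), the fiducial parameters used in equation 45 give a lifetime of a few tens of years:

```python
import numpy as np

# Numerical sketch of t_evap = M_comet / Mdot using equations 43 and 44.

G = 6.674e-11       # m^3 kg^-1 s^-2
AU = 1.496e11       # m
YR = 3.156e7        # s

def t_evap(r_comet, rho, s, r_p, delta=0.2):
    """Evaporation time in years, assuming rho_d = rho_comet = rho."""
    u_esc = r_comet * np.sqrt(8.0 * np.pi * G * rho / 3.0)
    m_dot = (8.0 * np.pi / 3.0) * s * rho * r_p * delta * u_esc   # equation 44
    m_comet = (4.0 / 3.0) * np.pi * rho * r_comet ** 3
    return m_comet / m_dot / YR

print(t_evap(r_comet=1e5, rho=2700.0, s=1e-6, r_p=0.1 * AU))      # tens of years
```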
Another model parameter that affects the outcome is \(M_{\rm mid}\) which is the peak of the log-normal distribution of debris disc masses all stars are assumed to be born with that, along with the maximum size of their planetesimals \(D_{\rm c}\), is constrained by Sibthorpe et al. (2018). The results of the model have been based on a value of \(10M_{\oplus}\) which is derived from protoplanetary disc observations (Andrews & Williams, 2005). Whilst this parameter sets the maximum mass of belts in the model and should not be set unphysically high, it has no effect on the value of \(\langle N_{\rm exp}\rangle\). This is because although \(M_{\rm b}\propto M_{\rm mid}\), the number of objects, per unit belt mass, between \(m_{\rm crit}\) and \(m_{\rm max}\) (\(n_{\rm c}^{\prime}\)) is proportional to \(M_{\rm mid}^{-1}\). Hence the total number of objects in a belt with masses between \(m_{\rm crit}\) and \(m_{\rm max}\) (\(n_{\rm c}^{\prime}M_{\rm b}\)) is independent of \(M_{\rm mid}\). However, \(M_{\rm mid}\) does have a minimum value in order for the belts to have planetesimals that are large enough to cause dimming events (i.e. \(m_{\rm max}>10^{-6}M_{\oplus}\)) and this occurs at \(M_{\rm mid}=1.27M_{\oplus}\).
There are different hypothesised eccentricity distributions for wide binaries whose applicability depends on the formation mechanism of the stars themselves. The difficulty in constraining the eccentricity distribution from observations of wide binaries is due to their very long periods (i.e. a semi-major axis of \(\sim 900\) au corresponds to a period of \(\sim 20,000\) years for solar mass stars), which means that only a tiny fraction of an orbital arc is covered by the observations, leading to many possible orbits with a wide variety of eccentricities that fit the data. For example, Raghavan et al. (2010) found that the eccentricity distribution was consistent with being uniform. However, other studies by Tokovinin & Kiyaeva (2016) have found that the eccentricity distribution is thermal (i.e. \(\propto\epsilon\)) or even super-thermal for wide binaries. The model was rerun with these different eccentricity distributions but they did not affect the results as the eccentricity only weakly influences the EKM timescale.
### Applicability to other dusty Stars
The eccentric Kozai mechanism is a convenient mechanism for exciting objects to high eccentricities and is often claimed as a potential cause of multiple observed phenomena. For example, various stars are observed to have what is termed 'Extreme Debris Discs' (EDDs) which are identifiable by very hot dust close to the star (blackbody radii \(R_{\rm bb}<1\) au and fractional luminosities \(f>0.01\)). This dust could not have formed in situ as it would have collisionally depleted over the age of the stars (Wyatt et al., 2007), of which the lifetimes of some are found to be greater than 100 Myrs (Moor et al., 2021; Weinberger et al., 2011). One explanation for this phenomenon is that it is the result of giant impacts where, after planetary embryos are formed and the gas disc dissipates, embryos
are dynamically excited onto crossing orbits and collide (Agnor et al., 1999; Chambers & Wetherill, 1998). However, simulations show that the era of giant impacts is \(\sim 30-200\) Myr (Canup, 2004; Chambers, 2013) which is difficult to reconcile with the ages of the oldest EDD systems. On the other hand, it is not trivial to instead assign the longer timescale Kozai mechanism as the cause of this close-in dust. The results of this work show that, whilst the expected ages of most systems would be 100-1000 Myr, the expected rate is not necessarily applicable to EDDs as the input parameters were taken from those stars in the Kepler field. In order to get a meaningful comparison, the model must be rerun accounting for any biases of the searches for EDDs (Kennedy & Wyatt, 2012, 2013) which is beyond the scope of this paper.
Similar to EDDs, exozodiacal dust is defined to be warm dust within the habitable zone of a system (though the demarcation between the two is ill defined). Kennedy & Wyatt (2013) find warm 12\(\mu\)m excesses are detectable towards 1% of stars, with a majority of systems identified around young stars (\(<120\) Myr), and that they correlate with cold outer belts as in \(\eta\) Corvi (Marino et al., 2017). Some exozodi can be explained by dust from collisions in the outer belt migrating inwards through PR drag (Rigley & Wyatt, 2020), but others like \(\eta\) Corvi require a scattering chain of planets (Marino et al., 2018) to deliver cometary material inwards through many scattering events, which then fragments (Rigley & Wyatt, 2022). Though the EKM is a possible cause of delivery, not all systems with warm exozodi are in known stellar binaries, although the possibility of misaligned planets in these systems cannot be discounted.
Exocomets have been found through lightcurve analysis around other stars in the Kepler and TESS samples (Kennedy et al., 2019) and most of these systems are consistent with being \(\sim 100\) Myr old. Additionally, the presence of exocomets can also be inferred from detecting the gas they release using emission line spectroscopy (Rebollido et al., 2020). It is possible that the EKM is the cause of some of these observations though the results of this model show that, for the case of wide stellar binary perturbers, it is too rare to explain all the systems. Whilst the model struggles to account for the one star with an odd lightcurve, it is interesting to note that the recently discovered TESS star TIC 43488669 (Tajiri et al., 2020) shows a lightcurve remarkably similar to that of KIC 8462852 in terms of its complexity. This would increase the known number of KIC 8462852-like stars and could cause worse agreement between this model and the data, though this model was developed for the Kepler field and not for TESS.
The Kozai mechanism is also claimed to be a likely cause of some observations of White Dwarfs (WDs). A not insignificant proportion of White Dwarfs' atmospheres are found to be polluted with metals (Koester et al., 2014); these must have been accreted recently in the history of the star as they have small sinking timescales that would cause them to sink out of the atmosphere and no longer be observable (Fontaine & Michaud, 1979; Paquette et al., 1986). This requires recent accretion of planetesimals or disrupted planetary material onto the star which, as White Dwarfs are Gyrs old, suggests that a recent instability could have occurred in the system. As the timescales for the Kozai mechanism can be Gyrs long, it is often claimed that this could contribute to some of the polluted systems seen, though not all of them (Bonsor & Veras, 2015). Similarly to the pollution, WD 1856b, one of the few planets found transiting a White Dwarf, is thought to have been influenced by the Kozai mechanism (O'Connor et al., 2021; Stephan et al., 2021). This is because the planet's current location would mean that, if it had been there on the main sequence, it would have been consumed by the star as it expanded into a red giant (Merlov et al., 2021). This system is also not just a binary, but part of a higher order system where the Kozai timescale of the distant stars would be long enough to cause the planet to become excited to high eccentricities and migrate inwards where it tidally circularises after the star has evolved to the White Dwarf stage. Whether the Kozai mechanism is a frequent occurrence in white dwarf systems is not clear, as figure 18 shows that, for the smallest values of \(t_{\rm dur}\), the most common stars to undergo this mechanism are 100-10000 Myr old and there is a sharp downturn at ages greater than 1 Gyr whereas there is no downturn for larger \(t_{\rm dur}\). In addition, white dwarf systems evolve such that \(M_{*}\), \(a_{\rm b}\) and \(a_{\rm c}\) would all change once the main sequence phase has ended, which clouds the picture and, as in the case of the EDDs, the exact results of the occurrence rate from this model are not directly applicable. This work only considers the case of stars that undergo Kozai oscillations within the main sequence lifetime of the system and more work will have to be done to examine the population that undergoes Kozai oscillations after the main sequence, and the biases of White Dwarf observations would have to be accounted for before any comparison could be made.
This work has sought to quantify the probability that the dips seen in the lightcurve of KIC 8462852 are due to the breakup of an eccentric comet that has undergone Kozai oscillations due to a stellar companion. Whilst the probability found was low, there is a possibility that the Kozai mechanism could still be the cause, albeit not in the form examined in this work. For example, a planet in the system could induce the Kozai instability if it were sufficiently misaligned from any planetesimal belt. Whilst alignment between planets and belts would be expected from formation scenarios, and this is the case in our own solar system, it is not infeasible to have a misalignment. This is evidenced by giant planets which have been found to be significantly inclined to each other such as in \(\pi\) Men (Kuan & Wyatt, 2020), as well as the young HD 106906 system where an exterior, eccentric and inclined Jupiter is warping the belt (Kalas et al., 2015; Nguyen et al., 2021). As, for sensible values of \(t_{\rm dur}\), the model predicts the occurrence of KIC 8462852-like objects to be rare it is worth asking if this disfavours the interpretation of the data as the breakup of an exocomet onto an eccentric orbit. This is not the case, however, as there are other dynamical mechanisms that can place planetesimals onto highly eccentric orbits. The most appealing mechanism would be scattering of material in an outer belt inwards by a planet or chain of planets as is thought to occur in \(\eta\) Corvi (Marino et al., 2018). This would require a chain of planets in the system and for the architecture of the system to be such that the levels of dust supplied by scattering of parent bodies is roughly constant throughout the age of the system otherwise we would be unlikely to observe it. Similarly, another possible mechanism is the resonant destabilisation of a belt. This also requires the presence of a planet such that the locations of its resonance lie in any cold belt of planetesimals in the system such that the dynamics of any bodies in the belt would be chaotic, achieving high eccentricities over the lifetime of the system (Yoshikawa, 1990; Bonsor et al., 2013).
### Caveats
#### 5.3.1 Planets
The presence of planets in misaligned wide binary systems would act to suppress the Kozai instability induced by the companion. Perturbations from such planets would drive secular (or, for the right period
ratios, resonant) oscillations in the orbits of planetesimals. Ample evidence for the influence of planets on smaller bodies comes from our own Solar system in the form of the Asteroid and Kuiper belts, as well as various comet populations (Yoshikawa, 1990; Malhotra, 1995). This influence is also seen in exoplanetary systems: the comets seen in \(\beta\) Pic are thought to be scattered inwards from the planetesimal belt by one of the planets in the system (Kiefer et al., 2014), whilst the exozodi in the \(\eta\) Corvi system is thought to be due to scattering of comets inward from a cold outer belt by a chain of sufficiently massive planets (Marino et al., 2018). There are also eccentric belts, for example Fomalhaut (MacGregor et al., 2017; Gaspar et al., 2023), as well as those that have warps or gaps, which provide evidence that planets can dominate the evolution of planetesimals around them. If this effect is strong enough, usually meaning that the planetesimals are close enough to the planet(s), then the planetary interaction will have a greater effect than that of the binary companion and this would act to shut off the Kozai mechanism in a manner analogous to General Relativity (Innanen et al., 1997). The planet, however, could itself be affected by the star and increase its eccentricity and the effect of this on the planetesimals' orbits is unknown, though the evolution of planets under the Kozai mechanism may be subject to tidal considerations, which severely complicate the picture. In addition to this, a system of multiple planets with or without a belt can precess as a rigid disc in the presence of a highly misaligned companion star instead of undergoing the eccentric Kozai mechanism and avoid destruction (Innanen et al., 1997).
#### 5.3.2 Input Distributions
Throughout this work it has been assumed that the inclination distribution of wide binary companions to planetary systems is uniformly distributed. Recent analyses of astrometric observations by Christian et al. (2022) and Behmard et al. (2022), however, have revealed the possibility that wide binary companions are biased towards low mutual inclinations. This could be caused by the natural inclination distribution that arises out of binary star formation through core fragmentation. Though some binaries would inevitably be formed by capture and have random orientations, these may be in the minority of total wide binary systems and would be represented only at the widest separations. The observed bias could also, however, be due to the Kozai mechanism itself. If the distribution inherited since birth is uniform, then it could be expected that some systems will have a high enough inclination that they will become unstable due to the EKM and hence will not be included in the samples analysed by Christian et al. (2022) and Behmard et al. (2022), as they will have been destroyed. It should be noted, though, that even for the most highly inclined systems, the susceptibility to the EKM is subject to the same restrictions outlined in section 3.6.
The parameter distributions used in this model are uncorrelated, which is not necessarily true in real systems. For example, more massive stars might be expected to form with more massive protoplanetary discs and hence have more massive debris discs. Similarly, more distant binary companions are more likely than close-in pairs to have formed by capture rather than core fragmentation and thus could be expected to have larger eccentricities. Whilst these would not change the final answer by orders of magnitude, they might affect the most likely masses and ages of stars that would be seen to be undergoing these events.
## 6 Conclusions
This work has sought to examine the effect of highly misaligned wide binary companion stars on planetesimal belts, with a specific focus on explaining the extreme lightcurve of KIC 8462852 through the 'Eccentric Kozai Mechanism'. The secular equations of motion for the hierarchical three body problem were integrated to show that planetesimals in a belt can reach eccentricities greater than 0.99 for large enough inclinations. The exact inclination above which this occurs depends on the semi-major axes of the planetesimal and companion, but in some cases can be as low as \(45^{\circ}\). For these inclinations, not only does this high eccentricity / low pericentre space become unlocked but the integrations also show that, on average, 100% of the belt particles will reach these high eccentricities.
These results were then fed into a Monte Carlo model of the Kepler field that sought to constrain how often the eccentric Kozai mechanism would be expected to produce an observable exocomet signature in the lightcurves of stars and the parameters of the most likely systems to be seen in this state. It was found that the binary systems most likely to be observed with large objects at small pericentres are those with belts at \(10^{2}-10^{3}\) au, companions at \(10^{2}-10^{4}\) au, host stars with masses \(M_{\star}\geq 1M_{\sun}\) and stellar ages of \(10^{2}-10^{3}\) Myr and, apart from the non-detection of a distant belt, all of these parameters match what is known about the KIC 8462852 system. However, the model found that, on average, the fraction of their main sequence lifetimes that stars spend with large objects excited to high eccentricities is \(2.7\times 10^{-4}\), with a spread between \(10^{-9}\) and \(10^{-1}\). This leads to a probability of observing one or more Kepler stars to have KIC 8462852-like dimming events due to this mechanism of \(1.3\times 10^{-3}\). Hence, though it is possible that the Kozai mechanism might be the cause, it is much more likely than not that another mechanism is responsible, such as scattering by one or more planets undergoing a dynamical instability or resonant destabilisation of planetesimals in a belt. This has potential consequences beyond the interpretation of KIC 8462852 as the eccentric Kozai mechanism is often invoked to explain phenomena such as extreme debris discs. Only by extending this model to these other scenarios can it be determined whether this mechanism occurs often enough to be a viable explanation.
## Acknowledgements
SDY thanks the Science and Technology Facilities Council (STFC) for a PhD studentship.
## Data Availability
This work makes use of the mass distribution of stars in the Kepler data which can be found at [https://exoplanetarchive.ipac.caltech.edu/docs/KeplerMission.html](https://exoplanetarchive.ipac.caltech.edu/docs/KeplerMission.html). Additionally, the N-body simulations were carried out using rebound which is freely available at [https://rebound.readthedocs.io/en/latest/](https://rebound.readthedocs.io/en/latest/).
|
2308.16570 | MONDEO: Multistage Botnet Detection | Mobile devices have become widespread and are now the most used piece of technology.
Due to their characteristics, they have become major targets for botnet-related
malware. FluBot is one example of botnet malware that infects mobile devices.
In particular, FluBot is a DNS-based botnet that uses Domain Generation
Algorithms (DGA) to establish communication with the Command and Control Server
(C2). MONDEO is a multistage mechanism with a flexible design to detect
DNS-based botnet malware. MONDEO is lightweight and can be deployed without
requiring the deployment of software, agents, or configuration in mobile
devices, allowing easy integration in core networks. MONDEO comprises four
detection stages: Blacklisting/Whitelisting, Query rate analysis, DGA analysis,
and Machine learning evaluation. It was created with the goal of processing
streams of packets to identify attacks with high efficiency, in the distinct
phases. MONDEO was tested against several datasets to measure its efficiency
and performance, being able to achieve high performance with RandomForest
classifiers. The implementation is available at github. | Duarte Dias, Bruno Sousa, Nuno Antunes | 2023-08-31T09:12:30Z | http://arxiv.org/abs/2308.16570v1 | # MONDOE: Multistage Botnet Detection
###### Abstract
Mobile devices have become widespread and are now the most used piece of technology. Due to their characteristics, they have become major targets for botnet-related malware. FluBot is one example of botnet malware that infects mobile devices. In particular, FluBot is a DNS-based botnet that uses Domain Generation Algorithms (DGA) to establish communication with the Command and Control Server (C2). MONDEO is a multistage mechanism with a flexible design to detect DNS-based botnet malware. MONDEO is lightweight and can be deployed without requiring the deployment of software, agents, or configuration in mobile devices, allowing easy integration in core networks. MONDEO comprises four detection stages: Blacklisting/Whitelisting, Query rate analysis, DGA analysis, and Machine learning evaluation. It was created with the goal of processing streams of packets to identify attacks with high efficiency in the distinct phases. MONDEO was tested against several datasets to measure its efficiency and performance, being able to achieve high performance with RandomForest classifiers. The implementation is available at github.
Keywords:Botnet DDoS FluBot AIDS Mobile Malware
## 1 Introduction
Mobile technology has been a massive success since its introduction, amassing a large portion of the active devices on the internet. These devices are low-powered, portable, and can use both LAN and WAN, giving them great versatility.
The security aspect of technology has been historically neglected. Mobile devices are no different. Even with the introduction of measures such as Google Play Protect [1] and Apple App Security [2], malware is still able to sneak its way into users' devices.
Mobile-related malware can occur under many formats and with many objectives, data exfiltration and financial damage, as mentioned in [3]. In particular, this document focuses on DNS-based malware that uses Domain Name Generation (DGA) algorithms as an evasive tactic, to hide its communications with Command and Control (C2) server(s). Diverse solutions for DGA approaches are available, such as Intel DGA [4] and DGA Detective [5], having different performance levels and requirements.
### FluBot
In 2021, large amounts of botnet-focused malware started spreading on the internet. This malware can perform ransomware attacks and Distributed Denial of Service (DDoS), among other malware-specific functionalities [6; 7]. FluBot was one of the most reported, mainly due to its massive lateral spreading capabilities.
FluBot-infected applications act as receivers for a Command and Control (C2) network. Upon installation, the malware first hides from the user using several evasive tactics, in order to ensure its long stay in the infected device. Its removal is also non-trivial, requiring a full device wipe. According to ProDaft's report [6], FluBot can emulate well-known banking sites, steal 2-Factor Authentication (2FA) codes sent via SMS, exfiltrate data, uninstall applications, and perform actions on the user's behalf, among other malicious capabilities.
Botnets such as FluBot impact not only end-users but also network operators. Internet service providers (ISPs) whose users have infected devices suffer from increased network traffic, reducing the overall quality of the network. For instance, Domain Generation Algorithms (DGA) perform large amounts of DNS queries before connecting to the Command and Control (C2) server(s) [7].
### Malware Detection
There are several methods to detect malware-infected systems. The static analysis comprises the methods that can be used without running the malware. Analyzing the malware's checksum is a form of static analysis. If the malware had been previously identified, then the hash comparison would quickly identify the presence of the identified malware. More in-depth static analysis dissects the program, analyzing its code, in order to understand its behavior.
The dynamic analysis consists of evaluating the malware while it is running. This practice is more dangerous and should only be performed in a safe environment (a sandbox). The dynamic analysis allows the researcher to identify runtime behaviour that may otherwise be impossible to find. To combat dynamic analysis, some malware has evolved to avoid running if it detects it is running in a virtualized environment.
Signature-based malware detection aims to identify the malware by relying on a list of well-known behaviours (these include, network traffic patterns, file hashes, system calls, etc). Signature-based detection usually provides accurate results at the expense of speed of detection (can only detect malware whose behaviour has been previously documented).
Anomaly-based detection expands the concept of signature detection, detecting malware based on unusual activity patterns. Such patterns include network traffic, connection timing, and system behaviour through the analysis of system calls. Anomaly-based detection does not need a previous baseline to detect new malware, granting better results on dynamic malware changes, at the cost of lower accuracy rates.
According to Singh et al. [8], DNS-based botnet malware detection can be divided into five categories: Anomaly, Flow, Flux, DGA, and Bot Infection. Following this classification, FluBot is considered to be a DGA-based malware.
AI has proven to be a valuable method for the detection of botnets. Examples include HANABot [9], MABDS [10] or BotMark [11], which make use of techniques such as Reinforcement Learning, Adaboost, K-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Isolated Forests.
### Mondeo
MONDEO was implemented as a proof-of-concept (POC), using a docker container running python-flask, using RESTful API responses, and was evaluated with datasets both created in the laboratory and by collecting data on a deployed DNS server. The results show the suitability of MONDEO to be deployed in network infrastructures of operators with minimal overhead and with accurate precision levels.
Regarding performance, MONDEO aims to be time efficient, balancing accuracy and speed of detection to deliver precise results using production-grade scenarios as a baseline for tool design.
MONDEO provides a solution that does not require software deployment, agents, or configuration in mobile devices. To achieve this, it comprises a flexible multistage pipeline, which processes streams of packets, providing a per-request floating-point classification (0 to 1, Non-infected to Infected). This approach combines anomaly detection, DGA-based detection, and machine learning in an effort to obtain better and faster results. MONDEO also innovates in that the results of some phases can be used to configure the initial phase, where blacklists and/or whitelists are applied.
The solution of MONDEO is available at github.
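As a rough illustration of the per-request interface described above, the sketch below exposes a single classification endpoint with python-flask. The endpoint name, the JSON fields, and the placeholder `classify_packet` function are our assumptions and are not taken from the repository; the actual REST interface is defined there.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_packet(packet: dict) -> float:
    """Placeholder for the four-stage pipeline; returns a score in [0, 1]."""
    return 0.0

@app.route("/classify", methods=["POST"])   # endpoint name is an assumption
def classify():
    packet = request.get_json(force=True)    # one DNS request described as JSON
    score = classify_packet(packet)
    return jsonify({"score": score, "infected": score >= 0.5})

if __name__ == "__main__":
    app.run(port=5000)
```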
### Structure
The document is structured as follows. Section 2 presents an overview of the state-of-the-art botnet detection, as well as relevant literature on the FluBot malware. The section concludes by comparing the MONDEO approach to other state-of-the-art approaches. Section 3 details the MONDEO pipeline, carefully describing each step; Section 4 presents the evaluation methodology, while Section 5 documents the achieved results and discusses possible changes and additions to be made to MONDEO; Section 6 concludes the document providing an overview of the technology as well as a final analysis.
## 2 Related Work and MONDEO positioning
This section presents the related work and positions MONDEO regarding the SoA.
### Related Work
Khraisat et al. [12] provide an overview of Network-based Intrusion Detection Systems (NIDS), Host-based Intrusion Detection systems (HIDS) Signature-based detection (SIDS), and Anomaly-Based detection (AIDS). It also provides an overview of algorithms and techniques used in machine learning detection approaches.
Manmeet Singh et al. [8] survey DNS-based botnet detection frameworks, documenting the usage of different techniques. This work divides detection into five categories: anomaly-based, flow-based, flux-based (e.g., using IP information), DGA-based, and Bot infection detection-based. According to the research, the majority of the works identified rely on data for DNS response-based features, which take a longer time to perform accurate detection.
Majda Wazzan et al. [13] survey botnet detection mechanisms for IoT devices, including connected cameras, routers, and Android devices. For threats such as Mirai and Reaper, detection techniques are documented for different malware phases such as Reconnaissance, Spread, and Attack. MONDEO shares similarities in detection techniques used such as Adaboost, K-Nearest Neighbors (kNN), Support Vector Machines (SVM), and Isolated Forests. Noticeably, detection using such techniques is performed during the attack phase, where infected devices establish communication with the C2 server and act actively in attacks.
Ying Xing et al. [14] survey botnet detection techniques including honeypot analysis, communication signatures (e.g. using white and blacklists), Deep Learning techniques (based either on Neural networks or on Reinforcement Learning), statistical analysis, distributed approaches and also combination methods. The work includes recent approaches for Moving Target Defense (MTD), also detailing combination methods like HANABot [9] or MABDS [10], which use multiple techniques to detect botnets and require the deployment of agents in the mobile devices and at the network side, such as honeypot agents.
Rosa et al. [15] provide an overview of machine learning techniques for malware detection in Industrial Autonomous Control Systems (IACS). For detection using Artificial intelligence (AI), the techniques with the best results mentioned are k-Nearest Neighbour (kNN), Support Vector Machines (SVM), and Isolated Forests.
Salsabila et al. [16] analyze FluBot using a combination of static and dynamic analysis. This paper focuses on disassembling the malware and analyzing the underlying code, as well as documenting the runtime behaviour of the malware in the infected device.
BotMark [11] is a botnet detection strategy based on several techniques and with multiple steps. Techniques include pre-processing, hybrid analysis, and bot detection. The approach was created to be protocol-independent and, therefore, the features used do not depend on the protocol used, for example, packets per second (PPS) and standard deviation of payload size. BotMark employs kNN techniques for clustering and identifying distinct communication flows. While the evaluation achieves high accuracy in datasets with Mirai, Ares, and BlackEnergy, BotMark does not include whitelists or blacklists, meaning it can provide false negatives in the traffic patterns of legitimate applications like Redis or Zookeeper with characteristics similar to malware in botnets.
### MONDEO - Innovation and approach comparison
MONDEO is positioned as an approach that combines several phases for accuracy, efficiency, and integration purposes. The multi-staged approach allows network operators to perform botnet detection using existing whitelisting/blacklisting approaches, without requiring the installation of any software on mobile phones.
When compared to other approaches that can detect FluBot-based attacks, Salsabila et al. [16] focus on static and runtime behaviour analysis, whereas MONDEO focuses on network-based detection. Bellizi et al. [17] analyze memory dumps using the JIT-MF framework to detect malware, such as FluBot, in the users' system. MONDEO does not focus on the user device but on identifying malware from the perspective of the mobile carrier, presenting a solution that does not require any contribution from the infected user. Chiscop et al. [18] identify FluBot based on its DGA behaviour, whereas MONDEO combines DGA detection with other features, such that detection can be performed more efficiently.
Regarding other network-related botnet detection approaches, Singh et al. [8] present solutions that use both DNS requests and replies/answers, whereas MONDEO only requires DNS requests for detection.
Other works also propose a multilayered approach to botnet detection. Almutiari et al. [9] propose HANABot, a machine-learning-based evaluation algorithm that uses features from both the network and the host. Wang et al. [10] propose BBDP, behaviour-based botnet detection in parallel. This approach uses 5 stages to detect botnets, consisting of traffic reduction, feature extraction, data partitioning, DNS detection, and TCP detection. Of the mentioned stages, MONDEO shares only the DNS detection stage.
MONDEO's main novelty is related not to the underlying detection principles used, but to how the approach is structured and how resources are combined to ensure efficient and accurate detection.
## 3 MONDEO Implementation Details
MONDEO is a multistage pipeline configured for flexibility and efficiency. It contains 4 evaluation stages: 1- Whitelisting/Blacklisting; 2- Query Rate Analysis; 3- DGA Evaluation; and 4- Machine Learning Evaluation. The diverse stages rely on current practices regarding access control in networks, such as whitelists/blacklists and request ratios. The DGA and ML stages are specific to the FluBot malware behaviour. Figure 1 illustrates the pipeline.
Each phase may generate an evaluation resulting in one of 3 actions:
* **Benign** when part of a regular request. This is present in all the phases, except phase 2 which does not do this classification.
* **Infected** when belonging to malware requests.
* **Pass for next stage** when a phase considers it does not have enough information, having the packet transit to the next phase.
The last phase with Machine Learning models always produces a definite evaluation, either Benign or Infected.
The ML and DGA phases support a feedback loop which is able to automatically populate both white and blacklists, further improving efficiency, especially on the subsequent analysis, where requests can be evaluated in the first phase.
MONDEO uses solely DNS queries for botnet detection, which is relevant for mobile network operators, as most mobile users rely upon the default DNS servers on their devices, usually provided by the operator. Furthermore, by working on the side of the network operator, MONDEO removes the need for an end user to install agents, and applications to run on their devices.
### Phase 1 - Whitelisting/Blacklisting
A whitelist and a blacklist correspond to lists where domains are catalogued and directly compared against the queried domain. If a domain is in the whitelist, the packet passes evaluation, whereas if the domain is in the blacklist, the packet is immediately flagged as infected. For efficiency, a hash-based structure guarantees \(O(1)\) search complexity. If the list is too big to be reliably implemented using hash-based data structures, binary structures are a good second option, with an associated \(O(\log n)\) complexity for the search operation.
Figure 1: MONDEO overall stages and feedback loop
The options described above take into consideration a full (1-1) direct match. To improve performance, the full query domain can be stripped down to its Free Level Domain (FLD) and, while this has slightly lower precision, it does not affect the results much. For example, queries such as ftl.netflix.com and api-global.netflix.com both reduce to the FLD netflix.com.
The Proof of Concept (PoC) elaborated to evaluate MONDEO employs a linear search with \(O(n)\) complexity and FLD stripping.
The whitelist and blacklist can be dynamically improved by adding domains to the lists according to the runtime decision of other phases (such as phases 3 and 4) on those domains; this is done through the feedback loop. Nonetheless, it should be noted that adding to a whitelist without manual confirmation can be dangerous from a security standpoint.
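The following minimal sketch, not taken from the MONDEO repository, illustrates the Phase 1 lookup with FLD stripping. The naive last-two-labels rule and the function names are our simplifications; a production implementation would use a public-suffix-aware library.

```python
def fld(domain: str) -> str:
    """Naive free-level-domain extraction: keep the last two labels.

    A production implementation would use a public-suffix-aware library
    instead of this simplification.
    """
    labels = domain.rstrip(".").lower().split(".")
    return ".".join(labels[-2:])

def phase1(domain: str, whitelist: set, blacklist: set):
    """Return 'benign', 'infected', or None (pass to the next stage).

    Sets give the O(1) membership test discussed above.
    """
    d = fld(domain)
    if d in whitelist:
        return "benign"
    if d in blacklist:
        return "infected"
    return None

print(phase1("api-global.netflix.com", {"netflix.com"}, set()))  # 'benign'
```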
### Phase 2 - Query Rate Analysis
Analyzing the FluBot action pattern reveals that, to establish a connection with the C2 server, FluBot spams several thousands of queries in a short period of time until it connects with the real C2 server.
This pattern can be detected by implementing a mechanism to detect a high query rate. An efficient approach makes implementation nontrivial, as the best structure would require an event-driven data structure, such as a doubly-linked list, where both ends of the lists are updated whenever a new packet is processed, ensuring the time interval, \(\Delta t\), is maintained.
A simpler but also efficient approach was used in the PoC. Instead of keeping track of all of the events (i.e. packets) in any given time window, the difference between any 2 contiguous packets is measured. For example, if _Packet_1_ arrived at timestamp \(t1=1\) and _Packet_2_ arrived at \(t2=2\), then they diverge by one time unit, which does not necessarily correspond to seconds. Using this approach, sensitivity can be parameterized in 2 ways:
* \(\Delta F\) refers to the divergence interval in time units;
* \(K\) specifies the threshold in the number of packets that surpass the divergence threshold.
If \(\Delta F=0\) then packets must arrive with the same timestamp (the smallest possible value and, therefore, the fewest packets are caught). \(K\) limits false positives by delaying warnings. For example, in situations where a legitimate service makes 5 queries under \(\Delta F\) time, it is not reported for any \(K\leq 5\). Algorithm 1 summarises the steps that are performed in the Query Rate analysis phase.
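Algorithm 1 itself is not reproduced here; the sketch below is an illustrative reimplementation of the contiguous-packet criterion just described. The per-source state handling and the exact counting convention (gaps versus packets, strict versus non-strict comparison against \(\Delta F\)) are our assumptions.

```python
class QueryRateAnalyzer:
    """Flags a source once K consecutive inter-arrival gaps fall within delta_f."""

    def __init__(self, delta_f=0, k=5):
        self.delta_f = delta_f   # divergence interval, in time units
        self.k = k               # burst length needed before flagging
        self.last_ts = {}        # source -> timestamp of the previous packet
        self.hits = {}           # source -> current run length

    def observe(self, source, timestamp):
        prev = self.last_ts.get(source)
        self.last_ts[source] = timestamp
        if prev is not None and timestamp - prev <= self.delta_f:
            self.hits[source] = self.hits.get(source, 0) + 1
        else:
            self.hits[source] = 0
        # Flag as infected once the burst threshold is reached, else pass on.
        return "infected" if self.hits[source] >= self.k else None

qra = QueryRateAnalyzer(delta_f=0, k=5)
```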
### Phase 3 - DGA Detection
Any packet passing phase 2 has its Fully Qualified Domain Name (FQDN) analyzed by a DGA checker. DGAs produce random-looking but deterministic FQDNs that are used in FluBot's DNS requests. The attacker registers one or more of these FQDNs for the legitimate C2 server(s), and if one gets blacklisted, quickly registers a new one. This technique is much more resilient than using static IPs.
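To make the idea concrete, the toy generator below produces deterministic yet random-looking names from a seed and a date. It is purely illustrative and is not FluBot's actual algorithm.

```python
import hashlib
from datetime import date

def example_dga(seed: str, day: date, n: int = 5, tld: str = ".com"):
    """Toy DGA: names look random but can be regenerated by anyone who knows the seed."""
    domains = []
    for i in range(n):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

print(example_dga("toy-seed", date(2023, 1, 1)))
# Output values depend on the seed and date, e.g. ['3f5a0c1b2d4e.com', ...]
```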
In the MONDEO PoC we have considered two main implementations of the DGA detector:
* DGA Intel (_DGAIntel_) [4], maintained by Intel
* DGA Detective (_DGADet_) [5], result from the H2020 SOCCRATES research project.
The open-source solutions analyze the DNS requests and return a floating-point score, where 0 indicates that a domain is non-DGA-generated and 1 that it is DGA-generated. The acceptance/rejection criteria were defined with lower and upper boundaries as follows (a minimal sketch of this thresholding is given after the list):
* \(lower\leq 0.1\) means immediate acceptance.
* \(upper\geq 0.9\) delineates immediate rejection.
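The sketch below applies these boundaries; the wrapper around the DGA detector (here just a callable returning a score in [0, 1]) is assumed and not shown.

```python
def phase3(domain: str, dga_score, lower=0.1, upper=0.9):
    """Apply the acceptance/rejection boundaries to a DGA detector score.

    `dga_score` is any callable returning a value in [0, 1] (e.g. a wrapper
    around DGA Intel or DGA Detective).
    """
    score = dga_score(domain)
    if score <= lower:
        return "benign"      # immediate acceptance
    if score >= upper:
        return "infected"    # immediate rejection
    return None              # inconclusive: pass to the ML stage

print(phase3("ftl.netflix.com", lambda d: 0.05))  # 'benign' for a low score
```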
### Phase 4 - Machine Learning Detection
The last phase of the pipeline performs a machine learning evaluation. As expected, this phase takes more time and should evaluate fewer requests to reduce the impact on the detection of FluBot. This phase produces a binary output, where 0 denotes a non-infected packet and 1 an infected packet.
#### 3.4.1 Feature Selection
The features used are summarized in Table 1 and are taken from DNS requests. Some of the fields were converted to numeric values for efficiency reasons. The IP addresses used a bit conversion of each decimal octet of the IPv4 address.
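One plausible reading of this bit conversion is to pack the four octets into a single 32-bit integer, as sketched below; the exact encoding used in the PoC may differ (e.g. four separate per-octet features).

```python
def ipv4_to_int(address: str) -> int:
    """Pack the four decimal octets of an IPv4 address into one 32-bit integer."""
    value = 0
    for octet in address.split("."):
        value = (value << 8) | int(octet)
    return value

assert ipv4_to_int("8.8.8.8") == 0x08080808
```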
#### 3.4.2 Model Training
The training data included a total of 10,000 data points, with a 50/50 split between a fabricated packet set using the Alexa Top 1 million domain list [19] and lab-generated malware samples (see Section 4.1).
The Alexa Top 1 million list was used since it contains the most well-known and most visited domains on the Internet. The names were used to generate DNS requests for these domains, such as www.facebook.com.
The trained ML model uses an 80/20 train/test split, where 80% of the data is used to train the model and the remaining 20% is used to test the accuracy of the model. The models are implemented in Python using scikit-learn, with _RandomForest_, _IsolationForest_, _MLPC_, and _SVM_ classifiers. Such models provide an acceptable tradeoff regarding classification accuracy and performance [20].
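A minimal scikit-learn sketch of this training procedure is given below. The CSV file name and the column names are placeholders for the feature set of Table 1 plus a 0/1 label; they are assumptions, not the actual dataset layout used in the PoC.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# 'dns_features.csv' and the 'label' column (0 = benign, 1 = infected) are placeholders.
data = pd.read_csv("dns_features.csv")
X = data.drop(columns=["label"])
y = data["label"]

# 80/20 train/test split, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```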
## 4 MONDEO Evaluation Methodology
This section documents the evaluation methodology to assess the performance of MONDEO.
### DNS Experimental Setup and Datasets
To create a realistic dataset a DNS server was configured using ISC BIND. The information present in the datasets included captured DNS packets from the regular (non-infected) DNS clients.
The data collected comes from volunteers who configured their devices to use the DNS server. About 20 users participated in the experiment with their mobile phones and laptops. The collection of DNS data was performed over a period of three months.
To assess the behaviour of FluBot safely, Android Studio was used to sandbox the malware samples. The samples with FluBot malware were available in three distinct applications: UPS, Correos, and DHL, as summarised in Table 2.
The profile for the emulated device was based on Google's Pixel 4, running the Android API 29. This device is also used in related works assessing the behaviour of mobile devices [21, 22]. Malware was activated in specific time windows such that clear samples could be captured.
| Feature ID | Description | ML Data Type |
| --- | --- | --- |
| IP Src | Source IP performing request | Bit Conversion |
| IP Dst | Destination of DNS request | Bit Conversion |
| Length | Size of the Payload | Integer |
| DNS Flag | Info Regarding Flags | Boolean |
| DNS Questions | N. of requests in DNS message | Integer |
| Query Type | Qry Type: A, AAAA, CNAME, PTR | Integer |
| Qry Name Null | If DNS name is NULL or not | Boolean |
| Timestamp | Indication of packet creation time | Integer |

Table 1: Selected Feature Set
### Evaluation Methodology
The MONDEO framework was evaluated under two parameters: the accuracy of detection and the efficiency of detection.
#### 4.2.1 Machine Learning Model
The goal is to verify the findings in the state of the art [20] regarding the applicability of ML classifiers such as _RandomForest_, _IsolationForest_, _MLPC_, and _SVM_.
To assess the accuracy of the machine-learning models, we have used the metrics provided by _scikit-learn_ for the diverse models, as summarised in Table 3.
#### 4.2.2 MONDEO Pipeline
MONDEO's evaluation was performed in a virtual machine configured with 4vCPUS and 16GB of RAM. MONDEO's performance was evaluated individually in each phase in terms of processing time per phase and the overall number of packets processed.
The solution is implemented in two dockers:
* **MONDEO core** assuring functionalities in Phase 1, Phase 2 and Phase 4.
* **MONDEO DGA** which implements Phase 3 only, running the specific solutions of the DGA Intel and DGA Detective.
| Name | File(s) | Description |
| --- | --- | --- |
| UPS | 83 | Application that mimics official UPS app |
| Correos | 108, Lab | Application that mimics Correos app |
| DHL | 125 | Application similar to DHL app for tracking |

Table 2: FluBot malware sample information
| Metric | Unit | Description |
| --- | --- | --- |
| Precision | \(0\leq x\leq 1\) | Ratio \(TP/(TP+FP)\), where TP is the number of true positives and FP the false positives. |
| Recall | \(0\leq x\leq 1\) | Ratio \(TP/(TP+FN)\), measuring the percentage of actual infected samples correctly classified. |
| F1-Score | \(0\leq x\leq 1\) | Determined by the formula \(2*(\frac{Precision*Recall}{Precision+Recall})\). 1 represents the best score and 0 the worst. |
| Accuracy | \(0\leq x\leq 1\) | Determined by matching predicted labels with _y_true_ (real value). |
| Support | \(x\geq 0\) | Number of occurrences of each class in _y_true_. |

Table 3: Metrics for ML Accuracy
The test data used was retrieved from the DNS packet collection setup (summarised in Table 4). In addition, test #2 included a crafted sample based on the _Alexa Top 1 Million_ list [19], which includes the most visited domains on the internet, as described in Section 3. The Alexa dataset increases the number of tested domains, adding reliability to the experiments.
The metrics used to assess the performance of MONDEO are summarised in Table 5.
To perform the measurement of time, the _timeit_ Python module was used in the developed PoC. It directly measures the time taken by Python methods, returning a high-precision floating-point value.
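A wrapper of the following kind is one way to obtain such per-phase timings with the standard-library timer; the helper name and structure are illustrative rather than taken from the PoC.

```python
from timeit import default_timer as timer

def timed(stage, packet):
    """Run one pipeline stage and return (verdict, elapsed seconds)."""
    start = timer()
    verdict = stage(packet)
    return verdict, timer() - start
```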
The resources consumed in terms of CPU and memory usage, as well as the amount of input and output traffic exchanged, are also measured for the classification process.
## 5 Evaluation Results
This section presents the evaluation results, according to the evaluation methodology, and chosen models.
### ML Model Accuracy
The ML model accuracy results are summarised in Table 6. A model is more accurate and efficient when the values of the different metrics are higher. The main goal of comparing the distinct models is to assess which one is best suited to the MONDEO multistage pipeline.
| Test | Type | File(s) | Description |
| --- | --- | --- | --- |
| #1 | Infected | 83, 108, 125, Lab | With samples of malware |
| #2 | Benign | 23, 240, Alexa | Only with regular DNS requests |

Table 4: Tests in MONDEO Pipeline Evaluation
| Metric | Unit | Description |
| --- | --- | --- |
| Packets Processed | % | Ratio of packets processed in each phase, considering the total of captured packets |
| Processing Time | ms | Time to process a packet in each phase |
| Classification | n/a | Final classification of MONDEO: whether the packet is flagged as Infected or identified as Benign |

Table 5: Performance Metrics in the MONDEO Pipeline
In terms of accuracy, _RandomForest_ is the classifier with the best performance, being able to correctly classify all the non-infected and infected samples. All the models are able to classify the non-infected samples but differ when classifying the infected samples, with MLPC providing the worst performance in terms of precision and accuracy. Given such results, _RandomForest_ is chosen as the classifier to be used in the pipeline. The results achieved with ensemble models like _RandomForest_ and _IsolationForest_ are in line with the results achieved in the SoA for anomaly detection [20] (recall Section 3.4).
### MONDEO Pipeline Performance
The results in the MONDEO Pipeline are presented considering the tests summarised in Table 4, for three approaches: 1) _DGAInt_ - DGAIntel without a feedback loop, 2) _DGAInt-Fed_ - DGAIntel with the feedback loop, 3) _DGADet_ - DGA Detective without a feedback loop. It should be noted that the samples are not balanced: Test #1 contains a majority of _Infected_ samples, while Test #2 contains only _Non-infected_ packets.
Figures 2(a) and 2(b) show the percentage of packets by the phase in which they left the pipeline for the three DGA approaches, for the _Benign_ and _Infected_ files. In both cases, the use of the feedback loop in _DGAInt-Fed_ leads to higher ratios of packets being processed at the blacklist phase - _Phase 1_. Also, as expected, the query rate analysis in _Phase 2_ is able to identify a high percentage of packets (above 58%). _Phase 3_, with the employment of DGA, is also responsible for classifying a significant portion of packets, depending on whether the file is benign or malign (above 70% and above 20%, respectively).
Figures 3(a) and 3(b) illustrate the processing time of packets in each phase. The phase with the highest processing time is the one associated with the _RandomForest_ ML model, as expected - _Phase 4_. Indeed, the difference is around 60 ms between the performance of _Phase 1_ and _Phase 4_, in the Benign and Infected files.
| Classifier | Category | Precision | Recall | F1-score | Support |
| --- | --- | --- | --- | --- | --- |
| RandomForest | Benign (0) | 1.00 | 1.00 | 1.00 | 1014 |
| RandomForest | Infected (1) | 1.00 | 1.00 | 1.00 | 986 |
| RandomForest | Accuracy | | | 1.00 | 2000 |
| SVM | Benign (0) | 1.00 | 0.51 | 0.68 | 953 |
| SVM | Infected (1) | 0.69 | 1.00 | 0.82 | 1047 |
| SVM | Accuracy | | | 0.77 | 2000 |
| MLPC | Benign (0) | 1.00 | 0.00 | 0.00 | 1021 |
| MLPC | Infected (1) | 0.49 | 1.00 | 0.82 | 979 |
| MLPC | Accuracy | | | 0.49 | 2000 |
| IsolationForest | Benign (0) | 1.00 | 0.00 | 0.00 | 955 |
| IsolationForest | Infected (1) | 0.57 | 0.75 | 0.65 | 1045 |
| IsolationForest | Accuracy | | | 0.39 | 2000 |

Table 6: ML Accuracy of the Different Classifiers
In Figure 3(a), some packets are also flagged as belonging to malicious requests. In the same line as the ML model, _Phase 3_ is also one of the phases that takes more time, due to the invocation of the DGA algorithms. In the Infected test case, as depicted in Figure 3(b), the use of the feedback loop introduces additional processing, which impacts the processing time in _Phase 3_ and _Phase 4_. The lower times in _Phase 1_ and _Phase 2_ demonstrate the relevance of supporting these filtering functionalities towards highly efficient filtering approaches, with the possibility of being incorporated into existing security mechanisms like firewalls.
### MONDEO Pipeline Overhead
This subsection details the overhead results of the pipeline in terms of used resources: CPU, memory and the exchanged traffic in each approach.
The reported values for the CPU and memory utilisation are obtained from the Docker statistics, which were collected at regular intervals (every 2 s) during the experiments.
Figure 3: Processing Time per Phase and Test
Figure 2: Ratio of Packets Processed per Test
Figure 4 depicts the CPU utilisation ratio in the diverse tests. The approaches leading to higher CPU utilisation are the ones that do not use the feedback loop, namely _DGAInt_ and _DGADet_. The reason is associated with a higher number of packets being processed in _Phase 3_ and _Phase 4_, as previously discussed.
As depicted in Figure 5, the memory usage ratio is low (below 6%) in all the tests. The difference between the _Benign_ and _Infected_ files is negligible (below 0.1%), which demonstrates that MONDEO has a low memory footprint, despite having high ratios of CPU usage.
Figure 4: CPU usage ratio
Figure 5: Memory usage ratio
Figure 6 depicts the impact in terms of traffic exchanged within each test. This metric reflects the amount of information (packets) that is sent to the containers where MONDEO is implemented and the respective responses. In the evaluated scenario, there is a tendency to exchange less information when the feedback loop is used. This is because communication with the container running the DGA service is avoided, which therefore reduces the input and output traffic in both containers.
## 6 Conclusions
This paper proposes and validates MONDEO as a multistage approach for botnet detection, targeting malware that relies on the DNS protocol and the DGA obfuscation technique. MONDEO is a flexible and scalable mechanism that does not require software deployment or configuration in mobile devices. Thus, it is suitable to be implemented by network operators to combat such types of malware.
MONDEO relies on well-known detection approaches employed for botnet detection, such as DGA and flow information characteristics, to process streams of packets with high efficiency. MONDEO also supports whitelists and blacklists that can be populated via feedback loops to improve efficiency.
The evaluation results demonstrate the suitability of MONDEO to be deployed in network infrastructures of operators with minimal overhead and with accurate precision levels.
Our next steps focus on the integration of policies and defence mechanisms, such that the results are mapped to mitigation tactics. Examples include blacklisting devices and enabling dynamic policies through policy control. In addition, we will work with encrypted DNS traffic that is transmitted using the HTTPS protocol.
Figure 6: Input and output overhead in bytes
## Acknowledgments
This work is funded by project AIDA (POCI-01-0247-FEDER045907), co-financed by the European Regional Development Fund (ERDF) through the Operational Program for Competitiveness and Internationalisation (COMPETE 2020) and by the Portuguese Foundation for Science and Technology (FCT) under CMU Portugal. This work is funded by the FCT - Foundation for Science and Technology, I.P./MCTES through national funds (PIDDAC), within the scope of CISUC R&D Unit - UIDB/00326/2020 or project code UIDP/00326/2020.
|
2309.07754 | Dynamic programming on bipartite tree decompositions | We revisit a graph width parameter that we dub bipartite treewidth, along
with its associated graph decomposition that we call bipartite tree
decomposition. Bipartite treewidth can be seen as a common generalization of
treewidth and the odd cycle transversal number. Intuitively, a bipartite tree
decomposition is a tree decomposition whose bags induce almost bipartite graphs
and whose adhesions contain at most one vertex from the bipartite part of any
other bag, while the width of such decomposition measures how far the bags are
from being bipartite. Adapted from a tree decomposition originally defined by
Demaine, Hajiaghayi, and Kawarabayashi [SODA 2010] and explicitly defined by
Tazari [Th. Comp. Sci. 2012], bipartite treewidth appears to play a crucial
role for solving problems related to odd-minors, which have recently attracted
considerable attention. As a first step toward a theory for solving these
problems efficiently, the main goal of this paper is to develop dynamic
programming techniques to solve problems on graphs of small bipartite
treewidth. For such graphs, we provide a number of para-NP-completeness
results, FPT-algorithms, and XP-algorithms, as well as several open problems.
In particular, we show that $K_t$-Subgraph-Cover, Weighted Vertex
Cover/Independent Set, Odd Cycle Transversal, and Maximum Weighted Cut are
$FPT$ parameterized by bipartite treewidth. We provide the following complexity
dichotomy when $H$ is a 2-connected graph, for each of $H$-Subgraph-Packing,
$H$-Induced-Packing, $H$-Scattered-Packing, and $H$-Odd-Minor-Packing problem:
if $H$ is bipartite, then the problem is para-NP-complete parameterized by
bipartite treewidth while, if $H$ is non-bipartite, then it is solvable in
XP-time. We define 1-${\cal H}$-treewidth by replacing the bipartite graph
class by any class ${\cal H}$. Most of the technology developed here works for
this more general parameter. | Lars Jaffke, Laure Morelle, Ignasi Sau, Dimitrios M. Thilikos | 2023-09-14T14:39:12Z | http://arxiv.org/abs/2309.07754v1 | # Dynamic programming on bipartite tree decompositions1,2
###### Abstract
We revisit a graph width parameter that we dub _bipartite treewidth_, along with its associated graph decomposition that we call _bipartite tree decomposition_. Bipartite treewidth can be seen as a common generalization of treewidth and the odd cycle transversal number. Intuitively, a bipartite tree decomposition is a tree decomposition whose bags induce almost bipartite graphs and whose adhesions contain at most one vertex from the bipartite part of any other bag, while the width of such decomposition measures how far the bags are from being bipartite. Adapted from a tree decomposition originally defined by Demaine, Hajiaghayi, and Kawarabayashi [SODA 2010] and explicitly defined by Tazari [Theor. Comput. Sci. 2012], bipartite treewidth appears to play a crucial role for solving problems related to odd-minors, which have recently attracted considerable attention. As a first step toward a theory for solving these problems efficiently, the main goal of this paper is to develop dynamic programming techniques to solve problems on graphs of small bipartite treewidth. For such graphs, we provide a number of para-NP-completeness results, FPT-algorithms, and XP-algorithms, as well as several open problems. In particular, we show that \(K_{t}\)-Subgraph-Cover, Weighted Vertex Cover/Independent Set, Odd Cycle Transversal, and Maximum Weighted Cut are FPT parameterized by bipartite treewidth. We also provide the following complexity dichotomy when \(H\) is a 2-connected graph, for each of the \(H\)-Subgraph-Packing, \(H\)-Induced-Packing, \(H\)-Scattered-Packing, and \(H\)-Odd-Minor-Packing problems: if \(H\) is bipartite, then the problem is para-NP-complete parameterized by bipartite treewidth while, if \(H\) is non-bipartite, then the problem is solvable in XP-time. Beyond bipartite treewidth, we define 1-\(\mathcal{H}\)-treewidth by replacing the bipartite graph class by any graph class \(\mathcal{H}\). Most of the technology developed here also works for this more general parameter.
**Keywords:** tree decomposition, bipartite graphs, dynamic programming, odd-minors, packing, maximum cut, vertex cover, independent set, odd cycle transversal
###### Contents
* 1 Introduction
* 2 Overview of our techniques
* 2.1 Dynamic programming algorithms
* 2.1.1 \(\mathsf{FPT}\)-algorithms
* 2.1.2 \(\mathsf{XP}\)-algorithms
* 2.2 Hardness results
* 3 Definitions
* 4 General dynamic programming to obtain \(\mathsf{FPT}\)-algorithms
* 4.1 Boundaries and nice problems
* 4.2 General dynamic programming scheme
* 4.3 Generalizations
* 4.4 Applications
* 4.4.1 \(K_{t}\)-Subgraph-Cover
* 4.4.2 Weighted Vertex Cover/Weighted Independent Set
* 4.4.3 Odd Cycle Transversal
* 4.4.4 Maximum Weighted Cut
* 5 \(\mathsf{XP}\)-algorithms for packing problems
* 5.1 (Induced) Subgraph packing
* 5.2 Odd-minor-packing
* 6 \(\mathsf{NP}\)-completeness on graphs of bounded \(\mathsf{btw}\)
* 6.1 Coloring
* 6.2 Hardness of covering problems
* 6.3 Hardness of packing problems
* 7 Further research
Introduction
A graph \(H\) is said to be an _odd-minor_ of a graph \(G\) if it can be obtained from \(G\) by iteratively removing vertices, edges, and contracting edge cuts. Hadwiger's conjecture [18], which has been open since 1943, states that if a graph excludes \(K_{t}\) as a minor, then its chromatic number is at most \(t-1\). In 1993, Gerards and Seymour [21] generalized this conjecture to odd-minors, hence drawing attention to odd-minors: the Odd Hadwiger's conjecture states that if a graph excludes \(K_{t}\) as an odd-minor, then its chromatic number is at most \(t-1\). Since then, a number of papers regarding odd-minors appeared. Most of them focused on the resolution of the Odd Hadwiger's conjecture (see for instance [14], and [34] for a nice overview of the results), while some others aimed at extending the results of graph minor theory to odd-minors (see for instance [19, 8, 24]). In particular, Demaine, Hajiaghayi, and Kawarabayashi [8] provided a structure theorem which essentially states that graphs excluding an odd-minor can be obtained by clique-sums of almost-embeddable graphs and almost bipartite graphs. To prove this, they implicitly proved the following, which is described more explicitly by Tazari [35] (see Section 3 for the corresponding definitions).
**Proposition 1.1** ([35], adapted from [8]).: _Let \(H\) be a fixed graph and let \(G\) be a given \(H\)-odd-minor-free graph. There exists a fixed graph \(H^{\prime}\), \(\kappa,\mu\in\mathbb{N}\) depending only on \(H\), and an explicit uniform algorithm that computes a rooted tree decomposition of \(G\) such that:_
* _the adhesion of two nodes has size at most_ \(\kappa\)_, and_
* _the torso of each bag_ \(B\) _either consists of a bipartite graph_ \(W_{B}\) _together with_ \(\mu\) _additional vertices (bags of Type 1) or is_ \(H^{\prime}\)_-minor-free (bags of Type 2)._
_Furthermore, the following properties hold:_
1. _Bags of Type 2 appear only in the leaves of the tree decomposition,_
2. _if_ \(B_{2}\) _is a bag that is a child of a bag_ \(B_{1}\) _in the tree decomposition, then_ \(|B_{2}\cap V(W_{B_{1}})|\leq 1\)_; and if_ \(B_{2}\) _is of Type 1, then_ \(|B_{1}\cap V(W_{B_{2}})|\leq 1\) _as well,_
3. _the algorithm runs in time_ \(\mathcal{O}_{H}(|V(G)|^{4})\)_, and_
4. _the_ \(\mu\) _additional vertices of the bags of Type 1, called_ apex _vertices, can be computed within the same running time._
It is worth mentioning that Condition 2 of Proposition 1.1 is slightly stronger than what is stated in [35], but it follows from the proof of [8, Theorem 4.1].
The tree decomposition described in Proposition 1.1 hence seems well-suited to the study of problems related to odd-minors. As a first step toward building a theory for solving such problems, we study in this paper a new type of tree decomposition, which we call _bipartite tree decomposition_, corresponding to the tree decompositions of Proposition 1.1 where all bags are of Type 1. We stress that this decomposition has already been used implicitly in [23] and is also introduced, under the same name, in [5]. The width of such a tree decomposition is defined as the maximum number of apex vertices in a bag of the decomposition. Naturally, the _bipartite treewidth_ of a graph \(G\), denoted by \(\mathsf{btw}(G)\), is the minimum width over all bipartite tree decompositions of \(G\); a formal definition is given in Section 3. It follows easily from the definition that \(\mathsf{btw}(G)=0\) if and only if \(G\) is bipartite (indeed, to prove the sufficiency, just take a single bag containing the whole bipartite graph, with no apex vertices). More generally, for every graph \(G\) it holds that \(\mathsf{btw}(G)\leq\mathsf{oct}(G)\), where \(\mathsf{oct}\) denotes the size of a minimum odd cycle transversal, that is, a vertex set intersecting every odd cycle. On the other hand, any tree decomposition can be seen as a bipartite tree decomposition by declaring all the vertices of each bag as apices, and hence, for every graph \(G\), it holds that \(\mathsf{btw}(G)\leq\mathsf{tw}(G)+1\), where \(\mathsf{tw}\) denotes treewidth. Thus, bipartite treewidth can be seen as a common generalization of treewidth and the odd cycle transversal number. Hence, an \(\mathsf{FPT}\)-algorithm parameterized by \(\mathsf{btw}\) should generalize both \(\mathsf{FPT}\)-algorithms parameterized by \(\mathsf{tw}\) and by \(\mathsf{oct}\). Since our goal is to develop a theory for solving problems related to odd-minors, the first prerequisite is that bipartite treewidth is closed under odd-minors. Fortunately, this is indeed the case (cf. Lemma 3.2). Interestingly, this would not be true anymore if, in Condition 2 of Proposition 1.1, the considered intersections were required to be upper-bounded by some integer larger than one (cf. Lemma 3.3).
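To make the inequality \(\mathsf{btw}(G)\leq\mathsf{oct}(G)\) concrete, here is a minimal sketch (our illustration, not from the paper; it assumes the networkx library, the name `brute_force_oct` is ours, and the brute force is only meant for tiny graphs): an odd cycle transversal immediately yields the trivial single-bag bipartite tree decomposition whose apex set is the transversal.

```python
# Brute-force odd cycle transversal for tiny graphs: removing the returned set
# leaves a bipartite graph, so placing it as the apex set of a single bag gives a
# bipartite tree decomposition of width oct(G), witnessing btw(G) <= oct(G).
from itertools import combinations
import networkx as nx

def brute_force_oct(G):
    nodes = list(G.nodes)
    for k in range(len(nodes) + 1):
        for S in combinations(nodes, k):
            if nx.is_bipartite(G.subgraph(set(nodes) - set(S))):
                return set(S)
    return set(nodes)

# K_5 needs three deletions to become bipartite; since a complete graph must lie
# inside a single bag, btw(K_5) is in fact exactly 3.
assert len(brute_force_oct(nx.complete_graph(5))) == 3
```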
This type of tree decomposition has already been used implicitly by Kawarabayashi and Reed [23] in order to solve Odd Cycle Transversal parameterized by the solution size. Independently of our work, Campbell, Gollin, Hendrey, and Wiederrecht [5] are also currently studying bipartite tree decompositions. In particular, they provide universal obstructions characterizing bounded \(\mathsf{btw}\) in the form of a "grid theorem" (actually the results of [5] apply in the much more general setting of undirected group-labeled graphs). They also designed an \(\mathsf{FPT}\)-approximation algorithm that can construct a bipartite tree decomposition in time \(g(k)\cdot n^{4}\log n\). This \(\mathsf{FPT}\)-approximation is an important prerequisite for our algorithmic results, as it permits us to assume that, for the implementation of our algorithms, some (approximate) bipartite tree decomposition is provided in advance.
Our aim is to provide a general framework for the design of dynamic programming algorithms on bipartite tree decompositions and, more generally, on a broader type of decompositions that we call _1-\(\mathcal{H}\)-tree decompositions_. These decompositions generalize bipartite tree decompositions, in the sense that the role of bipartite graphs is replaced by a general graph class \(\mathcal{H}\).
**Our results.** In this article we formally introduce bipartite treewidth and bipartite tree decompositions (noting that they were already implicitly used before, as discussed above). We then focus on the complexity of various problems when the bipartite treewidth of the input graph is taken as a parameter. In particular, we show the following (cf. Table 1):
* While a graph with \(\mathsf{btw}\) at most \(k\) is \((k+2)\)-colorable (Lemma 6.1), 3-Coloring is \(\mathsf{NP}\)-complete even on graphs with an odd cycle transversal of size three (Lemma 6.2), and thus with \(\mathsf{btw}\) at most three.
* \(K_{t}\)-Subgraph-Cover, Weighted Vertex Cover/Independent Set, Odd Cycle Transversal, and Maximum Weighted Cut are \(\mathsf{FPT}\) parameterized by \(\mathsf{btw}\) (cf. Subsection 4.4). In particular, our \(\mathsf{FPT}\)-algorithms extend the domain where these well-studied problems can be solved in polynomial time to graphs that are "locally close to being bipartite". Furthermore, as \(\mathsf{btw}(G)\leq\mathsf{oct}(G)\) for any graph \(G\), we think that the fact that Odd Cycle Transversal is \(\mathsf{FPT}\) parameterized by \(\mathsf{btw}\) is relevant by itself, as it generalizes the well-known \(\mathsf{FPT}\)-algorithms parameterized by the solution size [28, 32]. We would also like to mention that, by combining our dynamic programming algorithm with the \(\mathsf{FPT}\)-approximation and the Grid Exclusion Theorem of [5] in a win-win manner, we may derive an \(\mathsf{FPT}\)-algorithm for Odd Cycle Transversal parameterized by the solution size, whose running time is considerably better than the one in [5], which has been obtained independently by using the irrelevant vertex technique (see also [23]).
* Let \(H\) be a 2-connected graph. We prove that \(H\)-Minor-Packing is para-NP-complete parameterized by btw (Lemma 6.4). For each of the \(H\)-Subgraph-Packing, \(H\)-Induced-Subgraph-Packing, \(H\)-Scattered-Packing, and \(H\)-Odd-Minor-Packing problems (cf. Section 3 for the definitions), we obtain the following complexity dichotomy: if \(H\) is bipartite, then the problem is para-NP-complete parameterized by btw (in fact, even for \(\mathsf{btw}=0\)), and if \(H\) is non-bipartite, then the problem is solvable in XP-time. The hardness results are presented in Subsection 6.3 and the XP-algorithms in Section 5.
* In view of the definition of bipartite tree decompositions, it seems natural to consider, instead of bipartite graphs as the "free part" of the bags, any graph class \(\mathcal{H}\). This leads to the more general definition of _1-\(\mathcal{H}\)-tree decomposition_ and _1-\(\mathcal{H}\)-treewidth_ (cf. Section 3), with 1-\(\{\emptyset\}\)-treewidth being equivalent to the usual treewidth and 1-\(\mathcal{B}\)-treewidth being the bipartite treewidth if \(\mathcal{B}\) is the class of bipartite graphs. We introduce these more general definitions because our dynamic programming technologies easily extend to 1-\(\mathcal{H}\)-treewidth. It also seems natural to consider, instead of allowing at most one "bipartite vertex" in each adhesion, allowing any number \(q\) of them. For \(q=0\), this corresponds to the \(\mathcal{H}\)-treewidth defined in [10] (see also [1, 20] on the study of \(\mathcal{H}\)-treewidth for several instantiations of \(\mathcal{H}\)). However, as mentioned above, while 1-\(\mathcal{B}\)-treewidth is closed under odd-minors (Lemma 3.2), this is not the case anymore for \(q\geq 2\) (Lemma 3.3). For \(q\geq 2\), some problems remain intractable even when \(H\) is not bipartite. As an illustration of this phenomenon, we prove that \(H\)-Scattered-Packing (where there cannot be an edge in \(G\) among the copies of \(H\) to be packed) is para-NP-complete parameterized by \(q\)-\(\mathcal{B}\)-treewidth for \(q\geq 2\) even if \(H\) is not bipartite (Lemma 6.7).
In the statements of the running time of our algorithms, we always let \(n\) (resp. \(m\)) be the number of vertices (resp. edges) of the input graph of the considered problem.
| Problem | Complexity | Constraints on \(H\) / Running time |
|---|---|---|
| \(H\)(-Induced)-Subgraph/Odd-Minor-Cover [36] | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) bipartite containing \(P_{3}\) as a subgraph |
| \(H\)-Minor-Cover [36] | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) containing \(P_{3}\) as a subgraph |
| \(H\)(-Induced)-Subgraph-Packing | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) bipartite containing \(P_{3}\) as a subgraph |
| \(H\)-Minor-Packing | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) 2-connected with \(\lvert V(H)\rvert\geq 3\) |
| \(H\)-Odd-Minor-Packing | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) 2-connected bipartite with \(\lvert V(H)\rvert\geq 3\) |
| \(H\)-Scattered-Packing | para-\(\mathsf{NP}\)-complete, \(k=0\) | \(H\) 2-connected bipartite with \(\lvert V(H)\rvert\geq 2\) |
| 3-Coloring | para-\(\mathsf{NP}\)-complete, \(k=3\) | |
| \(K_{t}\)-Subgraph-Cover | \(\mathsf{FPT}\) | \(\mathcal{O}(2^{k}\cdot(k^{t}\cdot(n+m)+m\sqrt{n}))\) |
| Independent Set | \(\mathsf{FPT}\) | \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+m\sqrt{n}))\) |
| Weighted Independent Set | \(\mathsf{FPT}\) | \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n\cdot m))\) |
| Odd Cycle Transversal | \(\mathsf{FPT}\) | \(\mathcal{O}(3^{k}\cdot k\cdot n\cdot(m+k^{2}))\) |
| Maximum Weighted Cut | \(\mathsf{FPT}\) | \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n^{\mathcal{O}(1)}))\) |
| \(H\)-Subgraph-Packing | \(\mathsf{XP}\) | \(H\) non-bipartite 2-connected; \(n^{\mathcal{O}(k)}\) |
| \(H\)-Induced-Subgraph-Packing | \(\mathsf{XP}\) | \(H\) non-bipartite 2-connected; \(n^{\mathcal{O}(k)}\) |
| \(H\)-Scattered-Packing | \(\mathsf{XP}\) | \(H\) non-bipartite 2-connected; \(n^{\mathcal{O}(k)}\) |

Table 1: Summary of the results obtained in this article.
**Related results.** Other types of tree decompositions integrating some "free bipartite parts" have been defined recently. As we already mentioned, Eiben, Ganian, Hamm, and Kwon [10] defined _\(\mathcal{H}\)-treewidth_ for a fixed graph class \(\mathcal{H}\). The \(\mathcal{H}\)-treewidth of a graph \(G\) is essentially the minimum treewidth of the graph induced by some set \(X\subseteq V(G)\) such that the connected components of \(G\setminus X\) belong to \(\mathcal{H}\), and it is equal to the \(0\)-\(\mathcal{H}\)-treewidth minus one (cf. Section 3). In particular, when \(\mathcal{H}\) is the class of bipartite graphs \(\mathcal{B}\), Jansen and de Kroon [20] provided an \(\mathsf{FPT}\)-algorithm to test whether the \(\mathcal{B}\)-treewidth of a graph is at most \(k\).
Recently, as a first step to provide a systematic theory for odd-minors, Gollin and Wiederrecht [15] defined the _\(\mathcal{H}\)-blind-treewidth_ of a graph \(G\), where \(\mathcal{H}\) is a property of annotated graphs. The \(\mathcal{H}\)-blind-treewidth is the smallest \(k\) such that \(G\) has a tree decomposition in which every bag \(\beta(t)\) with \((G,\beta(t))\notin\mathcal{H}\) has size at most \(k\). For the case where \(\mathcal{C}\) consists of every \((G,X)\) such that every odd cycle of \(G\) has at most one vertex in \(X\), we obtain the \(\mathcal{C}\)-blind-treewidth, for which [15] gives an analogue of the Grid Exclusion Theorem [7, 33] under the odd-minor relation. Moreover, [15] provides an \(\mathsf{FPT}\)-algorithm for Independent Set parameterized by \(\mathcal{C}\)-blind-treewidth. According to [15], the _bipartite-blind treewidth_ of a graph \(G\) is lower-bounded by a function of the maximum treewidth over all non-bipartite blocks of \(G\). This immediately implies that bipartite-blind treewidth is lower-bounded by bipartite treewidth. Hence, our \(\mathsf{FPT}\)-algorithm for Independent Set is more general than the one of [15]. Independently of our work, [5] presents an \(\mathsf{FPT}\)-algorithm to solve Odd Cycle Transversal parameterized by \(\mathsf{btw}\) in time \(f(\mathsf{btw})\cdot n^{4}\log n\) (in fact, they solve a more general group-labeled problem). Our algorithm for Odd Cycle Transversal (cf. Corollary 4.6) is considerably faster.
##### Organization of the paper.
In Section 2 we provide an overview of our techniques. In Section 3 we give basic definitions on graphs and we define \(q\)-\(\mathcal{H}\)-treewidth. In Section 4 we define a _nice reduction_, give a general dynamic programming algorithm to obtain \(\mathsf{FPT}\)-algorithms, and apply it to \(K_{t}\)-Subgraph-Cover, Weighted Independent Set/Vertex Cover, Odd Cycle Transversal, and Maximum Weighted Cut. In Section 5 we provide another dynamic programming scheme to solve \(H\)-Subgraph-Packing and \(H\)-Odd-Minor-Packing in \(\mathsf{XP}\)-time. In Section 6 we provide our hardness results. Finally, we present several questions for further research in Section 7.
## 2 Overview of our techniques
In this section we present an overview of the techniques that we use to obtain our results.
### Dynamic programming algorithms
Compared to dynamic programming on classical tree decompositions, there are two main difficulties in doing dynamic programming on (rooted) bipartite tree decompositions. The first one is that the bags in a bipartite tree decomposition may be arbitrarily large, which prevents us from applying typical brute-force approaches to define table entries. The second one, apparently more important, is the lack of an upper bound on the number of children of each node of the decomposition. Indeed, unfortunately, a notion of "nice bipartite tree decomposition" preserving the width (even approximately) does not exist (cf. Lemma 3.4). We discuss separately the main challenges involved in our \(\mathsf{FPT}\)-algorithms and in our \(\mathsf{XP}\)-algorithms.
#### 2.1.1 \(\mathsf{FPT}\)-algorithms
In fact, for most of the considered problems, in order to obtain \(\mathsf{FPT}\)-algorithms parameterized by \(\mathsf{btw}\), it would be enough to bound the number of children as a function of \(\mathsf{btw}\), but we were not able to come up with a general technique that achieves this property (cf. Lemma 3.4). For particular problems, however, we can devise ad-hoc solutions. Namely, for \(K_{t}\)-Subgraph-Cover, Weighted Vertex Cover/Independent Set, Odd Cycle Transversal, and Maximum Weighted Cut parameterized by \(\mathsf{btw}\), we overcome the above issue by managing to replace the children by constant-sized bipartite gadgets. More specifically, we guess an annotation of the "apex" vertices of each bag \(t\), whose number is bounded by \(\mathsf{btw}\), that essentially tells which of these vertices go to the solution or not (with some extra information depending on each particular problem; for instance, for Odd Cycle Transversal, we also guess the side of the bipartition of the non-solution vertices). Having this annotation, each adhesion of the considered node \(t\) with a child contains, by the definition of bipartite tree decompositions, at most one vertex \(v\) that is not annotated. At this point, we crucially observe that, for the considered problems, we can make local computations for each child, independent from the computations at other children, depending only on the values of the optimum solutions at that child that are required to contain or to exclude \(v\) (note that we need to be able to keep this extra information in the tables of the children). Using the information given by these local computations, we can replace the children of \(t\) by constant-sized bipartite gadgets (sometimes empty) so that the newly built graph, which we call a _nice reduction_, is an equivalent instance modulo some constant. If a nice reduction can be efficiently computed for a problem \(\Pi\), then we say that \(\Pi\) is a _nice problem_ (cf. Subsection 4.1). The newly modified bag has bounded \(\mathsf{oct}\), so we can then use an \(\mathsf{FPT}\)-algorithm parameterized by \(\mathsf{oct}\) to find the optimal solution with respect to the guessed annotation.
##### An illustrative example.
Before entering into some more technical details and general definitions, let us illustrate this idea with the Weighted Vertex Cover problem. The formal definition of bipartite tree decomposition can be found in Section 3 (in fact, for the more general setting of \(1\)-\(\mathcal{H}\)-treewidth), but for this informal explanation it is enough to suppose that we want to compute the dynamic programming tables at a bag associated with a node \(t\) of the rooted tree given by the decomposition, and that the vertices of the bag at \(t\) are partitioned into two sets: \(\beta(t)\) induces a bipartite graph and its complement, denoted by \(\alpha(t)\), corresponds to the apex vertices, whose number is bounded by the parameter, namely \(\mathsf{btw}\). The first step is to guess, in time at most \(2^{\mathsf{btw}}\), which vertices in \(\alpha(t)\) belong to the desired minimum vertex cover. After such a guess, all the vertices in \(\alpha(t)\) can be removed from the graph, by also removing the neighborhood of those that were not taken into the solution. The definition of bipartite tree decomposition implies that, in each adhesion of the current bag with a child, there is at most one "surviving" vertex. Let \(v\) be such a vertex belonging to the adhesion with a child \(t^{\prime}\) of \(t\). Suppose that, inductively, we have computed in the tables for \(t^{\prime}\) the following two values, subject to the choice that we made for \(\alpha(t)\): the minimum weight \(w_{v}\) of a vertex cover in the graph below \(t^{\prime}\) that contains \(v\), and the minimum weight \(w_{\bar{v}}\) of a vertex cover in the graph below \(t^{\prime}\) that does not contain \(v\). Then, the trick is to observe that, having these two values at hand, we can totally forget the graph below \(t^{\prime}\): it is enough to delete this whole graph, except for \(v\), and attach a new pendant edge \(vu\), where \(u\) is a new vertex, such that \(v\) is given weight \(w_{v}\) and \(u\) is given weight \(w_{\bar{v}}\). It is easy to verify that this gadget mimics, with respect to the current bag, the behavior of including vertex \(v\) or not in the solution for the child \(t^{\prime}\). Adding this gadget for every child results in a bipartite graph, where we can just solve Weighted Vertex Cover in polynomial time using a classic algorithm [25, 31], and add the returned weight to our tables. The running time of this whole procedure, from the leaves to the root of the decomposition, is clearly \(\mathsf{FPT}\) parameterized by the bipartite treewidth of the input graph.
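The following minimal sketch (assuming networkx; the function names `solve_bipartite_wvc` and `attach_child_gadget` are ours, not from the paper) illustrates the two ingredients of this example: the classic min-cut algorithm for Weighted Vertex Cover on bipartite graphs, and the pendant-edge gadget encoding the two table values \(w_{v}\) and \(w_{\bar{v}}\) of a child.

```python
# A sketch of the two ingredients above.  The solver uses the standard s-t min-cut
# reduction for minimum weight vertex cover on bipartite graphs.
import networkx as nx

def solve_bipartite_wvc(left, right, edges, weight):
    """Minimum weight of a vertex cover of the bipartite graph (left, right, edges)."""
    left = set(left)
    D = nx.DiGraph()
    for v in left:
        D.add_edge("s", v, capacity=weight[v])
    for v in right:
        D.add_edge(v, "t", capacity=weight[v])
    for u, v in edges:
        if u not in left:           # orient every edge from the left side to the right side
            u, v = v, u
        D.add_edge(u, v)            # no capacity attribute = infinite capacity
    cut_value, _ = nx.minimum_cut(D, "s", "t")
    return cut_value

def attach_child_gadget(edges, weight, v, w_v, w_not_v, u):
    """Replace the whole graph below a child by the pendant edge vu described above.

    v       : the unique surviving adhesion vertex
    w_v     : min weight of a cover below the child that contains v
    w_not_v : min weight of a cover below the child that avoids v
    u       : a fresh vertex, to be placed on the side opposite to v
    """
    edges.append((v, u))
    weight[v] = w_v                 # covering vu by v costs the "take v" table value
    weight[u] = w_not_v             # covering vu by u costs the "avoid v" table value
```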
##### Extensions and limitations.
Note that the algorithm sketched above for Weighted Vertex Cover is problem-dependent, in particular the choice of the gadgets for the children, and the fact of deleting the neighborhood of the vertices chosen in the solution. Which type of replacements and reductions can be afforded in order to obtain an \(\mathsf{FPT}\)-algorithm for bipartite treewidth? For instance, concerning the gadgets for the children, as long as the considered problem can be solved in polynomial time on bipartite graphs, we could attach to the "surviving" vertices an arbitrary bipartite graph instead of just an edge. If we assume that the considered problem is \(\mathsf{FPT}\) parameterized by \(\mathsf{oct}\) (which is a reasonable assumption, as \(\mathsf{btw}\) generalizes \(\mathsf{oct}\)), then one could think that it may be sufficient to devise gadgets with bounded \(\mathsf{oct}\). Unfortunately, this will not work in general: even if each of the gadgets has bounded \(\mathsf{oct}\) (take, for instance, a triangle), since we do not have any upper bound, in terms of \(\mathsf{btw}\), on the number of children (hence, on the number of different adhesions), the resulting graph after the gadget replacement may have unbounded \(\mathsf{oct}\). In order to formalize the type of replacements and reductions that can be allowed, we introduce in Section 4 the notions of _nice reduction_ and _nice problem_. Additional insights into these definitions, which are quite lengthy, are provided in Subsection 4.1, along with an illustration depicted in Figure 1.
Another sensitive issue is that of "guessing the vertices into the solution". While this is quite simple for Weighted Vertex Cover (either a vertex is in the solution, or it is not), for some other problems we may have to guess a richer structure in order to have enough information to combine the tables of the children into the tables of the current bag. This is the reason why, in the general dynamic programming scheme that we present in Section 4, we deal with _annotated problems_, i.e., problems that receive as input, apart from a graph, a collection of annotated sets in the form of a partition \(\mathcal{X}\) of some \(X\subseteq V(G)\). For instance, for Weighted Vertex Cover, we define its _annotated extension_, which we call Annotated Weighted Vertex Cover, that takes as an input a graph \(G\) and two disjoint sets \(R\) and \(S\) of vertices of \(G\), and asks for a minimum vertex cover \(S^{\star}\) such that \(S\subseteq S^{\star}\) and \(S^{\star}\cap R=\emptyset\).
##### General dynamic programming scheme.
Our general scheme essentially says that if a problem \(\Pi\) has an annotated extension \(\Pi^{\prime}\) that is
* a nice problem and
* solvable in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{oct}\),
then \(\Pi\) is solvable in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{btw}\). More specifically, it is enough to prove that \(\Pi^{\prime}\) is solvable in time \(f(|X|)\cdot n^{\mathcal{O}(1)}\) on an instance \((G,\mathcal{X})\) such that \(G\setminus X\) is bipartite, where \(\mathcal{X}\) is a partition of \(X\) corresponding to the annotation. This general dynamic programming algorithm works in a wider setting, namely for a general graph class \(\mathcal{H}\) that plays the role of bipartite graphs, as long as the annotated extension \(\Pi^{\prime}\) is what we call _\(\mathcal{H}\)-nice_; cf. Lemma 4.1 for the details.
##### Applications.
We then apply this general framework to give \(\mathsf{FPT}\)-algorithms for several problems parameterized by bipartite treewidth. For each of \(K_{t}\)-Subgraph-Cover (Subsubsection 4.4.1), Weighted Vertex Cover/Independent Set (Subsubsection 4.4.2), Odd Cycle Transversal (Subsubsection 4.4.3), and Maximum Weighted Cut (Subsubsection 4.4.4), we prove that the problem has an annotated extension that is 1) nice and 2) solvable in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{oct}\), as discussed above.
To prove that an annotated problem has a nice reduction, we essentially use two ingredients. Given two compatible boundaried graphs \(\mathbf{F}\) and \(\mathbf{G}\) with boundary \(X\) (a boundaried graph is essentially a graph along with some labeled vertices that form a boundary, see the formal definition in Subsection 4.1), an annotated problem is usually nice if the following hold:
* (_Gluing property_) Given that we have guessed the annotation \(\mathcal{X}\) in the boundary \(X\), a solution compatible with the annotation is optimal in the graph \(\mathbf{F}\oplus\mathbf{G}\) obtained by gluing \(\mathbf{F}\) and \(\mathbf{G}\) if and only if it is optimal in each of the two glued graphs. In this case, it means that the optimum on \((\mathbf{F}\oplus\mathbf{G},\mathcal{X})\) is equal to the optimum on \((F,\mathcal{X})\) modulo some constant depending only on \(G\) and \(\mathcal{X}\).
* (_Gadgetization_) Given that we have guessed the annotation in the boundary \(X\setminus\{v\}\) for some vertex \(v\) in \(X\), there is a small boundaried graph \(G^{\prime}\), that is bipartite (maybe empty), such that the optimum on \((\mathbf{F}\oplus\mathbf{G},\mathcal{X})\) is equal to the optimum on \((\mathbf{F}\oplus\mathbf{G}^{\prime},\mathcal{X})\) modulo some constant depending only on \(G\) and \(\mathcal{X}\).
The gluing property seems critical to show that a problem is nice. This explains why we solve \(H\)-Subgraph-Cover only when \(H\) is a clique. For any graph \(H\), Annotated \(H\)-Subgraph-Cover is defined similarly to Annotated Weighted Vertex Cover by specifying vertices that must or must not be taken in the solution. If \(H\) is a clique, then we crucially use the fact that \(H\) is a subgraph of \(\mathbf{F}\oplus\mathbf{G}\) if and only if it is a subgraph of either \(F\) or \(G\) to prove that Annotated \(H\)-Subgraph-Cover has the gluing property. However, we observe that if \(H\) is not a clique, then Annotated \(H\)-Subgraph-Cover does not have the gluing property (see Lemma 4.3). This is the main difficulty that we face to solve \(H\)-Subgraph-Cover in the general case.
Note also that if we define the annotated extension of Odd Cycle Transversal in a similar fashion (that is, a set \(S\) of vertices contained in the solution and a set \(R\) of vertices that do not belong to the solution), then we can prove that this annotated extension does not have the gluing property. However, if we define Annotated Odd Cycle Transversal as the problem that takes as an input a graph \(G\) and three disjoint sets \(S,X_{1},X_{2}\) of vertices of \(G\) and aims at finding an odd cycle transversal \(S^{\star}\) of minimum size such that \(S\subseteq S^{\star}\) and \(X_{1}\) and \(X_{2}\) are on different sides of the bipartition obtained after removing \(S^{\star}\), then Annotated Odd Cycle Transversal does have the gluing property (see Lemma 4.9).
For Maximum Weighted Cut, the annotation is pretty straightforward: we use two annotation sets \(X_{1}\) and \(X_{2}\), corresponding to the vertices that will be on each side of the cut. It is easy to see that this annotated extension has the gluing property.
Finding the right gadgets is the main difficulty to prove that a problem is nice. As explained above, for Annotated Weighted Vertex Cover, we replace the boundaried graph \(\mathbf{G}\) by an edge that simulates the behavior of \(G\) with respect to \(v\), which is the only vertex that interests us (see Lemma 4.7). For Annotated Maximum Weighted Cut, if \(\mathcal{X}=(X_{1},X_{2})\), the behavior of \(G\) can be simulated by an edge between \(v\) and a vertex in \(X_{1}\) of weight equal to the optimum on \((G,(X_{1},X_{2}\cup\{v\}))\) and an edge between \(v\) and a vertex in \(X_{2}\) of weight equal to the optimum on \((G,(X_{1}\cup\{v\},X_{2}))\) (see Lemma 4.14). For Annotated \(K_{t}\)-Subgraph-Cover, if \(\mathcal{X}=(R,S)\), depending on the optimum on \((G,(R\cup\{v\},S))\) and the one on \((G,(R,S\cup\{v\}))\), we can show that the optimum on \((\mathbf{F}\oplus\mathbf{G},\mathcal{X})\) is equal to the optimum on \((F,\mathcal{X})\) or \((F\setminus\{v\},\mathcal{X})\) modulo some constant
(see Lemma 4.4). For Annotated Odd Cycle Transversal, if \(\mathcal{X}=(S,X_{1},X_{2})\), we can show that the optimum on \((\mathbf{F}\oplus\mathbf{G},\mathcal{X})\) is equal modulo some constant to the optimum on either \((F,\mathcal{X})\), or \((F\setminus\{v\},\mathcal{X})\), or \((F^{\prime},\mathcal{X})\), where \(F^{\prime}\) is obtained from \(F\) by adding an edge between \(v\) and either a vertex of \(X_{1}\) or a vertex of \(X_{2}\) (see Lemma 4.10).
Finally, let us mention some particular ingredients used to prove that the considered annotated problems are solvable in time \(f(|X|)\cdot n^{\mathcal{O}(1)}\) on an instance \((G,\mathcal{X})\) such that \(G\setminus X\) is bipartite, where \(\mathcal{X}\) is a partition of a vertex set \(X\) corresponding to the annotation. For Annotated \(K_{t}\)-Subgraph-Cover and Annotated Weighted Vertex Cover, this is simply a reduction to (Weighted) Vertex Cover on bipartite graphs. For Odd Cycle Transversal, we adapt the algorithm of Reed, Smith, and Vetta [32] that uses iterative compression to solve Annotated Odd Cycle Transversal in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{oct}\), so that it takes annotations into account (Lemma 4.12). As for Maximum Weighted Cut parameterized by \(\mathsf{oct}\), the most important trick is to reduce to a \(K_{5}\)-odd-minor-free graph, and then use known results of Grötschel and Pulleyblank [16] and Guenin [17] to solve the problem in polynomial time (Proposition 4.1).
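As an illustration of the first of these ingredients, here is a short sketch of one standard way to handle the annotation (hedged: it is not necessarily the exact reduction of the paper, and `annotated_wvc` is an illustrative name) for Annotated Weighted Vertex Cover on an instance \((G,(R,S))\) with \(G\setminus(R\cup S)\) bipartite, reusing the bipartite solver sketched earlier.

```python
# Vertices of S are forced into the cover and vertices of R are forced out, so every
# neighbour of a vertex of R must join the cover; what remains is an ordinary
# Weighted Vertex Cover instance on a bipartite graph.
import networkx as nx
from networkx.algorithms import bipartite

def annotated_wvc(G, weight, R, S):
    R, S = set(R), set(S)
    if R & S:
        return None                              # contradictory annotation
    for r in R:
        for x in G.neighbors(r):
            if x in R:
                return None                      # an edge inside R can never be covered
            S.add(x)
    rest = G.subgraph(set(G.nodes) - S - R)      # bipartite by assumption on the input
    color = bipartite.color(rest)                # a proper 2-coloring of each component
    left = [v for v in rest if color[v] == 0]
    right = [v for v in rest if color[v] == 1]
    return sum(weight[v] for v in S) + solve_bipartite_wvc(left, right, rest.edges, weight)
```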
#### 2.1.2 \(\mathsf{XP}\)-algorithms
We now sketch some of the basic ingredients of the \(\mathsf{XP}\)-algorithms that we present in Section 5 for \(H(\text{-Induced})\)-Subgraph/Scattered/Odd-Minor-Packing. The main observation is that, if \(H\) is \(2\)-connected and non-bipartite, since the "non-apex" part of each bag is bipartite and \(H\) is non-bipartite, in any \(H\)-subgraph/induced/scattered/odd-minor-packing and every bag of the decomposition, there are at most \(\mathsf{btw}\) occurrences of \(H\) that intersect that bag. We thus guess these occurrences, and how they intersect the children, which allows us to reduce the number of children by just deleting those not involved in the packing. The guess of these occurrences is the dominant term in the running time of the resulting \(\mathsf{XP}\)-algorithm using this method. Note that for \(H(\text{-Induced})\)-Subgraph/Scattered-Packing, we can indeed easily guess those occurrences in \(\mathsf{XP}\)-time parameterized by \(\mathsf{btw}\), as the total size of the elements of the packing intersecting a given bag is bounded by a function of \(\mathsf{btw}\) and \(H\). However, for \(H\)-Odd-Minor-Packing, this is not the case anymore, as an element of the packing may contain an arbitrary number of vertices in the bipartite part of a bag. We overcome this issue as follows. As stated in Lemma 3.1, the existence of an \(H\)-odd-minor is equivalent to the existence of a so-called _odd \(H\)-expansion_, which is essentially a collection of trees connected by edges preserving the appropriate parities of the resulting cycles. In an odd \(H\)-expansion, the _branch vertices_ are those that have degree at least three, or that are incident to edges among different trees (cf. Subsection 5.2). Note that, in an odd \(H\)-expansion, the number of branch vertices depends only on \(H\). Equipped with this property, we first guess, at a given bag, the branch vertices of the packing that intersect that bag. Note that this indeed yields an \(\mathsf{XP}\) number of choices, as required. Finally, for each such choice, we use an \(\mathsf{FPT}\)-algorithm of Kawarabayashi, Reed, and Wollan [24] solving the Parity \(k\)-Disjoint Paths problem to check whether the guessed packing exists or not. This approach is formalized in Lemma 5.3.
It is worth mentioning that, as discussed in Section 7, we leave as an open problem the existence of \(\mathsf{FPT}\)-algorithms for the above packing problems parameterized by \(\mathsf{btw}\).
### Hardness results
Finally, we discuss some of the tools that we use to obtain the para-\(\mathsf{NP}\)-completeness results summarized in Table 1, which can be found in Section 6. We present a number of different reductions,
some of them consisting of direct simple reductions, such as the one we provide for 3-Coloring in Lemma 6.2.
Except for 3-Coloring, all the considered problems fall into two categories: covering or packing problems. For the first family (cf. Subsection 6.2), the para-\(\mathsf{NP}\)-completeness is an immediate consequence of a result of Yannakakis [36] that characterizes the hereditary graph classes \(\mathcal{G}\) for which Vertex Deletion to \(\mathcal{G}\) on bipartite graphs is polynomial-time solvable and those for which Vertex Deletion to \(\mathcal{G}\) remains \(\mathsf{NP}\)-complete (cf. Proposition 6.1).
For the packing problems (cf. Subsection 6.3), we do not have such a general result as for the covering problems, and we provide several reductions for different problems. For instance, we prove in Lemma 6.3 that if \(H\) is a bipartite graph containing \(P_{3}\) as a subgraph, then \(H\)-Subgraph-Packing and \(H\)-Induced-Subgraph-Packing are \(\mathsf{NP}\)-complete on bipartite graphs. The proof of Lemma 6.3 consists of a careful analysis and a slight modification of a reduction of Kirkpatrick and Hell [26] for the problem of partitioning the vertex set of an input graph \(G\) into subgraphs isomorphic to a fixed graph \(H\). The hypothesis about containing \(P_{3}\) is easily seen to be tight.
For the minor version, we prove in Lemma 6.4 that if \(H\) is a 2-connected graph with at least three vertices, then \(H\)-Minor-Packing is \(\mathsf{NP}\)-complete on bipartite graphs. The proof uses a reduction from \(P_{3}\)-Subgraph-Packing on bipartite graphs, which was proved to be \(\mathsf{NP}\)-complete by Monnot and Toulouse [30]. The 2-connectivity of \(H\) is crucially used in the proof. Given that odd-minors preserve cycle parity (Lemma 3.1), when \(H\) is bipartite, \(H\)-Odd-Minor-Packing and \(H\)-Minor-Packing are the same problem on bipartite graphs (Lemma 6.5). Hence, the same hardness result holds for \(H\)-Odd-Minor-Packing when \(H\) is 2-connected and bipartite.
In Lemma 6.6 we prove that, if \(H\) is a 2-connected bipartite graph with at least one edge, then \(H\)-Scattered-Packing is \(\mathsf{NP}\)-complete on bipartite graphs, by a simple reduction from the Induced Matching problem on bipartite graphs, which is known to be \(\mathsf{NP}\)-complete [4].
Finally, in Lemma 6.7 we prove that if \(H\) is a (not necessarily bipartite) 2-connected graph containing an edge and \(q\in\mathbb{N}_{\geq 2}\), then \(H\)-Scattered-Packing is para-\(\mathsf{NP}\)-complete parameterized by \(q\)-\(\mathcal{B}\)-treewidth. In fact, this reduction is exactly the same as the one of Lemma 6.6, with the extra observation that, if \(G^{\prime}\) is the graph constructed in the reduction, then the \(q\)-\(\mathcal{B}\)-treewidth of \(G^{\prime}\) is at most the one of \(H\) for \(q\geq 2\).
## 3 Definitions
In this section we give some definitions and preliminary results.
#### Sets and integers.
We denote by \(\mathbb{N}\) the set of non-negative integers. Given two integers \(p\) and \(q\), the set \([p,q]\) contains every integer \(r\) such that \(p\leq r\leq q\). For an integer \(p\geq 1\), we set \([p]=[1,p]\) and \(\mathbb{N}_{\geq p}=\mathbb{N}\setminus[0,p-1]\). For a set \(S\), we denote by \(2^{S}\) the set of all subsets of \(S\) and, given an integer \(r\in[|S|]\), we denote by \(\binom{S}{r}\) the set of all subsets of \(S\) of size \(r\).
###### Parameterized complexity.
A parameterized problem is a language \(L\subseteq\Sigma^{\star}\times\mathbb{N}\), where \(\Sigma^{\star}\) is a set of strings over a finite alphabet \(\Sigma\). An input of a parameterized problem is a pair \((x,k)\), where \(x\) is a string over \(\Sigma\) and \(k\in\mathbb{N}\) is a parameter. A parameterized problem is _fixed-parameter tractable_ (or FPT) if it can be solved in time \(f(k)\cdot|x|^{\mathcal{O}(1)}\) for some computable function \(f\). A parameterized problem is XP if it can be solved in time \(f(k)\cdot|x|^{g(k)}\) for some computable functions
\(f\) and \(g\). A parameterized problem is para-NP-complete if it is NP-complete for some fixed value \(k\) of the parameter.
##### Partitions.
Given \(p\in\mathbb{N}\), a _\(p\)-partition_ of a set \(X\) is a tuple \((X_{1},\ldots,X_{p})\) of pairwise disjoint subsets of \(X\) such that \(X=\bigcup_{i\in[p]}X_{i}\). We denote by \(\mathcal{P}_{p}(X)\) the set of all \(p\)-partitions of \(X\). Given a partition \(\mathcal{X}\in\mathcal{P}_{p}(X)\), its domain \(X\) is also denoted as \(\cup\mathcal{X}\). A _partition_ is a \(p\)-partition for some \(p\in\mathbb{N}\). Note that this corresponds to the usual definition of an ordered near-partition, since we allow empty sets in a \(p\)-partition and since the order matters. Given \(Y\subseteq X\), \(\mathcal{X}=(X_{1},\ldots,X_{p})\in\mathcal{P}_{p}(X)\), and \(\mathcal{Y}=(Y_{1},\ldots,Y_{p})\in\mathcal{P}_{p}(Y)\), we say that \(\mathcal{Y}\subseteq\mathcal{X}\) if \(Y_{i}\subseteq X_{i}\) for each \(i\in[p]\). Given a set \(U\), two subsets \(X,A\subseteq U\), and \(\mathcal{X}=(X_{1},\ldots,X_{p})\in\mathcal{P}_{p}(X)\), \(\mathcal{X}\cap A\) denotes the partition \((X_{1}\cap A,\ldots,X_{p}\cap A)\) of \(X\cap A\).
##### Functions.
Given two sets \(A\) and \(B\), and two functions \(f,g:A\to 2^{B}\), we denote by \(f\cup g\) the function that maps \(x\in A\) to \(f(x)\cup g(x)\in 2^{B}\). Let \(f:A\to B\) be an injection. Let \(K\subseteq B\) be the image of \(f\). By convention, if \(f\) is referred to as a bijection, it means that we consider that \(f\) maps \(A\) to \(K\). Given a function \(w:A\to\mathbb{N}\), and \(A^{\prime}\subseteq A\), \(w(A^{\prime})=\sum_{x\in A^{\prime}}w(x)\).
##### Basic concepts on graphs.
All graphs considered in this paper are undirected, finite, and without loops or multiple edges. We use standard graph-theoretic notation and we refer the reader to [9] for any undefined terminology. For convenience, we use \(uv\) instead of \(\{u,v\}\) to denote an edge of a graph. Let \(G\) be a graph. In the rest of this paper we always use \(n\) for the cardinality of \(V(G)\), and \(m\) for the cardinality of \(E(G)\), where \(G\) is the input graph of the problem under consideration. For \(S\subseteq V(G)\), we set \(G[S]=(S,E(G)\cap\binom{S}{2})\) and use the shortcut \(G\setminus S\) to denote \(G[V(G)\setminus S]\). Given a vertex \(v\in V(G)\), we denote by \(N_{G}(v)\) the set of vertices of \(G\) that are adjacent to \(v\) in \(G\). Moreover, given a set \(A\subseteq V(G)\), \(N_{G}(A)=\bigcup_{v\in A}N_{G}(v)\setminus A\). For \(k\in\mathbb{N}\), we denote by \(P_{k}\) the path with \(k\) vertices, and we say that \(P_{k}\) has length \(k-1\) (i.e., the _length_ of a path is its number of edges). We denote by \(\mathsf{cc}(G)\) the set of connected components of a graph \(G\). For \(A,B\subseteq V(G)\), \(E(A,B)\) denotes the set of edges of \(G\) with one endpoint in \(A\) and the other in \(B\). We say that \(E^{\prime}\subseteq E(G)\) is an _edge cut_ of \(G\) if there is a partition \((A,B)\) of \(V(G)\) such that \(E^{\prime}=E(A,B)\). We say that a pair \((L,R)\in 2^{V(G)}\times 2^{V(G)}\) is a _separation_ of \(G\) if \(L\cup R=V(G)\) and \(E(L\setminus R,R\setminus L)=\emptyset\). The _order_ of \((L,R)\) is \(|L\cap R|\). \(L\cap R\) is called a _\(|L\cap R|\)-separator_ of \(G\). A graph \(G\) is _\(k\)-connected_ if, for any separation \((L,R)\) of \(G\) of order at most \(k-1\), either \(L\subseteq R\) or \(R\subseteq L\). A _cut vertex_ in a connected graph \(G\) is a vertex whose removal disconnects \(G\). A _block_ of \(G\) is a maximal connected subgraph of \(G\) without a cut vertex. A graph class \(\mathcal{H}\) is _hereditary_ if for any \(G\in\mathcal{H}\) and \(v\in V(G)\), \(G\setminus\{v\}\in\mathcal{H}\). The _torso_ of a set \(X\subseteq V(G)\), denoted by \(\mathsf{torso}_{G}(X)\), is the graph obtained from \(G[X]\) by making \(N_{G}(C)\) a clique for each \(C\in\mathsf{cc}(G\setminus X)\). Given two graphs \(G_{1}\) and \(G_{2}\), and \(q\in\mathbb{N}\), a _\(q\)-clique-sum_ of \(G_{1}\) and \(G_{2}\) is obtained from their disjoint union by identifying a \(q\)-clique of \(G_{1}\) with a \(q\)-clique of \(G_{2}\), and then possibly deleting some edges of that clique. A graph class \(\mathcal{G}\) is _closed under \(q\)-clique-sums_ if for each \(G_{1},G_{2}\in\mathcal{G}\), any \(q\)-clique-sum of \(G_{1}\) and \(G_{2}\) also belongs to \(\mathcal{G}\).
##### Colorings.
A _coloring_ on a graph \(G\) is a function \(c:V(G)\to\mathbb{N}\). Given \(v\in V(G)\), \(c(v)\) is called the _color_ of \(v\) by \(c\). Given \(k\in\mathbb{N}\), a _\(k\)-coloring_ is a coloring \(c:V(G)\to[k]\). Given a coloring \(c\) on a graph \(G\) and an edge \(uv\in E(G)\), we say that \(uv\) is _monochromatic_ if \(c(u)=c(v)\). Otherwise, we say that \(uv\) is _bichromatic_. A coloring \(c\) on a graph \(G\) is said to be _proper_ if every edge of \(G\) is bichromatic. We say that a graph \(G\) is _\(k\)-colorable_ if there exists a proper \(k\)-coloring on \(G\).
**Minors and odd-minors.** Let \(G\) and \(H\) be two graphs. An _\(H\)-expansion_ in \(G\) is a function \(\eta\) with domain \(V(H)\cup E(H)\) such that:
* for every \(v\in V(H)\), \(\eta(v)\) is a subgraph of \(G\) that is a tree \(T_{v}\), called _node_ of \(\eta\), such that each leaf of \(T_{v}\) is adjacent to a vertex of another node of \(\eta\), and \(\eta(v)\) is disjoint from \(\eta(w)\) for distinct \(v,w\in V(H)\), and
* for every \(uv\in E(H)\), \(\eta(uv)\) is an edge \(u^{\prime}v^{\prime}\) in \(G\), called _edge_ of \(\eta\), such that \(u^{\prime}\in V(\eta(u))\) and \(v^{\prime}\in V(\eta(v))\).
We denote by \(\bigcup\eta\) the subgraph \(\bigcup_{x\in V(H)\cup E(H)}\eta(x)\) of \(G\).
If there is an \(H\)-expansion in \(G\), then we say that \(H\) is a _minor_ of \(G\). The _contraction_ of an edge \(uv\) in a simple graph \(G\) results in a simple graph \(G^{\prime}\) obtained from \(G\setminus\{u,v\}\) by adding a new vertex \(w\) adjacent to all the vertices in the set \(N_{G}(\{u,v\})\). Equivalently, a graph \(H\) is a _minor_ of a graph \(G\) if \(H\) can be obtained from a subgraph of \(G\) by contracting edges. We call such a subgraph a _model_ of \(H\) in \(G\). Note that the image of \(\eta\) is a model of \(H\) in \(G\). We say that a graph \(G\) is _\(H\)-minor-free_ if \(G\) excludes the graph \(H\) as a minor.
**Lemma 3.1**.: _Let \(G\) and \(H\) be two graphs. The following statements are equivalent._
1. _There is an_ \(H\)_-expansion_ \(\eta\) _in_ \(G\) _and a 2-coloring of_ \(\bigcup\eta\) _that is proper in each node of_ \(\eta\) _and such that each edge of_ \(\eta\) _is monochromatic._
2. _There is an_ \(H\)_-expansion_ \(\eta\) _in_ \(G\) _such that every cycle in_ \(\bigcup\eta\) _has an even number of edges in_ \(\bigcup_{v\in V(H)}\eta(v)\)_._
3. _There is an_ \(H\)_-expansion_ \(\eta\) _in_ \(G\) _such that the length of every cycle_ \(C\) _in_ \(H\) _has the same parity as the length of the cycle_ \(\eta(C)\) _in_ \(\bigcup\eta\)_._
4. \(H\) _can be obtained from a subgraph of_ \(G\) _by contracting each edge of an edge cut._
Proof.: **1 \(\Rightarrow\) 2:** Let \(\eta\) be an \(H\)-expansion in \(G\) with a 2-coloring \(c\) of \(\bigcup\eta\) that is proper in each node of \(\eta\) and such that each edge of \(\eta\) is monochromatic. \(\bigcup\eta\) is a subgraph of \(G\). The edges of \(\bigcup_{v\in V(H)}\eta(v)\) are exactly the bichromatic edges of \(\bigcup\eta\). Let \(C\) be a cycle in \(\bigcup\eta\), and orient it to obtain a directed cycle \(C^{\prime}\). Along \(C^{\prime}\), the bichromatic edges alternate between going from color 1 to color 2 and going from color 2 to color 1. Thus, since \(C^{\prime}\) is a cycle, the number of edges of each of these two kinds is the same. Hence, the number of bichromatic edges in \(C\) is even.
**2 \(\Rightarrow\) 1:** Let \(\eta\) be an \(H\)-expansion in \(G\) such that every cycle in \(\bigcup\eta\) has an even number of edges in \(\bigcup_{v\in V(H)}\eta(v)\). Let \(v\) be an arbitrary vertex of \(H\). We color \(\eta(v)\) greedily to obtain a proper 2-coloring of the node. Since \(\eta(v)\) is a tree, this proper 2-coloring is unique up to swapping the two colors. We extend this coloring greedily to the entire \(\bigcup\eta\) so that each edge of \(\eta\) is monochromatic and each node of \(\eta\) is properly 2-colored. Assume that there is a vertex \(v\) of \(\bigcup\eta\) that is not colorable by this greedy approach. Then \(v\) is part of a cycle \(C\) in \(\bigcup\eta\) such that each other vertex of \(C\) is colored, but the neighbors \(u\) and \(w\) of \(v\) in \(C\) give contradictory instructions for the coloring of \(v\). If both \(uv\) and \(vw\) should be monochromatic edges, but \(u\) has color 1 and \(w\) has color 2, then the path of \(C\) between \(u\) and \(w\) avoiding \(v\) joins two vertices of different colors and hence contains an odd number of bichromatic edges; since \(uv\) and \(vw\) are edges of \(\eta\), this means that \(C\) has an odd number of bichromatic edges, i.e., edges in \(\bigcup_{v\in V(H)}\eta(v)\). Hence the contradiction. A similar argument holds for any combination of requirements (monochromatic or bichromatic) on \(uv\) and \(vw\). Thus, every vertex can be colored greedily. So \(\eta\) is an \(H\)-expansion in \(G\) with a 2-coloring \(c\) of \(\bigcup\eta\) that is proper in each node of \(\eta\) and such that each edge of \(\eta\) is monochromatic.
**2 \(\Leftrightarrow\) 3:** Let \(\eta\) be an \(H\)-expansion in \(G\). By definition, there is a one-to-one correspondence between the edges of \(\eta\) and the edges of \(H\). Therefore, \(C\) is a cycle of \(H\) if and only if \(\eta(C)\) is a cycle of \(\bigcup\eta\), and there are as many edges of \(\eta\) in \(\eta(C)\) as the number of edges in \(C\). The other edges of \(\eta(C)\) are in the nodes of \(\eta\), i.e., in \(\bigcup_{v\in V(H)}\eta(v)\). Thus, every cycle in \(\bigcup\eta\) has an even number of edges in \(\bigcup_{v\in V(H)}\eta(v)\) if and only if the length of every cycle \(C\) in \(H\) has the same parity as the length of the cycle \(\eta(C)\) in \(\bigcup\eta\).
**1 \(\Rightarrow\) 4:** Let \(\eta\) be an \(H\)-expansion in \(G\) with a 2-coloring \(c\) of \(\bigcup\eta\) that is proper in each node of \(\eta\) and such that each edge of \(\eta\) is monochromatic. \(\bigcup\eta\) is a subgraph of \(G\). Let \(X_{1}\) and \(X_{2}\) be the sets of vertices of \(\bigcup\eta\) with color 1 and 2, respectively. Then \(E^{\prime}=E(X_{1},X_{2})\) is an edge cut of \(\bigcup\eta\) and, by contracting \(E^{\prime}\) in \(\bigcup\eta\), we obtain \(H\).
**4 \(\Rightarrow\) 1:** Let \(G^{\prime}\) be a subgraph of \(G\) and \(E^{\prime}\) be an edge cut of \(G^{\prime}\) such that \(H\) can be obtained by contracting \(E^{\prime}\) in \(G^{\prime}\). Let \(E^{\prime\prime}=E(G^{\prime})\setminus E^{\prime}\). Let \(G^{\prime\prime}\) be a graph obtained from \(G^{\prime}\) by removing edges of \(E^{\prime}\) in such a way that every connected component of \(G^{\prime\prime}\setminus E^{\prime\prime}\) is a tree spanning the corresponding connected component of \(G^{\prime}\setminus E^{\prime\prime}\). Then there is an \(H\)-expansion \(\eta\) in \(G\) such that \(\bigcup\eta=G^{\prime\prime}\) and the edges of \(\eta\) are exactly the edges in \(E^{\prime\prime}\). Let \((X_{1},X_{2})\) be the partition of \(V(G^{\prime})\) witnessing the edge cut \(E^{\prime}\). Then, if we give color 1 to the vertices of \(X_{1}\) and color 2 to the vertices of \(X_{2}\), every node of \(\eta\) is properly 2-colored and each edge of \(\eta\) is monochromatic.
An \(H\)-expansion satisfying the property described in one of statements 1–3 of Lemma 3.1 is called an _odd \(H\)-expansion_. Using statement 3, we say that such an \(H\)-expansion in \(G\)_preserves cycle parity_. If there is an odd \(H\)-expansion in \(G\), then we say that \(H\) is an _odd-minor_ of \(G\). Note that if \(H\) is an odd-minor of \(G\), then \(H\) is a minor of \(G\). However, the opposite does not always hold. For instance, \(K_{3}\) is not an odd-minor of \(C_{4}\), although it is a minor of it. We say that a graph \(G\) is _\(H\)-odd-minor-free_ if \(G\) excludes the graph \(H\) as an odd-minor. In particular, observe that bipartite graphs are exactly the \(K_{3}\)-odd-minor-free graphs and that the forests are exactly the \(\{K_{3},C_{4}\}\)-odd-minor-free graphs.
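The last observation gives a handy sanity check: testing \(K_{3}\)-odd-minor-freeness amounts to a bipartiteness test. A tiny sketch (our illustration, assuming networkx; the function name is ours):

```python
# Bipartite graphs are exactly the K_3-odd-minor-free graphs, so the test below is
# just a bipartiteness check; C_4 passes it even though K_3 is a minor of C_4.
import networkx as nx

def is_K3_odd_minor_free(G):
    return nx.is_bipartite(G)

assert is_K3_odd_minor_free(nx.cycle_graph(4))          # C_4: minor yes, odd-minor no
assert not is_K3_odd_minor_free(nx.complete_graph(3))   # K_3 contains itself
```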
**Treewidth.** A _tree decomposition_ of a graph \(G\) is a pair \((T,\chi)\) where \(T\) is a tree and \(\chi:V(T)\to 2^{V(G)}\) such that
* \(\bigcup_{t\in V(T)}\chi(t)=V(G)\),
* for every \(e\in E(G)\), there is a \(t\in V(T)\) such that \(\chi(t)\) contains both endpoints of \(e\), and
* for every \(v\in V(G)\), the subgraph of \(T\) induced by \(\{t\in V(T)\mid v\in\chi(t)\}\) is connected.
The _width_ of \((T,\chi)\) is equal to \(\max\big{\{}\left|\chi(t)\right|-1\bigm{|}t\in V(T)\big{\}}\) and the _treewidth_ of \(G\), denoted by \(\mathsf{tw}(G)\), is the minimum width over all tree decompositions of \(G\).
For every node \(t\in V(T)\), \(\chi(t)\) is called _bag_ of \(t\). Given \(tt^{\prime}\in E(T)\), the _adhesion_ of \(t\) and \(t^{\prime}\), denoted by \(\mathsf{adh}(t,t^{\prime})\), is the set \(\chi(t)\cap\chi(t^{\prime})\).
A _rooted tree decomposition_ is a triple \((T,\chi,r)\) where \((T,\chi)\) is a tree decomposition and \((T,r)\) is a _rooted tree_ (i.e., \(T\) is a tree and \(r\in V(T)\)). Given \(t\in V(T)\), we denote by \(\mathsf{ch}_{r}(t)\) the set of children of \(t\) and by \(\mathsf{par}_{r}(t)\) the parent of \(t\) (if \(t\neq r\)). We set \(\delta_{t}^{r}=\mathsf{adh}(t,\mathsf{par}_{r}(t))\), with the convention that \(\delta_{r}^{r}=\emptyset\). Moreover, we denote by \(G_{t}^{r}\) the graph induced by \(\bigcup_{t^{\prime}\in V(T_{t})}\chi(t^{\prime})\) where \((T_{t},t)\) is the rooted subtree of \((T,r)\). We may use \(\delta_{t}\) and \(G_{t}\) instead of \(\delta_{t}^{r}\) and \(G_{t}^{r}\) when there is no risk of confusion.
While our goal in this article is to study bipartite treewidth, defined below, we provide the following definition in a more general way, namely for a parameter that we call 1-\(\mathcal{H}\)-treewidth, in the hope that it finds some application in future work. We use the term 1-\(\mathcal{H}\)-treewidth to signify that the \(\mathcal{H}\)-part of each bag intersects each neighboring bag in at most one vertex. This also has the benefit of avoiding confusion with \(\mathcal{H}\)-treewidth defined in [10], which would be another natural name for this class of parameters.
**\(1\)-\(\mathcal{H}\)-treewidth.** Let \(\mathcal{H}\) be a graph class. A _1-\(\mathcal{H}\)-tree decomposition_ of a graph \(G\) is a triple \((T,\alpha,\beta)\), where \(T\) is a tree and \(\alpha,\beta:V(T)\to 2^{V(G)}\), such that
* \((T,\alpha\cup\beta)\) is a tree decomposition of \(G\),
* for every \(t\in V(T)\), \(\alpha(t)\cap\beta(t)=\emptyset\),
* for every \(t\in V(T)\), \(G[\beta(t)]\in\mathcal{H}\), and
* for every \(tt^{\prime}\in E(T)\), \(|(\alpha\cup\beta)(t^{\prime})\cap\beta(t)|\leq 1\).
The vertices in \(\alpha(t)\) are called _apex vertices_ of the node \(t\in V(T)\).
The _width_ of \((T,\alpha,\beta)\) is equal to \(\max\big{\{}\,|\alpha(t)|\,\bigm{|}\,t\in V(T)\big{\}}\). The _1-\(\mathcal{H}\)-treewidth_ of \(G\), denoted by \((1,\mathcal{H})\)-tw\((G)\), is the minimum width over all 1-\(\mathcal{H}\)-tree decompositions of \(G\).
A _rooted 1-\(\mathcal{H}\)-tree decomposition_ is a tuple \((T,\alpha,\beta,r)\) where \((T,\alpha,\beta)\) is a 1-\(\mathcal{H}\)-tree decomposition and \((T,r)\) is a rooted tree.
Given that \((T,\alpha\cup\beta)\) is a tree decomposition, we naturally extend all definitions and notations concerning treewidth to 1-\(\mathcal{H}\)-treewidth.
Observe also that a tree decomposition \((T,\chi)\) is a 1-\(\mathcal{H}\)-tree decomposition for every graph class \(\mathcal{H}\), in the sense that \((T,\chi,o)\) is a 1-\(\mathcal{H}\)-tree decomposition, where \(o:V(T)\to 2^{V(G)}\) maps every node to \(\emptyset\). Therefore, for every graph class \(\mathcal{H}\) and every graph \(G\), \((1,\mathcal{H})\)-\(\mathsf{tw}(G)\leq\mathsf{tw}(G)+1\).
If \(\mathcal{H}\) is the graph class containing only the empty graph, then a 1-\(\mathcal{H}\)-tree decomposition is exactly a tree decomposition.
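The conditions above are easy to check mechanically. The following minimal sketch (our helper, not from the paper; it assumes networkx, and `in_H` plays the role of membership in \(\mathcal{H}\), e.g. `networkx.is_bipartite` for bipartite treewidth) verifies that a triple \((T,\alpha,\beta)\) is a 1-\(\mathcal{H}\)-tree decomposition of \(G\) and reports its width.

```python
import networkx as nx

def is_1H_tree_decomposition(G, T, alpha, beta, in_H):
    chi = {t: alpha[t] | beta[t] for t in T.nodes}
    if set().union(*chi.values()) != set(G.nodes):      # bags cover all vertices
        return False
    for u, v in G.edges:                                # every edge lies in some bag
        if not any({u, v} <= chi[t] for t in T.nodes):
            return False
    for v in G.nodes:                                   # bags containing v form a subtree
        if not nx.is_connected(T.subgraph([t for t in T.nodes if v in chi[t]])):
            return False
    for t in T.nodes:
        if alpha[t] & beta[t]:                          # alpha- and beta-parts are disjoint
            return False
        if not in_H(G.subgraph(beta[t])):               # the beta-part induces a graph in H
            return False
    for t, s in T.edges:                                # a neighboring bag meets each beta-part in <= 1 vertex
        if len(chi[s] & beta[t]) > 1 or len(chi[t] & beta[s]) > 1:
            return False
    return True

def width(alpha):
    return max(len(alpha[t]) for t in alpha)
```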
**Remark.** In [10], a parameter with a similar name is defined, namely _\(\mathcal{H}\)-treewidth_. The \(\mathcal{H}\)-treewidth of a graph \(G\) is essentially the minimum treewidth of the graph induced by some set \(X\subseteq V(G)\) such that the connected components of \(G\setminus X\) belong to \(\mathcal{H}\). This actually corresponds to the 0-\(\mathcal{H}\)-treewidth (minus one), which is defined by replacing the "1" by a "0" in the last item of the definition of a 1-\(\mathcal{H}\)-tree decomposition above. Indeed, let \((T,\alpha,\beta)\) be a 0-\(\mathcal{H}\)-tree decomposition of a graph \(G\) of width \(k\). Note that, for each distinct \(t,t^{\prime}\in V(T)\), \(\beta(t)\cap\beta(t^{\prime})=\emptyset\). Let \(X=\bigcup_{t\in V(T)}\alpha(t)\). Then \((T,\alpha)\) is a tree decomposition of \(G[X]\) of width \(k-1\). Moreover, for each \(t\in V(T)\), \(G[\beta(t)]\in\mathcal{H}\), and therefore the connected components of \(G\setminus X\) belong to \(\mathcal{H}\).
##### Bipartite treewidth [adapted from [8, 35]].
A graph \(G\) is _bipartite_ if there is a partition \((A,B)\) of \(V(G)\) such that \(E(G)=E(A,B)\). We denote the class of bipartite graphs by \(\mathcal{B}\). We focus here on the case where \(\mathcal{H}=\mathcal{B}\). Then, we use the term _bipartite treewidth_ instead of 1-\(\mathcal{H}\)-treewidth, and denote it by btw. As mentioned in the introduction, this definition had already been used (more or less implicitly) in [8, 35].
Given that the bipartite graphs are closed under 1-clique-sums, we have the following.
**Observation 3.1**.: _A graph has bipartite treewidth zero if and only if it is bipartite._
Moreover, Campbell, Gollin, Hendrey, and Wiederrecht [5] recently announced an \(\mathsf{FPT}\)-approximation algorithm to construct a bipartite tree decomposition.
**Proposition 3.1** ([5]).: _There exist functions \(f_{1},f_{2},g:\mathbb{N}\to\mathbb{N}\) and an algorithm that, given a graph \(G\) and \(k\in\mathbb{N}\), outputs, in time \(g(k)\cdot n^{4}\log n\), either a report that \(\mathsf{btw}(G)\geq f_{1}(k)\), or a bipartite tree decomposition of \(G\) of width at most \(f_{2}(k)\)._
Bipartite treewidth is not closed under minors, given that contracting an edge in a bipartite graph (which has bipartite treewidth zero) may create a non-bipartite graph (which has positive bipartite treewidth). However, bipartite treewidth is closed under odd-minors, which is a desirable property to deal with odd-minor related problems.
**Lemma 3.2**.: _Bipartite treewidth is closed under odd-minor containment._
Proof.: Let \(G\) be a graph and \(H\) be an odd-minor of \(G\). We want to prove that \(\mathsf{btw}(H)\leq\mathsf{btw}(G)\). By Lemma 3.1, there is a subgraph \(G^{\prime}\) of \(G\) and an edge cut \(E^{\prime}\) of \(G^{\prime}\) such that \(H\) is obtained from \(G^{\prime}\) by contracting every edge in \(E^{\prime}\).
Since we only removed vertices and edges to obtain \(G^{\prime}\) from \(G\), \(\mathsf{btw}(G^{\prime})\leq\mathsf{btw}(G)\). It remains to show that \(\mathsf{btw}(H)\leq\mathsf{btw}(G^{\prime})\). Let \((T,\alpha^{\prime},\beta^{\prime})\) be a bipartite tree decomposition of \(G^{\prime}\). We transform \((T,\alpha^{\prime},\beta^{\prime})\) to a bipartite tree decomposition \((T,\alpha,\beta)\) of \(H\) as follows. For each \(e=uv\in E^{\prime}\) and for each \(t\in V(T)\) such that \(\{u,v\}\cap(\alpha^{\prime}\cup\beta^{\prime})(t)\neq\emptyset\),
* if \(\{u,v\}\cap\alpha^{\prime}(t)\neq\emptyset\), then the vertex \(v_{e}\) resulting from contracting \(e\) is placed in \(\alpha(t)\),
* otherwise, \(v_{e}\) is placed in \(\beta(t)\).
For each \(v\in V(G^{\prime})\) that is not involved in any contraction and for each \(t\in V(T)\), if \(v\in\alpha^{\prime}(t)\) (resp. \(v\in\beta^{\prime}(t)\)), then \(v\in\alpha(t)\) (resp. \(v\in\beta(t)\)).
Let us show that \((T,\alpha,\beta)\) is indeed a bipartite tree decomposition of \(H\). It is easy to see that \((T,\alpha\cup\beta)\) is a tree decomposition of \(H\), since it is obtained from \((T,\alpha^{\prime}\cup\beta^{\prime})\) by contracting the edge set \(E^{\prime}\), and since treewidth is minor-closed. For simplicity, we identify the vertices in \(\alpha\cup\beta\) with the vertices of \(H\). Given that an edge with at least one endpoint in \(\alpha^{\prime}(t)\) contracts to a vertex in \(\alpha(t)\), no new vertex is added to \(\beta(t)\), and therefore, for any \(t^{\prime}\in V(T)\setminus\{t\}\), \(|(\alpha\cup\beta)(t^{\prime})\cap\beta(t)|\leq 1\).
It remains to prove that, for each \(t\in V(T)\), \(H[\beta(t)]\) is bipartite. Let \(t\in V(T)\). Let \(E_{t}\) be the set of edges of \(E^{\prime}\) with both endpoints in \(\beta^{\prime}(t)\). We have to prove that the bipartite graph induced by \(\beta^{\prime}(t)\) in \(G^{\prime}\) remains bipartite after contracting \(E_{t}\). \(E_{t}\) is an edge cut of \(G^{\prime}[\beta^{\prime}(t)]\), witnessed by some vertex partition \((A_{1},A_{2})\). Given a proper 2-coloring \((B_{1},B_{2})\) of the vertex set of \(G^{\prime}[\beta^{\prime}(t)]\), which is bipartite, keep the same color for the vertices in \(A_{1}\), and change the color of the vertices in \(A_{2}\), i.e., define the coloring \((C_{1},C_{2})=((B_{1}\cap A_{1})\cup(B_{2}\cap A_{2}),(B_{2}\cap A_{1})\cup(B_{1}\cap A_{2}))\). Thus, the monochromatic edges of \(G^{\prime}[\beta^{\prime}(t)]\) under \((C_{1},C_{2})\) are exactly the edges of \(E_{t}\). Therefore, contracting \(E_{t}\) yields a proper 2-coloring of \(H[\beta(t)]\), so \(H[\beta(t)]\) is bipartite. Thus, \((T,\alpha,\beta)\) is a bipartite tree decomposition of \(H\).
Moreover, since the contraction of an edge with both endpoints in \(\beta^{\prime}(t)\) is a vertex in \(\beta(t)\), it follows that \(|\alpha(t)|\leq|\alpha^{\prime}(t)|\) for every \(t\in V(T)\). Therefore, \(\mathsf{btw}(H)\leq\mathsf{btw}(G^{\prime})\).
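The recoloring step used in the proof of Lemma 3.2 above is the following one-liner (a small sketch, with \(B_1,B_2,A_1,A_2\) given as Python sets):

```python
# Flip the colors on A2: edges inside A1 or inside A2 stay bichromatic, while the
# cut edges E_t (between A1 and A2) become exactly the monochromatic ones, so they
# can be contracted without destroying bipartiteness.
def flip_coloring(B1, B2, A1, A2):
    C1 = (B1 & A1) | (B2 & A2)
    C2 = (B2 & A1) | (B1 & A2)
    return C1, C2
```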
A natural generalization of bipartite treewidth is to replace the "1" in the last item of the definition of 1-\(\mathcal{H}\)-tree decomposition by any \(q\in\mathbb{N}\), hence defining \(q\)_-\(\mathcal{H}\)-tree decompositions_ and \(q\)_-\(\mathcal{H}\)-treewidth_, denoted by \((q,\mathcal{H})\)-\(\mathsf{tw}(G)\). For \(q\geq 2\), however, \(q\)-\(\mathcal{B}\)-treewidth is not closed under odd-minors, as we prove in Lemma 3.3 below. Additionally, given that for \(q\in\{0,1\}\), \(\mathsf{torso}_{G\setminus\alpha(t)}(\beta(t))=G[\beta(t)]\), we could replace the third item by the property "for every \(t\in V(T)\), \(\mathsf{torso}_{G\setminus\alpha(t)}(\beta(t))\in\mathcal{H}\)", hence
defining what we call \(q\)_-torso-\(\mathcal{H}\)-tree decompositions_ and \(q\)_-torso-\(\mathcal{H}\)-treewidth_, denoted by \((q,\mathcal{H})^{\star}\)-\(\mathsf{tw}(G)\). However, we prove in Lemma 3.3 that, for \(q\geq 2\), \((q,\mathcal{B})^{\star}\)-\(\mathsf{tw}\) is also not closed under odd-minors. These facts, in our opinion, provide an additional justification for the choice of \(q=1\) in the definition of bipartite treewidth.
**Lemma 3.3**.: _For \(q\geq 2\), \(q\)(-torso)-\(\mathcal{B}\)-treewidth is not closed under odd-minor containment. In particular, for any \(t\geq 3\), there exist a graph \(G\) and an odd-minor \(H\) of \(G\), such that \((q,\mathcal{B})\mathsf{-tw}(G)=0\) and \((q,\mathcal{B})\mathsf{-tw}(H)=t-2\), and \((q,\mathcal{B})^{\star}\mathsf{-tw}(G)\leq 1\) and \((q,\mathcal{B})^{\star}\mathsf{-tw}(H)=t-2\)._
Proof.: Let \(t\in\mathbb{N}_{\geq 4}\) and let \(K_{t}^{\prime}\) (resp. \(K_{t}^{\prime\prime}\)) be the graph obtained from \(K_{t}\) by subdividing every edge once (resp. twice). Let \(V^{\prime}=\{v_{1},\ldots,v_{t}\}\) be the set of vertices of \(K_{t}^{\prime\prime}\) of degree at least three. \(K_{t}\) is an odd-minor of \(K_{t}^{\prime\prime}\) since \(K_{t}\) can be obtained from \(K_{t}^{\prime\prime}\) by contracting the edge cut \(E(V^{\prime},V(K_{t}^{\prime\prime})\setminus V^{\prime})\). Note also that \(K_{t}^{\prime}\) is bipartite. We show that taking \(G=K_{t}^{\prime\prime}\) and \(H=K_{t}\) satisfies the statement of the lemma.
Given that \(K_{t}\) is a complete graph, it has to be fully contained in one bag of any tree decomposition, so in particular of any \(q\)(-torso)-\(\mathcal{B}\)-tree decomposition. Since the smallest odd cycle transversal of \(K_{t}\) has size \(t-2\), we have that \((q,\mathcal{B})^{\star}\mathsf{-tw}(K_{t})=(q,\mathcal{B})\mathsf{-tw}(K_{t })=t-2\).
Let us first prove that \((q,\mathcal{B})\mathsf{-tw}(K_{t}^{\prime\prime})=0\). For \(i,j\in[t]\) with \(i<j\), let \(e_{i,j}\) be the \(3\)-path between \(v_{i}\) and \(v_{j}\). Let \(T\) be a tree with one central vertex \(x_{0}\) and, for each \(i,j\in[t]\) with \(i<j\), a vertex \(x_{i,j}\) only adjacent to \(x_{0}\) (thus, \(T\) is a star). Let \(\beta(x_{0})=V^{\prime}\) and \(\beta(x_{i,j})=V(e_{i,j})\) for each \(i,j\in[t]\) with \(i<j\). Let \(\alpha(x)=\emptyset\) for each \(x\in V(T)\). \(V^{\prime}\) is an independent set, so \(G[V^{\prime}]\) is bipartite. Note that paths are bipartite. Moreover, each adhesion contains at most two vertices of \(\beta(x_{0})\) and two vertices of \(\beta(x_{i,j})\). Hence, \((T,\alpha,\beta)\) is a \(q\)-\(\mathcal{B}\)-tree decomposition of \(K_{t}^{\prime\prime}\) and has width zero.
Let us now prove that \((q,\mathcal{B})^{\star}\mathsf{-tw}(K_{t}^{\prime\prime})\leq 1\). Let \(u_{i,j}\) and \(w_{i,j}\) be the internal vertices of \(e_{i,j}\), such that \(u_{i,j}\) is adjacent to \(v_{i}\). Let \(V_{1}\) (resp. \(V_{2}\)) be the set of vertices \(u_{i,j}\) (resp. \(w_{i,j}\)). We construct a \(q\)-torso-\(\mathcal{B}\)-tree decomposition \((T,\alpha^{\prime},\beta^{\prime})\) of \(K_{t}^{\prime\prime}\) as follows. We set \(\alpha^{\prime}(x_{0})=\emptyset\) and \(\beta^{\prime}(x_{0})=V^{\prime}\cup V_{2}\). For each \(i,j\in[t]\) with \(i<j\), we set \(\alpha^{\prime}(x_{i,j})=\{v_{i}\}\) and \(\beta^{\prime}(x_{i,j})=\{u_{i,j},w_{i,j}\}\). Observe that \(\mathsf{torso}_{K_{t}^{\prime\prime}\setminus\alpha^{\prime}(x_{0})}(\beta^{\prime}(x_{0}))=K_{t}^{\prime}\), since each path \(v_{i}\)-\(u_{i,j}\)-\(w_{i,j}\) is replaced by an edge \(v_{i}w_{i,j}\). Thus, it is bipartite. Similarly, the torso at each other node of \(T\) is an edge, and hence is bipartite as well. Moreover, each adhesion contains at most two vertices of \(\beta^{\prime}(x_{0})\) and one vertex of \(\beta^{\prime}(x_{i,j})\). Hence, \((T,\alpha^{\prime},\beta^{\prime})\) is indeed a \(q\)-torso-\(\mathcal{B}\)-tree decomposition of \(K_{t}^{\prime\prime}\) and has width one. So \((q,\mathcal{B})^{\star}\mathsf{-tw}(K_{t}^{\prime\prime})\leq 1\).
Hence, \(q\)(-torso)-\(\mathcal{B}\)-treewidth is not closed under odd-minor containment and the gap between a graph and an odd-minor of this graph can be arbitrarily large.
As mentioned in Subsection 2.1, one of the main difficulties for doing dynamic programming on (rooted) bipartite tree decompositions is the lack of a way to upper-bound the number of children of each node of the decomposition. As shown in the next lemma, the notion of "nice tree decomposition" is not generalizable to bipartite tree decompositions.
**Lemma 3.4**.: _For any \(t\in\mathbb{N}\), there exists a graph \(G\) such that \(\mathsf{btw}(G)=1\) and any bipartite tree decomposition of \(G\) whose nodes all have at most \(t\) neighbors has width at least \(t-1\)._
Proof.: Let \(G\) be the graph obtained from \(K_{t,t}\) by gluing a new pendant triangle \(T_{v}\) to each vertex \(v\) of \(K_{t,t}\) (that is, \(v\) is identified with one vertex of its pendant triangle). Let \(T\) be the star \(K_{1,2t}\), with vertex set \(\{t_{0}\}\cup\{t_{v}\mid v\in V(K_{t,t})\}\). Let \(\alpha(t_{0})=\emptyset\), \(\beta(t_{0})=V(K_{t,t})\), and for every \(v\in V(K_{t,t})\), \(\alpha(t_{v})=\{v\}\) and \(\beta(t_{v})=V(T_{v})\setminus\{v\}\). It can be easily verified that \(\mathcal{T}=(T,\alpha,\beta)\) is a bipartite tree decomposition of \(G\) of width one. Given that \(G\) is not bipartite, 3.1 implies that \(\mathsf{btw}(G)=1\). Note
that node \(t_{0}\) has \(2t\) neighbors. For any bipartition \((A,B)\) of \(V(K_{t,t})\) such that \(A,B\neq\emptyset\), we have \(|E(A,B)|\geq t\). Hence, for any bipartite tree decomposition \(\mathcal{T}^{\prime}\) of \(G\) such that \(V(K_{t,t})\) is not totally contained in one bag, there is an adhesion of two bags of size at least \(t\), so the width of \(\mathcal{T}^{\prime}\) is at least \(t-1\). If \(V(K_{t,t})\) is fully contained in one bag, however, the only way to reduce the number of children to \(k\), for some integer \(k\), is to add \(2t-k\) of the pendant triangles inside the same bag. But then this bag has odd cycle transversal number at least \(2t-k\), so the obtained bipartite tree decomposition has width at least \(2t-k\). Hence, if we want that \(k\leq t\), then the corresponding width is at least \(2t-k\geq t\).
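For concreteness, the graph used in this proof is easy to build explicitly; the following Python sketch constructs it with networkx (the function and vertex names are ours, not from the paper).

```python
import networkx as nx

def ktt_with_pendant_triangles(t):
    """Build the graph of Lemma 3.4: K_{t,t} with a pendant triangle glued to
    each of its 2t vertices (a sketch; vertex names are illustrative only)."""
    G = nx.complete_bipartite_graph(t, t)          # vertices 0, ..., 2t - 1
    for v in list(G.nodes()):
        a, b = ("tri", v, 0), ("tri", v, 1)        # the two private triangle vertices
        G.add_edges_from([(v, a), (v, b), (a, b)])
    return G

G = ktt_with_pendant_triangles(4)
# K_{4,4} has 8 vertices and 16 edges; each pendant triangle adds 2 vertices and 3 edges.
assert G.number_of_nodes() == 8 + 16 and G.number_of_edges() == 16 + 24
```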
## 4 General dynamic programming to obtain \(\mathsf{FPT}\)-algorithms
In this section, we introduce a framework for obtaining \(\mathsf{FPT}\)-algorithms for problems parameterized by the width of a given bipartite tree decomposition of the input graph. In Subsection 4.1 we introduce the main technical notion of a _nice problem_ and the necessary background, and in Subsection 4.2 we provide dynamic programming algorithms for nice problems. In Subsection 4.4, we give applications to concrete problems.
### Boundaried graphs and nice problems
Before we motivate the definition of nice problems, we introduce the necessary background and operations on boundaried graphs.
##### Boundaries.
Let \(t\in\mathbb{N}\). A _\(t\)-boundaried graph_ is a triple \(\mathbf{G}=(G,B,\rho)\) where \(G\) is a graph, \(B\subseteq V(G)\), \(|B|=t\), and \(\rho:B\to\mathbb{N}\) is an injection. We say that \(B\) is the _boundary_ of \(\mathbf{G}\) and we write \(B=\mathsf{bd}(G)\). We call \(\mathbf{G}\) _trivial_ if all its vertices belong to the boundary. We say that two \(t\)-boundaried graphs \(\mathbf{G}_{1}=(G_{1},B_{1},\rho_{1})\) and \(\mathbf{G}_{2}=(G_{2},B_{2},\rho_{2})\) are _isomorphic_ if \(\rho_{1}(B_{1})=\rho_{2}(B_{2})\) and there is an isomorphism from \(G_{1}\) to \(G_{2}\) that extends the bijection \(\rho_{2}^{-1}\circ\rho_{1}\). A triple \((G,B,\rho)\) is a _boundaried graph_ if it is a \(t\)-boundaried graph for some \(t\in\mathbb{N}\). We denote by \(\mathcal{B}^{t}\) the set of all (pairwise non-isomorphic) \(t\)-boundaried graphs. A boundaried graph \(\mathbf{F}\) is a _boundaried induced subgraph_ (resp. _boundaried subgraph_) of \(\mathbf{G}\) if \(\mathbf{F}\) can be obtained from \(\mathbf{G}\) by removing vertices (resp. vertices and edges). A boundaried graph \(\mathbf{F}\) is a _boundaried odd-minor_ of \(\mathbf{G}\) if \(\mathbf{F}\) can be obtained from a boundaried subgraph \(\mathbf{G}^{\prime}\) of \(\mathbf{G}\) by contracting an edge cut such that every vertex in \(\mathsf{bd}(\mathbf{G}^{\prime})\) is on the same side of the cut. We say that two boundaried graphs \(\mathbf{G}_{1}=(G_{1},B_{1},\rho_{1})\) and \(\mathbf{G}_{2}=(G_{2},B_{2},\rho_{2})\) are _compatible_ if \(\rho_{1}(B_{1})=\rho_{2}(B_{2})\) and \(\rho_{2}^{-1}\circ\rho_{1}\) is an isomorphism from \(G_{1}[B_{1}]\) to \(G_{2}[B_{2}]\). Given two boundaried graphs \(\mathbf{G}_{1}=(G_{1},B_{1},\rho_{1})\) and \(\mathbf{G}_{2}=(G_{2},B_{2},\rho_{2})\), we define \(\mathbf{G}_{1}\oplus\mathbf{G}_{2}\) as the unboundaried graph obtained if we take the disjoint union of \(G_{1}\) and \(G_{2}\) and, for every \(i\in\rho_{1}(B_{1})\cap\rho_{2}(B_{2})\), we identify vertices \(\rho_{1}^{-1}(i)\) and \(\rho_{2}^{-1}(i)\). If \(v\) is the result of the identification of \(v_{1}:=\rho_{1}^{-1}(i)\) and \(v_{2}:=\rho_{2}^{-1}(i)\), then we say that \(v\) is the _heir_ of \(v_{i}\) from \(\mathbf{G}_{i}\), \(i\in[2]\). If \(v\) is either a vertex of \(G_{1}\) where \(\rho_{1}(v)\not\in\rho_{1}(B_{1})\cap\rho_{2}(B_{2})\) (if \(v\in B_{1}\)) or a vertex of \(G_{2}\) where \(\rho_{2}(v)\not\in\rho_{1}(B_{1})\cap\rho_{2}(B_{2})\) (if \(v\in B_{2}\)), then \(v\) is also a (non-identified) vertex of \(\mathbf{G}_{1}\oplus\mathbf{G}_{2}\) and is an _heir_ of itself (from \(\mathbf{G}_{1}\) or \(\mathbf{G}_{2}\), respectively). For \(i\in[2]\), and given an edge \(vu\) in \(\mathbf{G}_{1}\oplus\mathbf{G}_{2}\), we say that \(vu\) is the _heir_ of an edge \(v^{\prime}u^{\prime}\) from \(\mathbf{G}_{i}\) if \(v\) (resp. \(u\)) is the heir of \(v^{\prime}\) (resp. \(u^{\prime}\)) from \(\mathbf{G}_{i}\) and \(v^{\prime}u^{\prime}\) is an edge of \(G_{i}\). If \(x^{\prime}\) is the heir of \(x\) from \(\mathbf{G}=(G,B,\rho)\) in \(\mathbf{G}^{\prime}\), then we write \(x^{\prime}=\mathsf{heir}_{\mathbf{G},\mathbf{G}^{\prime}}(x)\). If \(B^{\prime}\subseteq B\), then \(\mathsf{heir}_{\mathbf{G},\mathbf{G}^{\prime}}(B^{\prime})=\bigcup_{x\in B^{\prime}}\mathsf{heir}_{\mathbf{G},\mathbf{G}^{\prime}}(x)\).
We also define \(\mathbf{G}_{1}\boxplus\mathbf{G}_{2}\) as the boundaried graph \((\mathbf{G}_{1}\oplus\mathbf{G}_{2},B,\rho)\), where \(B\) is the set of all heirs of the boundary vertices of \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\) and \(\rho:B\to\mathbb{N}\) is the union of \(\rho_{1}\) and \(\rho_{2}\) after identification. Note that in circumstances
where \(\boxplus\) is repeatedly applied, the heir relation is maintained due to its transitivity. Moreover, we define \(\mathbf{G}_{1}\triangleright\mathbf{G}_{2}\) as the unboundaried graph \(G\) obtained from \(\mathbf{G}_{1}\oplus\mathbf{G}_{2}\) by removing all heirs from \(\mathbf{G}_{2}\) that are not heirs from \(\mathbf{G}_{1}\) and all heirs of edges from \(\mathbf{G}_{2}\) that are not heirs of edges from \(\mathbf{G}_{1}\). Note that \(\triangleright\) is not commutative.
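The gluing operation \(\oplus\) is straightforward to realize programmatically. The following Python sketch (class and function names are ours, not from the paper) encodes boundaried graphs with networkx and identifies boundary vertices carrying the same label.

```python
import networkx as nx

class BoundariedGraph:
    """A minimal encoding of a boundaried graph (G, B, rho): G is an nx.Graph
    and rho maps each boundary vertex to a label in N. A sketch for
    illustration only."""
    def __init__(self, G, rho):
        self.G = G
        self.rho = dict(rho)          # boundary vertex -> label

def glue(G1, G2):
    """The (+)-operation: take the disjoint union of G1.G and G2.G and identify
    boundary vertices of G1 and G2 carrying the same label. Returns the
    resulting unboundaried graph; identified vertices are named by their label."""
    def name(side, B, v):
        return ("label", B.rho[v]) if v in B.rho else (side, v)
    H = nx.Graph()
    for side, B in ((1, G1), (2, G2)):
        H.add_nodes_from(name(side, B, v) for v in B.G.nodes())
        H.add_edges_from((name(side, B, u), name(side, B, v)) for u, v in B.G.edges())
    return H

# Two triangles sharing their boundary edge (labels 1 and 2 on each side):
F1 = BoundariedGraph(nx.complete_graph(3), {0: 1, 1: 2})
F2 = BoundariedGraph(nx.complete_graph(3), {0: 1, 1: 2})
K4_minus_edge = glue(F1, F2)          # 4 vertices, 5 edges
assert K4_minus_edge.number_of_nodes() == 4 and K4_minus_edge.number_of_edges() == 5
```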
All algorithms we give for problems on graphs of bounded btw follow the same strategy. To avoid unnecessary repetition, we introduce a framework that captures the features that the problems have in common with respect to their algorithms using bipartite tree decompositions. Naturally, the algorithms use dynamic programming along a rooted bipartite tree decomposition. However, as the bags in the decomposition can now be large, we cannot apply brute-force approaches to define table entries as we can, for instance, on standard tree decompositions when dealing with treewidth.
Suppose we are at a node \(t\) with children \(t_{1},\ldots,t_{d}\). Since the size of \(\alpha(t)\) is bounded by the width, we can store all possible ways in which solutions interact with \(\alpha(t)\). Moreover, since each adhesion has at most one vertex from \(\beta(t)\), the size of each adhesion is still bounded in terms of the bipartite treewidth. Therefore, we can store one table entry for each way in which a solution can interact with the adhesion \(\delta_{t}\) of \(t\) and its parent. However, since the size of \(\beta(t)\) is unbounded, there can now be an exponential (in \(n\)) number of choices of table entries that are "compatible" with the choice \(\mathcal{X}\) made for \(\delta_{t}\), so we cannot simply brute-force them to determine the optimum value corresponding to \(\mathcal{X}\). To overcome this, we apply the following strategy: First, since the size of \(\alpha(t)\) is bounded in terms of the bipartite treewidth, we guess which choice \(\mathcal{A}\) of the interaction of the solution with \(\alpha(t)\cup\delta_{t}\) that extends \(\mathcal{X}\) leads to the optimum partial solution. For each \(i\in[d]\), there may be a vertex \(v_{t_{i}}\in\delta_{t_{i}}\cap\beta(t)\) whose interaction with the partial solution remained undecided. We replace, for each \(i\in[d]\), the subgraph \(G_{t_{i}}\setminus\delta_{t_{i}}\) with a simply structured subgraph that simulates the behaviour of the table at \(t_{i}\) when it comes to the decision of how \(v_{t_{i}}\) interacts with the solution, under the choice of \(\mathcal{A}\) for \(\alpha(t)\cup\delta_{t}\). The crux is that the resulting graph will have an odd cycle transversal that is bounded in terms of the size of \(\alpha(t)\), so we can apply known FPT-algorithms parameterized by odd cycle transversal to determine the value of the table entry. These notions can be formalized not only for bipartite treewidth, but for any \(1\)-\(\mathcal{H}\)-treewidth, so we present them in full generality here. We also depart from using tree decompositions explicitly, and state them in an equivalent manner in the language of boundaried graphs.
First, let us formalize the family of problems we consider which we refer to as _optimization problems_. Here, solutions correspond in some sense to partitions of the vertex set, and we want to optimize some property of such a partition. For instance, if we consider Odd Cycle Transversal, then this partition has three parts, one for the vertices in the solution, and one part for each part of the bipartition of the vertex set of the graph obtained by removing the solution vertices, and we want to minimize the size of the first part of the partition. (It will become clear later why we keep one separate part for each part of the bipartition.) In Maximum Cut, the partition simply points to which side of the cut each vertex is on, and we want to maximize the number of edges going across.
A \(p\)_-partition-evaluation function on graphs_ is a function \(f\) that receives as input a graph \(G\) along with a \(p\)-partition \(\mathcal{P}\) of its vertices and outputs a non-negative integer. Given such a function \(f\) and some choice \(\mathsf{opt}\in\{\max,\min\}\) we define the associated graph parameter \(\mathsf{p}_{f,\mathsf{opt}}\) where, for every graph \(G\),
\[\mathsf{p}_{f,\mathsf{opt}}(G)=\mathsf{opt}\{f(G,\mathcal{P})\mid\ \mathcal{P}\text{ is a $p$-partition of $V(G)$}\}.\]
An _optimization problem_ is a problem that can be expressed as follows.
\begin{tabular}{|l|} \hline
**Input**: A graph \(G\). \\
**Objective**: Compute \(\mathsf{p}_{f,\mathsf{opt}}(G)\). \\ \hline \end{tabular} To represent the case when we made some choices for the (partial) solution to an optimization problem, such as \(\mathcal{A}\) above, we consider _annotated_ versions of such problems. They extend the function \(\mathsf{p}_{f,\mathsf{opt}}\) so as to receive as input, apart from a graph, a set of annotated sets in the form of a partition \(\mathcal{X}\in\mathcal{P}_{p}(X)\) of some \(X\subseteq V(G)\). More formally, the _annotated extension_ of \(\mathsf{p}_{f,\mathsf{opt}}\) is the parameter \(\hat{\mathsf{p}}_{f,\mathsf{opt}}\) such that
\[\hat{\mathsf{p}}_{f,\mathsf{opt}}(G,\mathcal{X})=\mathsf{opt}\{f(G,\mathcal{ P})\ |\ \ \mathcal{P}\text{ is a $p$-partition of $V(G)$ with $\mathcal{X}\subseteq\mathcal{P}$}\}.\]
Observe that \(\mathsf{p}_{f,\mathsf{opt}}(G)=\hat{\mathsf{p}}_{f,\mathsf{opt}}(G,\emptyset^ {p})\), for every graph \(G\). The problem \(\Pi^{\prime}\) is a _\(p\)-annotated extension_ of the optimization problem \(\Pi\) if \(\Pi\) can be expressed by some \(p\)-partition-evaluation function \(f\) and some choice \(\mathsf{opt}\in\{\max,\min\}\), and that \(\Pi^{\prime}\) can be expressed as follows.
\begin{tabular}{|l|} \hline
**Input**: A graph \(G\) and \(\mathcal{X}\in\mathcal{P}_{p}(X)\) for some \(X\subseteq V(G)\). \\
**Objective**: Compute \(\hat{\mathsf{p}}_{f,\mathsf{opt}}(G,\mathcal{X})\). \\ \hline \end{tabular} We also say that \(\Pi^{\prime}\) is a _\(p\)-annotated problem_.
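As a sanity check of these definitions, the annotated parameter can be evaluated by brute force on tiny instances. The Python sketch below (our own helper, not part of the framework) encodes \(p\)-partitions as dicts mapping vertices to part indices and uses Maximum Cut as the 2-partition-evaluation function.

```python
from itertools import product

def p_hat(vertices, f, p, annotation, opt=min):
    """Brute-force evaluation of the annotated parameter hat-p_{f,opt}(G, X):
    optimize f over all p-partitions of the vertex set extending the partial
    assignment `annotation` (a dict vertex -> part index). Exponential in the
    number of unannotated vertices; a sketch for illustration only."""
    free = [v for v in vertices if v not in annotation]
    best = None
    for parts in product(range(p), repeat=len(free)):
        assignment = dict(annotation)
        assignment.update(zip(free, parts))
        value = f(assignment)
        best = value if best is None else opt(best, value)
    return best

# Example: Maximum Cut as a 2-partition-evaluation function on a triangle with
# a pendant edge; p_{f,max}(G) is obtained with an empty annotation.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
f_cut = lambda assignment: sum(1 for u, v in edges if assignment[u] != assignment[v])
assert p_hat(range(4), f_cut, p=2, annotation={}, opt=max) == 3
assert p_hat(range(4), f_cut, p=2, annotation={0: 0, 1: 0, 2: 0}, opt=max) == 1
```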
Let us turn to the main technical tool introduced in this section that formalizes the above idea, namely the _nice reduction_. First, we may assume that the vertices of \(G\) are labeled injectively via \(\sigma\). Then, each graph \(G_{t_{i}}\), for \(i\in[d]\), naturally corresponds to a boundaried graph \(\mathbf{G_{i}}=(G_{t_{i}},\delta_{t_{i}},\sigma_{|\delta_{t_{i}}})\); from now on let \(X_{i}=\delta_{t_{i}}\). The part of \(G_{t}\) that will be modified can be viewed as a boundaried graph \(\mathbf{G}\) which is essentially obtained as \(\boxplus_{i\in[d]}\mathbf{G_{i}}\). However, as we want to fix a choice of how the partial solution interacts with \(\delta_{t}\), we include these corresponding vertices in \(\mathbf{G}\) as well, modeled as a trivial boundaried graph \(\mathbf{X}\), making \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G_{i}})\).
Denote the boundary of \(\mathbf{G}\) by \(X\). The set \(X\) is partitioned into \((A,B)\), corresponding to \((X\cap(\alpha(t)\cup\delta_{t}),X\cap(\beta(t)\setminus\delta_{t}))\), and the fact that the adhesion between \(t_{i}\) and \(t\) had at most one vertex in common with \(\beta(t)\) now materializes as the fact that, for each \(i\in[d]\), \(\mathbf{G}\) has at most one vertex outside of \(A\) that is an heir of a vertex in \(\mathbf{G_{i}}\). Fixing a choice for \(\alpha(t)\cup\delta_{t}\) now corresponds to choosing a partition \(\mathcal{A}\) of the set \(A\). As we assume that all table entries at the children have been computed, we assume knowledge of all values \(\hat{\mathsf{p}}_{f,\mathsf{opt}}(G_{i},\mathcal{X}_{i})\), for all \(i\in[d]\) and \(\mathcal{X}_{i}\in\mathcal{P}_{p}(X_{i})\). This finishes the motivation of the input of a nice reduction.
Taking the pair \((\mathbf{G},\mathcal{A})\), it outputs a tuple \((\mathbf{G}^{\prime}=(G^{\prime},X^{\prime},\rho^{\prime}),\mathcal{A}^{\prime },s^{\prime})\), with the desired properties, that is: \(\mathbf{G}^{\prime}\) can be constructed by gluing \(d^{\prime}\) boundaried graphs plus one trivial one, similarly to \(\mathbf{G}\). \(\mathcal{A}^{\prime}\) is a \(p\)-partition of a set \(A^{\prime}\subseteq V(G^{\prime})\) whose size is at most the size of \(A\) plus a constant.
No matter what the structure of the graph of the vertices in \((\alpha\cup\beta)(t)\) looked like (remember, so far we carved out only the adhesions), the solutions are preserved, up to the offset of \(s^{\prime}\). This is modeled by saying that for each boundaried graph \(\mathbf{F}\) (which corresponds to the remainder of the bag at \(t\)) compatible with \(\mathbf{G}\), \(\hat{\mathsf{p}}_{f,\mathsf{opt}}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})= \hat{\mathsf{p}}_{f,\mathsf{opt}}(\mathbf{G}^{\prime}\triangleright\mathbf{F}, \mathcal{A}^{\prime})+s^{\prime}.\) The reason why we use the \(\triangleright\)-operator in the right-hand side of the equation is the gadgeteering happening in the later sections. To achieve the "solution-preservation", we might have to add or remove vertices, or change adjacencies between vertices in \(X_{i}\).
The last condition corresponds to our aim that if the bag at \(t\) induces a graph of small \(\mathsf{oct}\) (now, a small modulator to a graph class \(\mathcal{H}\)), then the entire graph resulting from the operation
\((\mathbf{G}^{\prime}\triangleright\mathbf{F})\) should have a small modulator to \(\mathcal{H}\) (namely \(A^{\prime}\)). All remaining conditions are related to the efficiency of the nice reduction.
For an illustration of the following definition, see Figure 1.
##### Nice problem and nice reduction.
Let \(p\in\mathbb{N}\), let \(\mathcal{H}\) be a graph class, and let \(\Pi\) be a \(p\)-annotated problem corresponding to some choice of \(p\)-partition-evaluation function \(f\) and some \(\mathsf{opt}\in\{\max,\min\}\). We say that \(\Pi\) is an _\(\mathcal{H}\)-nice problem_ if there exists an algorithm that receives as input
* a boundaried graph \(\mathbf{G}=(G,X,\rho)\),
* a trivial boundaried graph \(\mathbf{X}=(G[X],X,\rho_{X})\) and a collection \(\{\mathbf{G}_{i}=(G_{i},X_{i},\rho_{i})\mid i\in[d]\}\) of boundaried graphs, such that \(d\in\mathbb{N}\) and \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i})\),
* a partition \((A,B)\) of \(X\) such that for all \(i\in[d]\), \(|\mathsf{heir}_{\mathbf{G}_{i},\mathbf{G}}(X_{i})\setminus A|\leq 1\),
* some \(\mathcal{A}\in\mathcal{P}_{p}(A)\), and
* for every \(i\in[d]\) and each \(\mathcal{X}_{i}\in\mathcal{P}_{p}(X_{i})\), the value \(\hat{\mathfrak{p}}_{f,\mathsf{opt}}(G_{i},\mathcal{X}_{i})\),
and outputs, in time \(\mathcal{O}(|A|\cdot d)\), a tuple \((\mathbf{G}^{\prime}=(G^{\prime},X^{\prime},\rho^{\prime}),\mathcal{A}^{\prime },s^{\prime})\), called _\(\mathcal{H}\)-nice reduction of the pair \((\mathbf{G},\mathcal{A})\) with respect to \(\Pi\)_, such that the following hold.
* There is a set \(A^{\prime}\subseteq V(G^{\prime})\) such that \(|A^{\prime}|=|A|+\mathcal{O}(1)\), and \(\mathcal{A}^{\prime}\in\mathcal{P}_{p}(A^{\prime})\).
* There is a trivial boundaried graph \(\mathbf{X}^{\prime}=(G^{\prime}[X^{\prime}],X^{\prime},\rho_{X^{\prime}})\) and a collection \(\{\mathbf{G}^{\prime}_{i}=(G^{\prime}_{i},X^{\prime}_{i},\rho^{\prime}_{i})\mid i\in[d^{\prime}]\}\), where \(d^{\prime}\in\mathbb{N}\), of boundaried graphs such that \(\mathbf{G}^{\prime}=\mathbf{X}^{\prime}\boxplus(\boxplus_{i\in[d^{\prime}]}\mathbf{G}^{\prime}_{i})\) and \(|V(G^{\prime})|\leq|X|+\mathcal{O}(|B|)\), \(|E(G^{\prime})|\leq|E(G[X])|+\mathcal{O}(|B|)\).
* For any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\), it holds that \[\hat{\mathfrak{p}}_{f,\mathsf{opt}}(\mathbf{G}\oplus\mathbf{F},\mathcal{A}) = \hat{\mathfrak{p}}_{f,\mathsf{opt}}(\mathbf{G}^{\prime}\triangleright \mathbf{F},\mathcal{A}^{\prime})+s^{\prime}.\]
* For any boundaried graph \(\mathbf{F}=(F,X_{F},\rho_{F})\) compatible with \(\mathbf{G}\), if \(\bar{F}\setminus A_{F}\in\mathcal{H}\), where \(\bar{F}=(\mathbf{F}\oplus\mathbf{G})[\mathsf{heir}_{\mathbf{F},\mathbf{G}\oplus \mathbf{F}}(V(F))]\) and \(A_{F}=\mathsf{heir}_{\mathbf{G},\mathbf{G}\oplus\mathbf{F}}(A)\), then \((\mathbf{G}^{\prime}\triangleright\mathbf{F})\setminus A^{\prime}\in\mathcal{H}\).
All the definitions of this section are naturally generalizable to graphs with weights on the vertices or/and edges. Given such a weight function \(w\), we extend \(f(G,\mathcal{P})\), \(\mathsf{p}_{\mathsf{fopt}}(G)\), \(\hat{\mathsf{p}}_{\mathsf{fopt}}(G,\mathcal{X})\), \((\mathbf{G},\mathcal{A})\), and \((\mathbf{G}^{\prime},\mathcal{A}^{\prime},s^{\prime})\) to \(f(G,\mathcal{P},w)\), \(\mathsf{p}_{\mathsf{fopt}}(G,w)\), \(\hat{\mathsf{p}}_{\mathsf{fopt}}(G,\mathcal{X},w)\), \((\mathbf{G},\mathcal{A},w)\), and \((\mathbf{G}^{\prime},\mathcal{A}^{\prime},s^{\prime},w^{\prime})\), respectively.
### General dynamic programming scheme
We now have all the ingredients for our general dynamic programming scheme on bipartite tree decompositions. We essentially prove that if a problem \(\Pi\) has an annotated extension that is \(\mathcal{B}\)-nice and solvable in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{oct}\), then \(\Pi\) is solvable in \(\mathsf{FPT}\)-time parameterized by \(\mathsf{btw}\). This actually holds for more general \(\mathcal{H}\).
**Lemma 4.1**.: _Let \(p\in\mathbb{N}\). Let \(\mathcal{H}\) be a graph class. Let \(\Pi\) be an optimization problem. Let \(\Pi^{\prime}\) be a problem that is:_
* \(a\) \(p\)_-annotated extension of_ \(\Pi\) _corresponding to some choice of_ \(p\)_-partition-evaluation function_ \(g\) _and some_ \(\mathsf{opt}\in\{\max,\min\}\)_,_
* \(\mathcal{H}\)_-nice, and_
* _solvable on instances_ \((G,\mathcal{X})\) _such that_ \(G\setminus\cup\mathcal{X}\in\mathcal{H}\) _in time_ \(f(|\cup\mathcal{X}|)\cdot n^{c}\cdot m^{d}\)_, for some_ \(c,d\in\mathbb{N}\)_._
_Then, there is an algorithm that, given a graph \(G\) and a 1-\(\mathcal{H}\)-tree decomposition of \(G\) of width \(k\), computes \(\mathsf{p}_{g,\mathsf{opt}}(G)\) in time \(\mathcal{O}(p^{k}\cdot f(k+\mathcal{O}(1))\cdot(k\cdot n)^{c}\cdot m^{d})\) (or \(\mathcal{O}(p^{k}\cdot f(k+\mathcal{O}(1))\cdot(m+k^{2}\cdot n)^{d})\) if \(c=0\))._
Proof.: Let \(\mathsf{Alg}\) be the algorithm that solves instances \((G,\mathcal{X})\) such that \(G\setminus\cup\mathcal{X}\in\mathcal{H}\) in time \(f(|\cup\mathcal{X}|)\cdot n^{c}\cdot m^{d}\).
Let \((T,\alpha,\beta,r)\) be a rooted 1-\(\mathcal{H}\)-tree decomposition of \(G\) of width at most \(k\). Let \(\sigma:V(G)\to\mathbb{N}\) be an injection. For \(t\in V(T)\), let \(\mathbf{G}_{t}=(G_{t},\delta_{t},\sigma_{|\delta_{t}})\), let \(X_{t}=\alpha(t)\cup\delta_{t}\cup\bigcup_{t^{\prime}\in\mathsf{ch}_{r}(t)}\delta_{t^{\prime}}\), let \(\mathbf{X}_{t}=(G[X_{t}],X_{t},\sigma_{|X_{t}})\), let \(\mathbf{H}_{t}=\mathbf{X}_{t}\boxplus(\boxplus_{t^{\prime}\in\mathsf{ch}_{r}(t)}\mathbf{G}_{t^{\prime}})\), let \(\mathbf{F}_{t}\) be such that \(G_{t}=\mathbf{F}_{t}\oplus\mathbf{H}_{t}\), let \(A_{t}=\alpha(t)\cup\delta_{t}\), and let \(B_{t}=X_{t}\setminus A_{t}=X_{t}\cap\beta(t)\setminus\delta_{t}\). Note that \(|\mathsf{bd}(\mathbf{G}_{t^{\prime}})\setminus A_{t}|\leq 1\) for \(t^{\prime}\in\mathsf{ch}_{r}(t)\).
We proceed in a bottom-up manner to compute \(s_{t}^{\mathcal{X}}:=\hat{\mathsf{p}}_{g,\mathsf{opt}}(G_{t},\mathcal{X})\), for each \(t\in V(T)\), for each \(\mathcal{X}\in\mathcal{P}_{p}(\delta_{t})\). Hence, given that \(\delta_{r}=\emptyset\), \(s_{r}^{\emptyset}=\mathsf{p}_{g,\mathsf{opt}}(G)\).
Let \(t\in V(T)\). By induction, for each \(t^{\prime}\in\mathsf{ch}_{r}(t)\) and for each \(\mathcal{X}_{t^{\prime}}\in\mathcal{P}_{p}(\delta_{t^{\prime}})\), we compute the value \(s_{t^{\prime}}^{\mathcal{X}_{t^{\prime}}}\). Let \(\mathcal{X}\in\mathcal{P}_{p}(\delta_{t})\). Let \(\mathcal{Q}\) be the set of all \(\mathcal{A}\in\mathcal{P}_{p}(A_{t})\) such that \(\mathcal{A}\cap\delta_{t}=\mathcal{X}\). Let \(\mathcal{A}\in\mathcal{Q}\). Since \(\Pi^{\prime}\) is \(\mathcal{H}\)-nice, there is an \(\mathcal{H}\)-nice reduction \((\mathbf{H}_{\mathcal{A}},\mathcal{A}^{\prime},s_{\mathcal{A}})\) of \((\mathbf{H}_{t},\mathcal{A})\) with respect to \(\Pi^{\prime}\). Hence, \(\hat{\mathsf{p}}_{g,\mathsf{opt}}(G_{t},\mathcal{A})=\hat{\mathsf{p}}_{g, \mathsf{opt}}(\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F}_{t},\mathcal{A}^{ \prime})+s_{\mathcal{A}}\). Let us compute \(\hat{\mathsf{p}}_{g,\mathsf{opt}}(\mathbf{H}_{\mathcal{A}}\triangleright \mathbf{F}_{t},\mathcal{A}^{\prime})\).
By definition of an \(\mathcal{H}\)-nice reduction, \((\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F}_{t})\setminus(\cup\mathcal{A}^{\prime})\in\mathcal{H}\). Hence, we can compute \(\hat{\mathsf{p}}_{g,\mathsf{opt}}(\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F}_{t},\mathcal{A}^{\prime})\), and thus \(\hat{\mathsf{p}}_{g,\mathsf{opt}}(G_{t},\mathcal{A})\), using \(\mathsf{Alg}\) on the instance \((\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F}_{t},\mathcal{A}^{\prime})\). Finally, \(s_{t}^{\mathcal{X}}=\mathsf{opt}_{\mathcal{A}\in\mathcal{Q}}\hat{\mathsf{p}}_{g,\mathsf{opt}}(G_{t},\mathcal{A})\).
It remains to calculate the complexity. Throughout, we make use of the fact that \(p\) is a fixed constant. We can assume that \(T\) has at most \(n\) nodes: for any pair of nodes \(t,t^{\prime}\) with \((\alpha\cup\beta)(t)\subseteq(\alpha\cup\beta)(t^{\prime})\), we can contract the edge \(tt^{\prime}\) of \(T\) to a new vertex \(t^{\prime\prime}\) with \(\alpha(t^{\prime\prime})=\alpha(t^{\prime})\) and \(\beta(t^{\prime\prime})=\beta(t^{\prime})\). This defines a valid 1-\(\mathcal{H}\)-tree decomposition of the same width. For any leaf \(t\) of \(T\), there is a vertex \(u\in V(G)\) that only belongs to the bag of \(t\). From this observation, we can inductively associate
each node of \(T\) to a distinct vertex of \(G\). So this \(\mathcal{H}\)-tree decomposition has at most \(n\) bags. Hence, if \(c_{t}=|\mathsf{ch}_{r}(t)|\), then we have \(\sum_{t\in V(T)}c_{t}\leq n\). Let also \(n_{t}=|(\alpha\cup\beta)(t)|\) and \(m_{t}=|E(G[(\alpha\cup\beta)(t)])|\). Note that \(|A_{t}|=|\alpha(t)|+|\delta_{t}\cap\beta(t)|\leq k+1\) and that \(|B_{t}|=|\bigcup_{t^{\prime}\in\mathsf{ch}_{r}(t)}\delta_{t^{\prime}}\cap\beta(t)|\leq c_{t}\), so \(|X_{t}|\leq k+1+c_{t}\). Moreover, the properties of the tree decompositions imply that the vertices in \(\beta(t)\setminus X_{t}\) are only present in node \(t\). Then, \(\sum_{t\in V(T)}n_{t}=\sum_{t\in V(T)}(|X_{t}|+|\beta(t)\setminus X_{t}|)=\mathcal{O}(k\cdot n)\). Also, let \(\bar{m}_{t}\) be the number of edges only present in the bag of node \(t\). The edges that are present in several bags are those in the adhesions between \(t\) and its neighbors. \(t\) is adjacent to its \(c_{t}\) children and its parent, and an adhesion has size at most \(k+1\). Thus, \(\sum_{t\in V(T)}m_{t}\leq\sum_{t\in V(T)}(\bar{m}_{t}+k^{2}(1+c_{t}))=\mathcal{O}(m+k^{2}\cdot n)\).
There are \(p^{|A_{t}|}\leq p^{k+1}=\mathcal{O}(p^{k})\) partitions in \(\mathcal{P}_{p}(A_{t})\). For each of them, we compute in time \(\mathcal{O}(k\cdot c_{t})\) an \(\mathcal{H}\)-nice reduction \((\mathbf{H}_{\mathcal{A}},\mathcal{A}^{\prime},s_{\mathcal{A}})\) with \(|\cup\mathcal{A}^{\prime}|=|A_{t}|+\mathcal{O}(1)=k+\mathcal{O}(1)\) and with \(\mathcal{O}(|B_{t}|)=\mathcal{O}(c_{t})\) additional vertices and edges. We thus solve \(\Pi^{\prime}\) on \((\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F}_{t},\mathcal{A}^{\prime})\) in time \(f(k+\mathcal{O}(1))\cdot\mathcal{O}((n_{t}+c_{t})^{c}\cdot(m_{t}+c_{t})^{d})\). Hence, the running time is \(\mathcal{O}(p^{k}\cdot f(k+\mathcal{O}(1))\cdot(k\cdot n)^{c}\cdot m^{d})\) (or \(\mathcal{O}(p^{k}\cdot f(k+\mathcal{O}(1))\cdot(m+k^{2}\cdot n)^{d})\) if \(c=0\)).
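The overall structure of this dynamic programming is summarized by the following schematic Python sketch. The callables `partitions_of`, `nice_reduction`, and `solver` are problem-specific placeholders (assumptions on our part, not ingredients spelled out in the paper): the second stands for the \(\mathcal{H}\)-nice reduction and the third for the algorithm \(\mathsf{Alg}\) on instances with a small modulator.

```python
def solve_via_bipartite_td(nodes_postorder, alpha, delta, children,
                           partitions_of, nice_reduction, solver, opt=min):
    """Schematic version of the dynamic programming behind Lemma 4.1.
    Partitions are encoded as dicts (vertex -> part index). nice_reduction(t,
    A, child_tables) must return the gadget instance (H_A, A_prime) and the
    offset s_A; solver is the algorithm for the annotated problem on instances
    with a small modulator. A sketch only, not a full implementation."""
    def key(P):
        return frozenset(P.items())

    def extends(A, X):                   # A agrees with X wherever X is defined
        return all(A.get(v) == part for v, part in X.items())

    table = {}                           # node t -> {key(X): s_t^X, X a partition of delta(t)}
    for t in nodes_postorder:            # children are processed before their parent
        table[t] = {}
        A_t = set(alpha[t]) | set(delta[t])
        child_tables = {c: table[c] for c in children[t]}
        for X in partitions_of(delta[t]):
            best = None
            for A in partitions_of(A_t):
                if not extends(A, X):
                    continue
                H_A, A_prime, offset = nice_reduction(t, A, child_tables)
                value = solver(H_A, A_prime) + offset
                best = value if best is None else opt(best, value)
            table[t][key(X)] = best
    root = nodes_postorder[-1]           # delta(root) is empty, so a single entry remains
    return table[root][key({})]
```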
### Generalizations
For the sake of simplicity, we assumed in Lemma 4.1 that the problem \(\Pi\) under consideration takes as input just a graph. However, a similar statement still holds if we add labels/weights on the vertices/edges of the input graph. This is in particular the case for Weighted Independent Set (Subsubsection 4.4.2) and Maximum Weighted Cut (Subsubsection 4.4.4) where the vertices or edges are weighted. Furthermore, while we omit the proof here, with some minor changes to the definition of a nice problem, a similar statement would also hold for \(q\)(-torso)-\(\mathcal{H}\)-treewidth.
Moreover, again for the sake of simplicity, we assumed that \(\Pi^{\prime}\) is solvable in \(\mathsf{FPT}\)-time, while other complexities such as \(\mathsf{XP}\)-time could be considered. Similarly, in the definition of the nice reduction, the constraints \(|A^{\prime}|=|A|+\mathcal{O}(1)\), \(|V(G^{\prime})|\leq|X|+\mathcal{O}(|B|)\), \(|E(G^{\prime})|\leq|E(G[X])|+\mathcal{O}(|B|)\) can be modified. In both cases, the dynamic programming algorithm still holds, but the running time of Lemma 4.1 changes.
To give a precise running time for \(K_{t}\)-Subgraph-Cover (Subsubsection 4.4.1), Weighted Independent Set (Subsubsection 4.4.2), and Maximum Weighted Cut (Subsubsection 4.4.4) below, let us observe that, if \(\Pi^{\prime}\) is solvable in time \(f(|\cup\mathcal{X}|)\cdot n^{\prime c}\cdot m^{\prime d}\), where \(G^{\prime}=G\setminus\cup\mathcal{X}\), \(n^{\prime}=|V(G^{\prime})|\), and \(m^{\prime}=|E(G^{\prime})|\), then the running time of Lemma 4.1 is better. Indeed, in the proof of the complexity of Lemma 4.1, we now solve \(\Pi^{\prime}\) on \((\mathbf{H}_{\mathcal{A}}\triangleright\mathbf{F},\mathcal{A}^{\prime})\) in time \(f(k+\mathcal{O}(1))\cdot\mathcal{O}((n^{\prime}_{t}+c_{t})^{c}\cdot(m^{\prime}_{t}+c_{t})^{d})\), where \(n^{\prime}_{t}=|\beta(t)|\) and \(m^{\prime}_{t}=|E(G[\beta(t)])|\). We have \(\sum_{t\in V(T)}n^{\prime}_{t}=\sum_{t\in V(T)}(|B_{t}|+|\beta(t)\cap\delta_{t}|+|\beta(t)\setminus X_{t}|)=\mathcal{O}(n)\) and \(\sum_{t\in V(T)}m^{\prime}_{t}\leq m\). Hence, the total running time is \(\mathcal{O}(p^{k}\cdot(k\cdot n+f(k+\mathcal{O}(1))\cdot n^{c}\cdot m^{d}))\).
### Applications
We now apply the above framework to give \(\mathsf{FPT}\)-algorithms for several problems parameterized by bipartite treewidth, that is, \(1\)-\(\mathcal{B}\)-treewidth where \(\mathcal{B}\) is the class of bipartite graphs. Thanks to Lemma 4.1, this reduces to showing that the problem under consideration has a \(\mathcal{B}\)-nice annotated extension that is solvable in \(\mathsf{FPT}\)-time when parameterized by \(\mathsf{oct}\). Several of the presented results actually hold for other graph classes \(\mathcal{H}\), not necessarily only bipartite graphs.
All of the problems of this section have the following property, which seems critical for showing that a problem is \(\mathcal{H}\)-nice.
##### Gluing property.

Let \(\Pi\) be a \(p\)-annotated problem corresponding to some choice of \(p\)-partition-evaluation function \(f\) and some \(\texttt{opt}\in\{\max,\min\}\). We say that \(\Pi\) has the _gluing property_ if, for any two compatible boundaried graphs \(\mathbf{F}\) and \(\mathbf{G}\) with boundary \(X\), any \(\mathcal{X}\in\mathcal{P}_{p}(X)\), and any \(\mathcal{P}\in\mathcal{P}_{p}(V(\mathbf{F}\oplus\mathbf{G}))\) such that \(\mathcal{X}\subseteq\mathcal{P}\), we have \(\hat{\mathfrak{p}}_{f,\texttt{opt}}(\mathbf{F}\oplus\mathbf{G},\mathcal{X})=f(\mathbf{F}\oplus\mathbf{G},\mathcal{P})\) if and only if \(\hat{\mathfrak{p}}_{f,\texttt{opt}}(F,\mathcal{X})=f(F,\mathcal{P}\cap V(F))\) and \(\hat{\mathfrak{p}}_{f,\texttt{opt}}(G,\mathcal{X})=f(G,\mathcal{P}\cap V(G))\).
For the sake of simplicity, with a slight abuse of notation, we identify in this section a vertex with its heir.
Let \(\Pi^{\prime}\) be an annotated extension of some problem \(\Pi\). Given an instance \((\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i}),(A,B),\mathcal{A})\) for a \(\mathcal{B}\)-nice reduction with respect to \(\Pi^{\prime}\), we know that the boundary of each \(G_{i}\) contains at most one vertex of \(B\), which is hence not annotated. To show that \(\Pi^{\prime}\) is \(\mathcal{B}\)-nice, we thus essentially need to show how to reduce a graph \(\mathbf{F}\oplus\mathbf{G}\) to a graph \(F^{\prime}\) when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) is totally annotated (and hence that \(\Pi^{\prime}\) has the gluing property), and when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) has a single vertex \(v\) that is not annotated. To show that \(\Pi\) is \(\mathsf{FPT}\) parameterized by \(\mathsf{btw}\), it then suffices to prove that \(\Pi^{\prime}\) is \(\mathsf{FPT}\) parameterized by \(\mathsf{oct}\) on instances where a minimal odd cycle transversal is annotated.
#### 4.4.1 \(K_{t}\)-Subgraph-Cover
Let \(\mathcal{G}\) be a graph class. We define the problem Vertex Deletion to \(\mathcal{G}\) as follows.
\begin{tabular}{|l|} \hline (Weighted) Vertex Deletion to \(\mathcal{G}\) \\
**Input**: A graph \(G\) (and a weight function \(w:V(G)\to\mathbb{N}\)). \\
**Objective**: Find the set \(S\subseteq V(G)\) of minimum size (resp. weight) such that \(G\setminus S\in\mathcal{G}\). \\ \hline \end{tabular}
If \(\mathcal{G}\) is the class of edgeless (resp. acyclic, planar, bipartite, (proper) interval, chordal) graphs, then we obtain the Vertex Cover (resp. Feedback Vertex Set, Vertex Planarization, Odd Cycle Transversal, (proper) Interval Vertex Deletion, Chordal Vertex Deletion) problem. Also, given a graph \(H\), if \(\mathcal{G}\) is the class of graphs that do not contain \(H\) as a subgraph (resp. a minor/odd-minor/induced subgraph), then the corresponding problem is called \(H\)-Subgraph-Cover (resp. \(H\)-Minor-Cover/\(H\)-Odd-Minor-Cover/\(H\)-Induced-Subgraph-Cover).
Let \(H\) be a graph and \(w:V(G)\to\mathbb{N}\) be a weight function (constant equal to one in the unweighted case). We define \(f_{H}\) as the \(2\)-partition-evaluation function where, for every graph \(G\), for every \((R,S)\in\mathcal{P}_{2}(V(G))\),
\[f_{H}(G,(R,S))=\begin{cases}+\infty&\text{if $H$ is a subgraph of $G\setminus S$},\\ w(S)&\text{otherwise}.\end{cases}\]
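For concreteness, this evaluation function is simple to compute by brute force on small graphs. The sketch below (our own helper, not from the paper) encodes the input graph as an adjacency dict and checks all candidate vertex sets of size \(t\).

```python
from itertools import combinations

def f_Kt(adj, partition, w, t):
    """The 2-partition-evaluation function f_{K_t} defined above: partition is
    a pair (R_part, S_part) of disjoint vertex sets covering the graph; return
    +infinity if the graph minus S_part still contains K_t as a subgraph, and
    w(S_part) otherwise. Brute-force sketch; `adj` is an adjacency dict."""
    _, S_part = partition
    remaining = [v for v in adj if v not in S_part]
    for candidate in combinations(remaining, t):
        if all(u in adj[v] for u, v in combinations(candidate, 2)):
            return float("inf")
    return sum(w[v] for v in S_part)

# A triangle with unit weights: deleting one vertex kills the only K_3.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
assert f_Kt(triangle, ({0, 1, 2}, set()), {0: 1, 1: 1, 2: 1}, 3) == float("inf")
assert f_Kt(triangle, ({1, 2}, {0}), {0: 1, 1: 1, 2: 1}, 3) == 1
```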
Seen as an optimization problem, (Weighted) \(H\)-Subgraph-Cover is the problem of computing \(\mathfrak{p}_{f_{H},\min}(G)\). We call its annotated extension (Weighted) Annotated \(H\)-Subgraph-Cover. In other words, (Weighted) Annotated \(H\)-Subgraph-Cover is defined as follows.
\begin{tabular}{|l|} \hline (Weighted) Annotated \(H\)-Subgraph-Cover \\
**Input**: A graph \(G\), two disjoint sets \(R,S\subseteq V(G)\) (and a weight function \(w:V(G)\to\mathbb{N}\)). \\
**Objective**: Find, if it exists, the minimum size (resp. weight) of a set \(S^{\star}\subseteq V(G)\) such that \(R\cap S^{\star}=\emptyset\), \(S\subseteq S^{\star}\), and \(G\setminus S^{\star}\) does not contain \(H\) as a subgraph. \\ \hline \end{tabular}
Remark that, given a graph \(G\) and a set \(X\subseteq V(G)\), \(X\) is a vertex cover if and only if \(V(G)\setminus X\) is an independent set. Hence, the size/weight of a minimum vertex cover equals the total size/weight of \(V(G)\) minus the size/weight of a maximum independent set. So, seen as optimization problems, (Weighted) Vertex Cover and (Weighted) Independent Set are equivalent problems.
In order to prove that (Weighted) Annotated \(K_{t}\)-Subgraph-Cover is a nice problem, we first prove that (Weighted) Annotated \(K_{t}\)-Subgraph-Cover has the gluing property.
**Lemma 4.2** (Gluing property).: (Weighted) Annotated \(K_{t}\)-Subgraph-Cover _has the gluing property. More precisely, given two boundaried graphs \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\), a weight function \(w:V(\mathbf{F}\oplus\mathbf{G})\to\mathbb{N}\), a set \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) such that \(B_{F}\cap B_{G}\subseteq X\), and \(\mathcal{X}=(R,S)\in\mathcal{P}_{2}(X)\), we have_
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w) =\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}\cap V(F),w)+\hat{ \mathfrak{p}}_{f_{K_{t}},\min}(G,\mathcal{X}\cap V(G),w)-w(S\cap B_{F}\cap B_ {G}).\]
Proof.: Let \(\bar{w}=w(S\cap B_{F}\cap B_{G})\). Observe that \(K_{t}\) is a subgraph of \(\mathbf{F}\oplus\mathbf{G}\) if and only if \(K_{t}\) is a subgraph of \(F\) or of \(G\).
Let \(\mathcal{P}=(R^{\star},S^{\star})\in\mathcal{P}_{2}(V(\mathbf{F}\oplus\mathbf{G}))\) be such that \(\mathcal{X}\subseteq\mathcal{P}\) and \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w)=f_{K_{t}}(\mathbf{F}\oplus\mathbf{G},\mathcal{P},w).\) Then \(K_{t}\) is a subgraph of neither \(F\setminus(S^{\star}\cap V(F))\) nor \(G\setminus(S^{\star}\cap V(G))\). Therefore,
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{P},w) =w(S^{\star})\] \[=w(S^{\star}\cap V(F))+w(S^{\star}\cap V(G))-w(S^{\star}\cap B_{F}\cap B_{G})\] \[\geq\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}\cap V(F),w)+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G,\mathcal{X}\cap V(G),w)-\bar{w}.\]
Conversely, let \(\mathcal{P}_{H}=(R_{H},S_{H})\in\mathcal{P}_{2}(V(H))\) be such that \(\mathcal{X}\cap V(H)\subseteq\mathcal{P}_{H}\) and \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(H,\mathcal{X}\cap V(H),w)=f_{K_{t}}(H,\mathcal{P}_{H},w)\) for \(H\in\{F,G\}\). Then \(K_{t}\) is not a subgraph of \((\mathbf{F}\oplus\mathbf{G})\setminus(S_{F}\cup S_{G})\), so
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X},w) \leq w(S_{F}\cup S_{G})\] \[=w(S_{F})+w(S_{G})-\bar{w}\] \[=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}\cap V(F),w)+ \hat{\mathfrak{p}}_{f_{K_{t}},\min}(G,\mathcal{X}\cap V(G),w)-\bar{w}.\]
The main obstacle to finding an \(\mathsf{FPT}\)-algorithm parameterized by \((1,\mathcal{H})\)-\(\mathsf{tw}\) for (Weighted) Annotated \(H\)-Subgraph-Cover when \(H\) is not a clique is the fact that the problem does not have the gluing property.
**Lemma 4.3**.: _If \(H\) is not a clique, then (Weighted) Annotated \(H\)-Subgraph-Cover does not have the gluing property._
Proof.: Since \(H\) is not a clique, there are two vertices \(u,v\in V(H)\) that are not adjacent. Let \(V^{\prime}=V(H)\setminus\{u,v\}\) and let \(\sigma:V^{\prime}\to\mathbb{N}\) be an injection. Let \(\mathbf{F}=(H\setminus\{u\},V^{\prime},\sigma)\) and \(\mathbf{G}=(H\setminus\{v\},V^{\prime},\sigma)\). Then \(\mathbf{F}\oplus\mathbf{G}\) is isomorphic to \(H\). Let \(\mathcal{X}=(V^{\prime},\emptyset)\) and \(\mathcal{P}=(V(H)\setminus\{u\},\{u\})\supseteq\mathcal{X}\). Then we have \(\hat{\mathfrak{p}}_{f_{H},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X})=f_{H}(\mathbf{F}\oplus\mathbf{G},\mathcal{P})=1\). However, \(\hat{\mathfrak{p}}_{f_{H},\min}(G,\mathcal{X}\cap V(G))=0<f_{H}(G,\mathcal{P}\cap V(G))=1\).
We now show how to reduce a graph \(\mathbf{F}\oplus\mathbf{G}\) to a graph \(F^{\prime}\) when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) has a single vertex \(v\) that is not annotated.
**Lemma 4.4** (Gadgetization).: _Let \(t\in\mathbb{N}\). Let \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\) be two boundaried graphs, let \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) be such that \(B_{F}\cap B_{G}\subseteq X\), let \(v\in B_{F}\cap B_{G}\), and let \(\mathcal{X}=(R,S)\in\mathcal{P}_{2}(X\setminus\{v\})\). Let \(\mathcal{X}^{+}=(R,S\cup\{v\})\) and \(\mathcal{X}^{-}=(R\cup\{v\},S)\). For \(a\in\{+,-\}\), let \(s^{a}=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G,\mathcal{X}^{a}\cap V(G))\). Let \(\bar{s}=|S\cap B_{F}\cap B_{G}|\). Then,_
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X})= \begin{cases}s^{+}+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}^{+}\cap V (F))-\bar{s}-1&\text{if }s^{+}\leq s^{-},\\ s^{-}+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}\cap V(F))-\bar{s}&\text {otherwise.}\end{cases}\]
Proof.: For \(H\in\{F,G\}\) and for \(a\in\{+,-\}\), let \(S^{a}_{H}\subseteq V(H)\) be such that \(\mathcal{X}^{a}\cap V(H)\subseteq(V(H)\setminus S^{a}_{H},S^{a}_{H})\) and \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(H,\mathcal{X}^{a}\cap V(H))=f_{K_{t}}(H,( V(H)\setminus S^{a}_{H},S^{a}_{H}))=|S^{a}_{H}|\). So \(s^{+}=|S^{+}_{G}|\) and \(s^{-}=|S^{-}_{G}|\). Let \(t^{+}=|S^{+}_{F}|\) and \(t^{-}=|S^{-}_{F}|\).
\(S^{+}_{F}\cap S^{+}_{G}=(S\cap B_{F}\cap B_{G})\cup\{v\}\) and \(S^{-}_{F}\cap S^{-}_{G}=S\cap B_{F}\cap B_{G}\). Hence, using Lemma 4.2, we have that
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X}) =\min\{\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{ G},\mathcal{X}^{+}),\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus \mathbf{G},\mathcal{X}^{-})\}\] \[=\min\{t^{+}+s^{+}-1,t^{-}+s^{-}\}-\bar{s}.\]
Note that we always have \(s^{+}\leq s^{-}+1\) since \(G\setminus(S^{-}_{G}\cup\{v\})\) does not contain \(K_{t}\) as a subgraph, and thus \(|S^{-}_{G}\cup\{v\}|\geq\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G,\mathcal{X}^{+}\cap V(G))\). Similarly, \(t^{+}\leq t^{-}+1\).
Thus, if \(s^{+}\leq s^{-}\), then \(t^{+}+s^{+}-1\leq t^{-}+s^{+}\leq t^{-}+s^{-}.\) Given that \(t^{+}=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}^{+}\cap V(F))\), it follows that
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}^{+}\cap V(F))+s^{+}-\bar{s} -1.\]
And if \(s^{-}<s^{+}\), then \(s^{+}=s^{-}+1\), so \(\min\{t^{+}+s^{+}-1,t^{-}+s^{-}\}=\min\{t^{+},t^{-}\}+s^{-}=\hat{\mathfrak{p}}_{f _{K_{t}},\min}(F,\mathcal{X}\cap V(F))+s^{-}\). It follows that
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{X}\cap V(F))+s^{-}-\bar{s}.\]
Observe that, contrary to Lemma 4.2, Lemma 4.4 only holds in the unweighted case. Indeed, in the weighted case, we only have \(s^{+}\leq s^{-}+w(v)\), and thus, when \(s^{+}\in[s^{-}+1,s^{-}+w(v)-1]\), the case distinction in the proof of Lemma 4.4 breaks down.
Using Lemma 4.2 and Lemma 4.4, we can now prove that Annotated \(K_{t}\)-Subgraph-Cover is \(\mathcal{H}\)-nice. Essentially, given an instance \((\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i}),(A,B),(R,S))\), we reduce \(\mathbf{G}\) to \(\mathbf{X}\) and further remove some vertices of \(B\) that can be optimally added to \(S\), and show that the resulting boundaried graph is equivalent to \(\mathbf{G}\) modulo some constant \(s\).
**Lemma 4.5** (Nice problem).: _Let \(\mathcal{H}\) be a hereditary graph class. Given \(t\in\mathbb{N}\), Annotated \(K_{t}\)-Subgraph-Cover is \(\mathcal{H}\)-nice._
Proof.: Let \(\mathbf{G}=(G,X,\rho)\) be a boundaried graph, let \(\mathbf{X}=(G[X],X,\rho_{X})\) be a trivial boundaried graph and let \(\{\mathbf{G}_{i}=(G_{i},X_{i},\rho_{i})\mid i\in[d]\}\) be a collection of boundaried graphs, such that \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i})\), let \((A,B)\) be a partition of \(X\) such that for all \(i\in[d]\), \(|X_{i}\setminus A|\leq 1\), and let \(\mathcal{A}=(R,S)\in\mathcal{P}_{2}(A)\). Suppose that we know, for every \(i\in[d]\) and each \(\mathcal{X}_{i}\in\mathcal{P}_{2}(X_{i})\), the value \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G_{i},\mathcal{X}_{i})\).
Let \((\mathbf{H}_{0},S_{0},s_{0})=(\mathbf{G},S,0)\). For \(i\) going from \(1\) up to \(d\), we construct \((\mathbf{H}_{i},S_{i},s_{i})\) from \((\mathbf{H}_{i-1},S_{i-1},s_{i-1})\) such that for any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\),
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{A} _{i})+s_{i},\]
where \(\mathcal{A}_{i}=(R,S_{i})\). This is obviously true for \(i=0\).
Let \(i\in[d]\). Let \(\mathbf{H}_{i}\) be the boundaried graph such that \(\mathbf{H}_{i-1}=\mathbf{H}_{i}\boxplus\mathbf{G}_{i}\). By induction, \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F}, \mathcal{A}_{i-1})+s_{i-1}\).
_Suppose first that \(X_{i}\subseteq R\cup S_{i-1}\). Let \(\mathcal{P}_{i}=(R^{i},S^{i})\in\mathcal{P}_{2}(X)\) be such that \(\mathcal{A}_{i-1}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{ A}_{i-1})=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F}, \mathcal{P}_{i}).\) According to Lemma 4.2,_
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F },\mathcal{P}_{i}) =\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{P}_{i})+\hat{ \mathfrak{p}}_{f_{K_{t}},\min}(H_{i-1},\mathcal{P}_{i})-|X\cap S^{i}|\] \[=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(F,\mathcal{P}_{i})+\hat{ \mathfrak{p}}_{f_{K_{t}},\min}(H_{i},\mathcal{P}_{i})+\hat{\mathfrak{p}}_{f_{ K_{t}},\min}(G_{i},\mathcal{P}_{i}\cap X_{i})\] \[\quad-|X_{i}\cap S^{i}|-|X\cap S^{i}|\] \[=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus \mathbf{F},\mathcal{P}_{i})+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G_{i}, \mathcal{A}_{i-1}\cap X_{i})-|S_{i-1}\cap X_{i}|.\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F}, \mathcal{A}_{i-1})=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus \mathbf{F},\mathcal{A}_{i-1})+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G_{i}, \mathcal{A}_{i-1}\cap X_{i})-|S_{i-1}\cap X_{i}|.\]
Therefore,
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{ A}_{i})+s_{i},\]
where \(\mathcal{A}_{i}=\mathcal{A}_{i-1}\) and \(s_{i}=s_{i-1}+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G_{i},\mathcal{A}_{i-1} \cap X_{i})-|S_{i-1}\cap X_{i}|.\)
_Otherwise, there is \(v_{i}\in V(G_{i})\) such that \(X_{i}\setminus(R\cup S_{i-1})=\{v_{i}\}\). Let \(\mathcal{X}_{i}^{+}=(R,S_{i-1}\cup\{v_{i}\})\cap X_{i}\) and \(\mathcal{X}_{i}^{-}=(R\cup\{v_{i}\},S_{i-1})\cap X_{i}\). Let \(\mathcal{P}_{i}=(R^{i},S^{i})\in\mathcal{P}_{2}(X\setminus\{v_{i}\})\) be such that \(\mathcal{A}_{i-1}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{A}_{i-1})=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{P}_{i}).\) Note that_
\[\mathbf{H}_{i-1}\oplus\mathbf{F}=(\mathbf{H}_{i}\boxplus\mathbf{G}_{i})\oplus \mathbf{F}=(\mathbf{H}_{i}\boxplus\mathbf{F})\oplus\mathbf{G}_{i}.\]
_For \(a\in\{+,-\}\), let \(s_{i}^{a}=\hat{\mathfrak{p}}_{f_{K_{t}},\min}(G_{i},\mathcal{X}_{i}^{a})\). Then, using Lemma 4.4, we have the following case distinction._
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{P}_{i}) =\hat{\mathfrak{p}}_{f_{K_{t}},\min}((\mathbf{H}_{i}\boxplus\mathbf{F})\oplus\mathbf{G}_{i},\mathcal{P}_{i})\] \[=\begin{cases}s_{i}^{+}+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},(R^{i},S^{i}\cup\{v_{i}\}))-|S_{i-1}\cap X_{i}|-1&\text{if }s_{i}^{+}\leq s_{i}^{-}\\ s_{i}^{-}+\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{P}_{i})-|S_{i-1}\cap X_{i}|&\text{otherwise.}\end{cases}\]
_Since this is the case for every such \(\mathcal{P}_{i}\), by setting \((\mathbf{H}_{i},S_{i},s_{i})=(\mathbf{H}_{i},S_{i-1}\cup\{v_{i}\},s_{i-1}+s_{i}^{+}-|S_{i-1}\cap X_{i}|-1)\) if \(s_{i}^{+}\leq s_{i}^{-}\) and \((\mathbf{H}_{i},S_{i},s_{i})=(\mathbf{H}_{i},S_{i-1},s_{i-1}+s_{i}^{-}-|S_{i-1}\cap X_{i}|)\) otherwise, we have that_
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})= \hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{A}_{i})+s _{i}.\]
The boundaried graph \(\mathbf{H}_{\mathbf{d}}\) obtained at the end is isomorphic to \(\mathbf{X}\). Let \(S_{B}=S_{d}\setminus S\subseteq B\). Observe that
\[\hat{\mathfrak{p}}_{f_{K_{t}},\min}(\mathbf{H}_{d}\oplus\mathbf{F}, \mathcal{A}_{d}) =\hat{\mathfrak{p}}_{f_{K_{t}},\min}((\mathbf{H}_{d}\oplus\mathbf{F}) \setminus S_{B},\mathcal{A}_{d}\setminus S_{B})+|S_{B}|\] \[=\hat{\mathfrak{p}}_{f_{K_{t}},\min}((\mathbf{H}_{d}\setminus S_{B}) \triangleright\mathbf{F},\mathcal{A})+|S_{B}|\]
Hence,
\[\hat{\mathsf{p}}_{f_{K_{t}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})=\hat{ \mathsf{p}}_{f_{K_{t}},\min}((\mathbf{X}\setminus S_{B})\triangleright\mathbf{F},\mathcal{A})+|S_{B}|+s_{d}.\]
Let \(\mathbf{H}=\mathbf{X}\setminus S_{B}\). \(|V(H)|\leq|X|\) and \(|E(H)|\leq|E(G[X])|\). Observe that \(\mathbf{H}\triangleright\mathbf{F}\) is isomorphic to \(\mathbf{F}\setminus S_{B}\). Thus, since \(\mathcal{H}\) is hereditary, if \(F\setminus A\) belongs to \(\mathcal{H}\), then so does \((\mathbf{H}\triangleright\mathbf{F})\setminus A\). Hence, \((\mathbf{X}\setminus S_{B},\mathcal{A},|S_{B}|+s_{d})\) satisfies all the conditions of an \(\mathcal{H}\)-nice reduction of \((\mathbf{G},\mathcal{A})\) with respect to Annotated \(K_{t}\)-Subgraph-Cover.
At each step \(i\), we compute \(|S_{i-1}\cap X_{i}|\), and thus, \(s_{i}\) in time \(\mathcal{O}(|A|)\) (since \(\hat{\mathsf{p}}_{f_{K_{t}},\min}(G_{i},\mathcal{X}_{i})\) is supposed to be known). \(\mathbf{H}_{i}\) and \(S_{i}\) are then constructed in time \(\mathcal{O}(1)\). Hence, the computation takes time \(\mathcal{O}(|A|\cdot d)\), and thus, Annotated \(K_{t}\)-Subgraph-Cover is \(\mathcal{H}\)-nice.
We now solve Annotated \(K_{t}\)-Subgraph-Cover for \(t\geq 3\) parameterized by \(\mathsf{oct}\). Note that Vertex Cover can be solved on bipartite graphs in time \(\mathcal{O}(m\sqrt{n})\) using a maximum matching algorithm [29] due to König's theorem [9]. Moreover, Weighted Vertex Cover can be solved on bipartite graphs in time \(\mathcal{O}(m\cdot n)\) using a flow algorithm [25, 31]. Indeed, let \(G\) be a bipartite graph with bipartition \((A,B)\) and \(w:V(G)\to\mathbb{N}\) be a weight function. We construct a flow network \(N\) by connecting a source \(s\) to each vertex in \(A\) and a sink \(t\) to each vertex in \(B\). We give infinite capacity to the original edges of \(G\), and capacity \(w(v)\) to each edge connecting a vertex \(v\) and a terminal vertex. Every finite-capacity \(s\)-\(t\) cut in \(N\) corresponds to exactly one vertex cover of the same weight, and every vertex cover corresponds to such a cut. Thus a minimum cut of \(N\) gives a minimum weight vertex cover of \(G\).
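This reduction is easy to realize with an off-the-shelf max-flow routine; the following Python sketch uses networkx (the function name and the toy instance are ours). It builds the flow network just described and reads a minimum-weight vertex cover off a minimum cut.

```python
import networkx as nx

def min_weight_vc_bipartite(left, right, edges, weight):
    """Minimum-weight vertex cover of a bipartite graph via the flow network
    described above: source -> A with capacity w(a), B -> sink with capacity
    w(b), original edges with infinite capacity. A sketch using networkx."""
    N = nx.DiGraph()
    s, t = "source", "sink"
    for a in left:
        N.add_edge(s, a, capacity=weight[a])
    for b in right:
        N.add_edge(b, t, capacity=weight[b])
    for a, b in edges:
        N.add_edge(a, b)                  # no capacity attribute = infinite capacity
    cut_value, (S_side, T_side) = nx.minimum_cut(N, s, t)
    # A vertex is in the cover exactly when its terminal edge is cut.
    cover = {a for a in left if a in T_side} | {b for b in right if b in S_side}
    return cut_value, cover

# A path a1 - b1 - a2 with weights 2, 1, 2: the optimum cover is {b1}.
value, cover = min_weight_vc_bipartite(["a1", "a2"], ["b1"],
                                       [("a1", "b1"), ("a2", "b1")],
                                       {"a1": 2, "a2": 2, "b1": 1})
assert value == 1 and cover == {"b1"}
```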
**Lemma 4.6**.: _Let \(t\in\mathbb{N}_{\geq 3}\). There is an algorithm that, given a graph \(G\) and two disjoint sets \(R,S\subseteq V(G)\) such that \(G^{\prime}=G\setminus(R\cup S)\) is bipartite, solves Annotated \(K_{t}\)-Subgraph-Cover (resp. Weighted Annotated \(K_{t}\)-Subgraph-Cover) on \((G,R,S)\) in time \(\mathcal{O}(k^{t}\cdot(n^{\prime}+m^{\prime})+m^{\prime}\sqrt{n^{\prime}})\) (resp. \(\mathcal{O}(k^{t}\cdot(n^{\prime}+m^{\prime})+m^{\prime}\cdot n^{\prime})\)), where \(k=|R|\), \(n^{\prime}=|V(G^{\prime})|\), and \(m^{\prime}=|E(G^{\prime})|\)._
Proof.: Observe that we can assume that \(S=\emptyset\), since \(S^{\star}\) is an optimal solution for \((G,R,S)\) if and only if \(S^{\star}\setminus S\) is an optimal solution for \((G\setminus S,R,\emptyset)\). Then \(G\setminus R\) is bipartite, so any occurrence of \(K_{t}\) contained in \(G\) (as a subgraph) has at most two of its vertices in \(G\setminus R\). Thus, enumerating the occurrences of \(K_{t}\) takes time \(\mathcal{O}(k^{t}+k^{t-1}\cdot n^{\prime}+k^{t-2}\cdot m^{\prime})\). If \(G[R]\) contains an occurrence of \(K_{t}\), then Annotated \(K_{t}\)-Subgraph-Cover has no solution. So let us assume that \(G[R]\) contains no \(K_{t}\). For each occurrence of \(K_{t}\) in \(G\) that contains \(t-1\) vertices of \(R\) and one vertex \(v\in V(G)\setminus R\), we add \(v\) to \(S^{\star}\) and remove \(v\) from \(G\), since \(v\) has to be taken in the solution. Hence, all that remains are occurrences of \(K_{t}\) with \(t-2\) vertices in \(R\) and the two others in \(G\setminus R\). Let \(H\) be the graph formed by the edges of these remaining occurrences of \(K_{t}\), i.e., their edges with both endpoints in \(G\setminus R\). Any solution \(\bar{S}\) on \(G\) for Annotated \(K_{t}\)-Subgraph-Cover must contain at least one endpoint of each edge of \(H\). Hence, an optimal solution \(\bar{S}\) is the union of \(S^{\star}\) and a minimum (weighted) vertex cover \(C\) of \(H\). Since \(H\) is a subgraph of the bipartite graph \(G\setminus R\), \(C\) can be computed in time \(\mathcal{O}(m^{\prime}\sqrt{n^{\prime}})\) (resp. \(\mathcal{O}(m^{\prime}\cdot n^{\prime})\)). The running time of the algorithm is hence \(\mathcal{O}(k^{t}+k^{t-1}\cdot n^{\prime}+k^{t-2}\cdot m^{\prime}+m^{\prime}\sqrt{n^{\prime}})\) (resp. \(\mathcal{O}(k^{t}+k^{t-1}\cdot n^{\prime}+k^{t-2}\cdot m^{\prime}+m^{\prime}\cdot n^{\prime})\)).
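For concreteness, here is a sketch of this procedure in the special case \(t=3\) (triangles), unweighted and with \(S=\emptyset\); the helper names are ours, and the final vertex-cover step uses networkx's bipartite matching, i.e., König's theorem.

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms import bipartite

def annotated_triangle_cover(G, R):
    """Sketch of the algorithm of Lemma 4.6 for t = 3 in the unweighted case:
    G minus R is assumed to be bipartite and the vertices of R may not be
    deleted. Returns the minimum number of deletions hitting all triangles,
    or None if R itself contains a triangle. Illustrative only."""
    R = set(R)
    if any(G.has_edge(u, v) and G.has_edge(v, w) and G.has_edge(u, w)
           for u, v, w in combinations(R, 3)):
        return None                                   # a triangle inside R cannot be hit
    forced = set()                                    # triangles with exactly one free vertex
    for u, v in combinations(R, 2):
        if G.has_edge(u, v):
            forced |= (set(G[u]) & set(G[v])) - R
    H = nx.Graph()                                    # triangles with one vertex in R: cover their free edge
    for r in R:
        free_neighbors = set(G[r]) - R - forced
        for u, v in combinations(free_neighbors, 2):
            if G.has_edge(u, v):
                H.add_edge(u, v)
    if H.number_of_edges() == 0:
        return len(forced)
    # H is a subgraph of the bipartite graph G - R, so König's theorem applies.
    top = {v for v, c in bipartite.color(H).items() if c == 0}
    matching = bipartite.maximum_matching(H, top_nodes=top)
    cover = bipartite.to_vertex_cover(H, matching, top_nodes=top)
    return len(forced) + len(cover)
```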
We apply Lemma 4.5 and Lemma 4.6 to the dynamic programming algorithm of Lemma 4.1 to obtain the following result.
**Corollary 4.1**.: _Let \(t\in\mathbb{N}_{\geq 3}\). Given a graph \(G\) and a bipartite tree decomposition of \(G\) of width \(k\), there is an algorithm that solves \(K_{t}\)-Subgraph-Cover on \(G\) in time \(\mathcal{O}(2^{k}\cdot(k^{t}\cdot(n+m)+m\sqrt{n}))\)._
We find a better running time when \(t=2\), i.e., for Vertex Cover/Independent Set.
**Observation 4.1**.: _Let \(\mathcal{H}\) be a hereditary graph class such that (Weighted) Vertex Cover can be solved on instances \((G,w)\) where \(G\in\mathcal{H}\) in time \(\mathcal{O}(n^{c}\cdot m^{d})\) for some \(c,d\in\mathbb{N}\). Then (Weighted) Annotated Vertex Cover is solvable on instances \((G,R,S,w)\) such that \(G^{\prime}=G\setminus(R\cup S)\in\mathcal{H}\) in time \(\mathcal{O}(k\cdot(k+n^{\prime})+n^{\prime c}\cdot m^{\prime d})\), where \(n^{\prime}=|V(G^{\prime})|\), \(m^{\prime}=|E(G^{\prime})|\), and \(k=|R|\)._
Proof.: Let \(G\) be a graph, \(w\) be a weight function, and \(R,S\subseteq V(G)\) be two disjoint sets such that \(G\setminus(R\cup S)\in\mathcal{H}\). If \(R\) is not an independent set, then \((G,R,S,w)\) has no solution. Hence, we assume that \(R\) is an independent set. Then \(S^{\star}\subseteq V(G)\) is a solution of Weighted Annotated Vertex Cover on \((G,R,S,w)\) if and only if \(S^{\star}=S_{B}\cup S\cup N_{G}(R)\) where \(S_{B}\) is a solution of Weighted Vertex Cover on \((G\setminus(R\cup S\cup N_{G}(R)),w)\): every edge incident to \(R\) must be covered by its other endpoint, so \(N_{G}(R)\subseteq S^{\star}\), and the edges not yet covered are exactly those of \(G\setminus(R\cup S\cup N_{G}(R))\), which belongs to \(\mathcal{H}\) since \(\mathcal{H}\) is hereditary. Checking that \(R\) is an independent set takes time \(\mathcal{O}(k^{2})\) and then finding \(N_{G}(R)\) takes time \(\mathcal{O}(k\cdot n^{\prime})\), hence the result.
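This reduction can be phrased in a few lines; the sketch below (illustrative, assuming networkx, with `solve_on_H` an assumed black-box solver for the class \(\mathcal{H}\)) mirrors the argument above.

```python
import networkx as nx

def annotated_vertex_cover(G, R, S, solve_on_H):
    """solve_on_H(H) is assumed to return an optimal vertex cover of a graph in the class."""
    R, S = set(R), set(S)
    if any(G.has_edge(u, v) for u in R for v in R if u != v):
        return None                                # two forbidden vertices share an edge
    forced = S | {v for r in R for v in G.neighbors(r)}
    H = G.copy()
    H.remove_nodes_from(R | forced)                # every remaining edge avoids R, S, N(R)
    return forced | set(solve_on_H(H))
```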
We apply Lemma 4.5 and Observation 4.1 to the dynamic programming algorithm of Lemma 4.1 to obtain the following result.
**Corollary 4.2**.: _Let \(\mathcal{H}\) be a hereditary graph class. Suppose that Vertex Cover/Independent Set can be solved on \(\mathcal{H}\) in time \(\mathcal{O}(n^{c}\cdot m^{d})\). Then, given a graph \(G\) and a \(1\)-\(\mathcal{H}\)-tree decomposition of \(G\) of width \(k\), there is an algorithm that solves Vertex Cover/Independent Set on \(G\) in time \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n^{c}\cdot m^{d}))\)._
As a consequence of Corollary 4.2, we obtain the following result concerning bipartite treewidth.
**Corollary 4.3**.: _Given a graph \(G\) and a bipartite tree decomposition of \(G\) of width \(k\), there is an algorithm that solves Vertex Cover/Independent Set on \(G\) in time \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+m\sqrt{n}))\)._
#### 4.4.2 Weighted Vertex Cover/Weighted Independent Set
Given that Lemma 4.4 only holds for \(K_{t}\)-Subgraph-Cover in the unweighted case, we propose here an analogous result that holds in the weighted case, when we restrict ourselves to \(t=2\), i.e., Weighted Vertex Cover. We already know that Weighted Vertex Cover has the gluing property (Lemma 4.2). We now show how to reduce a graph \(\mathbf{F}\oplus\mathbf{G}\) to a graph \(F^{\prime}\) when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) has a single vertex \(v\) that is not annotated.
**Lemma 4.7** (Gadgetization).: _Let \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\) be two boundaried graphs, let \(w:V(\mathbf{F}\oplus\mathbf{G})\to\mathbb{N}\) be a weight function, let \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) be such that \(B_{F}\cap B_{G}\subseteq X\), let \(v\in B_{F}\cap B_{G}\), and let \(\mathcal{X}=(R,S)\in\mathcal{P}_{2}(X\setminus\{v\})\). Let \(\mathcal{X}^{+}=(R,S\cup\{v\})\) and \(\mathcal{X}^{-}=(R\cup\{v\},S)\). For \(a\in\{+,-\}\), let \(s^{a}=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G,\mathcal{X}^{a}\cap V(G),w)\). Let \(\bar{w}=w(S\cap B_{F}\cap B_{G})\). Let \(\mathbf{G}^{\prime}=(G^{\prime},\{v\},\rho_{|\{v\}})\), where \(G^{\prime}\) is an edge \(vv^{-}\). Let \(w^{\prime}:V(\mathbf{F}\oplus\mathbf{G}^{\prime})\to\mathbb{N}\) be such that \(w^{\prime}(v)=s^{+}-\bar{w}\), \(w^{\prime}(v^{-})=s^{-}-\bar{w}\), and \(w^{\prime}(x)=w(x)\) otherwise. Then_
\[\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w)=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus\mathbf{G}^{\prime},\mathcal{X},w^{\prime}).\]
Proof.: For \(a\in\{+,-\}\), let \(t^{a}=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(F,\mathcal{X}^{a}\cap V(F),w)\).
Note that
\[s^{\prime-}:=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G^{\prime}, \mathcal{X}^{-}\cap V(G^{\prime}),w^{\prime}) =\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G^{\prime},(\{v\},\emptyset),w ^{\prime})\] \[=w^{\prime}(v^{-})\] \[=s^{-}-\bar{w},\] \[s^{\prime+}:=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G^{\prime}, \mathcal{X}^{+}\cap V(G^{\prime}),w^{\prime}) =\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G^{\prime},(\emptyset,\{v\}),w^{\prime})\] \[=w^{\prime}(v)\] \[=s^{+}-\bar{w},\] \[t^{\prime-}:=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(F,\mathcal{X}^ {-}\cap V(F),w^{\prime}) =t^{-},\text{ and }\] \[t^{\prime+}:=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(F,\mathcal{X}^ {+}\cap V(F),w^{\prime}) =t^{+}+w^{\prime}(v)-w(v).\]
Hence, using Lemma 4.2, we have that
\[\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X},w) =\min\{\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus\mathbf{ G},\mathcal{X}^{+},w),\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus \mathbf{G},\mathcal{X}^{-},w)\}\] \[=\min\{t^{+}+s^{\prime+}-w(v),t^{-}+s^{\prime-}\}\] \[=\min\{t^{\prime+}+s^{\prime+}-w^{\prime}(v),t^{\prime-}+s^{ \prime-}\}\] \[=\min\{\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus \mathbf{G}^{\prime},\mathcal{X}^{+},w^{\prime}),\hat{\mathfrak{p}}_{f_{K_{2}}, \min}(\mathbf{F}\oplus\mathbf{G}^{\prime},\mathcal{X}^{-},w^{\prime})\}\] \[=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{F}\oplus\mathbf{G}^ {\prime},\mathcal{X},w^{\prime}).\]
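As a small sanity check (illustrative only; brute force, assuming networkx), one can verify the identity of Lemma 4.7 on a toy instance in which \(F\) is a triangle and \(G\) is a path sharing the boundary vertex \(v\).

```python
import itertools
import networkx as nx

def annotated_wvc(G, w, required, forbidden):
    """Brute-force weighted vertex cover with required/forbidden vertices."""
    best = float("inf")
    V = list(G)
    for picks in itertools.product([0, 1], repeat=len(V)):
        C = {v for v, p in zip(V, picks) if p}
        if required <= C and not (C & forbidden) and \
           all(u in C or v in C for u, v in G.edges()):
            best = min(best, sum(w[v] for v in C))
    return best

# F is a triangle a-b-v, G is a path v-c-d; they share the boundary vertex v.
F = nx.Graph([("a", "b"), ("b", "v"), ("a", "v")])
G = nx.Graph([("v", "c"), ("c", "d")])
w = {"a": 2, "b": 3, "v": 4, "c": 1, "d": 2}
R, S = set(), set()                              # no annotation besides the free vertex v
s_plus = annotated_wvc(G, w, S | {"v"}, R)       # v forced into the cover
s_minus = annotated_wvc(G, w, S, R | {"v"})      # v forced out of the cover
Gp = nx.Graph([("v", "v-")])                     # the gadget: a single pendant edge
wp = dict(w, **{"v": s_plus, "v-": s_minus})     # here w(S ∩ B_F ∩ B_G) = 0
lhs = annotated_wvc(nx.compose(F, G), w, S, R)
rhs = annotated_wvc(nx.compose(F, Gp), wp, S, R)
assert lhs == rhs
```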
Using Lemma 4.2 and Lemma 4.7, we can now prove that Weighted Annotated Vertex Cover is \(\mathcal{H}\)-nice. Essentially, given an instance \((\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i}),(A,B),(R,S),w)\), we reduce \(\mathbf{G}\) to \(\mathbf{X}\) where we glue a pendant edge to the vertices of \(B\). We then show that if the appropriate weight is given to each new vertex, then the resulting boundaried graph is equivalent to \(\mathbf{G}\) modulo some constant \(s\).
**Lemma 4.8** (Nice problem).: _Let \(\mathcal{H}\) be a graph class that is closed under 1-clique-sums and contains edges. Then Weighted Annotated Vertex Cover is \(\mathcal{H}\)-nice._
Proof.: Let \(\mathbf{G}=(G,X,\rho)\) be a boundaried graph, let \(w:V(G)\rightarrow\mathbb{N}\) be a weight function, let \(\mathbf{X}=(G[X],X,\rho_{X})\) be a trivial boundaried graph and let \(\{\mathbf{G}_{i}=(G_{i},X_{i},\rho_{i})\mid i\in[d]\}\) be a collection of boundaried graphs, such that \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i})\), let \((A,B)\) be a partition of \(X\) such that for all \(i\in[d]\), \(|X_{i}\setminus A|\leq 1\), and let \(\mathcal{A}=(R,S)\in\mathcal{P}_{2}(A)\). Suppose that we know, for every \(i\in[d]\) and each \(\mathcal{X}_{i}\in\mathcal{P}_{2}(X_{i})\), the value \(\hat{\mathfrak{p}}_{f_{K_{2}},\min}(G_{i},\mathcal{X}_{i},w)\).
Let \(v_{1},\ldots,v_{|B|}\) be the vertices of \(B\). For \(i\in[|B|]\), let \(I_{i}=\{j\in[d]\mid X_{j}\setminus A=\{v_{i}\}\}\). Let \(I_{0}=\{j\in[d]\mid X_{j}\subseteq A\}\). Obviously, \((I_{i})_{i\in[0,|B|]}\) is a partition of \([d]\). Let \(\mathbf{G}^{\prime}_{i}=\boxplus_{j\in I_{i}}\mathbf{G}_{j}\).
Let \((\mathbf{H}_{-1},w_{-1},s_{-1})=(\mathbf{G},w,0)\). For \(i\) going from \(0\) up to \(|B|\), we construct \((\mathbf{H}_{i},w_{i},s_{i})\) from \((\mathbf{H}_{i-1},w_{i-1},s_{i-1})\) such that, for any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\), \(w_{i|V(F)\setminus\{v_{j}\mid j\leq i\}}=w_{|V(F)\setminus\{v_{j}\mid j\leq i\}}\), \(w_{i|V(G^{\prime}_{j})}=w_{|V(G^{\prime}_{j})}\) for every \(j>i\), and
\[\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{G}\oplus\mathbf{F}, \mathcal{A},w)=\hat{\mathfrak{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i}\oplus \mathbf{F},\mathcal{A},w_{i})+s_{i}.\]
This is obviously true for \(i=-1\).
Let \(i\in[0,|B|]\). By induction, \(\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A},w)=\hat {\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{A},w_{i -1})+s_{i-1}\). Let \(\mathbf{H}^{\prime}_{i}\) be the boundaried graph such that \(\mathbf{H}_{i-1}=\mathbf{H}^{\prime}_{i}\oplus\mathbf{G}^{\prime}_{i}\).
_Suppose first that \(i=0\)._ Let \(\mathcal{P}_{i}=(R^{i},S^{i})\in\mathcal{P}_{2}(X)\) be such that \(\mathcal{A}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{ A},w_{i-1})=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F}, \mathcal{P}_{i},w_{i-1}).\) According to Lemma 4.2,
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{P}_{i},w_{i-1}) =\hat{\mathsf{p}}_{f_{K_{2}},\min}(F,\mathcal{P}_{i},w)+\hat{ \mathsf{p}}_{f_{K_{2}},\min}(H_{i-1},\mathcal{P}_{i},w_{i-1})-w(X\cap S^{i})\] \[=\hat{\mathsf{p}}_{f_{K_{2}},\min}(F,\mathcal{P}_{i},w)-w(X\cap S ^{i})+\hat{\mathsf{p}}_{f_{K_{2}},\min}(H_{i},\mathcal{P}_{i},w_{i-1})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j}, \mathcal{P}_{i}\cap X_{j},w)-w(S^{i}\cap X_{j}))\] \[=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}^{\prime}_{i}\oplus \mathbf{F},\mathcal{P}_{i},w_{i-1})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j}, \mathcal{A}\cap X_{j},w)-w(S\cap X_{j})).\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{ A},w_{i-1})=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}^{\prime}_{i}\oplus \mathbf{F},\mathcal{A},w_{i-1})+\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j},\mathcal{A}\cap X_{j},w)-w(S\cap X_{j})).\]
Therefore, if \(\mathbf{H}_{i}=\mathbf{H}^{\prime}_{i}\), \(w_{i}=w_{i-1}\) and \(s_{i}=s_{i-1}+\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j}, \mathcal{A}\cap X_{j},w)-w(S\cap X_{j}))\), then
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A},w)= \hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{A},w _{i})+s_{i}.\]
_Otherwise, \(i\in[|B|]\) and \(X_{j}\setminus A=\{v_{i}\}\) for each \(j\in I_{i}\)._ Let \(X^{\prime}_{i}=\bigcup_{j\in I_{i}}X_{j}\). Let \(\mathcal{X}^{+}_{i}=(R,S\cup\{v_{i}\})\cap X^{\prime}_{i}\) and \(\mathcal{X}^{-}_{i}=(R\cup\{v_{i}\},S)\cap X^{\prime}_{i}\). Let \(\mathbf{H}^{\prime\prime}_{i}=(H^{\prime\prime}_{i},v_{i},\rho_{\{v_{i}\}})\) be the boundaried graph where \(H^{\prime\prime}_{i}\) is an edge \(v_{i}v_{i}^{-}\). Let \(\mathbf{H}_{i}=\mathbf{H}^{\prime}_{i}\oplus\mathbf{H}^{\prime\prime}_{i}\). Let \(s^{+}_{i}=\hat{\mathsf{p}}_{f_{K_{2}},\min}(G^{\prime}_{i},\mathcal{X}^{+}_{i},w)\) and \(s^{-}_{i}=\hat{\mathsf{p}}_{f_{K_{2}},\min}(G^{\prime}_{i},\mathcal{X}^{-}_{i},w)\). By Lemma 4.2,
\[s^{+}_{i}=\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j},\mathcal{X}^{+}_{i}\cap X_{j},w)-w(S\cap X_{j})-w(v_{i}))+w(S\cap X^{\prime}_{i})+w(v_{i}),\]
and
\[s^{-}_{i}=\sum_{j\in I_{i}}(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j},\mathcal{X}^{-}_{i}\cap X_{j},w)-w(S\cap X_{j}))+w(S\cap X^{\prime}_{i}).\]
Since the \(\hat{\mathsf{p}}_{f_{K_{2}},\min}(G_{j},\mathcal{X}^{a}_{i}\cap X_{j},w)\) are given, \(s^{+}_{i}\) and \(s^{-}_{i}\) can be computed in time \(\mathcal{O}(|A|\cdot|I_{i}|)\). Let \(w_{i}:V(\mathbf{H}_{i}\oplus\mathbf{F})\to\mathbb{N}\) be such that \(w_{i}(v_{i})=s^{+}_{i}-w(S\cap X^{\prime}_{i})\), \(w_{i}(v^{-}_{i})=s^{-}_{i}-w(S\cap X^{\prime}_{i})\), and \(w_{i}(x)=w(x)\) otherwise. Let \(\mathcal{P}_{i}=(R^{i},S^{i})\in\mathcal{P}_{2}(X\setminus\{v_{i}\})\) be such that \(\mathcal{A}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{A},w_{i-1})=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{P}_{i},w_{i-1}).\) Then, using Lemma 4.7,
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{P}_{i},w_{i-1}) =\hat{\mathsf{p}}_{f_{K_{2}},\min}((\mathbf{H}^{\prime}_{i}\oplus \mathbf{F})\oplus\mathbf{G}^{\prime}_{i},\mathcal{P}_{i},w_{i-1})\] \[=\hat{\mathsf{p}}_{f_{K_{2}},\min}((\mathbf{H}^{\prime}_{i}\oplus \mathbf{F})\oplus\mathbf{H}^{\prime\prime}_{i},\mathcal{P}_{i},w_{i})\] \[=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i}\oplus\mathbf{F}, \mathcal{P}_{i},w_{i})\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i-1}\oplus\mathbf{F},\mathcal{A},w _{i-1})=\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i}\oplus\mathbf{F}, \mathcal{A},w_{i}).\]
Therefore, given \(s_{i}=s_{i-1}\),
\[\hat{\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{G}\oplus\mathbf{F},\mathcal{A},w)=\hat {\mathsf{p}}_{f_{K_{2}},\min}(\mathbf{H}_{i}\oplus\mathbf{F},\mathcal{A},w_{i}) +s_{i}.\]
Observe that \(\mathbf{H}_{|\mathbf{B}|}=\mathbf{X}\oplus\left(\boxplus_{i\in[|B|]}\mathbf{H} _{i}^{\prime\prime}\right)\) and \(\mathbf{H}_{|\mathbf{B}|}\oplus\mathbf{F}=\mathbf{F}\oplus\left(\boxplus_{i \in[|B|]}\mathbf{H}_{i}^{\prime\prime}\right)\). Suppose that \(F\setminus A\in\mathcal{H}\). Given that \(\mathcal{H}\) is closed under 1-clique-sums and contains edges, that each \(H_{i}^{\prime\prime}\) is an edge, and that \(|\mathsf{bd}(\mathbf{H}_{i}^{\prime\prime})|=1\), it follows that \((\mathbf{H}_{|\mathbf{B}|}\oplus\mathbf{F})\setminus A\in\mathcal{H}\). Moreover, \(|V(H_{|B|})|=|X|+|B|\), and \(|E(H_{|B|})|=|E(G[X])|+|B|\). Hence, \((\mathbf{H}_{|B|},\mathcal{A},s_{|B|},w_{|B|})\) is a \(\mathcal{H}\)-nice reduction of \((\mathbf{G},\mathcal{A},w)\) with respect to Weighted Annotated Vertex Cover.
At each step \(i\), \(s_{i}\) is computable in time \(\mathcal{O}(|A|\cdot|I_{i}|)\), and \(\mathbf{H}_{i}\) and \(w_{i}\) are computable in time \(\mathcal{O}(1)\). Hence, the computation takes time \(\mathcal{O}(|A|\cdot d)\). Therefore, Weighted Annotated Vertex Cover is \(\mathcal{H}\)-nice.
We apply Lemma 4.8 and Observation 4.1 to the dynamic programming algorithm of Lemma 4.1 to obtain the following result.
**Corollary 4.4**.: _Let \(\mathcal{H}\) be a graph class that is closed under 1-clique-sum and contains edges. Suppose that Weighted Vertex Cover can be solved on instances \((G,w)\) where \(G\in\mathcal{H}\) in time \(\mathcal{O}(n^{c}\cdot m^{d})\). Then, given a graph \(G\), a \(1\)-\(\mathcal{H}\)-tree decomposition of \(G\) of width \(k\), and a weight function \(w\), there is an algorithm that solves Weighted Vertex Cover/Weighted Independent Set on \((G,w)\) in time \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n^{c}\cdot m^{d}))\)._
Since the class \(\mathcal{B}\) of bipartite graphs is closed under 1-clique-sums and contains edges (\(P_{1}\in\mathcal{B}\)), and since Weighted Vertex Cover can be solved on bipartite graphs in time \(\mathcal{O}(m\cdot n)\)[25, 31], we obtain the following result concerning bipartite treewidth using Corollary 4.4.
**Corollary 4.5**.: _Given a graph \(G\), a bipartite tree decomposition of \(G\) of width \(k\), and a weight function \(w\), there is an algorithm that solves Weighted Vertex Cover/Weighted Independent Set on \((G,w)\) in time \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n\cdot m))\)._
#### 4.4.3 Odd Cycle Transversal
We define \(f_{\mathsf{oct}}\) as the 3-partition-evaluation function where, for every graph \(G\) and for every \((S,X_{1},X_{2})\in\mathcal{P}_{3}(V(G))\),
\[f_{\mathsf{oct}}(G,(S,X_{1},X_{2}))=\begin{cases}|S|&\text{if $G\setminus S\in \mathcal{B}$, witnessed by the bipartition $(X_{1},X_{2})$},\\ +\infty&\text{otherwise}.\end{cases}\]
Hence, seen as an optimization problem, Odd Cycle Transversal is the problem of computing \(\mathsf{p}_{f_{\mathsf{oct}},\min}(G)\). We call its annotated extension Annotated Odd Cycle Transversal. In other words, Annotated Odd Cycle Transversal is defined as follows.
(Weighted) Annotated Odd Cycle Transversal
**Input**: A graph \(G\), three disjoint sets \(S,X_{1},X_{2}\subseteq V(G)\) (and a weight function \(w:V(G)\to\mathbb{N}\)).
**Objective**: Find, if it exists, a set \(S^{\star}\) of minimum size (resp. weight) such that \(S\subseteq S^{\star}\), \((X_{1}\cup X_{2})\cap S^{\star}=\emptyset\), and \(G\setminus S^{\star}\) is bipartite with \(X_{1}\) and \(X_{2}\) on different sides of the bipartition.
We first prove that (Weighted) Annotated Odd Cycle Transversal has the gluing property.
**Lemma 4.9** (Gluing property).: (Weighted) Annotated Odd Cycle Transversal _has the gluing property. More precisely, given two boundaried graphs \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\), a weight function \(w:V(\mathbf{F}\oplus\mathbf{G})\to\mathbb{N}\), a set \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) such that \(B_{F}\cap B_{G}\subseteq X\), and \(\mathcal{X}=(S,X_{1},X_{2})\in\mathcal{P}_{3}(X)\), we have_
\[\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X},w)=\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(F,\mathcal{X}\cap V (F),w)+\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(G,\mathcal{X}\cap V(G),w) -w(S\cap B_{F}\cap B_{G}).\]
Proof.: Let \(\mathcal{P}=(S^{\star},X_{1}^{\star},X_{2}^{\star})\in\mathcal{P}_{3}(V( \mathbf{F}\oplus\mathbf{G}))\) be such that \(\mathcal{X}\subseteq\mathcal{P}\) and \(\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X},w)=f_{\text{\rm oct}}(\mathbf{F}\oplus\mathbf{G},\mathcal{P},w).\) Then, for \(H\in\{F,G\}\), \(H\setminus(S^{\star}\cap V(H))\) is bipartite, witnessed by the \(2\)-partition \((X_{1}^{\star}\cap V(H),X_{2}^{\star}\cap V(H))\). Therefore, given \(\bar{w}=w(S\cap B_{F}\cap B_{G})\),
\[\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(\mathbf{F}\oplus \mathbf{G},\mathcal{X},w) =w(S^{\star})\] \[=w(S^{\star}\cap V(F))+w(S^{\star}\cap V(G))-w(S^{\star}\cap B_{ F}\cap B_{G})\] \[\geq\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(F,\mathcal{X} \cap V(F),w)+\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(G,\mathcal{X}\cap V (G),w)-\bar{w}.\]
Conversely, let \(\mathcal{P}_{H}=(S_{H},X_{1}^{H},X_{2}^{H})\in\mathcal{P}_{3}(V(H))\) be such that \(\mathcal{X}\cap V(H)\subseteq\mathcal{P}_{H}\) and \(\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(H,\mathcal{X}\cap V(H),w)=f_{\text{\rm oct}}(H,\mathcal{P}_{H},w)\) for \(H\in\{F,G\}\). Since \(\mathcal{P}_{F}\cap B_{F}\cap B_{G}=\mathcal{P}_{G}\cap B_{F}\cap B_{G}\), it follows that \(X_{1}^{F}\cup X_{1}^{G}\) and \(X_{2}^{F}\cup X_{2}^{G}\) are two independent sets of \((\mathbf{F}\oplus\mathbf{G})\setminus(S_{F}\cup S_{G})\). Therefore, \((\mathbf{F}\oplus\mathbf{G})\setminus(S_{F}\cup S_{G})\) is a bipartite graph witnessed by \((X_{1}^{F}\cup X_{1}^{G},X_{2}^{F}\cup X_{2}^{G})\). Thus,
\[\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(\mathbf{F}\oplus \mathbf{G},\mathcal{X},w) \leq w(S_{F}\cup S_{G})\] \[=w(S_{F})+w(S_{G})-\bar{w}\] \[=\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(F,\mathcal{X}\cap V (F),w)+\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(G,\mathcal{X}\cap V(G),w)- \bar{w}.\]
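To see the gluing identity at work on a concrete pair of graphs, the following brute-force check (illustrative only, unweighted version, assuming networkx) verifies it for two small cycles glued along a boundary edge.

```python
import itertools
import networkx as nx

def annotated_oct(G, fixed):                      # fixed: dict vertex -> "S", 1 or 2
    best = float("inf")
    free = [v for v in G if v not in fixed]
    for labels in itertools.product(["S", 1, 2], repeat=len(free)):
        lab = {**fixed, **dict(zip(free, labels))}
        if all(not (lab[u] != "S" and lab[v] != "S" and lab[u] == lab[v])
               for u, v in G.edges):              # (X1, X2) must witness bipartiteness
            best = min(best, sum(1 for v in G if lab[v] == "S"))
    return best

F = nx.cycle_graph(5)                             # boundary: vertices 0 and 1
G = nx.relabel_nodes(nx.cycle_graph(4), {2: 5, 3: 6})    # shares 0, 1 with F
glued = nx.compose(F, G)
fixed = {0: 1, 1: 2}                              # an annotation on the shared boundary
lhs = annotated_oct(glued, fixed)
rhs = annotated_oct(F, fixed) + annotated_oct(G, fixed) - 0   # no annotated S on the boundary
assert lhs == rhs
```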
We now show how to reduce a graph \(\mathbf{F}\oplus\mathbf{G}\) to a graph \(F^{\prime}\) when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) has a single vertex \(v\) that is not annotated. Similarly to Lemma 4.4, the proof of Lemma 4.10 only holds in the unweighted case.
**Lemma 4.10** (Gadgetization).: _Let \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\) be two boundaried graphs, let \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) be such that \(B_{F}\cap B_{G}\subseteq X\), let \(v\in B_{F}\cap B_{G}\), and let \(\mathcal{X}=(S,X_{1},X_{2})\in\mathcal{P}_{3}(X\setminus\{v\})\) with \(X_{1},X_{2}\neq\emptyset\). Let \(v_{1}\in X_{1}\) and \(v_{2}\in X_{2}\). Let \(\mathcal{X}_{S}=(S\cup\{v\},X_{1},X_{2})\), \(\mathcal{X}_{1}=(S,X_{1}\cup\{v\},X_{2})\), and \(\mathcal{X}_{2}=(S,X_{1},X_{2}\cup\{v\})\). For \(a\in\{S,1,2\}\), let \(s_{a}=\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(G,\mathcal{X}_{a}\cap V(G))\). Let \(\bar{s}=|S\cap B_{F}\cap B_{G}|\). For \(i\in[2]\), let \(F_{i}\) be the graph obtained from \(F\) by adding an edge \(vv_{i}\). Then we have the following case distinction._
\[\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(\mathbf{F}\oplus\mathbf{G}, \mathcal{X})=\begin{cases}s_{S}+\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(F, \mathcal{X}_{S}\cap V(F))-\bar{s}-1&\text{if }s_{S}\leq s_{1},s_{2},\\ s_{1}+\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(F,\mathcal{X}\cap V(F))-\bar{s}& \text{if }s_{1}=s_{2}<s_{S},\\ s_{1}+\hat{p}_{f_{\text{\rm oct},\min}}(F_{2},\mathcal{X}\cap V(F))-\bar{s}&\text{ if }s_{1}<s_{S},s_{2},\text{and}\\ s_{2}+\hat{p}_{f_{\text{\rm oct},\min}}(F_{1},\mathcal{X}\cap V(F))-\bar{s}&\text{ otherwise.}\end{cases}\]
Proof.: For \(H\in\{F,G\}\) and for \(a\in\{S,1,2\}\), let \(\mathcal{P}_{H}^{a}=(S_{H}^{a},X_{1,H}^{a},X_{2,H}^{a})\in\mathcal{P}_{3}(V(H))\) be such that \(\mathcal{X}_{a}\cap V(H)\subseteq\mathcal{P}_{H}^{a}\) and \(\hat{\mathfrak{p}}_{f_{\text{\rm oct},\min}}(H,\mathcal{X}_{a}\cap V(H))=f_{ \text{\rm oct}}(H,\mathcal{P}_{H}^{a})=|S_{H}^{a}|\). So \(s_{a}=|S_{G}^{a}|\). Let \(t_{a}=|S_{F}^{a}|\) for \(a\in\{S,1,2\}\).
Note that \(S_{F}^{S}\cap S_{G}^{S}=(S\cap B_{F}\cap B_{G})\cup\{v\}\) and, for \(a\in\{1,2\}\), \(S_{F}^{a}\cap S_{G}^{a}=S\cap B_{F}\cap B_{G}\). Hence, using Lemma 4.9, we have that
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus\mathbf{ G},\mathcal{X}) =\min\{\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus \mathbf{G},\mathcal{X}_{a})\mid a\in\{S,1,2\}\}\] \[=\min\{t_{S}+s_{S}-1,t_{1}+s_{1},t_{2}+s_{2}\}-\bar{s}.\]
Note that \(s_{S}\leq s_{1}+1\), since \(G\setminus(S_{G}^{1}\cup\{v\})\) is bipartite, witnessed by the \(2\)-partition \((X_{1}\setminus\{v\},X_{2})\), and thus \(|S_{G}^{1}\cup\{v\}|\geq\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(G,\mathcal{ X}_{S}\cap V(G))\). Similarly, \(s_{S}\leq s_{2}+1\), \(t_{S}\leq t_{1}+1\), and \(t_{S}\leq t_{2}+1\).
Hence, if \(s_{S}\leq s_{1},s_{2}\), then \(\min\{t_{S}+s_{S}-1,t_{1}+s_{1},t_{2}+s_{2}\}=t_{S}+s_{S}-1\), so
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus\mathbf{G},\mathcal{ X})=s_{S}+t_{S}-\bar{s}-1.\]
If \(s_{1}=s_{2}<s_{S}\), then \(s_{1}=s_{2}=s_{S}-1\). Thus, \(\min\{t_{S}+s_{S}-1,t_{1}+s_{1},t_{2}+s_{2}\}=\min\{t_{S},t_{1},t_{2}\}+s_{1}= \hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F,\mathcal{X}\cap V(F))+s_{1}\), so
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus\mathbf{G},\mathcal{ X})=s_{1}+\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F,\mathcal{X}\cap V(F))- \bar{s}.\]
If \(s_{1}<s_{S},s_{2}\), then \(s_{1}+1=s_{S}\leq s_{2}\). We have \(t_{2}+s_{2}\geq t_{S}+s_{2}\geq t_{S}+s_{S}-1.\) Thus, \(\min\{t_{S}+s_{S}-1,t_{1}+s_{1},t_{2}+s_{2}\}=\min\{t_{S},t_{1}\}+s_{1}=s_{1}+ \min\{\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F,\mathcal{X}_{S}\cap V(F)), \hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F,\mathcal{X}_{1}\cap V(F))\}\). Hence, we just need to ensure that \(v\) cannot be added to \(X_{2}\), which is done by adding an edge between \(v\) and \(v_{2}\in X_{2}\). Therefore,
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus\mathbf{G},\mathcal{ X})=s_{1}+\hat{p}_{f_{\text{oct},\min}}(F_{2},\mathcal{X}\cap V(F))-\bar{s}.\]
Otherwise, \(s_{2}<s_{S},s_{1}\). By symmetry, we similarly obtain
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{F}\oplus\mathbf{G},\mathcal{ X})=s_{2}+\hat{p}_{f_{\text{oct},\min}}(F_{1},\mathcal{X}\cap V(F))-\bar{s}.\]
Using Lemma 4.9 and Lemma 4.10, we can now prove that Annotated Odd Cycle Transversal is \(\mathcal{H}\)-nice. Essentially, given an instance \((\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i}),(A,B), \mathcal{A})\), we reduce \(\mathbf{G}\) to \(\mathbf{X}\) where we add two new vertices \(u_{1}\) and \(u_{2}\) in \(\mathcal{A}\) and add edges between \(u_{i}\) and some vertices in \(B\), for \(i\in[2]\), and show that the resulting boundaried graph is equivalent to \(\mathbf{G}\) modulo some constant \(s\).
**Lemma 4.11** (Nice problem).: _Let \(\mathcal{H}\) be a hereditary graph class. Annotated Odd Cycle Transversal is \(\mathcal{H}\)-nice._
Proof.: Let \(\mathbf{G}=(G,X,\rho)\) be a boundaried graph, let \(\mathbf{X}=(G[X],X,\rho_{X})\) be a trivial boundaried graph and let \(\{\mathbf{G}_{i}=(G_{i},X_{i},\rho_{i})\mid i\in[d]\}\) be a collection of boundaried graphs, such that \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i})\), let \((A,B)\) be a partition of \(X\) such that for all \(i\in[d]\), \(|X_{i}\setminus A|\leq 1\), and let \(\mathcal{A}=(S,X_{1},X_{2})\in\mathcal{P}_{3}(A)\). Suppose that we know, for every \(i\in[d]\) and each \(\mathcal{X}_{i}\in\mathcal{P}_{3}(X_{i})\), the value \(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(G_{i},\mathcal{X}_{i})\).
Let \(\mathbf{G}^{\prime}\) and \(\mathbf{X}^{\prime}\) be the boundaried graphs obtained from \(\mathbf{G}\) and \(\mathbf{X}\) respectively, by adding two new isolated vertices \(u_{1}\) and \(u_{2}\) in the boundary (with unused labels). Let \(\mathcal{A}^{\prime}=(S,X_{1}\cup\{u_{1}\},X_{2}\cup\{u_{2}\})\). This operation is done to ensure that \(X^{\prime}_{i}=X_{i}\cup\{u_{i}\}\) is non-empty for \(i\in[2]\). Obviously, \(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{G}^{\prime}\oplus\mathbf{F}, \mathcal{A}^{\prime})\).
Let \(v_{1},\ldots,v_{|B|}\) be the vertices of \(B\). For \(i\in[|B|]\), let \(I_{i}=\{j\in[d]\mid X_{j}\setminus A=\{v_{i}\}\}\). Let \(I_{0}=\{j\in[d]\mid X_{j}\subseteq A\}\). Obviously, \((I_{i})_{i\in[0,|B|]}\) is a partition of \([d]\).
Let \((\mathbf{H}_{-1},S_{-1},s_{-1},E_{-1})=(\mathbf{G}^{\prime},S,0,\emptyset)\). For \(i\) going from \(0\) up to \(|B|\), we construct \((\mathbf{H}_{i},S_{i},s_{i},E_{i})\) from \((\mathbf{H}_{i-1},S_{i-1},s_{i-1},E_{i-1})\) such that for any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\),
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i}\oplus\mathbf{F}_{i}, \mathcal{A}_{i})+s_{i},\]
where \(\mathbf{F}_{i}\) is the boundaried graph obtained from \(\mathbf{F}\) by adding the edges in \(E_{i}\) and \(\mathcal{A}_{i}=(S_{i},X_{1}\cup\{u_{1}\},X_{2}\cup\{u_{2}\})\). This is obviously true for \(i=-1\).
Let \(i\in[0,|B|]\). By induction, \(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{G}\oplus\mathbf{F},\mathcal{A})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i-1}\oplus\mathbf{F}_{i-1},\mathcal{A}_{i-1})+s_{i-1}\). Let \(\mathbf{G}^{\prime}_{i}=\boxplus_{j\in I_{i}}\mathbf{G}_{j}\). Let \(\mathbf{H}^{\prime}_{i}\) be the boundaried graph such that \(\mathbf{H}_{i-1}=\mathbf{H}^{\prime}_{i}\oplus\mathbf{G}^{\prime}_{i}\).
_Suppose first that \(i=0\)._ Let \(\mathcal{P}_{i}=(S^{i},X^{i}_{1},X^{i}_{2})\in\mathcal{P}_{3}(X\cup\{u_{1},u_{ 2}\})\) be such that \(\mathcal{A}_{i-1}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i-1}\oplus\mathbf{F}_{i- 1},\mathcal{A}_{i-1})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i -1}\oplus\mathbf{F}_{i-1},\mathcal{P}_{i}).\) According to Lemma 4.9,
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i-1}\oplus \mathbf{F}_{i-1},\mathcal{P}_{i}) =\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F_{i-1},\mathcal{P}_{i} )+\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(H_{i-1},\mathcal{P}_{i})-|S^{i} \cap X|\] \[=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(F_{i-1},\mathcal{P}_{i} )-|S^{i}\cap X|+\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(H_{i},\mathcal{P}_{i})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{oct},\min}} (G_{j},\mathcal{P}_{i}\cap X_{j})-|S^{i}\cap X^{j}|)\] \[=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}^{\prime}_{i }\oplus\mathbf{F}_{i-1},\mathcal{P}_{i})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{oct},\min}} (G_{j},\mathcal{P}_{i}\cap X_{j})-|S_{i-1}\cap X^{j}|).\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i-1}\oplus\mathbf{F}_{i -1},\mathcal{A}_{i-1})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}^{ \prime}_{i}\oplus\mathbf{F}_{i-1},\mathcal{A}_{i-1})+\sum_{j\in I_{i}}(\hat{ \mathfrak{p}}_{f_{\text{oct},\min}}(G_{j},\mathcal{P}_{i}\cap X_{j})-|S_{i-1} \cap X^{j}|).\]
Therefore, if \(\mathbf{H}_{i}=\mathbf{H}^{\prime}_{i}\), \(S_{i}=S_{i-1}\), \(s_{i}=\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(G_{j}, \mathcal{P}_{i}\cap X_{j})-|S\cap X^{j}|)\), and \(E_{i}=E_{i-1}\), then
\[\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i}\oplus\mathbf{F}_{i}, \mathcal{A}_{i})+s_{i}.\]
_Otherwise, \(i\in[|B|]\) and \(X_{j}\setminus A=\{v_{i}\}\) for each \(j\in I_{i}\)._ Let \(X^{\prime}_{i}=\bigcup_{j\in I_{i}}X_{j}\). Let \(\mathcal{X}^{S}_{i}=(S_{i-1}\cup\{v\},X_{1}\cup\{u_{1}\},X_{2}\cup\{u_{2}\})\), \(\mathcal{X}^{1}_{i}=(S_{i-1},X_{1}\cup\{u_{1},v\},X_{2}\cup\{u_{2}\})\), and \(\mathcal{X}^{2}_{i}=(S_{i-1},X_{1}\cup\{u_{1}\},X_{2}\cup\{u_{2},v\})\). For \(a\in\{S,1,2\}\), let \(s^{a}_{i}=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(G^{\prime}_{i},\mathcal{X}^{ a}_{i}\cap X^{\prime}_{i})\). By Lemma 4.9,
\[s^{S}_{i}=\sum_{j\in I_{i}}(\hat{p}_{f_{\text{oct},\min}}(G_{j},\mathcal{X}^{S}_ {i}\cap X_{j})-|S_{i-1}\cap X_{j}|-1)+|S_{i-1}\cap X^{\prime}_{i}|+1\]
and, for \(a\in\{1,2\}\),
\[s^{a}_{i}=\sum_{j\in I_{i}}(\hat{p}_{f_{\text{oct},\min}}(G_{j},\mathcal{X}^{a}_ {i}\cap X_{j})-|S_{i-1}\cap X_{j}|)+|S_{i-1}\cap X^{\prime}_{i}|.\]
So \(s^{a}_{i}\) can be computed for \(a\in\{S,1,2\}\) in time \(\mathcal{O}(|A|\cdot|I_{i}|)\). For \(a\in\{1,2\}\), let \(\mathbf{H}^{a}_{i}\) and \(\mathbf{F}^{a}_{i}\) be the boundaried graphs obtained from \(\mathbf{H}^{\prime}_{i}\) and \(\mathbf{F}_{i-1}\), respectively, by adding the edge \(v_{i}u_{a}\). Let \(\mathcal{P}_{i}=(S^{i},X^{i}_{1},X^{i}_{2})\in\mathcal{P}_{3}(X\cup\{u_{1},u_{ 2}\}\setminus\{v_{i}\})\) be such that \(\mathcal{A}_{i-1}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i-1}\oplus\mathbf{F}_{i- 1},\mathcal{A}_{i-1})=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(\mathbf{H}_{i- 1}\oplus\mathbf{F}_{i-1},\mathcal{P}_{i}).\) Let \(\mathcal{P}^{S}_{i}=(S^{i}\cup\{v\},X^{i}_{1},X^{i}_{2})\), \(\mathcal{P}^{1}_{i}=(S^{i},X^{i}_{1}\cup\{v\},X^{i}_{2})\), and
\(\mathcal{P}_{i}^{2}=(S^{i},X_{1}^{i},X_{2}^{i}\cup\{v\})\). Note that, for \(a\in\{S,1,2\}\), \(s_{i}^{a}=\hat{\mathfrak{p}}_{f_{\text{oct},\min}}(G_{i}^{\prime},\mathcal{P}_{i}^{a}\cap X_{i}^{\prime})\). Let \(\bar{s}=|S_{i-1}\cap X_{i}^{\prime}|\). Then, using Lemma 4.10,
\[\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i-1} \oplus\mathbf{F}_{i-1},\mathcal{P}_{i}) =\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}((\mathbf{H}_{i}^{ \prime}\oplus\mathbf{F}_{i-1})\oplus\mathbf{G}_{i}^{\prime},\mathcal{P}_{i})\] \[=\begin{cases}s_{S}+\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min} }(\mathbf{H}_{i}^{\prime}\oplus\mathbf{F}_{i-1},\mathcal{P}_{i}^{S})-\bar{s} -1&\text{if }s_{i}^{S}\leq s_{i}^{1},s_{i}^{2},\\ s_{1}+\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i}^{\prime} \oplus\mathbf{F}_{i-1},\mathcal{P}_{i})-\bar{s}&\text{if }s_{i}^{1}=s_{i}^{2}<s_{i}^{S},\\ s_{1}+\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i}^{2}\oplus \mathbf{F}_{i}^{2},\mathcal{P}_{i})-\bar{s}&\text{if }s_{i}^{1}<s_{i}^{S},s_{i}^{2}, \text{and}\\ s_{2}+\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i}^{1}\oplus \mathbf{F}_{i}^{1},\mathcal{P}_{i})-\bar{s}&\text{otherwise}.\end{cases}\]
Since this is the case for any such \(\mathcal{P}_{i}\), we have that
\[\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{G}\oplus \mathbf{F},\mathcal{A}) =\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i-1} \oplus\mathbf{F}_{i-1},\mathcal{A}_{i-1})+s_{i-1}\] \[=\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{i} \oplus\mathbf{F}_{i},\mathcal{A}_{i})+s_{i},\]
where
\[(\mathbf{H}_{i},S_{i},s_{i},E_{i})=\begin{cases}(\mathbf{H}_{i}^{\prime},S_{i-1}\cup\{v\},s_{i-1}+s_{i}^{S}-\bar{s}-1,E_{i-1})&\text{if }s_{i}^{S}\leq s_{i}^{1},s_{i}^{2},\\ (\mathbf{H}_{i}^{\prime},S_{i-1},s_{i-1}+s_{i}^{1}-\bar{s},E_{i-1})&\text{if }s_{i}^{1}=s_{i}^{2}<s_{i}^{S},\\ (\mathbf{H}_{i}^{2},S_{i-1},s_{i-1}+s_{i}^{1}-\bar{s},E_{i-1}\cup\{v_{i}u_{2}\})&\text{if }s_{i}^{1}<s_{i}^{S},s_{i}^{2},\text{ and}\\ (\mathbf{H}_{i}^{1},S_{i-1},s_{i-1}+s_{i}^{2}-\bar{s},E_{i-1}\cup\{v_{i}u_{1}\})&\text{otherwise}.\end{cases}\]
Let \(S_{B}=S_{|B|}\setminus S\subseteq B\). Let \(\mathbf{H}_{B}=\mathbf{H}_{|B|}\setminus S_{B}\). Observe that
\[\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{|B|} \oplus\mathbf{F}_{|B|},\mathcal{A}_{|B|}) =\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}((\mathbf{H}_{|B|} \oplus\mathbf{F}_{|B|})\setminus S_{B},\mathcal{A}_{|B|}\setminus S_{B})+|S_{B}|\] \[=\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{B} \triangleright\mathbf{F},\mathcal{A}^{\prime})+|S_{B}|.\]
Hence,
\[\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{G}\oplus\mathbf{F}, \mathcal{A})=\hat{\mathfrak{p}}_{f_{\text{\rm{oct}},\min}}(\mathbf{H}_{B} \triangleright\mathbf{F},\mathcal{A}^{\prime})+|S_{B}|+s_{|B|}.\]
\(\mathbf{H}_{B}\) is isomorphic to the boundaried graph obtained from \(\mathbf{X}\) by adding two new vertices and the at most \(|B|\) edges in \(E_{|B|}\), and removing the vertices in \(S_{B}\). Hence, \(|V(H_{B})|\leq|X|+2\) and \(|E(H_{B})|\leq|E(G[X])|+|B|\). Moreover, \(|\cup\mathcal{A}^{\prime}|=|\cup\mathcal{A}|+2\). Suppose that \(F\setminus A\in\mathcal{H}\). Observe that, since the edges in \(E_{|B|}\) all have one endpoint in \(\{u_{1},u_{2}\}\), \((\mathbf{H}_{B}\triangleright\mathbf{F})\setminus(\cup\mathcal{A})\setminus\{u_{1 },u_{2}\}\) is isomorphic to \(F\setminus A\setminus S_{B}\). Since \(\mathcal{H}\) is hereditary, \((\mathbf{H}_{B}\triangleright\mathbf{F})\setminus(\cup\mathcal{A}^{\prime})\in \mathcal{H}\). Thus, \((\mathbf{H}_{B},\mathcal{A}^{\prime},|S_{B}|+s_{|B|})\) is a \(\mathcal{H}\)-nice reduction of \((G,\mathcal{A})\) with respect to Annotated Odd Cycle Transversal.
At each step \(i\), \((H_{i},S_{i},s_{i},E_{i})\) is computable in time \(\mathcal{O}(|A|\cdot|I_{i}|)\), so the computation takes time \(\mathcal{O}(|A|\cdot d)\). Hence, Annotated Odd Cycle Transversal is \(\mathcal{H}\)-nice.
In the next lemma, we adapt the seminal proof of Reed, Smith and Vetta [32] that uses iterative compression to solve Annotated Odd Cycle Transversal in FPT-time parameterized by oct.
Given a graph \(G\) and two sets \(A,B\subseteq V(G)\), an \((A,B)\)_-cut_ is a set \(X\subseteq V(G)\) such that there are no paths from a vertex in \(A\) to a vertex in \(B\) in \(V(G)\setminus X\).
**Lemma 4.12**.: _There is an algorithm that, given a graph \(G\), (a weight function \(w:V(G)\to\mathbb{N}\),) and three disjoint sets \(S,A,B\subseteq V(G)\), such that \(G\setminus(S\cup A\cup B)\) is bipartite, solves Annotated Odd Cycle Transversal (resp. Weighted Annotated Odd Cycle Transversal) on \((G,S,A,B)\) in time \(\mathcal{O}((n+k)\cdot(m+k^{2}))\), where \(k=|A\cup B|\)._
Proof.: We can assume that \(S=\emptyset\), since \(S^{\star}\) is an optimal solution for \((G,S,A,B)\) if and only if \(S^{\star}\setminus S\) is an optimal solution for \((G\setminus S,\emptyset,A,B)\).
Let \(G^{+}\) be the graph obtained from \(G\) by joining each \(a\in A\) with each \(b\in B\). Let \(X=A\cup B\). Let \((S_{1},S_{2})\) be a partition witnessing the bipartiteness of \(G\setminus X=G^{+}\setminus X\). We construct an auxiliary bipartite graph \(G^{\prime}\) from \(G^{+}\) as follows. The vertex set of the auxiliary graph is \(V^{\prime}=V(G)\setminus X\cup\{x_{1},x_{2}\mid x\in X\}\). We maintain a one-to-one correspondence between the edges of \(G^{+}\) and the edges of \(G^{\prime}\) by the following scheme:
* for each edge \(e\) of \(G^{+}\setminus X\), there is a corresponding edge in \(G^{\prime}\) with the same endpoints,
* for each edge \(e\in E(G^{+})\) joining a vertex \(y\in S_{i}\) to a vertex \(x\in X\), the corresponding edge in \(G^{\prime}\) joins \(y\) to \(x_{3-i}\), and
* for each edge \(e\in E(G^{+})\) joining two vertices \(a\in A\) and \(b\in B\), the corresponding edge of \(G^{\prime}\) joins \(a_{1}\) to \(b_{2}\).
For \(i\in\{1,2\}\), let \(X_{i}=\{x_{i}\mid x\in X\}\), \(A_{i}=\{a_{i}\mid a\in A\}\), \(B_{i}=\{b_{i}\mid b\in B\}\). Note that \(G^{\prime}\) is a bipartite graph, witnessed by the partition \((S_{1}\cup X_{1},S_{2}\cup X_{2})\). Let \(Y_{1}=A_{1}\cup B_{2}\) and \(Y_{2}=A_{2}\cup B_{1}\). Note also that there is no edge joining \(Y_{1}\) and \(Y_{2}\), so any \((Y_{1},Y_{2})\)-cut of minimal weight in \(G^{\prime}\) is actually contained in \(V(G^{+})\setminus X\), and hence \(V(G)\setminus X\). Let us show that \(S^{\star}\subseteq V(G)\setminus X\) is a \((Y_{1},Y_{2})\)-cut of minimal weight in \(G^{\prime}\) if and only if \(S^{\star}\) is a minimum weighted odd cycle transversal of \(G\) with \(A\) on one side of the bipartition and \(B\) on the other one.
_If \(S^{\star}\subseteq V(G)\setminus X\) is a cutset separating \(Y_{1}\) from \(Y_{2}\) in \(G^{\prime}\), then \(S^{\star}\) is an odd cycle transversal of \(G\) with \(A\) on one side of the bipartition and \(B\) on the other one:_ Let \(C\) be an odd cycle of \(G^{+}\). Suppose toward a contradiction that \(C\cap S^{\star}=\emptyset\). \(G^{+}\setminus X\) is bipartite, so \(C\) intersects \(X\). Moreover, we assumed \(G^{+}[X]\) to be bipartite since \(A\) and \(B\) are two disjoint independent sets, so \(C\) intersects \(V(G^{+})\setminus X\). Hence, if we divide \(C\) in paths whose endpoints are in \(X\) and whose internal vertices are in \(V(G^{+})\setminus X\), each such a path in \(G^{\prime}\) has either both endpoints in \(Y_{1}\) or both endpoints in \(Y_{2}\), since it is not intersected by the cutset \(S^{\star}\). More specifically, each such a path in \(G^{\prime}\) has either both endpoints in \(A_{i}\), or both endpoints in \(B_{i}\), or one in \(A_{i}\) and one in \(B_{3-i}\), for \(i\in\{1,2\}\). A path with both endpoints in \(A_{i}\) or \(B_{i}\) is even, since the internal vertices are alternatively vertices of \(S_{i}\) and \(S_{3-i}\), with the first and the last in \(S_{3-i}\). A path with one endpoint in \(A_{i}\) and one in \(B_{3-i}\) is odd by a similar reasoning, but the number of such paths must be even in order to have a cycle. Therefore, the cycle \(C\) is even. Hence the contradiction. So \(S^{\star}\) is an odd cycle transversal of \(G^{+}\). Given that we added in \(G^{+}\) all edges between \(A\) and \(B\) and that \(A,B\subseteq V(G^{+})\setminus S^{\star}\), \(A\) and \(B\) belong to different sides of the bipartition. An odd cycle of \(G\) is also an odd cycle of \(G^{+}\), so \(S^{\star}\) is also an odd cycle transversal of \(G\), with \(A\) and \(B\) on different sides of the bipartition.
_If \(S^{\star}\subseteq V(G)\setminus X\) is an odd cycle transversal of \(G\) with \(A\) and \(B\) on different sides of the bipartition, then \(S^{\star}\) is a cutset separating \(Y_{1}\) from \(Y_{2}\) in \(G^{\prime}\):_ Suppose toward a contradiction that there is a path \(P\) between \(Y_{1}\) and \(Y_{2}\) that does not intersect \(S^{\star}\). Choose \(P\) of minimum length. Hence, the internal vertices of \(P\) belong to \(G\setminus X\). The endpoints \(u\) and \(v\) of \(P\) are such that, either \(u\in A_{i}\) and \(v\in A_{3-i}\), or \(u\in B_{i}\) and \(v\in B_{3-i}\), or \(u\in A_{i}\) and \(v\in B_{i}\), or \(u\in B_{i}\) and \(v\in A_{i}\), for \(i\in\{1,2\}\). By symmetry, we can assume without loss of generality that \(u\in A_{1}\) and \(v\in A_{2}\) or \(v\in B_{1}\). If \(v\in A_{2}\), then \(P\) is an odd path since \(G\setminus X\) is bipartite. However, since \(G\setminus S^{\star}\) is bipartite, with \(A\) on one side of the bipartition, \(P\) is an even path. If \(v\in B_{1}\), then \(P\) is an even path since \(G\setminus X\) is bipartite. However, since \(G\setminus S^{\star}\) is bipartite, with \(A\) and \(B\) on different sides of the bipartition, \(P\) is an odd path. Hence the contradiction.
\(G^{\prime}\) has \(n^{\prime}=n+|X|\) vertices and at most \(m^{\prime}=m+|X|^{2}/4\) edges. Finding a minimum (weighted) vertex-cut can be reduced to the problem of finding a minimum (weighted) edge-cut. To do so, we transform \(G^{\prime}\) into an arc-weighted directed graph \(G^{\prime\prime}\), by first replacing every edge by two parallel arcs in opposite directions, and then replacing every vertex \(v\) of \(G^{\prime}\) by an arc \((v_{\mathsf{in}},v_{\mathsf{out}})\), such that the arcs incoming (resp. outgoing) to \(v\) are now incoming to \(v_{\mathsf{in}}\) (resp. outgoing to \(v_{\mathsf{out}}\)). We give weight \(w(v)\) to \((v_{\mathsf{in}},v_{\mathsf{out}})\), and weight \(w(V(G))+1\) to the other arcs. Then, computing a minimum (weighted) vertex-cut in \(G^{\prime}\) is equivalent to computing a minimum (weighted) edge-cut in \(G^{\prime\prime}\), which can be done in time \(\mathcal{O}(n^{\prime}\cdot m^{\prime})\) according to [25, 31]. Hence, the running time of the algorithm is \(\mathcal{O}((n+|X|)\cdot(m+|X|^{2}))\).
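The vertex-splitting step just described is standard; for reference, a minimal sketch of it is given below (illustrative only, assuming networkx, weighted case, terminals themselves not deletable, and assuming some finite cut exists). In the proof above it would be applied to \(G^{\prime}\) with the terminal sets \(Y_{1}\) and \(Y_{2}\).

```python
import networkx as nx

def min_weight_vertex_cut(G, Y1, Y2, w):
    """Y1, Y2: disjoint sets of terminal vertices (never deleted); w: vertex weights.
    Returns a minimum-weight set of non-terminal vertices separating Y1 from Y2."""
    big = sum(w.values()) + 1
    D = nx.DiGraph()
    for v in G:
        cap = big if v in Y1 or v in Y2 else w[v]
        D.add_edge((v, "in"), (v, "out"), capacity=cap)   # deleting v = cutting this arc
    for u, v in G.edges():
        D.add_edge((u, "out"), (v, "in"), capacity=big)   # original edges are never cut
        D.add_edge((v, "out"), (u, "in"), capacity=big)
    for y in Y1:
        D.add_edge("s", (y, "in"), capacity=big)
    for y in Y2:
        D.add_edge((y, "out"), "t", capacity=big)
    _, (S_side, _) = nx.minimum_cut(D, "s", "t")
    return {v for v in G if (v, "in") in S_side and (v, "out") not in S_side}

# Example: on the path y1 - a - b - y2 with w(a) = 3, w(b) = 1, the cut is {b}.
P = nx.path_graph(["y1", "a", "b", "y2"])
print(min_weight_vertex_cut(P, {"y1"}, {"y2"}, {"y1": 0, "a": 3, "b": 1, "y2": 0}))
```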
We apply Lemma 4.11 and Lemma 4.12 to the dynamic programming algorithm of Lemma 4.1 to obtain the following result.
**Corollary 4.6**.: _Given a graph \(G\) and a bipartite tree decomposition of \(G\) of width \(k\), there is an algorithm that solves Odd Cycle Transversal on \(G\) in time \(\mathcal{O}(3^{k}\cdot k\cdot n\cdot(m+k^{2}))\)._
#### 4.4.4 Maximum Weighted Cut
The Maximum Weighted Cut problem is defined as follows.
Maximum Weighted Cut
**Input**: A graph \(G\) and a weight function \(w:E(G)\to\mathbb{N}\).
**Objective**: Find an edge cut of maximum weight.
We define \(f_{\mathsf{cut}}\) as the 2-partition-evaluation function where, for every graph \(G\) with edge weight \(w\) and for every \(\mathcal{P}=(X_{1},X_{2})\in\mathcal{P}_{2}(V(G))\),
\[f_{\mathsf{cut}}(G,\mathcal{P})=w(\mathcal{P})=w(E(X_{1},X_{2})).\]
Hence, Maximum Weighted Cut is the problem of computing \(\mathsf{p}_{f_{\mathsf{cut}},\max}(G)\). We call its annotated extension Annotated Maximum Weighted Cut. In other words, Annotated Maximum Weighted Cut is defined as follows.
Annotated Maximum Weighted Cut
**Input**: A graph \(G\), a weight function \(w:E(G)\to\mathbb{N}\), and two disjoint sets \(X_{1},X_{2}\subseteq V(G)\).
**Objective**: Find an edge cut of maximum weight such that the vertices in \(X_{1}\) belong to one side of the cut, and the vertices in \(X_{2}\) belong to the other side.
We first prove that Annotated Maximum Weighted Cut has the gluing property.
**Lemma 4.13** (Gluing property).: Annotated Maximum Weighted Cut _has the gluing property. More precisely, given two boundaried graphs \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\), a weight function \(w:E(\mathbf{F}\oplus\mathbf{G})\to\mathbb{N}\), a set \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) such that \(B_{F}\cap B_{G}\subseteq X\), and \(\mathcal{X}=(X_{1},X_{2})\in\mathcal{P}_{2}(X)\), if we set \(\bar{w}=w(\mathcal{X}\cap B_{F}\cap B_{G})\), then we have_
\[\hat{\mathsf{p}}_{f_{\mathsf{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{ X},w)=\hat{\mathsf{p}}_{f_{\mathsf{cut}},\max}(F,\mathcal{X}\cap V(F),w)+\hat{ \mathsf{p}}_{f_{\mathsf{cut}},\max}(G,\mathcal{X}\cap V(G),w)-\bar{w}.\]
Proof.: Let \(\mathcal{P}\in\mathcal{P}_{2}(V(\mathbf{F}\oplus\mathbf{G}))\) be such that \(\mathcal{X}\subseteq\mathcal{P}\) and \(\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w)=f_{\text{cut}}(\mathbf{F}\oplus\mathbf{G},\mathcal{P},w)\). Then,
\[\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w) =w(\mathcal{P})\] \[=w(\mathcal{P}\cap V(F))+w(\mathcal{P}\cap V(G))-\bar{w}\] \[\leq\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X}\cap V(F),w)+\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(G,\mathcal{X}\cap V(G),w)-\bar{w}.\]
Conversely, for \(H\in\{F,G\}\), let \(\mathcal{P}_{H}=(X_{1}^{H},X_{2}^{H})\in\mathcal{P}_{2}(V(H))\) be such that \(\mathcal{X}\cap V(H)\subseteq\mathcal{P}_{H}\) and \(\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(H,\mathcal{X}\cap V(H),w)=f_{\text{cut}}(H,\mathcal{P}_{H},w)\). Then, since \(\mathcal{P}_{H}\cap B_{F}\cap B_{G}=\mathcal{X}\cap B_{F}\cap B_{G}\) for \(H\in\{F,G\}\), we have
\[\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w) \geq w(E(X_{1}^{F}\cup X_{1}^{G},X_{2}^{F}\cup X_{2}^{G}))\] \[=w(E(X_{1}^{F},X_{2}^{F}))+w(E(X_{1}^{G},X_{2}^{G}))-\bar{w}\] \[=\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X}\cap V(F),w)+\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(G,\mathcal{X}\cap V(G),w)-\bar{w}.\]
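Again as an illustrative brute-force check (not part of the proof, assuming networkx), one can confirm the identity on two small triangles glued along the annotated boundary edge.

```python
import itertools
import networkx as nx

def annotated_max_cut_bruteforce(G, w, X1, X2):
    best = 0
    free = [v for v in G if v not in X1 | X2]
    for picks in itertools.product([1, 2], repeat=len(free)):
        side = {**{v: 1 for v in X1}, **{v: 2 for v in X2}, **dict(zip(free, picks))}
        best = max(best, sum(w[frozenset(e)] for e in G.edges()
                             if side[e[0]] != side[e[1]]))
    return best

F = nx.Graph([(0, 1), (1, 2), (2, 0)])            # a triangle; boundary {0, 1}
G = nx.Graph([(0, 1), (1, 3), (3, 0)])            # another triangle on the boundary edge
w = {frozenset(e): i + 1 for i, e in enumerate([(0, 1), (1, 2), (2, 0), (1, 3), (3, 0)])}
X1, X2 = {0}, {1}
bar_w = w[frozenset((0, 1))]                      # the cut weight inside the shared boundary
glued = nx.compose(F, G)
assert annotated_max_cut_bruteforce(glued, w, X1, X2) == \
       annotated_max_cut_bruteforce(F, w, X1, X2) + \
       annotated_max_cut_bruteforce(G, w, X1, X2) - bar_w
```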
We now show how to reduce a graph \(\mathbf{F}\oplus\mathbf{G}\) to a graph \(F^{\prime}\) when the boundary of \(\mathbf{F}\) and \(\mathbf{G}\) has a single vertex \(v\) that is not annotated.
**Lemma 4.14** (Gadgetization).: _Let \(\mathbf{F}=(F,B_{F},\rho_{F})\) and \(\mathbf{G}=(G,B_{G},\rho_{G})\) be two boundaried graphs, let \(w:E(\mathbf{F}\oplus\mathbf{G})\to\mathbb{N}\) be a weight function, let \(X\subseteq V(\mathbf{F}\oplus\mathbf{G})\) be such that \(B_{F}\cap B_{G}\subseteq X\), let \(v\in B_{F}\cap B_{G}\), and let \(\mathcal{X}=(X_{1},X_{2})\in\mathcal{P}_{2}(X\setminus\{v\})\). Suppose that there are \(v_{1}\in X_{1}\) and \(v_{2}\in X_{2}\) adjacent to \(v\) with \(w(vv_{1})=w(vv_{2})=0\). Let \(\mathcal{X}^{1}=(X_{1}\cup\{v\},X_{2})\) and \(\mathcal{X}^{2}=(X_{1},X_{2}\cup\{v\})\). For \(a\in[2]\), let \(g_{a}=\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(G,\mathcal{X}^{a}\cap V(G),w)\). Let \(\bar{w}=w(\mathcal{X}\cap B_{F}\cap B_{G})\). Let \(w^{\prime}:E(F)\to\mathbb{N}\) be such that \(w^{\prime}(vv_{1})=g_{2}-\bar{w}\), \(w^{\prime}(vv_{2})=g_{1}-\bar{w}\), and \(w^{\prime}(e)=w(e)\) otherwise. Then_
\[\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w)=\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X},w^{\prime}).\]
Proof.: For \(a\in[2]\), let \(f_{a}=\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X}^{a}\cap V(F),w)\). Note that in \(F\) with partition \(\mathcal{X}\), if \(v\) is on the same side as \(X_{1}\), then we must count the weight of the edge \(vv_{2}\), but not the weight of \(vv_{1}\), and vice versa when exchanging \(1\) and \(2\). Thus, using Lemma 4.13, we have
\[\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X},w) =\max\{\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X}^{1},w),\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(\mathbf{F}\oplus\mathbf{G},\mathcal{X}^{2},w)\}\] \[=\max\{f_{1}+g_{1}-\bar{w},f_{2}+g_{2}-\bar{w}\}\] \[=\max\{f_{1}+w^{\prime}(vv_{2}),f_{2}+w^{\prime}(vv_{1})\}\] \[=\max\{\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X}^{1},w^{\prime}),\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X}^{2},w^{\prime})\}\] \[=\hat{\mathfrak{p}}_{f_{\text{cut}},\max}(F,\mathcal{X},w^{\prime}).\]
Using Lemma 4.13 and Lemma 4.14, we prove that Annotated Maximum Weighted Cut is \(\mathcal{H}\)-nice. Essentially, given an instance \((\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i}),(A,B),\mathcal{A},w)\), we reduce \(\mathbf{G}\) to \(\mathbf{X}\) where we add two new vertices in \(\mathcal{A}\) and add all edges between these new vertices and the vertices in \(B\). We then show that if the appropriate weight is given to each new edge, then the resulting boundaried graph is equivalent to \(\mathbf{G}\) modulo some constant \(s\).
**Lemma 4.15** (Nice problem).: _Let \(\mathcal{H}\) be a graph class. Annotated Maximum Weighted Cut is \(\mathcal{H}\)-nice._
Proof.: Let \(\mathbf{G}=(G,X,\rho)\) be a boundaried graph, let \(w:E(G)\rightarrow\mathbb{N}\) be a weight function, let \(\mathbf{X}=(G[X],X,\rho_{X})\) be a trivial boundaried graph and let \(\{\mathbf{G}_{i}=(G_{i},X_{i},\rho_{i})\mid i\in[d]\}\) be a collection of boundaried graphs, such that \(\mathbf{G}=\mathbf{X}\boxplus(\boxplus_{i\in[d]}\mathbf{G}_{i})\), let \((A,B)\) be a partition of \(X\) such that for all \(i\in[d]\), \(|X_{i}\setminus A|\leq 1\), and let \(\mathcal{A}=(X_{1},X_{2})\in\mathcal{P}_{2}(A)\). Suppose that we know, for every \(i\in[d]\) and each \(\mathcal{X}_{i}\in\mathcal{P}_{2}(X_{i})\), the value \(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{i},\mathcal{X}_{i},w)\).
Let \(\bar{\mathbf{G}}\), \(\bar{\mathbf{X}}\), and \(\bar{\mathbf{G}}_{i}\) be the boundaried graphs obtained from \(\mathbf{G}\), \(\mathbf{X}\), and \(\mathbf{G}_{i}\), respectively, by adding two new vertices \(u_{1}\) and \(u_{2}\) in the boundary (with unused labels) and making them adjacent to every vertex in \(B\). We extend \(w\) to \(E(\bar{G})\) so that every edge incident to \(u_{1}\) or \(u_{2}\) has weight zero. Let \(\mathcal{A}^{\prime}=(X_{1}\cup\{u_{1}\},X_{2}\cup\{u_{2}\})\). This operation is done to ensure that \(X_{i}\cup\{u_{i}\}\) is non-empty for \(i\in[2]\). For any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\), let \(\bar{\mathbf{F}}\) be the boundaried graph obtained similarly to \(\bar{\mathbf{G}}\). Obviously, \(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{A},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\bar{\mathbf{G}}\oplus\bar{\mathbf{F}},\mathcal{A}^{\prime},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\bar{\mathbf{G}}\triangleright\mathbf{F},\mathcal{A}^{\prime},w)\).
Let \(v_{1},\ldots,v_{|B|}\) be the vertices of \(B\). For \(i\in[|B|]\), let \(I_{i}=\{j\in[d]\mid X_{j}\setminus A=\{v_{i}\}\}\). Let \(I_{0}=\{j\in[d]\mid X_{j}\subseteq A\}\). Obviously, \((I_{i})_{i\in[0,|B|]}\) is a partition of \([d]\).
Let \((\mathbf{H}_{-1},s_{-1},w_{-1})=(\bar{\mathbf{G}},0,w)\). For \(i\) going from \(0\) up to \(|B|\), we construct \((\mathbf{H}_{i},s_{i},w_{i})\) from \((\mathbf{H}_{i-1},s_{i-1},w_{i-1})\) such that \(w_{i}\) and \(w_{i-1}\) may only differ on \(v_{i}u_{1}\) and \(v_{i}u_{2}\) and such that for any boundaried graph \(\mathbf{F}\) compatible with \(\mathbf{G}\),
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A})=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i}\oplus\bar{\mathbf{F}}, \mathcal{A}^{\prime},w_{i})+s_{i}.\]
This is obviously true for \(i=-1\).
Let \(i\in[0,|B|]\). By induction, \(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{A},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{F}},\mathcal{A}^{\prime},w_{i-1})+s_{i-1}\). Let \(\mathbf{G}_{i}^{\prime}=\boxplus_{j\in I_{i}}\bar{\mathbf{G}}_{j}\). Let \(\mathbf{H}_{i}^{\prime}\) be the boundaried graph such that \(\mathbf{H}_{i-1}=\mathbf{H}_{i}^{\prime}\oplus\mathbf{G}_{i}^{\prime}\).
_Suppose first that \(i=0\)._ Let \(\mathcal{P}_{i}=(X_{1}^{i},X_{2}^{i})\in\mathcal{P}_{2}(X\cup\{u_{1},u_{2}\})\) be such that \(\mathcal{A}^{\prime}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{F} },\mathcal{A}^{\prime},w_{i-1})=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}( \mathbf{H}_{i-1}\oplus\bar{\mathbf{F}},\mathcal{P}_{i},w_{i-1}).\) According to Lemma 4.13,
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus \bar{\mathbf{F}},\mathcal{P}_{i},w_{i-1}) =\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\bar{F},\mathcal{P}_{i},w _{i-1})+\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(H_{i-1},\mathcal{P}_{i},w_{i-1})\] \[\quad-w_{i-1}(E(\mathcal{P}_{i}\cap X))\] \[=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\bar{F},\mathcal{P}_{i},w _{i-1})-w_{i-1}(E(\mathcal{P}_{i}\cap X))\] \[\quad+\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(H_{i},\mathcal{P}_{i },w_{i-1})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{j },\mathcal{P}_{i}\cap X_{j},w)-w(\mathcal{P}_{i}\cap X_{j}))\] \[=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i}\oplus\bar{ \mathbf{F}},\mathcal{P}_{i},w_{i-1})\] \[\quad+\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_ {j},\mathcal{A}\cap X_{j},w)-w(\mathcal{A}\cap X_{j})).\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{F}}, \mathcal{A}^{\prime},w_{i-1})=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i }\oplus\bar{\mathbf{F}},\mathcal{A}^{\prime},w_{i-1})+\sum_{j\in I_{i}}(\hat{ \mathfrak{p}}_{f_{\text{cut},\max}}(G_{j},\mathcal{A}\cap X_{j},w)-w(\mathcal{A} \cap X_{j})).\]
Therefore, if \(w_{i}=w_{i-1}\) and \(s_{i}=s_{i-1}+\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{j}, \mathcal{A}\cap X_{j},w)-w(\mathcal{A}\cap X_{j}))\), then
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i}\oplus\bar{\mathbf{F}}, \mathcal{A},w_{i})+s_{i}.\]
_Otherwise, \(i\in[|B|]\) and \(X_{j}\setminus A=\{v_{i}\}\) for each \(j\in I_{i}\)._ Let \(X_{i}^{\prime}=\bigcup_{j\in I_{i}}X_{j}\). Let \(\mathcal{X}_{i}^{1}=(X_{1}\cup\{v_{i}\},X_{2})\) and \(\mathcal{X}_{i}^{2}=(X_{1},X_{2}\cup\{v_{i}\})\). Let \(g_{i}^{1}=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{i}^{\prime},\mathcal{X}_{i}^{1}\cap X_{i}^{\prime},w)\) and \(g_{i}^{2}=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{i}^{\prime},\mathcal{X}_{i}^{2}\cap X_{i}^{\prime},w)\). For \(a\in[2]\), \(g_{i}^{a}\) can be computed in time \(\mathcal{O}(|A|\cdot|I_{i}|)\) since, by Lemma 4.13,
\[g_{i}^{a}=\sum_{j\in I_{i}}(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(G_{j}, \mathcal{X}_{i}^{a}\cap X_{j},w)-w(\mathcal{X}_{i}^{a}\cap X_{j}))+w(\mathcal{ X}_{i}^{a}\cap X_{i}^{\prime}).\]
Let \(w_{i}:E(\mathbf{H}_{i}\oplus\bar{\mathbf{F}})\rightarrow\mathbb{N}\) be such that \(w_{i}(v_{i}u_{1})=g_{i}^{2}-w(\mathcal{A}\cap X_{i}^{\prime})\), \(w_{i}(v_{i}u_{2})=g_{i}^{1}-w(\mathcal{A}\cap X_{i}^{\prime})\), and \(w_{i}(e)=w_{i-1}(e)\) otherwise. Let \(\mathcal{P}_{i}=(X_{1}^{i},X_{2}^{i})\in\mathcal{P}_{2}(X\cup\{u_{1},u_{2}\}\setminus\{v_{i}\})\) be such that \(\mathcal{A}^{\prime}\subseteq\mathcal{P}_{i}\) and \(\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{F}},\mathcal{A}^{\prime},w_{i-1})=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{F}},\mathcal{P}_{i},w_{i-1}).\) Then, using Lemma 4.14,
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus \bar{\mathbf{F}},\mathcal{P}_{i},w_{i-1}) =\hat{\mathfrak{p}}_{f_{\text{cut},\max}}((\mathbf{H}_{i}\oplus \bar{\mathbf{F}})\oplus\mathbf{G}_{i}^{\prime},\mathcal{P}_{i},w_{i-1})\] \[=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i}\oplus \bar{\mathbf{F}},\mathcal{P}_{i},w_{i})\]
Since this is the case for all such \(\mathcal{P}_{i}\), it implies that
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i-1}\oplus\bar{\mathbf{ F}},\mathcal{A}^{\prime},w_{i-1})=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}( \mathbf{H}_{i}\oplus\bar{\mathbf{F}},\mathcal{A}^{\prime},w_{i}).\]
Therefore, given \(s_{i}=s_{i-1}\),
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{H}_{i}\oplus\bar{\mathbf{ F}},\mathcal{A}^{\prime},w_{i})+s_{i}.\]
Note that \(\mathbf{H}_{|B|}\) is isomorphic to \(\bar{\mathbf{X}}\). We thus have
\[\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\mathbf{G}\oplus\mathbf{F},\mathcal{ A},w)=\hat{\mathfrak{p}}_{f_{\text{cut},\max}}(\bar{\mathbf{X}}\triangleright \mathbf{F},\mathcal{A}^{\prime},w_{|B|})+s_{|B|}.\]
We have \(|V(\bar{X})|\leq|X|+2\) and \(|E(\bar{X})|\leq|E(G[X])|+2|B|\). Moreover, \(|\cup\mathcal{A}^{\prime}|=|\cup\mathcal{A}|+2\). Suppose that \(F\setminus A\in\mathcal{H}\). Observe that, since the edges added in \(\bar{\mathbf{X}}\) compared to \(\mathbf{X}\) all have one endpoint in \(\{u_{1},u_{2}\}\), \((\bar{\mathbf{X}}\triangleright\mathbf{F})\setminus(\cup\mathcal{A})\setminus\{u_{1},u_{2}\}\) is isomorphic to \(F\setminus A\) and thus belongs to \(\mathcal{H}\). Thus, \((\bar{\mathbf{X}},\mathcal{A}^{\prime},s_{|B|},w_{|B|})\) is an \(\mathcal{H}\)-nice reduction of \((G,\mathcal{A},w)\) with respect to Annotated Maximum Weighted Cut.
Computing \((\mathbf{H}_{i},s_{i},w_{i})\) takes time \(\mathcal{O}(|A|\cdot|I_{i}|)\) at each step \(i\), so the computation takes time \(\mathcal{O}(|A|\cdot d)\). Hence, Annotated Maximum Weighted Cut is \(\mathcal{H}\)-nice.
Maximum Weighted Cut is an NP-hard problem [22]. However, it can be solved in polynomial time on certain graph classes. In particular, Grötschel and Pulleyblank [16] proved that Maximum Weighted Cut is solvable in polynomial time on weakly bipartite graphs, and Guenin [17] proved that weakly bipartite graphs are exactly the \(K_{5}\)-odd-minor-free graphs, which gives the following result.
**Proposition 4.1** ([16, 17]).: _There is a constant \(c\in\mathbb{N}\) and an algorithm that solves Maximum Weighted Cut on \(K_{5}\)-odd-minor-free graphs in time \(\mathcal{O}(n^{c})\)._
Moreover, we observe the following.
**Lemma 4.16**.: _A graph \(G\) such that \(\mathsf{oct}(G)\leq 2\) does not contain \(K_{5}\) as an odd-minor._
Proof.: Let \(u,v\in V(G)\) be such that \(G^{\prime}=G\setminus\{u,v\}\) is bipartite. Since \(G^{\prime}\) contains no odd cycle, it does not contain \(K_{3}\) as an odd-minor. If \(G\) contained \(K_{5}\) as an odd-minor, then at most two of the five branch sets of the model would intersect \(\{u,v\}\), and the remaining three branch sets would form a \(K_{3}\) odd-minor model in \(G^{\prime}\), a contradiction. Hence \(G\) does not contain \(K_{5}\) as an odd-minor.
Combining Proposition 4.1 and Lemma 4.16, we have that Annotated Maximum Weighted Cut is \(\mathsf{FPT}\) parameterized by \(\mathsf{oct}\).
**Lemma 4.17**.: _There is an algorithm that, given a graph \(G\), a weight function \(w:E(G)\to\mathbb{N}\), and two disjoint sets \(X_{1},X_{2}\subseteq V(G)\), such that \(G^{\prime}=G\setminus(X_{1}\cup X_{2})\) is bipartite, solves Annotated Maximum Weighted Cut on \((G,X_{1},X_{2},w)\) in time \(\mathcal{O}(k\cdot n^{\prime}+n^{\prime c})\), where \(k=|X_{1}\cup X_{2}|\) and \(n^{\prime}=|V(G^{\prime})|\)._
Proof.: Let \(G^{\prime\prime}\) be the graph obtained from \(G\) by identifying all vertices in \(X_{1}\) (resp. \(X_{2}\)) to a new vertex \(x_{1}\) (resp. \(x_{2}\)). Let \(w^{\prime}:E(G^{\prime\prime})\to\mathbb{N}\) be such that \(w^{\prime}(x_{1}x_{2})=\sum_{e\in E(G)}w(e)+1\), \(w^{\prime}(x_{i}u)=\sum_{x\in X_{i}}w(xu)\) for \(i\in[2]\) and \(u\in N_{G}(X_{i})\), and \(w^{\prime}(e)=w(e)\) otherwise. Let \((X_{1}^{\star},X_{2}^{\star})\in\mathcal{P}_{2}(V(G))\) be such that \((X_{1},X_{2})\subseteq(X_{1}^{\star},X_{2}^{\star})\). For \(i\in[2]\), let \(X_{i}^{\prime}=X_{i}^{\star}\setminus X_{i}\). Then
\[w(X_{1}^{\star},X_{2}^{\star}) =w(X_{1},X_{2})+w(X_{1}^{\prime},X_{2}^{\prime})+\sum_{xy\in E(X_{ 1},X_{2}^{\prime})}w(xy)+\sum_{xy\in E(X_{1}^{\prime},X_{2})}w(xy)\] \[=w(X_{1},X_{2})+w^{\prime}(X_{1}^{\prime},X_{2}^{\prime})+\sum_{ u\in X_{2}\cap N_{G}(X_{1})}w^{\prime}(x_{1}u)+\sum_{u\in X_{1}\cap N_{G}(X_{2})}w^{ \prime}(x_{2}u)\] \[=w^{\prime}(X_{1}^{\prime}\cup\{x_{1}\},X_{2}^{\prime}\cup\{x_{2 }\})+w(X_{1},X_{2})-w^{\prime}(x_{1}x_{2})\]
Let \(\bar{w}\) be the constant \(w(X_{1},X_{2})-w^{\prime}(x_{1}x_{2})\). Hence, \(f_{\mathsf{cut}}(G,(X_{1}^{\star},X_{2}^{\star}))=f_{\mathsf{cut}}(G^{\prime\prime},(X_{1}^{\prime}\cup\{x_{1}\},X_{2}^{\prime}\cup\{x_{2}\}))+\bar{w}\), and so \(\hat{p}_{f_{\mathsf{cut}},\max}(G,(X_{1},X_{2}))=\hat{p}_{f_{\mathsf{cut}},\max}(G^{\prime\prime},(\{x_{1}\},\{x_{2}\}))+\bar{w}\). Moreover, given that the weight of the edge \(x_{1}x_{2}\) is larger than the sum of all other weights, \(x_{1}\) and \(x_{2}\) are never on the same side of a maximum cut in \(G^{\prime\prime}\). Hence, \(\hat{p}_{f_{\mathsf{cut}},\max}(G^{\prime\prime},(\{x_{1}\},\{x_{2}\}))=\mathsf{p}_{f_{\mathsf{cut}},\max}(G^{\prime\prime})\), and therefore, \(\hat{p}_{f_{\mathsf{cut}},\max}(G,(X_{1},X_{2}))=p_{f_{\mathsf{cut}},\max}(G^{\prime\prime})+\bar{w}\).
Constructing \(G^{\prime\prime}\) takes time \(\mathcal{O}(k\cdot n)\) and computing \(\bar{w}\) takes time \(\mathcal{O}(k^{2})\). Since \(\mathsf{oct}(G^{\prime\prime})\leq 2\), according to Proposition 4.1 and Lemma 4.16, an optimal solution to Maximum Weighted Cut on \(G^{\prime\prime}\) can be found in time \(\mathcal{O}(n^{\prime c})\), and thus, an optimal solution to Annotated Maximum Weighted Cut on \((G,X_{1},X_{2})\) can be found in time \(\mathcal{O}(k\cdot(k+n^{\prime})+n^{\prime c})\).
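To make the construction of Lemma 4.17 concrete, the following minimal Python sketch builds \(G^{\prime\prime}\) and the constant \(\bar{w}\) from \((G,X_{1},X_{2},w)\). The graph is represented as a dictionary mapping edges (two-element frozensets) to weights, the labels `x1` and `x2` for the identified vertices are assumed not to clash with existing vertex names, and the brute-force `max_weighted_cut` is only a stand-in for an exact solver such as the polynomial-time algorithm of Proposition 4.1 on \(K_{5}\)-odd-minor-free graphs.

```python
from itertools import combinations

def identify_and_add_heavy_edge(edges, X1, X2):
    """Build G'' from (G, X1, X2) as in the proof of Lemma 4.17: identify X1 into x1
    and X2 into x2, add an edge x1x2 heavier than all other weights combined, and
    return (edges of G'', constant w_bar = w(X1, X2) - w'(x1x2)).
    `edges` maps frozenset({u, v}) -> weight."""
    def rep(v):
        return "x1" if v in X1 else ("x2" if v in X2 else v)
    total = sum(edges.values())
    new_edges = {frozenset({"x1", "x2"}): total + 1}        # the heavy edge x1x2
    w_X1_X2 = 0
    for e, w in edges.items():
        ru, rv = (rep(v) for v in e)
        if {ru, rv} == {"x1", "x2"}:
            w_X1_X2 += w        # cut by every annotated partition; accounted for in w_bar
        elif ru != rv:
            key = frozenset({ru, rv})
            new_edges[key] = new_edges.get(key, 0) + w      # parallel edges collapse, weights add
        # edges inside X1 or inside X2 are never cut and are dropped
    return new_edges, w_X1_X2 - (total + 1)

def max_weighted_cut(edges):
    """Brute-force stand-in for an exact Maximum Weighted Cut solver; on
    K5-odd-minor-free graphs one would invoke the algorithm of Proposition 4.1 instead."""
    vertices = sorted({v for e in edges for v in e}, key=str)
    best = 0
    for r in range(len(vertices) + 1):
        for side in combinations(vertices, r):
            S = set(side)
            best = max(best, sum(w for e, w in edges.items() if len(S & e) == 1))
    return best

def annotated_max_weighted_cut(edges, X1, X2):
    """Optimal annotated cut value: maximum cut of G'' plus the constant w_bar."""
    new_edges, w_bar = identify_and_add_heavy_edge(edges, X1, X2)
    return max_weighted_cut(new_edges) + w_bar
```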
We apply Lemma 4.15 and Lemma 4.17 to the dynamic programming algorithm of Lemma 4.1 to obtain the following result.
**Corollary 4.7**.: _Given a graph \(G\) and a bipartite tree decomposition of \(G\) of width \(k\), there is an algorithm that solves Maximum Weighted Cut on \(G\) in time \(\mathcal{O}(2^{k}\cdot(k\cdot(k+n)+n^{c}))\)._
## 5 \(\mathsf{XP}\)-algorithms for packing problems
Let \(\mathcal{G}\) be a graph class. We define the \(\mathcal{G}\)-Packing problem as follows.
\[\boxed{\mathcal{G}\text{-Packing}}\]
**Input**: A graph \(G\).
**Objective**: Find the maximum number \(k\) of pairwise-disjoint subgraphs \(H_{1},\ldots,H_{k}\) such that, for each \(i\in[k]\), \(H_{i}\in\mathcal{G}\).
Let \(H\) be a graph. If \(\mathcal{G}=\{H\}\) (resp. \(\mathcal{G}\) is the class of all graphs containing \(H\) as a minor/odd-minor/induced subgraph), then we refer to the corresponding problem as \(H\)-Subgraph-Packing
(resp. \(H\)-Minor-Packing/\(H\)-Odd-Minor-Packing/\(H\)-Induced-Subgraph-Packing). Note, in particular, that \(K_{3}\)-Odd-Minor-Packing is exactly Odd Cycle Packing.
If in the definition of \(\mathcal{G}\)-Packing we add the condition that there is no edge in the input graph between vertices of different \(H_{i}\)'s, then we refer to the corresponding problem as \(H\)-Scattered-Packing, where we implicitly assume that we refer to the subgraph relation, and where we do not specify a degree of "scatteredness", as is usual in the literature when dealing, for instance, with the scattered version of Independent Set. For instance, \(K_{2}\)-Scattered-Packing is exactly Induced Matching.
As we prove in Lemma 6.4, \(H\)-Minor-Packing is para-NP-complete parameterized by \(\mathsf{btw}\) when \(H\) is 2-connected. This is, however, not the case for \(H\)-Subgraph-Packing and \(H\)-Odd-Minor-Packing when \(H\) is 2-connected and non-bipartite: we provide below an XP-algorithm for both problems.
These algorithms have a similar structure to the dynamic programming algorithm of Lemma4.1. Namely, for each node \(t\) of the bipartite tree decomposition, we reduce \(\mathbf{G_{t}}\) to a smaller equivalent instance \(\mathbf{G^{\prime}_{t}}\), and solve the problem on \(\mathbf{G^{\prime}_{t}}\). The main observation here is that the maximum size of a packing is small when restricted to the bag of \(t\). Thus, we guess the packing in each bag, and how it intersects the adhesion of \(t\) and its neighbors. This guessing is the source of XP-time in our algorithms.
Given a graph class \(\mathcal{G}\), \(\mathcal{G}\)-Packing, seen as an optimization problem, is the problem of computing \(\mathsf{p}_{f_{\mathsf{pack}},\max}(G)\), where, for every graph \(G\), for every \((S,R)\in\mathcal{P}_{2}(V(G))\),
\[f_{\mathsf{pack}}(G,(S,R))=\begin{cases}|S|&\text{if there are $|S|$ disjoint subgraphs $H_{1},\ldots,H_{|S|}$ in $G$ such that}\\ &|H_{i}\cap S|=1\text{ and $H_{i}\in\mathcal{G}$ for $i\in[|S|]$},\\ 0&\text{otherwise}.\end{cases}\]
In the above equation, the pair \((S,R)\) means that \(S\) is the part that interacts with the solution, and \(R\) is the remainder.
### (Induced) Subgraph packing
Let \(H\) be a graph. A _partial copy_ of \(H\) is a boundaried graph \(\mathbf{F}\) such that there is a boundaried graph \(\mathbf{F}^{\prime}\) compatible with \(\mathbf{F}\) with \(\mathbf{F}\oplus\mathbf{F}^{\prime}=H\). Given a graph \(G\) and a set \(X\subseteq V(G)\), we denote by \(\mathcal{C}^{H}_{G,X}\) the set of all partial copies \(\mathbf{F}\) of \(H\) such that \(G[\mathsf{bd}(\mathbf{F})]\) is a subgraph of \(G[X]\).
**Lemma 5.1**.: _Let \(\mathcal{H}\) be a graph class and let \(H\) be a 2-connected graph with \(h\) vertices that does not belong to \(\mathcal{H}\). There is an algorithm that, given a graph \(G\) and a 1-\(\mathcal{H}\)-tree decomposition of \(G\) of width at most \(k\), solves the following problem in time \(n^{\mathcal{O}(h\cdot k)}\):_
1. \(H\)-Induced-Subgraph-Packing_, if_ \(\mathcal{H}\) _is hereditary._
2. \(H\)-Subgraph-Packing_, if_ \(\mathcal{H}\) _is monotone._
3. \(H\)-Scattered-Packing_, if_ \(\mathcal{H}\) _is monotone._
Proof.: All three cases follow from the same argument. When speaking of a "copy" of \(H\) in \(G\), we mean an induced subgraph of \(G\) isomorphic to \(H\) for Case 1, and a subgraph of \(G\) isomorphic to \(H\) for
Case 2 and Case 3. If we speak of an \(H\)-packing, we mean an \(H\)-induced-subgraph packing in Case 1, an \(H\)-subgraph packing in Case 2, and an \(H\)-scattered packing in Case 3.
Let \((T,\alpha,\beta,r)\) be a rooted \(1\)-\(\mathcal{H}\)-tree decomposition of \(G\) of width at most \(k\). Let \(k\cdot H\) be the union of \(k\) disjoint copies of \(H\). For each \(t\in V(T)\), in a bottom-up manner, for each \(\mathbf{F}\in\mathcal{C}^{k\cdot H}_{G,\delta_{t}}\), we compute the maximum integer \(s^{\mathbf{F}}_{t}\), if it exists, such that there exists an \(H\)-packing \(H_{1},\ldots,H_{s^{\mathbf{F}}_{t}}\) in \(G_{t}\) such that \(\mathbf{F}\) is a boundaried (induced in Case 1) subgraph of \(\mathbf{G_{t}}\setminus\bigcup_{i\in[s^{\mathbf{F}}_{t}]}V(H_{i})\). Since \(\delta_{r}=\emptyset\), \(s^{\emptyset}_{r}\) is the optimum for \(H\)-(Induced-)Subgraph-Packing on \(G\).
Let \(t\in V(T)\). We observe that no copy of \(H\) is fully contained in \(G[\beta(t)]\). In Case 1, this follows immediately from the fact that \(H\notin\mathcal{H}\), whereas \(G[\beta(t)]\in\mathcal{H}\), and that \(\mathcal{H}\) is hereditary. In Case 2 and Case 3, we additionally use the assumption that \(\mathcal{H}\) is monotone. We call \(t\)_-inner copy_ any copy of \(H\) in \(G_{t}\) that intersects \(\alpha(t)\). Given that \(H\) is \(2\)-connected and that \(H\notin\mathcal{H}\), for any copy of \(H\) in \(G_{t}\) that is not a \(t\)-inner copy, the definition of a \(1\)-\(\mathcal{H}\)-tree decomposition implies that there is \(t^{\prime}\in\mathsf{ch}_{r}(t)\) such that this copy is contained in \(G_{t^{\prime}}\setminus\alpha(t)\). We call these copies \(t\)_-outer copies_. Given that \(|\alpha(t)|\leq k\), it follows that any \(H\)-subgraph-packing contains at most \(k\)\(t\)-inner copies. For each \(t\)-inner copy \(H^{\prime}\) of \(H\), let \(Y_{H^{\prime}}\) be the set of the children \(t^{\prime}\) of \(t\) such that \(H^{\prime}\) intersects \(G_{t^{\prime}}\setminus\alpha(t)\). Since \(H^{\prime}\) has at most \(h\) vertices, \(|Y_{H^{\prime}}|\leq h\). Thus, for any maximum \(H\)-packing \(\mathcal{S}=\{H_{1},\ldots,H_{r}\}\) in \(G\), the set \(Y_{t}=\{t_{1},\ldots,t_{|Y_{t}|}\}\) of children of \(t\) that intersect a \(t\)-inner copy in \(\mathcal{S}\) has size at most \(h\cdot k\). We guess \(Y_{t}\), and note that there are at most \(n^{h\cdot k}\) choices. Let \(Z_{t}=\mathsf{ch}_{r}(t)\setminus Y_{t}\).
No \(t\)-inner copy of \(\mathcal{S}\) intersects \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\) for \(t^{\prime}\in Z_{t}\). Thus, \(\mathcal{S}\) is maximum when restricted to \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\). However, if there is a vertex \(v_{t^{\prime}}\) such that \(\delta_{t^{\prime}}\setminus\alpha(t)=\{v_{t^{\prime}}\}\), there might still be one \(t\)-inner copy of \(\mathcal{S}\) that contains \(v_{t^{\prime}}\). We want to find a maximum \(H\)-packing on \(G_{t^{\prime}}\setminus\alpha(t)\) that is contained in a maximum \(H\)-packing of \(G\) such that no \(t\)-inner copy intersecting \(Z_{t}\) is in the packing. We compute inductively \(s^{+}_{t^{\prime}}=s^{\mathbf{F}+}_{t^{\prime}}\) and \(s^{-}_{t^{\prime}}=s^{\mathbf{F}-}_{t^{\prime}}\), where \(\mathbf{F}+\) and \(\mathbf{F}-\) are trivial boundaried graphs with boundary \(\delta_{t^{\prime}}\cap\alpha(t)\) and \(\delta_{t^{\prime}}\), respectively. In other words, \(s^{+}_{t^{\prime}}\) (resp. \(s^{-}_{t^{\prime}}\)) is the maximum size of an \(H\)-packing on \(G_{t^{\prime}}\setminus\alpha(t)\) (resp. \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\)). Note that \(s^{-}_{t^{\prime}}\leq s^{+}_{t^{\prime}}\leq s^{-}_{t^{\prime}}+1\).
* If \(s^{-}_{t^{\prime}}=s^{+}_{t^{\prime}}\), then it is optimal to choose a maximum \(H\)-subgraph-packing on \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\). Therefore, we remove \(V(G_{t^{\prime}})\setminus\delta_{t^{\prime}}\) from \(G\) and set \(s_{t^{\prime}}=s^{-}_{t^{\prime}}\).
* Otherwise, \(s^{+}_{t^{\prime}}=s^{-}_{t^{\prime}}+1\). In this case, \(v_{t^{\prime}}\) has to be part of some copy of \(H\) in \(\mathcal{S}\). So, we may assume that \(\mathcal{S}\) consists of a maximum \(H\)-packing in \(G_{t^{\prime}}\setminus\alpha(t^{\prime})\) and a \(t\)-outer copy \(H^{\prime}\) of \(H\) containing \(v_{t^{\prime}}\). Indeed, if a \(t\)-inner copy of \(\mathcal{S}\) intersects \(v_{t^{\prime}}\), then \(\mathcal{S}\) restricted to \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\) has size \(s^{-}_{t^{\prime}}\). If we replace \(H^{\prime}\) and \(\mathcal{S}\) restricted to \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\) by a maximum \(H\)-packing on \(G_{t^{\prime}}\setminus\alpha(t^{\prime})\), then we obtain a packing of the same size. Therefore, we remove \((V(G_{t^{\prime}})\setminus\delta_{t^{\prime}})\cup\{v_{t^{\prime}}\}\) from \(G\) and set \(s_{t^{\prime}}=s^{+}_{t^{\prime}}\). If there are \(z_{1},\ldots,z_{d}\in Z_{t}\) such that \(v_{z_{1}}=\ldots=v_{z_{d}}\) and \(s^{+}_{z_{i}}=s^{-}_{z_{i}}+1\) for \(i\in[d]\), then we choose one of them arbitrarily, say \(z_{1}\), for which we set \(s_{z_{1}}=s^{+}_{z_{1}}\), and we set \(s_{z_{i}}=s^{-}_{z_{i}}\) otherwise, since \(v_{z_{1}}\) is contained in only one copy of an \(H\)-packing.
Let \(\mathbf{F}\in\mathcal{C}^{k\cdot H}_{G,\delta_{t}}\). Let \(\Delta=\delta_{t}\cup\bigcup_{t^{\prime}\in Y_{t}}\delta_{t^{\prime}}\). Now that we have dealt with the children containing no \(t\)-inner copy, we only have a few children left. To reduce the size of \(G_{t}\), we will guess the partial \(t\)-inner copies in each child in \(Y_{t}\). \(\mathcal{F}_{\mathbf{F}}\), defined below, is the set of all such possible guesses. We set \(\mathcal{F}=\{(\mathbf{F}_{\mathbf{t}^{\prime}})_{t^{\prime}\in Y_{t}}\mid\forall t^{\prime}\in Y_{t},\mathbf{F}_{\mathbf{t}^{\prime}}\in\mathcal{C}^{k\cdot H}_{G,\delta_{t^{\prime}}}\}\). For each \(\mathcal{L}=(\mathbf{F}_{\mathbf{t}^{\prime}})_{t^{\prime}\in Y_{t}}\in\mathcal{F}\), we say that \(\mathcal{L}\) is _compatible_ with \(\mathbf{F}\) if there is \(\mathbf{F}_{\mathcal{L}}\in\mathcal{C}^{k\cdot H}_{G,\Delta}\) and a partition \((U,U_{t_{1}},\ldots,U_{t_{|Y_{t}|}})\) of the connected components of \(\mathbf{F}_{\mathcal{L}}\setminus\Delta\) such that
* \(\mathbf{F}_{\mathcal{L}}[V(U)\cup(\delta_{t}\cap V(F))]=\mathbf{F}\) and
* for \(t^{\prime}\in Y_{t}\), \(\mathbf{F}_{\mathcal{L}}[V(U_{t^{\prime}})\cup(\delta_{t^{\prime}}\cap V(F_{t^{\prime}}))]=\mathbf{F}_{\mathbf{t}^{\prime}}\).
Let \(\mathcal{F}_{\mathbf{F}}\) be the set of all \(\mathcal{L}\in\mathcal{F}\) compatible with \(\mathbf{F}\). Let \(\mathcal{L}\in\mathcal{F}_{\mathbf{F}}\). Since we guessed how the \(t\)-inner copies of \(\mathcal{S}\) interact with each \(t^{\prime}\in Y_{t}\), we can now compute the \(t\)-outer copies of \(\mathcal{S}\) in \(G_{t^{\prime}}\). This is done iteratively by computing \(s_{t^{\prime}}^{\mathbf{F}_{\mathbf{t}^{\prime}}}\). Let \(G_{\mathcal{L}}\) be the graph obtained from \(\mathbf{G}_{\mathbf{t}}\) by replacing \(\mathbf{G}_{\mathbf{t}^{\prime}}\) by \(\mathbf{F}_{\mathbf{t}^{\prime}}\) for each \(t^{\prime}\in Y_{t}\) and removing the vertices of \(\Delta\setminus V(F_{\mathcal{L}})\).
Then we guess a set \(W\) of \(|V(F)|\leq h\cdot k\) vertices of \(G_{\mathcal{L}}\) that realize \(\mathbf{F}\) (there are at most \(n^{h\cdot k}\) possible choices) and we remove \(W\) from \(G_{\mathcal{L}}\) to obtain \(G_{\mathcal{L}}^{W}\). Then we compute the size \(s_{W}\) of a maximum \(H\)-packing on \(G_{\mathcal{L}}^{W}\). For each \(i\in[k]\), we check whether we can find \(i\) disjoint copies of \(H\) in \(G_{\mathcal{L}}^{W}\) by brute-force. Let \(s_{\mathcal{L}}=\max_{W}s_{W}+\sum_{t^{\prime}\in Y_{t}}s_{t^{\prime}}^{ \mathbf{F}_{\mathbf{t}^{\prime}}}+\sum_{t^{\prime}\in Z_{t}}s_{t^{\prime}}\). Then it follows that \(s_{t}^{\mathbf{F}}=\max_{\mathcal{L}\in\mathcal{F}_{\mathbf{F}}}s_{\mathcal{L}}\).
It remains to upper-bound the running time. There are at most \(n^{h\cdot k}\) choices for \(Y_{t}\). Let \(h^{\prime}=h\cdot k\) be the number of vertices of \(k\cdot H\) and \(d=|\Delta|\leq k+|Y_{t}|+1=\mathcal{O}(h\cdot k)\). There are at most \(d!\cdot\binom{d}{h^{\prime}}\cdot 2^{h^{\prime}}\cdot|Y_{t}|^{h^{\prime}}=(h\cdot k)^{\mathcal{O}(h\cdot k)}\) choices for \(\mathbf{F}_{\mathcal{L}}\), and thus for \(\mathcal{L}\) and \(\mathbf{F}\). There are at most \(n^{h\cdot k}\) choices for \(W\). Computing \(s_{W}\) takes time at most \((n+h\cdot k)^{\mathcal{O}(h\cdot k)}\), since \(G_{\mathcal{L}}^{W}\) has size at most \(n+h\cdot k\). Hence, the running time of the algorithm is upper-bounded by \((h\cdot k\cdot n)^{\mathcal{O}(h\cdot k)}\), which is \(n^{\mathcal{O}(h\cdot k)}\) since \(k\leq n\) by definition and we may assume that \(h\leq n\).
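For concreteness, the "brute-force" subroutine used above to look for disjoint copies of \(H\) can be sketched as follows. Graphs are plain dictionaries mapping each vertex to its set of neighbours; since the proof only needs packings of size at most \(k\), the search is capped at \(k\) copies, which keeps the running time of order \(n^{\mathcal{O}(h\cdot k)}\), in line with the XP bound of the analysis. This is only an illustrative sketch of the exhaustive search, not an optimized implementation.

```python
from itertools import permutations

def copies_of(H, G):
    """Vertex sets of all subgraphs of G isomorphic to H (subgraph version);
    H and G map each vertex to its set of neighbours."""
    hv = sorted(H, key=str)
    found = set()
    for image in permutations(G, len(hv)):          # injective maps V(H) -> V(G)
        phi = dict(zip(hv, image))
        if all(phi[b] in G[phi[a]] for a in hv for b in H[a]):
            found.add(frozenset(image))
    return found

def max_H_packing(H, G, cap):
    """Largest number (at most `cap`) of pairwise vertex-disjoint copies of H in G,
    found by exhaustive search over the precomputed copies."""
    copies = list(copies_of(H, G))

    def best_from(index, used, budget):
        if budget == 0:
            return 0
        best = 0
        for i in range(index, len(copies)):
            if copies[i] & used:
                continue                            # not vertex-disjoint from the chosen copies
            best = max(best, 1 + best_from(i + 1, used | copies[i], budget - 1))
        return best

    return best_from(0, frozenset(), cap)
```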
Remark. Note that the above algorithm has two guessing phases that require XP-time. For the case of \(H\)-Subgraph-Packing, however, we can get rid of the second guessing stage using color-coding [2], which allows us to find a subgraph with \(\ell\) vertices and treewidth \(t\) inside an \(n\)-vertex graph in time \(2^{\mathcal{O}(\ell)}n^{\mathcal{O}(t)}\). Since the disjoint union of \(k\) copies of \(H\) has treewidth less than \(h=|V(H)|\) and \(h\cdot k\) vertices, the second XP guessing stage can be replaced by an algorithm running in \(2^{\mathcal{O}(h\cdot k)}n^{\mathcal{O}(h)}\) time, which is FPT in our parameterization as \(h\) is a fixed constant.
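To make the color-coding technique invoked in the remark concrete, the sketch below implements its textbook special case: a randomized test for the existence of a simple path on \(\ell\) vertices. Detecting a fixed pattern such as \(k\cdot H\) follows the same scheme, with a dynamic program over a tree decomposition of the pattern in place of the path dynamic program; the number of trials shown is an arbitrary illustrative value rather than a calibrated bound.

```python
import random

def has_colourful_path(G, colour, ell):
    """Is there a simple path on ell vertices whose colours are pairwise distinct?
    Dynamic programming over colour subsets: reachable[v] holds the colour sets of
    colourful paths ending at v."""
    reachable = {v: {frozenset({colour[v]})} for v in G}
    for _ in range(ell - 1):
        nxt = {v: set() for v in G}
        for u in G:
            for used in reachable[u]:
                for v in G[u]:
                    if colour[v] not in used:
                        nxt[v].add(used | {colour[v]})
        reachable = nxt
    return any(reachable[v] for v in G)

def has_path_on(G, ell, trials=300):
    """Colour-coding: a fixed path on ell vertices becomes colourful under a uniform
    random colouring with probability at least ell!/ell**ell, so roughly e**ell
    independent trials suffice for constant success probability.  The test has
    one-sided error: a positive answer is always correct."""
    for _ in range(trials):
        colour = {v: random.randrange(ell) for v in G}
        if has_colourful_path(G, colour, ell):
            return True
    return False
```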
### Odd-minor-packing
The algorithm for \(H\)-Odd-Minor-Packing is very similar to the one for \(H\)-Subgraph-Packing. The main difference is the use of an FPT-algorithm of Kawarabayashi, Reed, and Wollan [24] solving the Parity \(k\)-Disjoint Paths problem defined below. We say that the parity of a path is zero if its length is even, and one otherwise.
Parity \(k\)-Disjoint Paths
**Input**: A graph \(G\), two disjoint sets \(S=\{s_{1},\ldots,s_{k}\},T=\{t_{1},\ldots,t_{k}\}\subseteq V(G)\), and values \(j_{1},\ldots,j_{k}\in\{0,1\}\).
**Objective**: Find, if they exist, \(k\) internally-vertex-disjoint paths \(P_{1},\ldots,P_{k}\) from \(S\) to \(T\) in \(G\) such that \(P_{i}\) has endpoints \(s_{i}\) and \(t_{i}\), and parity \(j_{i}\) for \(i\in[k]\).
**Proposition 5.1** ([24]).: _There is an algorithm running in time \(\mathcal{O}_{k}(m\cdot\alpha(m,n)\cdot n)\) for the Parity \(k\)-Disjoint Paths problem, where \(\alpha\) is the inverse of the Ackermann function._
Given that an odd-minor preserves the cycle parity (Lemma 3.1), we make the following observation.
**Observation 5.1**.: _Any odd-minor of a bipartite graph is bipartite._
Given an odd \(H\)-expansion \(\eta\) for some graph \(H\) in a graph \(G\), the _branch vertices_ of \(\eta\) are the vertices of \(\eta\) of degree at least three and the ones incident to an edge not contained in a node of \(\eta\).
**Lemma 5.2**.: _Let \(H\) and \(G\) be two graphs such that \(H\) is an odd-minor of \(G\) with \(h\) vertices. Any odd \(H\)-expansion in \(G\) has at most \(h\cdot(2h-3)\) branch vertices._
Proof.: Let \(\eta\) be an odd \(H\)-expansion in \(G\). At most \(h-1\) vertices of each node of \(\eta\) are adjacent to vertices of another node. These \(h-1\) vertices are the leaves of the tree induced by the vertices of the node in \(G\). By an easy induction, the number of vertices of degree at least three in a tree is at most the number of leaves minus one. Therefore, there are at most \(h-1+h-2=2h-3\) branch vertices in each node of \(\eta\). Since \(\eta\) has \(h\) nodes, the result follows.
Let \(H\) be a graph. A _partial odd-model_ of \(H\) is an odd-minor-minimal boundaried graph \(\mathbf{F}\) such that there is a boundaried graph \(\mathbf{F}^{\prime}\) compatible with \(\mathbf{F}\) and such that \(H\) is an odd-minor of \(\mathbf{F}\oplus\mathbf{F}^{\prime}\). Given a graph \(G\) and a set \(X\subseteq V(G)\), we denote by \(\mathcal{M}_{G,X}^{H}\) the set of all partial odd-models \(\mathbf{F}\) of \(H\) such that \(G[\mathsf{bd}(\mathbf{F})]\) is a subgraph of \(G[X]\).
**Lemma 5.3**.: _Let \(H\) be a 2-connected non-bipartite graph with \(h\) vertices. There is an algorithm that, given a graph \(G\) and a bipartite tree decomposition of \(G\) of width at most \(k\), solves \(H\)-Odd-Minor-Packing in time \(n^{\mathcal{O}(h^{2}\cdot k)}\)._
Proof.: Let \((T,\alpha,\beta,r)\) be a rooted bipartite tree decomposition of \(G\) of width at most \(k\). Let \(k\cdot H\) be the union of \(k\) disjoint copies of \(H\). For each \(t\in V(T)\), in a bottom-up manner, for each \(\mathbf{F}\in\mathcal{M}_{G,\delta_{t}}^{k\cdot H}\), we compute the maximum integer \(s_{t}^{\mathbf{F}}\), if it exists, such that there exists an \(H\)-odd-minor-packing \(H_{1},\ldots,H_{s_{t}^{\mathbf{F}}}\) in \(G_{t}\) such that \(\mathbf{F}\) is a boundaried odd-minor of \(\mathbf{G_{t}}\setminus\bigcup_{i\in[s_{t}^{\mathbf{F}}]}V(H_{i})\). Since \(\delta_{r}=\emptyset\), \(s_{r}^{\emptyset}\) is the optimum for \(H\)-Odd-Minor-Packing on \(G\).
Let \(t\in V(T)\). Given that \(H\) is non-bipartite, according to Observation 5.1, no odd \(H\)-expansion is contained in \(G[\beta(t)]\). We call _\(t\)-inner odd-model_ any odd \(H\)-expansion in \(G_{t}\) that intersects \(\alpha(t)\). Given that \(H\) is 2-connected and non-bipartite, for any odd \(H\)-expansion in \(G_{t}\) that is not a \(t\)-inner odd-model, there is \(t^{\prime}\in\mathsf{ch}_{r}(t)\) such that this odd \(H\)-expansion is contained in \(G_{t^{\prime}}\setminus\alpha(t)\). We call these odd \(H\)-expansions _\(t\)-outer odd-models_. Given that \(|\alpha(t)|\leq k\), it follows that at most \(k\) \(t\)-inner odd-models can be packed at once. Since a \(t\)-inner odd-model of \(H\) has at most \(h^{\prime}=h\cdot(2h-3)\) branch vertices by Lemma 5.2, for any maximum \(H\)-odd-minor-packing \(\mathcal{S}=\{H_{1},\ldots,H_{r}\}\) in \(G\), the set \(Y_{t}=\{t_{1},\ldots,t_{|Y_{t}|}\}\) of children of \(t\) that intersect a \(t\)-inner odd-model in \(\mathcal{S}\) has size at most \(h^{\prime}\cdot k+k\). The additive term \(k\) refers to the fact that each path of the packing between branch vertices may go in and out of a child, but has to intersect at least one vertex in \(\alpha(t)\) when crossing, so at most \(|\alpha(t)|\) children can be intersected by a \(t\)-inner odd-model this way. We guess \(Y_{t}\), and note that there are at most \(n^{\mathcal{O}(h^{2}\cdot k)}\) choices. Let \(Z_{t}=\mathsf{ch}_{r}(t)\setminus Y_{t}\).
We proceed similarly to the proof of Lemma 5.1. We compute inductively \(s_{t^{\prime}}^{+}=s_{t^{\prime}}^{\mathbf{F}+}\) and \(s_{t^{\prime}}^{-}=s_{t^{\prime}}^{\mathbf{F}-}\), where \(\mathbf{F}+\) and \(\mathbf{F}-\) are trivial boundaried graphs with boundary \(\delta_{t^{\prime}}\cap\alpha(t)\) and \(\delta_{t^{\prime}}\), respectively. In other words, \(s_{t^{\prime}}^{+}\) (resp. \(s_{t^{\prime}}^{-}\)) is the maximum size of an \(H\)-odd-minor-packing on \(G_{t^{\prime}}\setminus\alpha(t)\) (resp. \(G_{t^{\prime}}\setminus\delta_{t^{\prime}}\)). Thus, we observe that \(s_{t^{\prime}}^{-}\leq s_{t^{\prime}}^{+}\leq s_{t^{\prime}}^{-}+1\).
* If \(s_{t^{\prime}}^{-}=s_{t^{\prime}}^{+}\), then we remove \(V(G_{t^{\prime}})\setminus\delta_{t^{\prime}}\) from \(G\) and set \(s_{t^{\prime}}=s_{t^{\prime}}^{-}\).
* Otherwise, \(s_{t^{\prime}}^{+}=s_{t^{\prime}}^{-}+1\). Then, we remove \((V(G_{t^{\prime}})\setminus\delta_{t^{\prime}})\cup\{v_{t^{\prime}}\}\) from \(G\) and set \(s_{t^{\prime}}=s_{t^{\prime}}^{+}\). If there are \(z_{1},\ldots,z_{d}\in Z_{t}\) such that \(v_{z_{1}}=\ldots=v_{z_{d}}\) and \(s_{z_{i}}^{+}=s_{z_{i}}^{-}+1\) for \(i\in[d]\), then we choose one of them arbitrarily, say \(z_{1}\), for which we set \(s_{z_{1}}=s_{z_{1}}^{+}\), and we set \(s_{z_{i}}=s_{z_{i}}^{-}\) otherwise, since \(v_{z_{1}}\) is contained in only one odd \(H\)-expansion of an \(H\)-odd-minor-packing.
Let \(\mathbf{F}\in\mathcal{M}_{G,\delta_{t}}^{k\cdot H}\). Let \(\Delta=\delta_{t}\cup\bigcup_{t^{\prime}\in Y_{t}}\delta_{t^{\prime}}\). Now that we have dealt with the children containing no \(t\)-inner odd-model, we only have a few children left. To reduce the size of \(G_{t}\), we will guess the partial \(t\)-inner odd-models in each child in \(Y_{t}\). \(\mathcal{F}_{\mathbf{F}}\), defined as in the proof of Lemma 5.1 but replacing partial copies with partial odd-models, is the set of all possible guesses.
Let \(\mathcal{L}\in\mathcal{F}_{\mathbf{F}}\). Since we guessed how the \(t\)-inner odd-models of \(\mathcal{S}\) interact with each \(t^{\prime}\in Y_{t}\), we can now compute the \(t\)-outer odd-models of \(\mathcal{S}\) in \(G_{t^{\prime}}\). This is done iteratively by computing \(s_{t^{\prime}}^{\mathbf{F}_{\mathbf{t^{\prime}}}}\). Let \(G_{\mathcal{L}}\) be the graph obtained from \(\mathbf{G}_{\mathbf{t}}\) by replacing \(\mathbf{G}_{\mathbf{t^{\prime}}}\) by \(\mathbf{F}_{\mathbf{t^{\prime}}}\) for each \(t^{\prime}\in Y_{t}\) and removing the vertices of \(\Delta\setminus V(F_{\mathcal{L}})\).
Then we guess a set \(W\) of \(|V(F)|\leq h^{\prime}\cdot k\) vertices of \(G_{\mathcal{L}}\) that would be the branch vertices of the \(t\)-inner odd-models, and of \(\mathbf{F}\). There are at most \(n^{\mathcal{O}(h^{2}\cdot k)}\) possible such choices. Let \(s_{W}\) be the size of the \(H\)-odd-minor-packing on \(G_{\mathcal{L}}\) corresponding to this choice of \(W\), if it exists. We can check its existence using Proposition5.1. To do so, we guess which vertices in \(W\) are joined by a path, and what is the parity of the path, so that we obtain \(s_{W}\) disjoint odd \(H\)-expansions in \(G_{\mathcal{L}}\). Since \(G_{\mathcal{L}}^{W}\) has size at most \(n+h^{\prime}\cdot k\), this takes time \((n+h^{\prime}\cdot k)^{\mathcal{O}(1)}\). Let \(s_{\mathcal{L}}=\max_{W}s_{W}+\sum_{t^{\prime}\in Y_{t}}s_{t^{\prime}}^{ \mathbf{F}_{\mathbf{t^{\prime}}}}+\sum_{t^{\prime}\in Z_{t}}s_{t^{\prime}}\). Then it follows that \(s_{t}^{\mathbf{F}}=\max_{\mathcal{L}\in\mathcal{F}_{\mathbf{F}}}s_{\mathcal{ L}}\).
It remains to upper-bound the running time. There are \(n^{\mathcal{O}(h^{2}\cdot k)}\) choices for \(Y_{t}\). There are \((h^{2}\cdot k)^{\mathcal{O}(h^{2}\cdot k)}\) choices for \(\mathbf{F}_{\mathcal{L}}\), and thus for \(\mathcal{L}\) and \(\mathbf{F}\). There are at most \(n^{\mathcal{O}(h^{2}\cdot k)}\) choices for \(W\). Computing \(s_{W}\) takes time \((n+h^{2}\cdot k)^{5}\). Hence, the running time of the algorithm is \((h^{2}\cdot k\cdot n)^{\mathcal{O}(h^{2}\cdot k)}\), which is \(n^{\mathcal{O}(h^{2}\cdot k)}\) as \(k\leq n\) and we may assume that \(h\leq n\).
## 6 \(\mathsf{NP}\)-completeness on graphs of bounded \(\mathsf{btw}\)
In this section we present our hardness results. For any graph \(G\), it holds that \(\mathsf{btw}(G)\leq\mathsf{oct}(G)\). Thus, for a problem \(\Pi\) to be \(\mathsf{FPT}\) (or even \(\mathsf{XP}\)) parameterized by \(\mathsf{btw}\), \(\Pi\) needs in particular to be efficiently solvable on graphs of bounded \(\mathsf{oct}\), and first and foremost on bipartite graphs. Unfortunately, many problems are \(\mathsf{NP}\)-complete on bipartite graphs (or on graphs of small \(\mathsf{oct}\)), and hence \(\mathsf{para}\)-\(\mathsf{NP}\)-complete parameterized by \(\mathsf{btw}\). In this section we provide a non-exhaustive list of such problems. In fact, there also exist problems that are trivial or polynomially solvable on bipartite graphs, but are \(\mathsf{para}\)-\(\mathsf{NP}\)-complete parameterized by \(\mathsf{btw}\), such as the \(3\)-Coloring problem discussed in Subsection 6.1.
### Coloring
The \(3\)-Coloring problem is defined as follows.
\(3\)-Coloring
**Input**: A graph \(G\).
**Question**: Is \(G\) \(3\)-colorable?
Bipartite graphs are \(2\)-colorable, so we could hope for positive results about \(3\)-Coloring on graphs of bounded \(\mathsf{oct}\), or even bounded \(\mathsf{btw}\). In fact, we have the following result.
**Lemma 6.1**.: _If a graph has bipartite treewidth at most \(k\), then it is \((k+2)\)-colorable._
Proof.: Let \(G\) be a graph. Let \(\mathcal{T}=(T,\alpha,\beta)\) be a bipartite tree decomposition of \(G\) of width at most \(k\). We proceed by induction on \(|V(T)|\). For the base case, suppose that \(T\) has a unique node \(t\).
\(G[\beta(t)]\) is bipartite, so it is \(2\)-colorable. Thus, if we color each vertex in \(\alpha(t)\) with a unique new color, given that \(|\alpha(t)|\leq k\), we can extend the \(2\)-coloring of \(G[\beta(t)]\) to a \((k+2)\)-coloring of \(G\). Now suppose that \(T\) has \(\ell\geq 2\) nodes.
Let \(t\) be a leaf of \(T\). Let \(H=G[\bigcup_{t^{\prime}\in V(T)\setminus\{t\}}(\alpha\cup\beta)(t^{\prime})]\). \((T\setminus\{t\},\alpha^{\prime},\beta^{\prime})\), where \(\alpha^{\prime}\) and \(\beta^{\prime}\) are the restrictions of \(\alpha\) and \(\beta\) to \(T\setminus\{t\}\), respectively, is a bipartite tree decomposition of \(H\) of width at most \(k\). By induction, \(H\) admits a \((k+2)\)-coloring \(c\). Let \(\delta\) be the adhesion of \(t\) and its neighbor. Let \(a=|\alpha(t)\cap\delta|\leq k\). Given that \(|\delta\cap\beta(t)|\leq 1\), it follows that \(|\delta|\leq a+1\). We extend \(c\) to a coloring \(c^{\prime}\) of \(G[V(H)\cup\alpha(t)]\) by coloring each one of the at most \(k-a\) vertices in \(\alpha(t)\setminus\delta\) with a unique color that \(c\) does not use in \(\delta\). Given that no vertex in \(\alpha(t)\setminus\delta\) is adjacent to a vertex in \(H\setminus\delta\), this coloring is proper.
If \(a+1\) colors are used in \(\delta\), then it means that there is a (unique) vertex \(v\in\delta\cap\beta(t)\). In this case, at node \(t\), there is at least \((k+2)-(a+1)-(k-a)=1\) unused color. Otherwise, at most \(a\) colors are used in \(\delta\), and thus, there are at least \((k+2)-a-(k-a)=2\) unused colors at node \(t\). Let \((A,B)\) be a partition witnessing the bipartiteness of \(G[\beta(t)]\) such that \(v\in A\), if it exists. We color the vertices of \(A\) with the color of \(v\), if it exists, or one of the two unused colors, and the vertices of \(B\) with the last unused color. Given that the vertices in \(A\) (resp. \(B\)) are pairwise non-adjacent and that the vertices in \(\beta(t)\setminus\{v\}\) are not adjacent to vertices in \(H\setminus\delta\), the coloring remains proper and uses at most \(k+2\) colors.
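The single-node base case of this induction amounts to colouring a graph that becomes bipartite after deleting a small set \(A\) (here \(A=\alpha(t)\)) with \(|A|+2\) colours. A minimal sketch, assuming the graph is given as a dictionary of neighbour sets and that \(G-A\) is indeed bipartite, is the following.

```python
from collections import deque

def colour_with_small_oct(G, A):
    """(|A| + 2)-colouring of G, assuming G - A is bipartite (A is an odd cycle
    transversal).  G maps each vertex to its set of neighbours."""
    colouring = {v: 2 + i for i, v in enumerate(A)}     # one private colour per vertex of A
    for start in G:
        if start in colouring:
            continue
        colouring[start] = 0                            # BFS 2-colouring of G - A
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in G[u]:
                if v in colouring:                      # already coloured (in A or visited)
                    continue
                colouring[v] = 1 - colouring[u]
                queue.append(v)
    return colouring
```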
The result of Lemma 6.1 is tight since any even cycle has bipartite treewidth zero and is \(2\)-colorable, and any odd cycle has bipartite treewidth one and is \(3\)-colorable. Unfortunately, despite Lemma 6.1, the problem is para-NP-complete parameterized by \(\mathsf{oct}\).
**Lemma 6.2**.: \(3\)-Coloring _is_ NP_-complete even for graphs of \(\mathsf{oct}\) at most three._
Proof.: We present a reduction from the List-\(3\)-Coloring problem that is defined as follows.
List-\(3\)-Coloring
**Input**: A graph \(G\) and a set \(L(v)\) of colors in [3] for each \(v\in V(G)\).
**Question**: Is there a proper \(3\)-coloring \(c\) of \(G\) such that \(c(v)\in L(v)\) for each \(v\in V(G)\)?
According to [6], the List-\(3\)-Coloring problem is NP-complete even when restricted to planar \(3\)-regular bipartite graphs. Let \((G,L)\) be such an instance of List-\(3\)-Coloring. Let \(G^{+}\) be the graph obtained from \(G\) by adding three vertices \(v_{1},v_{2}\), and \(v_{3}\) that are pairwise adjacent and such that for each \(v\in V(G)\), \(v\) is adjacent to \(v_{i}\) for \(i\in[3]\setminus L(v)\). It is easy to see that \(G^{+}\) admits a proper \(3\)-coloring \(c\) if and only if \((G,L)\) admits a proper list coloring \(c^{\prime}\), and, in this case, necessarily, \(c_{|V(G)}=c^{\prime}\). Given that List-\(3\)-Coloring is NP-complete on bipartite graphs and that \(\mathsf{oct}(G^{+})\leq 3\), \(3\)-Coloring is NP-complete even for graphs of \(\mathsf{oct}\) at most three.
### Hardness of covering problems
Vertex Deletion to \(\mathcal{G}\) is known to be NP-complete on general graphs for every non-trivial graph class \(\mathcal{G}\) [27]. However, for some graph classes \(\mathcal{G}\), this might change when we restrict the input graph to be bipartite. Yannakakis [36] characterizes hereditary graph classes \(\mathcal{G}\) for which Vertex Deletion to \(\mathcal{G}\) on bipartite graphs is polynomial-time solvable and those for which Vertex Deletion to \(\mathcal{G}\) remains NP-complete.
A problem \(\Pi\) is said to be _trivial_ on a graph class \(\mathcal{G}\) if the solution to \(\Pi\) is the same for every graph \(G\in\mathcal{G}\). Otherwise, \(\Pi\) is called _nontrivial_ on \(\mathcal{G}\). Given a graph \(G\), let \(\nu(G)=|\{N_{G}(v)\mid v\in V(G)\}|\). Given a graph class \(\mathcal{G}\), let \(\nu(\mathcal{G})=\sup\{\nu(G)\mid G\in\mathcal{G}\}\).
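For concreteness, \(\nu(G)\) simply counts distinct neighbourhoods and can be computed directly (with the graph given, as before, as a dictionary of neighbour sets):

```python
def nu(G):
    """nu(G): the number of distinct neighbourhoods N_G(v) over all vertices v."""
    return len({frozenset(G[v]) for v in G})
```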
**Proposition 6.1** ([36]).: _Let \(\mathcal{G}\) be a hereditary graph class such that Vertex Deletion to \(\mathcal{G}\) is nontrivial on bipartite graphs._
* _If_ \(\nu(\mathcal{G})=+\infty\)_, then_ Vertex Deletion _to_ \(\mathcal{G}\) _is_ NP_-complete on bipartite graphs._
* _If_ \(\nu(\mathcal{G})<+\infty\)_, then_ Vertex Deletion _to_ \(\mathcal{G}\) _is polynomial-time solvable on bipartite graphs._
Hence, here is a non-exhaustive list of problems that are NP-complete on bipartite graphs: Vertex Deletion to \(\mathcal{G}\) where \(\mathcal{G}\) is a minor-closed graph class that contains edges (and hence Feedback Vertex Set, Vertex Planarization, \(H\)-Minor-Cover for \(H\) containing \(P_{3}\) as a subgraph), \(H\)-Subgraph-Cover, \(H\)-Induced-Subgraph-Cover, and \(H\)-Odd-Minor-Cover for a bipartite graph \(H\) containing \(P_{3}\) as a (necessarily induced) subgraph, Vertex Deletion to graphs of degree at most \(p\) for \(p\geq 1\), and Vertex Deletion to graphs of girth at least \(p\) for \(p\geq 6\) (note that the smallest non-trivial lower bound on the girth of a bipartite graph is six, or equivalently five, since bipartite graphs have no cycles of length five).
As a consequence of the above results, all the above problems, when parameterized by bipartite treewidth, are para-NP-complete.
### Hardness of packing problems
Kirkpatrick and Hell [26, Theorem 4.2] proved that if \(H\) is a graph that contains a \(P_{3}\) (the path on three vertices) as a subgraph, then the problem of partitioning the vertex set of an input graph \(G\) into subgraphs isomorphic to \(H\) is NP-complete. This immediately implies that the \(H\)-Subgraph-Packing problem is NP-complete if \(H\) contains a \(P_{3}\). In the next lemma we observe that the reduction of Kirkpatrick and Hell [26] can be carefully analyzed so that the same result also applies to bipartite input graphs, and to the induced packing version as well.
**Lemma 6.3**.: _Let \(H\) be a bipartite graph containing \(P_{3}\) as a subgraph. Then \(H\)-Subgraph-Packing and \(H\)-Induced-Subgraph-Packing are_ NP_-complete on bipartite graphs._
Proof.: We proceed to discuss the proof for the \(H\)-Subgraph-Packing problem, and finally we observe that the same proof applies to the induced subgraph version. The reduction of Kirkpatrick and Hell [26, Lemma 4.1] is from the \(h\)-Dimensional Matching problem, where \(h=|V(H)|\). In this problem, we are given a set of \(h\)-tuples \(\mathcal{T}\subseteq[p]^{h}\), where \(p\) is any positive integer, and the goal is to decide whether there exists a subset \(\mathcal{S}\subseteq\mathcal{T}\) with \(|\mathcal{S}|=p\) such that no two elements in \(S\) agree in any coordinate. This problem is well-known to be NP-complete for any \(h\geq 3\)[22], which is guaranteed by the hypothesis of the lemma. We need to introduce some notation. Given a vertex \(v\in H\), we denote by \(H_{v}\) the graph obtained from \(H\) by adding a new vertex \(v^{\prime}\) adjacent to \(N_{H}(v)\) (that is, \(v^{\prime}\) becomes a false twin of \(v\)). Vertices \(v,v^{\prime}\) are called the _connector vertices of \(H_{v}\)_ and the newly introduced vertex \(v^{\prime}\) is called the _twin vertex_ of \(H_{v}\). Given a vertex \(v\in H\), we denote by \(H\langle v\rangle\) the graph obtained from \(H\) by adding, for every vertex \(u\in V(H)\), a distinct copy of \(H_{v}\) and identifying vertex \(v\) of \(H_{v}\) with \(u\). The \(h\) twin vertices of the copies of \(H_{v}\), which were not identified with any vertex, are called the _connector vertices of \(H\langle v\rangle\)_. In this construction, we call the initial copy of \(H\) the _base copy_ of \(H\) in \(H\langle v\rangle\).
The reduction of Kirkpatrick and Hell [26, Lemma 4.1] proceeds as follows. Let \(v^{\star}\) be any vertex of \(H\) that is not a cut-vertex and belongs to a biconnected component of \(H\) containing at most one cut-vertex, which is always guaranteed to exist. Given an instance \(\mathcal{T}\subseteq[p]^{h}\) of \(h\)-Dimensional Matching, we construct a graph \(G\) as follows. We start with an independent set of size \(hp\) whose elements are labeled with pairs \((i,j)\) with \(i\in[h]\) and \(j\in[p]\). For each \(h\)-tuple \((t_{1},\ldots,t_{h})\in\mathcal{T}\) we introduce a distinct copy of \(H\langle v^{\star}\rangle\) and we identify its \(h\) connector vertices arbitrarily with the \(h\) vertices labeled \((1,t_{1}),\ldots,(h,t_{h})\). It is proved in [26] that \(V(G)\) can be partitioned into copies of \(H\) if and only if \(\mathcal{T}\) is a \(\mathsf{yes}\)-instance of \(h\)-Dimensional Matching.
All we need to show is that, if \(H\) is bipartite, then the constructed graph \(G\) is bipartite as well. For this, we proceed to define a bipartition function \(b:V(G)\to\{0,1\}\) such that for every edge \(uv\in E(G)\) it holds that \(b(u)\neq b(v)\). Since \(H\) is bipartite, such a function exists for \(H\); we denote it by \(b_{H}\). We start by defining \(b\) restricted to each of the copies \(H\langle v^{\star}\rangle\) introduced in the construction (all copies are labeled equally). Recall that each such a copy contains a copy of \(H_{v^{\star}}\) for every vertex \(v\in V(H)\). We proceed to define \(b\) so that the twin vertex of every copy of \(H_{v^{\star}}\) lies on the same side as the vertex of \(H\) to which this copy has been attached. Formally, for every vertex \(u\) in the base copy of \(H\) in \(H\langle v^{\star}\rangle\), we define \(b(u)=b_{H}(u)\). Consider any vertex \(u\) in the base copy of \(H\), and let \(v\) be any vertex in the copy of \(H_{v^{\star}}\) that has been attached to \(u\), different from the twin vertex of that copy. If \(b_{H}(u)=b_{H}(v^{\star})\), we let \(b(v)=b_{H}(v)\), otherwise we let \(b(v)=1-b_{H}(v)\). That is, in the latter case, when \(u\) and \(v^{\star}\) do not agree in \(H\), we swap the bipartition of that copy, except from its twin vertex. Finally, let \(w\) be the twin vertex of the copy of \(H_{v^{\star}}\) attached to \(u\). We define \(b(w)=b(u)\), and note that this is always possible since the two connector vertices of each copy of \(H_{v^{\star}}\) are false twins. It can be easily verified that, for each copy of \(H\langle v^{\star}\rangle\), the defined function \(b\) induces a proper bipartition. To conclude the proof, we need to guarantee that, when identifying the connector vertices of distinct copies of \(H\langle v^{\star}\rangle\) according to the \(h\)-tuples in \(\mathcal{T}\), we do not identify two vertices \(u\) and \(v\) with \(b(u)\neq b(v)\). For this, we do the following trick. Let us consider any fixed ordering of \(V(H)\). In the construction of \(G\) described above, for each \(h\)-tuple \((t_{1},\ldots,t_{h})\in\mathcal{T}\) and its associated copy of \(H\langle v^{\star}\rangle\), instead of identifying its \(h\) connector vertices arbitrarily with the \(h\) vertices in the independent set labeled \((1,t_{1}),\ldots,(h,t_{h})\), we do it as follows. Each connector vertex \(w\) in \(H\langle v^{\star}\rangle\) is naturally associated with a vertex \(u\) of the base copy of \(H\) to which its copy of \(H_{v^{\star}}\) has been attached. Suppose that vertex \(u\) is the \(i\)-th vertex in the considered ordering of \(V(H)\). We then identify the connector vertex \(w\) with the vertex labeled \((i,t_{i})\) in the independent set. This way, there is no conflict between the function \(b\) defined for different copies of \(H\langle v^{\star}\rangle\), and it indeed holds that \(b(u)\neq b(v)\) whenever \(uv\in E(G)\), concluding the proof.
Finally, in order to prove the same result for \(H\)-Induced-Subgraph-Packing, we use exactly the same construction as above, and it suffices to observe that, in a \(\mathsf{yes}\)-instance, each of the copies of \(H\) in \(G\) is induced. Indeed, in the proof of [26, Lemma 4.1], the idea of the construction is the following. If a tuple \((t_{1},\ldots,t_{h})\in\mathcal{T}\) is taken into the solution, then in its associated copy of \(H\langle v^{\star}\rangle\), the copies of \(H\) chosen for the packing are, on the one hand, the base copy of \(H\) and, on the other hand, the copy of \(H\) in each \(H_{v^{\star}}\) containing the connector vertex. For a tuple \((t_{1},\ldots,t_{h})\in\mathcal{T}\) that is not taken into the solution, in its associated copy of \(H\langle v^{\star}\rangle\), the copies of \(H\) chosen for the packing are just the copy of \(H\) in each \(H_{v^{\star}}\) not containing the connector vertex. In both cases, each of the chosen copies of \(H\) is an induced subgraph of \(G\), and the lemma follows.
Lemma 6.3 is tight for connected graphs \(H\). Indeed, if \(H\) is connected and \(P_{3}\) is not a subgraph of \(H\), then \(H\) is either a vertex or an edge. In the former case, the problem is trivial, and in the latter case it is Maximum Matching, which is polynomial-time solvable on general graphs [29].
In the next lemma we prove that \(H\)-Minor-Packing is also para-NP-complete parameterized by btw when \(H\) is 2-connected.
**Lemma 6.4**.: _Let \(H\) be a 2-connected graph with at least three vertices. Then \(H\)-Minor-Packing is_ NP_-complete on bipartite graphs._
Proof.: We present a reduction from the \(P_{3}\)-Subgraph-Packing problem restricted to bipartite graphs, which was proved to be NP-complete by Monnot and Toulouse [30] (note that this result also follows from Lemma 6.3). Let \(H^{\bullet}\) be the graph obtained from \(H\) by subdividing every edge of \(H\) once, and note that \(H^{\bullet}\) is 2-connected and bipartite. Let \(p\) be a 3-path of \(H^{\bullet}\) formed from an edge \(ac\) of \(H\) that is subdivided in \(H^{\bullet}\) into \(a-b-c\). Let \(H^{\prime}=H^{\bullet}\setminus\{a,b,c\}\).
Let \(G\) be a bipartite graph as an instance of \(P_{3}\)-Subgraph-Packing. We build a bipartite graph \(G^{\prime}\) as an instance of \(H\)-Minor-Packing as follows. Let \(p_{1},\ldots,p_{d}\) be the 3-paths of \(G\), which can be clearly generated in polynomial time. \(G^{\prime}\) is obtained from the disjoint union of \(G\) and \(d\) graphs \(H_{1},\ldots,H_{d}\) isomorphic to \(H^{\prime}\) by adding the appropriate edges between \(p_{i}\) and \(H_{i}\), for \(i\in[d]\), such that each pair creates a graph isomorphic to \(H^{\bullet}\). We say that we _attach_ \(H_{i}\) to \(p_{i}\). Note that \(G^{\prime}\) is indeed bipartite.
Given a \(P_{3}\)-subgraph packing \(\{p_{i}\mid i\in I\subseteq[d]\}\) in \(G\), \(\{G^{\prime}[V(p_{i})\cup V(H_{i})]\mid i\in I\}\) is an \(H^{\bullet}\)-subgraph packing in \(G^{\prime}\), and hence an \(H\)-minor packing in \(G^{\prime}\) of the same size.
On the other hand, given an \(H\)-minor packing in \(G^{\prime}\), we claim that each model of \(H\) in \(G^{\prime}\) contains a 3-path of \(G\). We suppose toward a contradiction that there is a model \(\tilde{H}\) of \(H\) in \(G^{\prime}\) that does not contain a 3-path of \(G\). Since \(H\) is 2-connected, we can assume that \(\tilde{H}\) is 2-connected. Indeed, if \(\tilde{H}\) is not 2-connected, then one of its blocks is also a model of \(H\), so we can use this block for the packing. Hence, the intersection of \(\tilde{H}\) with \(G\) must be 2-connected, so it is either empty, or it is an edge \(e\in E(G)\). The former case is not possible by the fact that \(H\) is not a minor of \(H^{\prime}\). Indeed, note that \(H\) and \(H^{\bullet}\) have the same number of cycles. Hence, since \(H\) (and \(H^{\bullet}\)) is 2-connected, \(H^{\prime}\) has strictly fewer cycles than \(H\). And since the number of cycles cannot increase by taking minors, \(H\) is not a minor of \(H^{\prime}\). (Note that this claim is not true anymore if we drop the 2-connectivity assumption: for instance, take \(H=P_{4}\), and remove an extremal \(P_{3}\) from \(H^{\bullet}\). Then \(H^{\prime}=H\).) Therefore, the intersection of \(\tilde{H}\) with \(G\) consists of an edge \(e\). Since \(|V(H)|\geq 3\), it follows that \(\tilde{H}\) intersects at least one \(H_{i}\). By the 2-connectivity of \(H\), \(H_{i}\) is attached to both endpoints of \(e\). However, we also know that \(H_{i}\) is attached to the 3-path \(p_{i}\) of \(G\), with the same endpoints as \(e\). Hence \(p_{i}\) and \(e\) induce a triangle in \(G\), which contradicts the bipartiteness of \(G\). Therefore, an \(H\)-minor packing in \(G^{\prime}\) gives rise to a \(P_{3}\)-subgraph packing in \(G\) of the same size. Thus, for any \(k\in\mathbb{N}\), \((G,k)\) is a yes-instance for \(P_{3}\)-Subgraph-Packing if and only if \((G^{\prime},k)\) is a yes-instance for \(H\)-Minor-Packing, and the lemma follows.
As a corollary of Lemma 6.4, \(H\)-Odd-Minor-Packing is para-NP-complete parameterized by btw when \(H\) is bipartite and 2-connected.
**Lemma 6.5**.: _Let \(H\) be a 2-connected bipartite graph with at least three vertices. Then \(H\)-Odd-Minor-Packing is_ NP_-complete on bipartite graphs._
Proof.: Given that odd-minors preserve cycle parity (Lemma 3.1), when \(H\) is bipartite, \(H\)-Odd-Minor-Packing and \(H\)-Minor-Packing are the same problem on bipartite graphs.
As stated in 4.5, (Weighted) Independent Set is solvable in FPT-time when parameterized by btw. But can we go beyond? Unfortunately, \(d\)-Scattered Set, that is, the
problem of asking for a set of vertices of size at least \(k\) that are pairwise at distance at least \(d\), is NP-complete for \(d\geq 3\) even for bipartite planar graphs of maximum degree three [11] (when \(d=2\), this corresponds to Independent Set), so we cannot hope to go further, as \(d\)-Scattered Set, parameterized by btw, is para-NP-complete when \(d\geq 3\).
Similarly, Induced Matching, that is, the problem of finding an induced matching with \(k\) edges, is NP-complete on bipartite graphs [4]. In the next lemma, we reduce from Induced Matching to prove that \(H\)-Scattered-Packing is para-NP-complete parameterized by btw when \(H\) is 2-connected and bipartite.
**Lemma 6.6**.: _Let \(H\) be a 2-connected bipartite graph with at least one edge. Then \(H\)-Scattered-Packing is NP-complete on bipartite graphs._
Proof.: We reduce from Induced Matching on bipartite graphs, which is NP-complete [4], as follows. Let \(uv\in E(H)\) and let \(H^{\prime}=H\setminus\{u,v\}\). Let \(G\) be a bipartite graph as an instance of Induced Matching. We build \(G^{\prime}\) as an instance of \(H\)-Scattered-Packing as follows. Let \(e_{1},\ldots,e_{m}\) be the edges of \(G\). \(G^{\prime}\) is obtained from the disjoint union of \(G\) and \(m\) graphs \(H_{1},\ldots,H_{m}\) isomorphic to \(H^{\prime}\) by adding the appropriate edges between \(e_{i}\) and \(H_{i}\) for \(i\in[m]\) to create a graph isomorphic to \(H\). Note that \(G^{\prime}\) is bipartite.
Given an induced matching \(\{e_{i}\mid i\in I\subseteq[m]\}\) in \(G\), \(\{G^{\prime}[e_{i}\cup V(H_{i})]\mid i\in I\}\) is an \(H\)-scattered packing of the same size in \(G^{\prime}\). Conversely, given an \(H\)-scattered packing in \(G^{\prime}\), since \(|V(H^{\prime})|=|V(H)|-2\) and \(H\) is 2-connected, any occurrence of \(H\) in the packing intersects at least one edge of \(G\). Hence, this gives rise to an induced matching of the same size in \(G\). Thus, for any \(k\in\mathbb{N}\), \((G,k)\) is a yes-instance for Induced Matching if and only if \((G^{\prime},k)\) is a yes-instance for \(H\)-Scattered-Packing.
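The construction of \(G^{\prime}\) used in this proof can be sketched as follows. \(H\) is given as a dictionary of neighbour sets together with the chosen edge \(uv\), and the fresh labels \((i,w)\) created for the attached copies are assumed not to clash with the vertex names of \(G\); this is an illustrative sketch of the reduction, not an optimized implementation.

```python
def build_scattered_packing_instance(G, H, u, v):
    """Build G' from G for the reduction of Lemma 6.6: for the i-th edge xy of G,
    attach a fresh copy of H' = H - {u, v} so that {x, y} together with this copy
    contains a copy of H (with u mapped to x and v mapped to y)."""
    Gp = {w: set(N) for w, N in G.items()}
    edges = {frozenset({x, y}) for x in G for y in G[x]}
    for i, e in enumerate(edges):
        x, y = tuple(e)
        phi = {u: x, v: y}                      # embedding of H into G'
        for w in H:
            if w not in (u, v):
                phi[w] = (i, w)                 # fresh vertex of the i-th copy of H'
                Gp[phi[w]] = set()
        for a in H:
            for b in H[a]:
                if {a, b} == {u, v}:
                    continue                    # the edge uv is played by the edge xy of G
                Gp[phi[a]].add(phi[b])
                Gp[phi[b]].add(phi[a])
    return Gp
```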
For \(q\geq 2\), it turns out that even requiring the graph \(H\) to be packed or covered to be non-bipartite is not enough to make the problem tractable. As an illustration of this phenomenon, in the next lemma we prove that \(H\)-Scattered-Packing is para-NP-complete parameterized by \(q(\text{-torso})\)-\(\mathcal{B}\)-treewidth for \(q\geq 2\) even if \(H\) is not bipartite (Lemma 6.7).
**Lemma 6.7**.: _Let \(H\) be a 2-connected graph containing an edge, and let \(q\in\mathbb{N}_{\geq 2}\). Then \(H\)-Scattered-Packing is para-NP-complete parameterized by \(q\)(-torso)-\(\mathcal{B}\)-treewidth._
Proof.: According to [4], Induced Matching is NP-complete on bipartite graphs. Let \(H^{\prime}=H\setminus\{u,v\}\), where \(uv\in E(H)\). We reduce from Induced Matching as follows. Let \(G\) be a bipartite graph as an instance of Induced Matching. We build \(G^{\prime}\) as in the proof of Lemma 6.6, so that, for any \(k\in\mathbb{N}\), \((G,k)\) is a yes-instance for Induced Matching if and only if \((G^{\prime},k)\) is a yes-instance for \(H\)-Scattered-Packing.
Let us show that \((q,\mathcal{B})^{(*)}\text{-tw}(G^{\prime})\leq(q,\mathcal{B})^{(*)}\text{-tw}(H)\). Let \(\mathcal{T}=(T,\alpha,\beta)\) be a \(q(\text{-torso})\)-\(\mathcal{B}\)-tree decomposition of \(H\). Since \(uv\in E(H)\), there is \(t_{0}\in V(T)\) such that \(u,v\in(\alpha\cup\beta)(t_{0})\). Then we build a \(q(\text{-torso})\)-\(\mathcal{B}\)-tree decomposition of \(G^{\prime}\) as follows. Let \(T^{\prime}\) be the tree obtained by taking \(|E(G)|\) copies of \(T\) and making the node \(t_{0}\) of each such copy adjacent to a new node \(r\). We set \(\beta^{\prime}(r)=V(G)\), \(\alpha^{\prime}(r)=\emptyset\), and \(\alpha^{\prime}\) and \(\beta^{\prime}\) take the same values as \(\alpha\) and \(\beta\), respectively, for the other nodes of \(T^{\prime}\). There are at most two vertices in the adhesion of \(r\) and any other node, and they are in \(\beta^{\prime}(r)\). Moreover, \(\text{torso}_{G^{\prime}}(V(G))=G\in\mathcal{B}\). Hence, \((T^{\prime},\alpha^{\prime},\beta^{\prime})\) is a \(q(\text{-torso})\)-\(\mathcal{B}\)-tree decomposition of \(G^{\prime}\) of width at most \((q,\mathcal{B})^{(*)}\text{-tw}(H)\).
## 7 Further research
In this paper we study the complexity of several problems parameterized by bipartite treewidth, denoted by \(\mathsf{btw}\). In particular, our results extend the graph classes for which Vertex Cover/ Independent Set, Maximum Weighted Cut, and Odd Cycle Transversal are polynomial-time solvable. A number of interesting questions remain open.
Except for \(3\)-Coloring, all the problems we consider are covering and packing problems. We are still far from a full classification of the variants that are \(\mathsf{para}\)-\(\mathsf{NP}\)-complete, and those that are not (\(\mathsf{FPT}\) or \(\mathsf{XP}\)). For instance, concerning \(H\)-Subgraph-Cover, we provided \(\mathsf{FPT}\)-algorithms when \(H\) is a clique (Corollary 4.1). This case is particularly well-behaved because we know that in a tree decomposition every clique appears in a bag. On the other hand, as an immediate consequence of the result of Yannakakis [36] (Proposition 6.1), we know that \(H\)-Subgraph-Cover is \(\mathsf{para}\)-\(\mathsf{NP}\)-complete for every bipartite graph \(H\) containing \(P_{3}\) (cf. Subsection 6.2). We do not know what happens when \(H\) is non-bipartite and is not a clique. An apparently simple but challenging case is \(C_{5}\)-Subgraph-Cover (or any other larger odd cycle). The main difficulty seems to be that \(C_{5}\)-Subgraph-Cover does not have the gluing property, which is the main ingredient in this paper to show that a problem is nice, and therefore to obtain an \(\mathsf{FPT}\)-algorithm. We do not exclude the possibility that the problem is \(\mathsf{para}\)-\(\mathsf{NP}\)-complete, as we were not able to obtain even an \(\mathsf{XP}\) algorithm.
Concerning the packing problems, namely \(H\)-Subgraph/Induced/Scattered/Odd-Minor-Packing, we provide \(\mathsf{XP}\)-algorithms for them in Section 5 when \(H\) is non-bipartite. Unfortunately, we do not know whether any of them admits an \(\mathsf{FPT}\)-algorithm, although we suspect that it is indeed the case. We would like to mention that it is possible to apply the framework of equivalence relations and representatives (see for instance [3, 12, 13]) to obtain an \(\mathsf{FPT}\)-algorithm for \(K_{t}\)-Subgraph-Packing parameterized by \(\mathsf{btw}\). However, since a number of definitions and technical details are required to present this algorithm, we decided not to include it in this paper (which is already quite long). However, when \(H\) is not a clique, we do not know whether \(H\)-Subgraph-Packing admits an \(\mathsf{FPT}\)-algorithm. A concrete case that we do not know how to solve is when \(H\) is the _paw_, i.e., the \(4\)-vertex graph consisting of one triangle and one pendent edge.
Beyond bipartite tree decompositions, we introduce a more general type of decompositions that we call \(q(\text{-torso})\)-\(\mathcal{H}\)-tree decompositions. For \(\mathcal{B}\) being the class of bipartite graphs, we prove in Lemma 6.7 that for every \(q\geq 2\) and every \(2\)-connected graph \(H\) with an edge, \(H\)-Scattered-Packing is \(\mathsf{para}\)-\(\mathsf{NP}\)-complete parameterized by \(q(\text{-torso})\)-\(\mathcal{B}\)-treewidth. It should be possible to prove similar results for other covering and packing problems considered in this article.
Most of our \(\mathsf{para}\)-\(\mathsf{NP}\)-completeness results consist just in proving \(\mathsf{NP}\)-completeness on bipartite graphs (i.e., those with bipartite treewidth zero). There are two exceptions: on the one hand, the \(\mathsf{NP}\)-completeness of \(3\)-Coloring on graphs with odd cycle transversal at most three (Lemma 6.2), and on the other hand, Lemma 6.7 mentioned above for \(H\)-Scattered-Packing parameterized by \(q\)-\(\mathcal{B}\)-treewidth for every integer \(q\geq 2\). Interestingly, none of our hardness results really exploits the structure of bipartite tree decompositions (i.e., for \(q=1\)), beyond being bipartite or having bounded odd cycle transversal.
Finally, as mentioned in the introduction, the goal of this article is to make a first step toward efficient algorithms to solve problems related to odd-minors. We already show in this paper that bipartite treewidth can be useful in this direction, by providing an \(\mathsf{XP}\)-algorithm for \(H\)-Odd-Minor-Packing. Bipartite treewidth, or strongly related notions, also plays a strong role in the recent series of papers about odd-minors by Campbell, Gollin, Hendrey, and Wiederrecht [5, 15]. This looks like an emerging topic that is worth investigating.
Acknowledgments.We thank Sebastian Wiederrecht for his many helpful remarks.
|
2309.10228 | Drive as You Speak: Enabling Human-Like Interaction with Large Language
Models in Autonomous Vehicles | The future of autonomous vehicles lies in the convergence of human-centric
design and advanced AI capabilities. Autonomous vehicles of the future will not
only transport passengers but also interact and adapt to their desires, making
the journey comfortable, efficient, and pleasant. In this paper, we present a
novel framework that leverages Large Language Models (LLMs) to enhance
autonomous vehicles' decision-making processes. By integrating LLMs' natural
language capabilities and contextual understanding, specialized tools usage,
synergizing reasoning, and acting with various modules on autonomous vehicles,
this framework aims to seamlessly integrate the advanced language and reasoning
capabilities of LLMs into autonomous vehicles. The proposed framework holds the
potential to revolutionize the way autonomous vehicles operate, offering
personalized assistance, continuous learning, and transparent decision-making,
ultimately contributing to safer and more efficient autonomous driving
technologies. | Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, Ziran Wang | 2023-09-19T00:47:13Z | http://arxiv.org/abs/2309.10228v1 | Drive as You Speak: Enabling Human-Like Interaction with Large Language Models in Autonomous Vehicles
###### Abstract
The future of autonomous vehicles lies in the convergence of human-centric design and advanced AI capabilities. Autonomous vehicles of the future will not only transport passengers but also interact and adapt to their desires, making the journey comfortable, efficient, and pleasant. In this paper, we present a novel framework that leverages Large Language Models (LLMs) to enhance autonomous vehicles' decision-making processes. By integrating LLMs' natural language capabilities and contextual understanding, specialized tools usage, synergizing reasoning, and acting with various modules on autonomous vehicles, this framework aims to seamlessly integrate the advanced language and reasoning capabilities of LLMs into autonomous vehicles. The proposed framework holds the potential to revolutionize the way autonomous vehicles operate, offering personalized assistance, continuous learning, and transparent decision-making, ultimately contributing to safer and more efficient autonomous driving technologies.
## 1 Introduction
Recently, Large Language Models (LLMs) have attracted significant attention. The key to their success lies in their remarkable ability to process a wide range of word-based inputs, including prompts, questions, dialogues, and vocabulary spanning diverse domains, resulting in significant and coherent textual outputs. LLMs serve as vast storehouses of abundant information and knowledge acquired from numerous texts, much like the human brain. Considering the LLMs' ability to emulate human brain functions, we are prompted to ask: could we leverage the impressive capabilities of LLMs to revolutionize the future of autonomous driving?
Imagine a situation where you're sitting in an autonomous vehicle and you desire to safely overtake another vehicle. All you have to do is utter the command: "Overtake the vehicle in front of me." At that point, the LLMs would swiftly assess the existing conditions and safety, listen and ask questions before reasoning, providing you with informed guidance on the feasibility and recommended actions for executing the maneuver. Furthermore, in the context of fully autonomous vehicles, the LLMs' capabilities could even extend to taking charge of the vehicle and executing the instructed commands.
While LLMs have the potential to greatly enhance convenience and improve the driving experience for drivers, a significant challenge arises: LLMs lack information about the driving environment. Unlike humans, LLMs do not have the inherent ability to perceive the physical environment. In other words, these models cannot visually perceive and interact with the world around them [2]. This makes it difficult for LLMs to make sound decisions for the current situation, potentially leading to suboptimal outcomes or even hazardous consequences.
To address the challenge above, we present a perspective where LLMs can serve as the decision-making "brain" within autonomous vehicles. Complementing this, various tools within the autonomous vehicle ecosystem, including the perception module, localization module, and in-cabin monitor, function as the vehicle's sensory "eyes." This configuration enables LLMs to overcome the inherent limitation of not directly accessing real-time environmental information. By receiving processed data from the perception module, LLMs can facilitate informed decision-making, resulting in significant enhancements to the performance of the autonomous vehicle. Additionally, the vehicle's actions and controller function as its "hands," executing instructions derived from the LLM's decision-making process.
When comparing autonomous vehicles with and without integrated LLMs, it becomes evident that the latter offers a diverse array of compelling advantages. These advantages extend across various aspects of functionality and performance:
* **Language Interaction:** LLMs enable intuitive communication between drivers and vehicles, transforming interactions from rigid commands to natural conversations.
* **Contextual Understanding and Reasoning:** LLMs in vehicles offer enhanced contextual understanding from diverse sources like traffic laws and accident reports, ensuring decisions prioritize safety and regulation adherence.
* **Zero-Shot Planning:** LLMs in vehicles can understand and reason about unfamiliar situations without prior exposure, allowing vehicles to navigate uncharted scenarios confidently.
* **Continuous Learning and Personalization:** LLMs learn and adapt continuously, tailoring their assistance to individual driver preferences and improving the driving experience over time.
* **Transparency and Trust:** LLMs can articulate their decisions in simple language, fostering a crucial bond of trust and understanding between the technology and its users.
## 2 Perspective: the Role of LLMs in Advancing Autonomous Vehicles
As established in the earlier section, the LLMs serve as the "brain" in the autonomous driving system, facilitating driver interaction and decision-making, while the sensory tools and actuation function as the vehicle's "eyes" and "hands", respectively. To be more specific, when a driver requests a particular operation, the LLM prompts the related modules to provide data that has been processed to extract relevant information from the environment. By integrating the linguistic analysis of LLMs with the processed sensory inputs from the selected modules, the LLM can then make well-informed decisions. If the command is deemed both feasible and safe based on the prior analysis, the LLMs will transmit the corresponding instructions to the vehicle's controller. This includes components such as the steering wheel, throttle pedal, braking, and other control elements, enabling them to execute the necessary operations. Alternatively, if the operation is deemed inappropriate, the LLMs will provide drivers with a detailed explanation as to why the requested action is not suitable for execution.
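To make this interaction concrete, the following minimal Python sketch illustrates the command-handling loop described above. The module interfaces, field names, and the `query_llm` callable are illustrative placeholders and are not part of the framework's actual implementation.

```python
# Minimal sketch of the command-handling loop described above.
# All module interfaces and field names are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class SceneContext:
    ego_speed_kmh: float
    gap_to_lead_m: float
    adjacent_lane_clear: bool
    driver_attentive: bool

def gather_context(perception, cabin_monitor) -> SceneContext:
    """Ask the vehicle modules for pre-processed environment data."""
    return SceneContext(
        ego_speed_kmh=perception["ego_speed_kmh"],
        gap_to_lead_m=perception["gap_to_lead_m"],
        adjacent_lane_clear=perception["adjacent_lane_clear"],
        driver_attentive=cabin_monitor["driver_attentive"],
    )

def handle_command(command: str, ctx: SceneContext, query_llm) -> dict:
    """Let the LLM decide whether the command is safe and, if so, plan it."""
    prompt = (
        f"Driver command: {command}\n"
        f"Ego speed: {ctx.ego_speed_kmh} km/h, gap to lead vehicle: {ctx.gap_to_lead_m} m\n"
        f"Adjacent lane clear: {ctx.adjacent_lane_clear}, driver attentive: {ctx.driver_attentive}\n"
        "Decide if the maneuver is safe. Reply with 'EXECUTE: <step list>' or 'REJECT: <reason>'."
    )
    reply = query_llm(prompt)  # any chat-completion backend
    if reply.startswith("EXECUTE:"):
        return {"action": "send_to_controller", "plan": reply.removeprefix("EXECUTE:").strip()}
    return {"action": "explain_to_driver", "reason": reply.removeprefix("REJECT:").strip()}
```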
Revisiting the example at the beginning of this paper, when drivers issue the command to overtake the vehicle ahead, the LLMs come into play by querying the perception module for pertinent processed information. This includes details such as the distance and speed of the target vehicle, the velocity of the ego vehicle, road conditions of potential lanes, the presence of other vehicles and their distances on those lanes, and other useful navigation information from the map system. Through an analysis of the provided data and the given command, the LLMs make a decision regarding whether to execute the driver's request. If the decision is affirmative, the LLMs subsequently communicate instructions to the controller, guiding the next course of action.
Having explored this intricate interaction between LLMs and the autonomous vehicle's decision-making process, we shift our focus to a broader context and propose the concept of a human-centric LLM-integrated framework for autonomous vehicles based on our prior work on the mobility digital twin [27]. As shown in Fig. 1, the physical world comprises human drivers, vehicles, and traffic objects. Human drivers are the central agents in the physical world, sending commands and instructions to LLMs as they navigate roadways. The traffic environment contains various elements including vehicles, pedestrians, traffic lights, road conditions, and traffic cones, all of which contribute to the complexity of movement and interactions on the road. The vehicle, directed by the LLMs, operates within this ecosystem, executing the commands it receives from either drivers or LLMs through controllers and actuators.
The virtual world includes LLMs, memory, and essential tools, which include the perception module, localization module, and in-cabin monitor. The perception module acquires raw input from sensors, including external cameras, LIDARs, and radars, and processes this data into a format suitable for the LLMs. The localization module employs GNSS data to determine the vehicle's precise location. Within the vehicle, the in-cabin monitor employs internal cameras, thermometers, and other sensors to vigilantly observe the in-cabin environment, preempting distractions, extreme temperatures, or uncomfortable conditions. At the core of the entire framework lie the LLMs, serving as its central intelligence. They receive commands from drivers, subsequently initiating queries to pertinent modules for related information. Furthermore, the memory section acts as a repository, storing historical operations and drivers' preferences, enabling continuous learning and enhancement for the LLMs. This repository of experiences equips the LLMs to make analogous decisions when confronted with similar situations, bolstering the system's adaptability and performance over time. The memory also houses maps and local law information, empowering the LLMs to make even wiser decisions adaptable to a variety of scenarios.
## 3 Review: Can LLMs Really Do This?
Through a comprehensive review of both theoretical underpinnings and real-world implementations, we seek to
address the fundamental question: Can LLMs really contribute to the improvement of autonomous driving by actively participating in the decision-making framework? By examining the current state of research and analyzing use cases, this section aims to provide a thorough assessment of the extent to which LLMs can bring to the landscape of human-centric autonomous driving.
### Adaptive Techniques and Human-Centric Refinements for LLMs
Parameter-efficient fine-tuning (PEFT) is a crucial technique used to adapt pre-trained large language models (LLMs) to specialized downstream applications [15, 11, 9, 6, 7]. Hu et al. [11] proposed utilizing low-rank decomposition matrices to reduce the number of trainable parameters needed for fine-tuning language models. Lester et al. [15] explore prompt tuning, a method for conditioning language models with learned soft prompts, which achieves competitive performance compared to full fine-tuning and enables model reuse for many tasks. These PEFT techniques offer valuable tools for adapting LLMs to autonomous driving tasks.
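To illustrate the low-rank decomposition idea of [11] in isolation, the following plain-PyTorch sketch augments a frozen linear layer with a trainable low-rank update; the rank and scaling values are illustrative assumptions rather than settings from the cited work.

```python
# Minimal PyTorch sketch of low-rank adaptation: the frozen weight is
# augmented with a trainable low-rank update B @ A. Rank and scaling are
# illustrative assumptions only.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pre-trained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path + trainable low-rank correction
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LowRankLinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))              # only A and B receive gradients
```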
Reinforcement Learning from Human Feedback (RLHF) [18, 20, 21, 23, 1] has emerged as a key strategy for fine-tuning LLM systems to align more closely with human preferences. Ouyang et al. [18] introduce a human-in-the-loop process to create a model that better follows instructions. Bai et al. [1] propose a method for training a harmless AI assistant without human labels, providing better control over AI behavior with minimal human input. These approaches hold significant promise for developing LLMs for autonomous driving applications, as they can contribute in two dimensions. Firstly, they can ensure that LLMs avoid making decisions that may be illegal or unethical. Secondly, these methodologies enable LLMs to continually adapt and align their decision-making processes with user preferences, enhancing personalization and trust in autonomous vehicles.
LLM-based autonomous driving applications can also benefit from advanced prompting techniques [3, 26, 28, 29]. Chain-of-thought prompting [28] improves LLMs' ability to perform complex reasoning. Gao et al. [10] propose an approach that uses LLMs to read natural language problems and generate programs as intermediate reasoning steps. Yao et al. [29] present a new prompting technique that allows LLMs to make decisions about how to interact with external APIs. These methods provide a solid foundation for the development of LLMs for autonomous driving applications with two significant benefits. Firstly, they greatly enhance LLMs' reasoning capabilities, particularly in complex, multi-step scenarios. Secondly, these techniques improve the adaptability and versatility of LLMs, key attributes for autonomous driving systems interfacing with various tools and data sources.
Figure 1: The human-centric LLM-integrated framework for autonomous vehicles.
### Advancements in LLMs: Implications for Autonomous Driving Decision-Making
Recent research has shown that LLMs can perform well in most commonsense tasks [4], which means they have the potential to make wise and feasible decisions in autonomous driving scenarios. The utilization of LLMs in the context of autonomous driving presents a captivating and potentially transformative direction for research. Recent investigations have shed light on the diverse ways in which LLMs can profoundly impact the landscape of autonomous vehicles. For instance, the study conducted by [16] highlights the promise of AI infused with legal knowledge, offering the potential to avert legal transgressions in autonomous driving scenarios, thereby contributing to the establishment of a safer AI-driven environment. Additionally, [30] demonstrates that LLMs possess the capability to learn from local laws and accident reports, and effectively contribute to reducing accident rates, thus enhancing the safety of autonomous driving.
The application of LLMs to decision-making in autonomous driving is notably explored by [5]. Their research introduces the PaLM model, demonstrating that LLMs exhibit a capacity to effectively tackle intricate reasoning tasks and, intriguingly, surpass the performance of an average human. Such a finding carries significant implications, hinting at LLMs' remarkable ability to navigate complex scenarios, make astute judgments, and potentially lay the groundwork for optimal decision-making in autonomous vehicles.
The work highlighted in [19] demonstrates the utilization of large language models to effectively store experiences in natural language, forming a foundational approach for integrating historical data into our architecture.
The adaptive capabilities of LLMs are showcased in various ways. [14] underscores LLMs' proficiency in zero-shot reasoning, enabling them to deal with novel and unfamiliar situations, a vital feature for autonomous vehicles operating in dynamic environments. The study by [6] exemplifies that LLMs can be fine-tuned to exhibit enhanced performance, particularly in tasks with limited training data.
Additionally, LLMs have shown great potential in both transportation and robotics areas, as highlighted by [31], and [24] respectively. They reveal LLMs' prowess in tasks such as zero-shot planning and interactive conversations, even facilitating interaction with perception-action-based API libraries, an attribute that aligns with the demands of autonomous vehicles.
Furthermore, the work [25] demonstrates LLMs' potential for continuous learning, which is of paramount importance for adapting to evolving road conditions and enhancing performance over time.
The investigation from [8] introduces embodied language models capable of assimilating real-world sensor data, thus bridging the gap between perception and language. This development lays the foundation for potential advancements in autonomous vehicles, where LLMs could process sensory inputs, comprehend their surroundings, and consequently make more informed decisions. Building on these insights, additional studies [4], [32], [29], [22], and [12] have further enriched our understanding of LLMs' capabilities, underscoring their potential in decision-making, reasoning, and synergizing reasoning and acting.
## 4 Experiment: Decision-Making and Motion Planning with ChatGPT-4
To gain a deeper understanding of the practical capabilities of LLMs in the context of autonomous driving tasks, we embark on an insightful exploration involving real-world decision-making scenarios. This comprehensive case study serves as a compelling demonstration of how LLMs can effectively enhance autonomous vehicles by harnessing the potential of ChatGPT-4 [17] to replicate decision-making processes. Our investigation is structured in two distinct phases. Initially, we pose autonomous driving concept-related queries to GPT-4, which unveils its grasp of how language models can be seamlessly integrated into autonomous driving. Subsequently, we design and present genuine real-world situations to assess the decision-making proficiency of LLMs. This section covers an in-depth understanding of this case study, including a detailed conversation with GPT-4 that highlights our findings. This analysis serves to underscore the practical implications of leveraging LLMs for enhanced autonomous driving.
Figure 2: General Q&A with ChatGPT-4 regarding autonomous vehicles.
Figure 3: Experiment illustrating LLM-assisted decision-making and motion planning in a complex driving scenario. The Ego vehicle and its trajectory are marked orange; the vehicle ahead in the current lane and its trajectory are blue; the vehicles in adjacent lanes and their trajectories are green.
In our exploration with ChatGPT, we first asked some general conceptual questions regarding LLMs in autonomous vehicles and aimed to identify the true potential of LLMs in advancing the future of autonomous driving, as shown in
Figure 2. The responses indicated a profound ability of the LLMs to bridge the interaction between the vehicle and its passengers. From the responses, it is evident that LLMs can explain complex driving scenarios, decisions made by the vehicle, and even the technical details of various autonomous modules. An especially significant observation was the LLMs' strength in processing vast volumes of data and then converting these into real-time, understandable feedback. Such feedback is not just about driving status but relates to the core autonomous functionalities, including the perception module's utilization and the motion planner's choices. Furthermore, the model demonstrated an enhanced capacity for vehicle-to-vehicle communications and, critically, troubleshooting. This capability not only fosters trust but can also improve the user experience by explaining the complex decisions of autonomous operations.
As we can see in Figure 3, we simulated a real-world driving scenario where the autonomous vehicle is equipped with Large Language Models (LLMs) to assist in decision-making and motion planning. The vehicle was on a two-lane Indiana highway, traveling east to west at 96 km/h. It was behind another vehicle moving at the same speed but only 8 meters away, a distance less than optimal for safety. On the adjacent left lane, two other vehicles were noted: one 30 meters ahead moving at 112 km/h, and another 40 meters behind at 104 km/h. The driver was highly attentive, and one passenger was wearing a seatbelt.
The LLMs were tasked with processing this multilayered data sourced from the perception module (vehicle speeds and distances), the localization module (road and environmental conditions), and the in-cabin monitoring system (driver's attention level and safety measures like seatbelts). The LLMs formulated a comprehensive 9-step motion plan that prioritized safety while efficiently executing the driver's command to overtake the front vehicle.
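The following sketch shows one way such multilayered data could be serialized into a single prompt for the LLM; the dictionary fields and prompt wording are assumptions for illustration and do not reproduce the exact prompt used in the experiment.

```python
# Hedged sketch: serialising the scenario of Fig. 3 into one prompt for the LLM.
# The dictionary fields and prompt wording are illustrative assumptions.
scenario = {
    "ego": {"speed_kmh": 96, "lane": "right", "heading": "west"},
    "lead_vehicle": {"speed_kmh": 96, "gap_m": 8},
    "left_lane": [
        {"position": "ahead", "gap_m": 30, "speed_kmh": 112},
        {"position": "behind", "gap_m": 40, "speed_kmh": 104},
    ],
    "cabin": {"driver_attentive": True, "passenger_belted": True},
}

def build_overtake_prompt(s: dict) -> str:
    lines = [
        "You are the decision module of an autonomous vehicle.",
        f"Ego: {s['ego']['speed_kmh']} km/h, lane {s['ego']['lane']}.",
        f"Lead vehicle: {s['lead_vehicle']['speed_kmh']} km/h, {s['lead_vehicle']['gap_m']} m ahead.",
    ]
    for v in s["left_lane"]:
        lines.append(f"Left lane, {v['position']}: {v['gap_m']} m, {v['speed_kmh']} km/h.")
    lines.append(f"Driver attentive: {s['cabin']['driver_attentive']}.")
    lines.append("Command: overtake the vehicle in front.")
    lines.append("Return a numbered, step-by-step motion plan or refuse with a reason.")
    return "\n".join(lines)

print(build_overtake_prompt(scenario))
```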
In the experimental scenario, the Large Language Models (LLMs) showcased their advanced reasoning ability by not just collecting and analyzing data but also applying layers of context-sensitive reasoning. The LLMs evaluated the speeds and distances of surrounding vehicles, the driver's state of attention, and even the traffic conditions to determine the safest and most efficient trajectory for overtaking. This capability to reason in real-time, considering multiple factors dynamically, significantly contributes to road safety and operational efficacy. The LLMs didn't merely follow pre-defined rules but adapted their decision-making to the unique circumstances, highlighting their potential for enhancing the future of autonomous driving.
Additionally, the language interaction capabilities of the LLMs proved crucial for trust-building. When the driver commanded to "overtake the vehicle in front," the LLMs assessed various factors and communicated their reasoning to the driver. This transparent interaction not only enhanced safety but also instilled greater confidence in the vehicle's autonomous capabilities.
LLMs also can access previous data and user preferences from the memory module, which allows for a more personalized driving experience. In the context of the experiment, for instance, the system could recall the driver's typical comfort levels with overtaking speeds, following distances, and lane preferences. This information could then influence how the LLMs interpret and execute a command like "overtake the front vehicle," ensuring that the action aligns with the driver's past behavior and comfort zones. As a result, the LLMs' capacity for memory-driven personalization not only improves user satisfaction but also can contribute to safer, more predictable autonomous driving scenarios.
Another crucial advantage is enhanced transparency and trust. When the vehicle makes a complex decision, such as overtaking another vehicle on a high-speed, two-lane highway, passengers and drivers might naturally have questions or concerns. In these instances, the LLMs don't just execute the task but also articulate the reasoning behind each step of the decision-making process. By providing real-time, detailed explanations in understandable language, the LLMs demystify the vehicle's actions and underlying logic. This not only satisfies the innate human curiosity about how autonomous systems work but also builds a higher level of trust between the vehicle and its occupants.
Moreover, the advantage of "zero-shotting" was particularly evident during the complex overtaking maneuver on a high-speed Indiana highway. Despite the LLMs not having encountered this specific set of circumstances before--varying speeds, distances, and even driver alertness--it was able to use its generalized training to safely and efficiently generate a trajectory for the overtaking action. This ensures that even in dynamic or rare scenarios, the system can make sound judgments while keeping users informed, hence building trust in autonomous technology.
## 5 Conclusion
In conclusion, our paper has provided a comprehensive framework for integrating Large Language Models (LLMs) into the ecosystem of autonomous vehicles. We highlighted how LLMs offer advanced reasoning capabilities that can make autonomous systems more flexible and responsive to complex, real-world scenarios. Additionally, by leveraging the capabilities of LLMs, we can enrich the human-vehicle interaction, providing a more reliable, intuitive, and responsive interface. Unlike traditional autonomous systems, which lack the capacity for language understanding, LLMs can handle complex requests, offer real-time feedback and comprehensive explanations, and assist in decision-making during complex or rare driving scenarios. This suggests a future where LLMs can significantly enhance efficiency, safety, and user-centric design in autonomous vehicles. |
2309.09792 | System-level Testing of the Congestion Management Capability of a
Hardware-Independent Optimal Power Flow Algorithm | The integration of distributed energy resources (DERs) into the electrical
grid causes various challenges in the distribution grids. The complexity of
smart grids as multi-domain energy systems requires innovative architectures
and algorithms for system control. While these solutions are good on paper,
several testing methods are required to test the applicability of components,
functions and entire systems to the existing energy grids. In this paper, a
full-scale low-voltage test setup in the Smart Grid Technology Lab (SGTL) at TU
Dortmund University is used to evaluate the capability of an Optimal Power Flow
Algorithm (OPF) to support voltage control, congestion management, and to
provide redispatch to the higher grid levels. While conventional redispatch is
commonly done preemptively, this paper analyses the possibility of providing
redispatch to the higher voltage levels without taking the future grid state
into consideration. The importance of this implementation is that the smart
grid application used to execute the OPF is configured based on IEC 61850 data
models, making the software independent of the hardware. Such standardised
control algorithms are interoperable and can be implemented on any hardware
that suits the requirements. | Thomas Schwierz, Rajkumar Palaniappan, Oleksii Molodchyk, Christian Rehtanz | 2023-09-18T14:11:56Z | http://arxiv.org/abs/2309.09792v1 | System-level Testing of the Congestion Management Capability of a Hardware-Independent Optimal Power Flow Algorithm
###### Abstract
The integration of distributed energy resources (DERs) into the electrical grid causes various challenges in the distribution grids. The complexity of smart grids as multi-domain energy systems requires innovative architectures and algorithms for system control. While these solutions are good on paper, several testing methods are required to test the applicability of components, functions and entire systems to the existing energy grids. In this paper, a full-scale low-voltage test setup in the Smart Grid Technology Lab (SGTL) at TU Dortmund University is used to evaluate the capability of an Optimal Power Flow Algorithm (OPF) to support voltage control, congestion management, and to provide redispatch to the higher grid levels. While conventional redispatch is commonly done preemptively, this paper analyses the possibility of providing redispatch to the higher voltage levels without taking the future grid state into consideration. The importance of this implementation is that the smart grid application used to execute the OPF is configured based on IEC 61850 data models, making the software independent of the hardware. Such standardised control algorithms are interoperable and can be implemented on any hardware that suits the requirements.
Laboratory Testing, Smart Grids, Distribution Grid Automation, Congestion Management, Redispatch
## I Introduction
Power systems have proven to be an integral part of modern society, and the electrical energy industry of today is changing for several reasons. The planning and operation of electrical energy grids have become enormously complicated in recent years. With the integration of various new loads and distributed energy resources (DERs) into the grid, the power system industry is looking for new and innovative solutions to counter the challenges of increased grid loading, reverse power flows, voltage problems and low power reserves on the transmission system level, to mention only some of them [1]. Simultaneously, new technologies and developments in Information and Communication Technology (ICT) infrastructure promise to provide distribution system operators (DSOs) with additional options to control the assets in their grid and address the dynamic changes in their grid. With these new technologies, innovative monitoring and control algorithms are proposed for active distribution grids [2]. Whenever new algorithms are introduced in the literature, it is common practice to test them before implementing them in the field. The reason is simple: many programming and algorithmic errors that may have been overlooked during the development phase can be corrected.
Hardware-in-the-loop (HIL) testing methods are a common testing method in the present-day world since they include the domain of communication and enable the testing of hardware devices, such as inverters, or controllers in real-time. HIL testing usually entails a digital real-time simulated model and the hardware under test connected using analogue/digital inputs/outputs. In the context of the control of power systems, the power system is usually modelled on a real-time simulator (RTS) and communicates measurements and status values of the power system to the controller, whilst the controller sends outputs to the flexible assets of the simulated grids [3]. This concept is called Controller-Hardware-in-the-loop (CHIL). In Power-Hardware-in-the-loop (PHIL) testing, the Input-Output-Interface of the RTS is connected to a power interface to emulate the currents and voltages as they occur in the power system. Whilst CHIL testing can be beneficial for individual controller testing, the transition of distribution grids from passive distribution grids to active distribution grids, also known as smart grids, demands a more integrated testing approach. Therefore, the approach of system-level validation has been developed [4, 5]. System-level validation approaches include multiple - potentially HIL controlled - components, communication, a model of or a physical realisation of a power grid, a visualisation and a control system. At TU Dortmund University, the Smart Grid Technologies Lab (SGTL) has been setup to enable component testing using HIL approaches as well as holistic and system-level testing [6]. In the past, Power-Hardware-in-the-loop component testing (PHIL) of Photovoltaic inverters and distributed series reactors took place in the lab [7, 8].
In order to efficiently control low-voltage grids, observability and automation are necessary prerequisites. Conventionally, low-voltage grids have been treated as power sinks without any significant control possibilities. While there were extremely low to no measurements in the distribution grids of the past, the evolution of smart meters and sensors have made the grids more observable. Due to the number of low-voltage
grids as well as the number of DERs and new loads such as electric vehicles and heat pumps, the amount of controllable components has increased manifold and led the way towards smart grids. Historically, while a number of contributions in the literature deal with developing automation systems for smart grid applications, only a few of them were dedicated to experimentally testing the algorithms. Testing smart grid applications with laboratory experiments and field tests has been an important research field lately. There have been several projects in the distribution grid dealing with the topic of congestion management [11] - [15].
One of the drawbacks of most of the proposed solutions in the literature is that almost all of the control applications are proprietary solutions and not modular. That means that the software is specific to a particular hardware, and updating or integrating new algorithms from other users becomes an arduous task every time. To enhance the feasibility of interoperability, previous work at the TU Dortmund University dealt with creating algorithms based on software using a standard developed for the automation of distribution grid substations [16, 17]. Thus, the control algorithms are configured using data models according to IEC 61850-7-3 [18] and IEC 61850-7-4 [19]. This offers modularity and the possibility of multiple users creating various applications that can essentially work together on the same hardware device. The novel system architecture can aggregate various protection and control functions onto industrial hardware platforms, thereby accentuating the need for hardware independence, which makes this research the first of its kind. This paper deals with the implementation of one such control algorithm and experimentally verifies it in a low-voltage test grid. Taking the previous introduction on system-level testing and hardware-independent smart grid applications into account, the research questions of this paper are as follows:
1. How can hardware-independent control algorithms be designed and validated in close-to real-world conditions?
2. Can congestions be effectively managed curatively using an Optimal Power Flow (OPF) algorithm in low-voltage grids?
3. Can redispatch be curatively provided from low-voltage grids to the higher grid level without the knowledge of the future grid state?
The aims of this paper are to test and evaluate the capability of a conventional OPF algorithm for congestion management in an exemplary low-voltage grid on a system level. In [5], applications on the system level are described as applications which influence the operation of the power system, which fits the test presented in this paper. The communication and control of several DERs using modern control approaches are often data-driven control approaches or require predictions of the future grid state [20, 21]. Due to the lack of data and forecasts of future grid states in low-voltage grids, this paper evaluates whether classical control approaches are suitable for usage in low-voltage grids. Consequently, the contribution of this paper is as follows:
* The hardware-independent smart grid control application (SG App) is presented and is tested with a low-voltage test setup in the SGTL at TU Dortmund University,
* A modified OPF implemented on the SG App is presented and used together with a Weighted-Least-Square-Value SE to monitor and control the low-voltage grid,
* The capability of the OPF algorithm to perform congestion management and curatively provide redispatch at the medium-voltage-low-voltage transformer is evaluated.
The remainder of this paper is as follows: Section II describes the current state of the proposed hardware-independent software research at TU Dortmund University. Section III explains the test setup and test case of the given experiment. Section IV explains the modified OPF implementation. While Section V explains the results, the paper ends in Section VI with concluding remarks.
## II The hardware-independent Smart Grid Application
Fig. 1: The components of the SGTL
As already mentioned, the biggest problem among the proposed solutions for distribution grid automation in the literature is that most of the applications are bound to some kind of hardware. In order to avoid this problem, a system architecture enabling distribution grid automation in the form of higher-level and coordinated control functions for grid monitoring and control, executed on various industrial hardware devices such as [22] or more generic embedded computers such as the Raspberry Pi, has been developed [23]. To achieve this, the SG App has been developed in several layers: the Parsing Layer, the Communication Layer, the Function Layer and the Hardware Layer. The structure of the SG App is depicted in Fig. 2. In the Parsing Layer, a configuration file according to the Substation Configuration Language (SCL) specified in IEC
61850-6, including the intelligent electronic devices (IEDs), substations and the communication, is parsed into an online data model [24]. In the Communication Layer, the communication between the different IEDs according to various protocols is initialised, and the data is communicated. Communication protocols include the Transmission Control Protocol (TCP), Modbus-TCP, the IEC 60870-5 protocol and IEC 61850-8 based MMS [25, 26]. The Function Layer contains the function controllers, the function implementation itself and input/output layers for every function assigned to a specific IED running the application. Thus, distributed control architectures, including several devices running the SG App with the same SCL file, are possible. Finally, the Hardware Layer specific to different hardware devices is called, where measurements from the hardware device are processed and written to the respective logical nodes specified in the SCL file.
## III The test case and test specification
In this section, the test case, the control algorithm under test and the test specification are described based on the Holistic Test Description given in [5]. The applied procedure is a proposed guideline to standardise test descriptions and documentation in power system testing and is adopted in this publication. A guideline for the usage of the proposed test description is given in [27, 28].
### _The test case_
The test objective is to test the capability of an OPF to cure congestions and to provide curative redispatch to the DSO in the form of adapting the apparent power flow over the transformer at any given time if it exceeds predefined limits. Thus, the objective function of the OPF is altered, and time-variant constraints are included. The SE is used to monitor the grid state, and the OPF is used to control the operational behaviour of its components, considering the goals specified above. The object under investigation is thus the OPF algorithm developed in the previously described software. The OPF is used to control several hardware devices, including a PHIL-controlled PV inverter, in a configurable low-voltage test grid. Thus, the OPF algorithm and the behaviour of the hardware devices are jointly tested. The test setup of this paper is depicted in Fig. 3.
The system under test consists of the components and subsystems depicted as part of the research infrastructure facility Fig. 1, whilst the object under test is the controller. The components of the system are described in the following and in [6, 29]:
* \(10/0.4\) kV On-Load-Tap-Changer (OLTC)
* Redox-Flow Battery Storage System (BSS) with a nominal power \(S_{\max}^{(3)}=30\) kVA and capacity \(E_{\mathrm{total}}=100\) kWh
* PHIL system consisting of a PV inverter with a nominal power \(P_{\max}^{(3)}=60\) kVA, a RTS and a Power-Amplifier with a nominal output power of \(200\) kVA
Fig. 3: Structure of the given laboratory test setup
Fig. 2: Structure of the smart grid application
* Controllable Resistor with a maximal load of \(P_{\max}=200\) kW
* Charging-station (CS) and EVs with charging current limits \(I_{\max}=16\) A and \(I_{\min}=6\) A.
* Power-quality-measurement-devices at busbar 007 and busbar 008
* NAYY-J low-voltage cables with a cross-section of \(25\ \mathrm{mm^{2}}\) and \(150\ \mathrm{mm^{2}}\)
The components are electrically connected in the testbed using busbars in the cabinets. There it is possible to flexibly arrange the assets using underground cables and additional busbars. As observable in Fig. 3, the domains under investigation are not only the electric domain but also communication and control. The communication link is based on Modbus-TCP, whereby the components act as Modbus servers while the SG App acts as the Modbus client [30]. The data that is being communicated is given in Table I. Three-phase data of the electrical grid is considered to be symmetrical. In order to perform an OPF, nodal powers for each node in the grid are required. The SE is commonly used in the transmission grid to identify the exact grid state based on field measurements with a high amount of measurement redundancy. In distribution grids, several approaches of estimating the grid state are finding more and more usage to obtain the grid state in previously unobservable grids. As using a state estimation algorithm to estimate the grid with measurement sparsity is not the focus of this paper, it is assumed that the grid is observable or pseudo-observable, i.e. \(\eta=\frac{m}{2n-1}\geq 1\) holds, where \(n\) is the number of nodes and \(m\) the number of measurements. The SE is thoroughly described in the literature and will not be further described in this paper, except that the SE used in this approach is the classic Weighted Least Squares approach [32].
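For illustration, a generic Gauss-Newton solver for the weighted-least-squares estimation problem is sketched below; the measurement function and the finite-difference Jacobian are placeholders, as the actual SE uses the analytic power-flow measurement model.

```python
# Generic Gauss-Newton solver for the weighted-least-squares state estimation
# mentioned above. The measurement function h(x) is problem-specific and is
# passed in by the caller; the finite-difference Jacobian is only illustrative.
import numpy as np

def wls_state_estimation(h, z, x0, sigmas, iters=20, tol=1e-6):
    """Minimise (z - h(x))^T W (z - h(x)) with W = diag(1/sigma^2)."""
    x = np.asarray(x0, dtype=float)
    W = np.diag(1.0 / np.asarray(sigmas, dtype=float) ** 2)
    for _ in range(iters):
        r = np.asarray(z, dtype=float) - h(x)          # residuals
        eps = 1e-6                                     # numerical Jacobian of h at x
        H = np.column_stack([
            (h(x + eps * np.eye(len(x))[:, k]) - h(x)) / eps for k in range(len(x))
        ])
        G = H.T @ W @ H                                # gain matrix
        dx = np.linalg.solve(G, H.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy usage: estimate two states from three redundant linear measurements.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_hat = wls_state_estimation(lambda x: A @ x, z=[1.02, 0.98, 2.01],
                             x0=[1.0, 1.0], sigmas=[0.01, 0.01, 0.02])
```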
The OPF is a mathematical optimization technique which is commonly used in power system analysis and operation. The general formulation is given in equations (1) to (4) [33]. Use cases of the OPF in power systems are the economic dispatch of power plants on the transmission grid level or the provision of ancillary services such as voltage and frequency control [34, 35]. In this paper, the OPF is used for curative congestion management and the provision of redispatch. The concrete specification of the OPF follows in section IV.
\[\min_{x}f(x) \tag{1}\]
subject to
\[g(x)=0 \tag{2}\]
\[h(x)\leq 0 \tag{3}\]
\[x_{\min}\leq x\leq x_{\max} \tag{4}\]
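The abstract form (1)-(4) maps directly onto generic nonlinear programming solvers. The following sketch shows this mapping with a toy objective and constraints using SciPy's SLSQP method (SciPy expects inequality constraints in the form fun(x) >= 0, so h(x) <= 0 is passed as -h(x)); the actual OPF of Section IV uses the power-flow equations as g and h.

```python
# Toy illustration of the mapping from (1)-(4) to a generic NLP solver.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2        # objective (1)
g = lambda x: np.array([x[0] + x[1] - 2.0])                # equality g(x) = 0, eq. (2)
h = lambda x: np.array([x[0] - 1.5])                       # inequality h(x) <= 0, eq. (3)

constraints = [
    {"type": "eq", "fun": g},
    {"type": "ineq", "fun": lambda x: -h(x)},              # SciPy convention: fun(x) >= 0
]
bounds = [(-5.0, 5.0), (-5.0, 5.0)]                        # box limits, eq. (4)

res = minimize(f, x0=np.zeros(2), method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x)   # optimal decision vector of the toy problem
```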
The repeatability of the tests is ensured by synchronising the first execution of the SE and the start of the resistor's load profile given in Fig. 6 with the start of the model of the RTS. This is realised by deploying a virtual Modbus server and waiting for the RTS to write in a specific Modbus register. The test criteria are the capability of the OPF to cure voltage violations, apparent power-flow violations over lines, and to keep the apparent power-flow over the transformer within the predefined limits. Moreover, the successful execution of a test using the SG App in an embedded system is validated. The target metrics are given in equations (5), (6), (8) and (9). The total number of times \(N_{v}\) and \(N_{s}\) that voltage violations and thermal violations, respectively, could not be cured is an indicator of the capability of the OPF to cure violations in the grid. The variable \(n_{v}(t)\) is \(1\) for every time step \(t\) in which a previous violation could not be cured and is defined in equation (7); the definition of \(n_{s}(t)\) is analogous.
\[N_{v} =\sum_{t=2}^{T}n_{v}(t) \tag{5}\] \[N_{s} =\sum_{t=2}^{T}n_{s}(t) \tag{6}\]
\[n_{v}(t)=\begin{cases}1,\ \text{voltage violation occurs in $t-1$ and $t$}\\ 0,\ \text{otherwise}\end{cases} \tag{7}\]
The second test metric is given in equations (8) and (9). The sum of violations over the timesteps \(t\ \in\ \{t_{0},...,T\}\) and nodes \(i\ \in\ \{1,...,n\}\) after a congestion took place in the previous timestep is calculated. The superscript "control" depicts the controlled grid state, while "reference" depicts the uncontrolled reference grid state. In order for the OPF to successfully cure voltage violations, power-flow violations and to provide redispatch to the upper grid level, this value has to be close to \(0\). A qualitative assessment of the results of the OPF follows in Section V.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Asset** & **Communicated Data** & **R/W** \\ \hline OLTC & tap position \(\tau\) & R/W \\ & phase to ground voltage \(V\) & R \\ & tap position limits \(r_{min},r_{max}\) & R \\ & voltage change per step & R \\ \hline PV inverter & phase to ground voltage \(V\) & R \\ & three phase power \(P^{(3)}\), \(Q^{(3)}\) & R \\ & three phase power setpoint \(P^{(3)}_{\mathrm{set}},Q^{(3)}_{\mathrm{set}}\) & R/W \\ & maximum three phase power \(P^{(3)}_{\mathrm{max}}\) & R \\ \hline CS & charging current \(I\) & R/W \\ & charging power \(P\) & R \\ & current limits \(I_{\min}\), \(I_{\max}\) & R \\ & vehicle state according to IEC 61851 [31] & R \\ \hline BSS & State of Charge (SoC) \(E_{t_{0}}\) & R \\ & SoC limits \(E_{\max}\) and \(E_{\min}\) & R \\ & phase to ground voltage \(V\) & R \\ & current \(I\) & R \\ & active, reactive and apparent power \(P\), \(Q\), \(S\) & R \\ & three phase setpoints \(P^{(3)}_{\mathrm{set}},\cos(\phi)^{(3)}_{\mathrm{set}}\) & R/W \\ & maximum power \(S^{(3)}_{\mathrm{max}}\) & R \\ \hline measurement & current \(I\) & R/W \\ devices & phase to ground voltage \(V\) & R \\ & active, reactive and apparent power \(P\), \(Q\), \(S\) & R \\ & power factor \(\cos(\phi)\) & R \\ \hline RTS & Temperature \(T\) in \({}^{\circ}C\) & R \\ & Irradiation \(E\) in \(\frac{W}{m^{2}}\) & R \\ & Synchronisation bit & R \\ \hline \end{tabular}
\end{table} TABLE I: Data obtained by the assets in the SGTL
\[A_{v}=\sum_{t=2}^{T}\sum_{i=1}^{n}n_{v}(t)\cdot|V_{i}(t)^{\rm control }-V_{i}(t)^{\rm reference}| \tag{8}\] \[A_{s}=\sum_{t=2}^{T}\sum_{i=1}^{n}n_{s}(t)\cdot|S_{ij}(t)^{\rm control }-S_{ij}(t)^{\rm reference}| \tag{9}\]
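The target metrics (5)-(9) can be computed directly from the logged time series; a small helper is sketched below, where the violation flags and node-wise quantities are assumed to be available as arrays indexed by time step.

```python
# Hedged helper for the target metrics (5)-(9): count time steps in which a
# violation persists from t-1 to t, and accumulate the controlled-vs-reference
# deviation in those steps.
import numpy as np

def persistence_indicator(violation):
    """n(t) = 1 if a violation occurs in both t-1 and t, per eq. (7)."""
    v = np.asarray(violation, dtype=bool)
    n = np.zeros(len(v), dtype=int)
    n[1:] = v[1:] & v[:-1]
    return n

def count_uncured(violation) -> int:
    """N = sum_t n(t), per eqs. (5)-(6)."""
    return int(persistence_indicator(violation).sum())

def uncured_amount(violation, controlled, reference) -> float:
    """A = sum_t sum_i n(t) * |controlled - reference|, per eqs. (8)-(9).
    controlled/reference have shape (T, n_nodes)."""
    n = persistence_indicator(violation)
    dev = np.abs(np.asarray(controlled) - np.asarray(reference))
    return float((n[:, None] * dev).sum())
```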
### _The test specification_
The system under test is presented in Fig. 4. There, two \(400\) m long cables are used to depict a long feeder as it typically occurs in rural grids [36]. At the end of the feeder, the PV inverter feeds in active power corresponding to irradiation data of a sunny day as presented in Fig. 5. The test takes \(11\) minutes, as the PV model on the RTS presented in [7] reproduces the feed-in of a full day within \(11\) minutes. The test is synchronized with the start of the PHIL control of the PV inverter to achieve reproducible testing. The SE and OPF are executed every 15 seconds, with \(t_{0}=\) 00:00 and \(T=\) 00:11. At busbar 008, the resistor imposes a load on the grid as presented in Fig. 6. The battery is connected to busbar 008 through a NYY-J 5x16 RE cable with an approximate length of 100 m and is operating at a SoC of \(50\) % and a feed-in of \(0\) kW. The charging station is connected to busbar 007 with a cable of type H07RN-F 5G6 and a length of 35 m; a car will be plugged in directly after \(t=\) 00:05:45. The measurement data that is obtained is given in Table I. Measurement equipment installed in the busbars is used as well as the internal measurements from the BSS, the CS, the PV inverter and the OLTC. These measurements make the grid observable. The output of the SE is the system state consisting of nodal voltages and power-flows as given in (10).
\[x=[V_{i},\delta_{i},P_{i},Q_{i}]\quad\forall\ i\in\{1,...,n\} \tag{10}\]
The system state (10) serves as an input to the OPF. The output of the OPF and cascaded control of the OLTC are setpoints \(P_{\rm set}\) and \(Q_{\rm set}\) for active and reactive power of the PV inverter and the BSS, for the active power of the EV, as well as the tap position \(\tau_{\rm set}\) of the OLTC. The control of the OLTC is executed in the controller of the OPF function and is not part of the OPF itself. The current grid state is observed and if only a voltage violation is detected and the OLTC has not been stepped in the last time instance of the controller, a tap change \(\tau_{\rm t}=\tau_{\rm t-1}\pm 1\) will be executed instead of the optimization. For the EV, only the active power is controllable by setting the charging current \(I^{\rm set}\) as an integer between \(6\) A and \(16\) A.
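A sketch of this cascaded control logic and of the integer charging-current setpoint is given below; the state dictionary, function names, and the tap-direction sign convention are illustrative assumptions.

```python
# Sketch of the cascaded control described above: if only a voltage violation
# is present and the OLTC was not stepped in the previous cycle, step the tap
# instead of running the OPF. All names and the sign convention are assumed.
def control_cycle(state, last_tap_changed, run_opf, step_oltc):
    voltage_violation = state["v_min_violated"] or state["v_max_violated"]
    flow_violation = state["line_limit_violated"] or state["trafo_limit_violated"]

    if voltage_violation and not flow_violation and not last_tap_changed:
        direction = -1 if state["v_max_violated"] else +1   # lower tap on overvoltage (assumed sign)
        step_oltc(direction)                                 # tau_t = tau_{t-1} +/- 1
        return True                                          # tap changed this cycle
    run_opf(state)                                           # otherwise, solve the modified OPF
    return False

def ev_current_setpoint(p_set_w, v_phase_v, i_min=6, i_max=16):
    """EV charging current must be an integer between 6 A and 16 A (three-phase)."""
    i = round(p_set_w / (3 * v_phase_v))
    return min(max(i, i_min), i_max)
```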
Fig. 4: The laboratory setup in the SGTL
Fig. 5: The time series of the PV inverter
To specify the occurrence of voltage violations, power-flow violations and the provision of redispatch in the given test, limits \(S_{\rm max}\), \(S_{\rm min}\), \(V_{\rm max}\) and \(V_{\rm min}\) have to be specified for every
line or respectively node. The voltage limits \(V_{\rm max}\) and \(V_{\rm min}\) for every node are given in (11) and are specified according to the norm DIN EN 50160 [37]. The power-flow limits \(S_{\rm max}\) and \(S_{\rm min}\) for the apparent power-flows are specified in Table II. It is observable that the apparent power-flow over the line (Busbar 007, Busbar 008) has been artificially reduced to \(40\) kVA from initially \(170.4\) kVA. Moreover, due to congestion in the higher grid levels, between the time steps \(t_{1}=\) 00:03:30 and \(t_{2}=\) 00:05:00, the feed-in of the low-voltage grid to the medium-voltage grid has been restricted to \(15\) kVA.
\[0.9~{}{\rm p.u.}\leq V_{i}\leq 1.1~{}{\rm p.u.}~{}~{}~{}\forall~{}i~{}\in\{1,...,n\} \tag{11}\]
## IV The modified Optimal Power Flow Algorithm
The optimization vector
\[x=[V_{i},\delta_{i},P_{i},Q_{i}]~{}~{}~{}\forall~{}i~{}\in\{1,...,n\}\]
consists of complex nodal voltages and powers for every node. The objective function is given in (12). It aims to minimise the difference of the active power setpoint to a target value of the active power for every flexibility as well as minimising the reactive power feed-in of all the assets. The variable \(N_{f}\) denotes the number of flexibilities.
\[\min f(P,Q)=\sum_{i=1}^{N_{f}}[c_{i}^{\rm P}\cdot(P_{i}-P_{i}^{\rm target})^{2 }+c_{i}^{\rm Q}\cdot Q_{i}^{2}] \tag{12}\]
The cost factors \(c_{i}^{\rm P}\) and \(c_{i}^{\rm Q}\) for the different assets are described in Table III. The costs are purely of a qualitative nature and do not depict the real costs of curtailing these flexibilities. The weighting of the cost factors follows these principles (a small sketch of the resulting objective is given after the list):
* Reactive power should be cheaper than active power.
* The battery storage system should be the cheapest flexibility.
* The curtailment of the PV inverter is cheaper than the curtailment of the charging process of the EV, as it does not influence the customer comfort.
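As referenced above, the following sketch assembles the objective (12) from the qualitative cost factors of Table III; the flexibility ordering and the example numbers are assumptions for illustration only.

```python
# Sketch of the objective (12) with the qualitative cost factors of Table III.
COSTS = {            # (c_P, c_Q) per flexibility, qualitative weights only
    "BSS": (100.0, 1.0),
    "PV": (1000.0, 10.0),
    "EV": (10000.0, None),   # reactive power of the EV is not controllable (c_Q -> infinity)
}

def objective(P, Q, P_target, flex_names):
    """f(P, Q) = sum_i c_P_i * (P_i - P_i^target)^2 + c_Q_i * Q_i^2, eq. (12)."""
    total = 0.0
    for i, name in enumerate(flex_names):
        c_p, c_q = COSTS[name]
        total += c_p * (P[i] - P_target[i]) ** 2
        if c_q is not None:
            total += c_q * Q[i] ** 2
    return total

# Example (values in kW/kvar, consumer counting): PV curtailed by 5 kW below its target feed-in.
val = objective(P=[0.0, -25.0, 11.0], Q=[0.0, 0.0, 0.0],
                P_target=[0.0, -30.0, 11.0], flex_names=["BSS", "PV", "EV"])
```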
In the following, the target values for active power of the flexibilities are explained. The aim is to minimise the influence on the operational behaviour of the flexibilities. Thus, if an EV is connected, \(P^{\rm target}\) is the maximum charging power for the EV, otherwise \(P_{\rm EV}^{\rm target}=0\) W holds. The information, if an EV is connected and ready for charging, is obtained from the CS. The calculation of \(P_{\rm PV}^{\rm target}\) is more complex. The currently available power for the PV inverter is a very important quantity, as without it, a proper upper bound for the respective inequality constraint cannot be formulated. This can either be obtained by the inverter providing this data or by having irradiance and temperature measurements. The given inverter in the SGTL does not communicate the maximally available active power. Thus, the irradiance and temperature of the emulated PV module of the PHIL setup presented in Fig. 4 are used to calculate the maximum available power of the inverter according to equation (13). The variable \(T\) denotes the temperature in \({}^{\circ}C\), the variable \(E\) denotes the irradiation in \(\frac{\rm W}{\rm m^{2}}\) and the variable \(\alpha\) the temperature coefficient of the respective PV module in \(\frac{\rm W}{\rm K}\).
\[P_{\rm max}^{\rm PV}(E,T,\alpha)=P_{\rm ref}\cdot\frac{E}{E_{\rm ref}}\cdot(1 +\alpha\cdot(T-T_{\rm ref}))\cdot\eta_{\rm inverter} \tag{13}\]
The temperature coefficient \(\alpha\) has been calculated in a test case with \(T=45~{}^{\circ}C\) and \(E=1000~{}\frac{\rm W}{\rm m^{2}}\), yielding \(\alpha=0.00273~{}\frac{\rm W}{\rm K}\). The efficiency \(\eta\) of the inverter has been calculated using the European efficiency as given in (14) [38].
\[\eta_{\rm inverter}=0.03\cdot\eta_{5\%}+0.06\cdot\eta_{10\%}+0.13\cdot\eta_{20\%}+0.10\cdot\eta_{30\%}+0.48\cdot\eta_{50\%}+0.20\cdot\eta_{100\%} \tag{14}\]
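Equations (13) and (14) translate directly into code; in the sketch below, the STC reference values \(E_{\rm ref}=1000~\frac{\rm W}{\rm m^{2}}\) and \(T_{\rm ref}=25~^{\circ}C\) and the nominal reference power are assumptions, as they are not stated explicitly above, while the efficiency values are taken from Table IV.

```python
# Direct transcription of eqs. (13) and (14). E_ref, T_ref and P_ref are
# assumed reference values, not stated in the text above.
def pv_max_power(E, T, alpha, P_ref, eta_inverter, E_ref=1000.0, T_ref=25.0):
    """Maximum available AC power of the PV inverter, eq. (13)."""
    return P_ref * (E / E_ref) * (1.0 + alpha * (T - T_ref)) * eta_inverter

def european_efficiency(eta):
    """Weighted inverter efficiency, eq. (14); eta maps load level (%) to efficiency."""
    weights = {5: 0.03, 10: 0.06, 20: 0.13, 30: 0.10, 50: 0.48, 100: 0.20}
    return sum(w * eta[level] for level, w in weights.items())

eta_inv = european_efficiency({5: 0.7715, 10: 0.8357, 20: 0.8679,
                               30: 0.8893, 50: 0.9547, 100: 0.9643})  # values from Table IV
p_avail = pv_max_power(E=800.0, T=35.0, alpha=0.00273, P_ref=60e3, eta_inverter=eta_inv)
```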
For the given inverter, \(\eta_{\rm inverter}=0.93\) resulted according to the measurements given in Table IV. The target power for the BSS is calculated using a case distinction as given in Table III. If the SoC is below \(50~{}\%\), then \(P^{\rm target}\) is set as \(|\frac{E_{10}-E_{\rm total}}{\Delta T}|\), if it is above \(50~{}\%\), then \(P^{\rm target}\) is set as \(-|\frac{E_{10}-E_{\rm total}}{\Delta T}|\). The quantity \(\frac{E_{10}-E_{\rm total}}{\Delta T}\) denotes the missing power for the BSS to reach a SoC of \(50\%\cdot E_{\rm total}\) in the timespan \(\Delta T\). As a result, the aim of the optimization is to keep the SoC of the BSS at \(50\%\). The equality constraints \(g\) in (2) are specified in the following equations (15) to (16) and describe the power-flow balance at every node \(i\) in the grid. The properties \(y_{ij}\) and \(\theta_{ij}\) are the respective elements of the complex admittance matrix
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(t\) & (Busbar 007, MV Grid) & (Busbar 008, Busbar 007) \\ \hline
[00:00, 00:03:30] & \(70\) & \(40\) \\ \hline
[00:03:45, 00:05] & \(15\) & \(40\) \\ \hline
[00:05:15, 00:11] & \(70\) & \(40\) \\ \hline \end{tabular}
\end{table} TABLE II: Absolute value of apparent power-flow limits for the lines and OLTC in kVA
Fig. 6: The time series of the resistive load
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Flexibility** & \(c_{i}^{\rm P}\) & \(c_{i}^{\rm Q}\) & \(P^{\rm target}\) \\ \hline BSS & 100 & 1 & \(\left\{\begin{array}{ll}|\frac{E_{10}-E_{\rm total}}{\Delta T}|,&E<50~{}\% \cdot E_{\rm total}\\ -|\frac{E_{10}-E_{\rm total}}{\Delta T}|,&E\geq 50~{}\%\cdot E_{\rm total}\\ \end{array}\right.\) \\ \hline PV & 1000 & 10 & \(-\frac{P_{\rm max}^{\rm(3)}}{\Delta T}\) \\ \hline EV & 10000 & \(\infty\) & \(P_{\rm max}=\frac{I_{\rm max}}{\max}\cdot V_{i}\cdot\sqrt{3}\) \\ \hline \end{tabular}
\end{table} TABLE III: Cost factors for the different flexibilities as well as target values for active power
\(Y\), while \(\delta_{i}\) and \(\delta_{j}\) correspond to the voltage phases of the nodes \(i\) and \(j\) [32, 33]. The nodal powers \(P_{i}\) and \(Q_{i}\) are given in the consumer counting system.
\[P_{i}+V_{i}\cdot\sum_{j=1}^{n}V_{j}\cdot y_{ij}\cdot\text{cos}( \delta_{i}-\delta_{j}-\theta_{ij})=0 \tag{15}\] \[Q_{i}+V_{i}\cdot\sum_{j=1}^{n}V_{j}\cdot y_{ij}\cdot\text{sin}( \delta_{i}-\delta_{j}-\theta_{ij})=0 \tag{16}\]
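For reference, the residuals of the equality constraints (15)-(16) can be evaluated in polar form from the complex bus admittance matrix as sketched below; this is a direct transcription of the equations, not the solver interface used in the SG App.

```python
# Residuals of the power-flow equality constraints (15)-(16) in polar form.
# Ybus is the complex bus admittance matrix; P, Q are nodal powers in the
# consumer counting system, as in the text.
import numpy as np

def power_flow_mismatch(V, delta, P, Q, Ybus):
    y = np.abs(Ybus)            # y_ij
    theta = np.angle(Ybus)      # theta_ij
    n = len(V)
    dP = np.zeros(n)
    dQ = np.zeros(n)
    for i in range(n):
        ang = delta[i] - delta - theta[i, :]                        # delta_i - delta_j - theta_ij
        dP[i] = P[i] + V[i] * np.sum(V * y[i, :] * np.cos(ang))     # eq. (15)
        dQ[i] = Q[i] + V[i] * np.sum(V * y[i, :] * np.sin(ang))     # eq. (16)
    return dP, dQ   # both should be ~0 at a feasible operating point
```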
The inequality constraints \(h\) in (3) correspond to the power-flow constraints over the lines in the grid. They are specified in equations (17) and (18). The limit of the apparent power-flow \(S_{\text{ij, max}}(t)\) for a line \((i,j)\) is given as an external input and can continuously be updated by the DSO. This enables the DSO to dynamically reduce the load or feed-in from the low-voltage grid if its flexibility potential is high enough.
\[S_{ij}(\theta_{i},V_{i})-S_{ij,\ \max}(t)\leq 0 \tag{17}\] \[S_{ji}(\theta_{j},V_{j})-S_{ji,\ \max}(t)\leq 0 \tag{18}\]
The second set of inequalities as given in (4) deals with voltage and power limitations and is specified in equations (19) to (22). The limits for active and reactive power for the BSS, CS and the PV inverter are given in Table V where a maximal \(\sin(\phi)\) of 0.44 is assumed.
\[V_{i}^{\min} \leq V_{i}\leq V_{i}^{\max} \forall\ i\in\{1,...,n\} \tag{19}\] \[P_{i}^{\min} \leq P_{i}\leq P_{i}^{\max} \forall\ i\in\{1,...,N_{f}\}\] (20) \[-360^{\circ} \leq\delta_{i}\leq 360^{\circ} \forall\ i\in\{1,...,n\}\] (21) \[Q_{i}^{\min} \leq Q_{i}\leq Q_{i}^{\max} \forall\ i\in\{1,...,N_{f}\} \tag{22}\]
## V Scenarios and Results
In this section, the uncontrolled system state as presented in subsection III-A is given and compared to the controlled system state. Moreover, the target metrics (5), (6), (8) and (9) are presented and evaluated regarding the question of the capability of the OPF to cure voltage violations, power-flow violations and provide curative redispatch. The uncontrolled and the controlled grid state are presented in Fig. 7 and Fig. 8. It is observable that without any control actions, the apparent power-flow over the line (Busbar 008, Busbar 007) violates \(S_{\min}=-40\) kVA between 00:08:30 and 00:10:15. Moreover, the temporary power-flow limit of \(15\) kVA over the transformer given between 00:03:30 and 00:05:00 is exceeded at all times in this interval. It is also observable that in the uncontrolled grid state, a voltage violation occurs between 00:03:30 and 00:07:45. The target metrics \(N_{v}\), \(N_{s}\), \(A_{v}\) and \(A_{s}\) for the controlled and uncontrolled grid state are presented in Table VI. One can see that the OPF manages to cure the power-flow violations on the line (Busbar 008, Busbar 007) two times completely, thus reducing \(N_{s}^{007,008}\) from \(7\) in the uncontrolled grid state to \(5\) in the controlled grid state.
Moreover, the amount of power-flow violations reduces by \(72.23\) %. The optimization significantly reduces the power-flow congestions over the given line, but is not able to completely cure them. A close-up in Fig. 9 in the time interval between 00:08 and 00:10 shows that an alternating pattern results: The violation is cured and \(S_{007,008}\approx-40\) kVA occurs. In the next time step the violation is increasing again. The load of the resistor is constant between 00:08 and 00:10 with 19 kW per phase. However, the feed-in of the PV inverter drastically reduces from approximately \(30\) kVA to less than \(10\) kVA in the given time interval. Thus, the loading of the grid increases and the OPF is able to reduce the power-flow violations, but not completely cure them.
Moreover, the violation of the limit \(S_{\max}=15\) kVA over the OLTC in the uncontrolled grid state cannot be cured in any of the given time steps, i.e. \(N_{s}^{\rm OLTC}\) reduces by \(0\) %. The power-flow violations of the given limit \(S_{\max}=15\) kVA can only be reduced by \(10.45\) % compared to the uncontrolled grid state. Closely looking at the power-flows over (Busbar 007, MV grid) in Fig. 9 in the given interval [00:03:30, 00:05:00] shows that at 00:03:45 and at 00:04:30 the violation can be cured; however, due to the increasing feed-in of the PV inverter at the time, a violation occurs again in the next time steps 00:04 and 00:05.
The third violation that can be observed in the uncontrolled grid state is the voltage violation at the PV inverter. There, \(17\) voltage violations occur in the uncontrolled grid state. The number of voltage violations is significantly reduced by \(64.71\) % to \(6\) in the controlled state. Moreover, the amount of voltage violations \(A_{v}^{\rm PV}\) reduces by \(65.28\) %. These results are obtained by a step of the transformer at 00:04:15 from position \(5\) to \(4\), thus reducing the voltage in the grid by \(0.025\) p.u.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & BSS & PV inverter & Charging Station \\ \hline \(P_{\min}\) & \(-S_{\max}\) & 0 kW & \(3\cdot V_{\rm CS}\cdot I_{\min}\) \\ \hline \(P_{\max}\) & \(S_{\max}\) & \(-S_{\max}\) & \(3\cdot V_{\rm CS}\cdot I_{\max}\) \\ \hline \(Q_{\min}\) & \(-S_{\max}\cdot\sin(\phi)\) & \(-S_{\max}\cdot\sin(\phi)\) & 0 kVar \\ \hline \(Q_{\max}\) & \(S_{\max}\cdot\sin(\phi)\) & \(S_{\max}\cdot\sin(\phi)\) & 0 kVar \\ \hline \end{tabular}
\end{table} TABLE V: Active and reactive power limits for the flexibilities
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \% of nominal power & \(P_{\rm DC}\) in kW & \(P_{\rm AC}\) in kW & \(\eta_{i}\) \\ \hline
100 & 62.22 & 60\({}^{1}\) & 0.9643 \\ \hline
50 & 31.11 & 29.7 & 0.9547 \\ \hline
30 & 18.66 & 16.6 & 0.8893 \\ \hline
20 & 12.44 & 10.8 & 0.8679 \\ \hline
10 & 6.22 & 5.2 & 0.8357 \\ \hline
5 & 3.11 & 2.4 & 0.7715 \\ \hline \end{tabular}
* This measurement could not be obtained due to stability issues with the inverter and was assumed
\end{table} TABLE IV: Measurements for the calculation of the efficiency of the given PV inverter
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Uncontrolled state & Controlled state & Reduction in \% \\ \hline \(N_{v}^{\rm PV}\) & 17 & 6 & 64.71 \\ \hline \(A_{v}^{\rm PV}\) & 0.3243 p.u. & 0.1126 p.u. & 65.28 \\ \hline \(N_{s}^{\rm OLTC}\) & 6 & 6 & 0 \\ \hline \(A_{s}^{\rm OLTC}\) & 40.19 kVA & 35.99 kVA & 10.45 \\ \hline \(N_{s}^{\rm 007,008}\) & 7 & 5 & 28.57 \\ \hline \(A_{s}^{\rm 007,008}\) & 94.30 kVA & 26.19 kVA & 72.23 \\ \hline \end{tabular}
\end{table} TABLE VI: Target metrics for lines and nodes
The cascaded control of the OLTC is one of the main reasons for the effective voltage control of the OPF, as a step change of the OLTC significantly impacts the voltage in the grid. However, as it prevents controlling the different DERs of the grid, it reduces the quality of congestion management and redispatch provision.
## VI Conclusion and Outlook
Fig. 7: Controlled and uncontrolled apparent power-flows in kVA
Fig. 8: Controlled and uncontrolled voltage in the grid in p.u.
While various papers in the literature have dealt with the provision of congestion management in the distribution grids, this paper evaluated the same along with the curative redispatch capability of an OPF algorithm in a low-voltage test grid. The test setup enables the testing of grid monitoring and control algorithms in a real low-voltage grid
in a controlled environment. The OPF is able to reduce power-flow and voltage congestions by \(72.23\) % and \(64.71\) %, respectively, in the given test case. Thus, complete curing of congestions is not possible, and the capability of the OPF to cure congestions can only be partially verified. Research question B) dealt with the question of curative redispatch without taking the future grid state into consideration. Here, it is observable that the OPF is not able to guarantee the given apparent power flows over the transformer; the violation of the predefined power-flow trajectory can only be reduced by \(10.45\) % compared to the uncontrolled grid state. Finally, the successful validation of a hardware-independent control software using the SE and OPF algorithms in a low-voltage test grid was shown. In future work, preventive control algorithms such as Model Predictive Control will be used and their capability to provide congestion management and redispatch will be evaluated in the presented test grid, whilst considering varying quality of forecast data for the future grid state.
|
2309.07967 | iHAS: Instance-wise Hierarchical Architecture Search for Deep Learning
Recommendation Models | Current recommender systems employ large-sized embedding tables with uniform
dimensions for all features, leading to overfitting, high computational cost,
and suboptimal generalizing performance. Many techniques aim to solve this
issue by feature selection or embedding dimension search. However, these
techniques typically select a fixed subset of features or embedding dimensions
for all instances and feed all instances into one recommender model without
considering heterogeneity between items or users. This paper proposes a novel
instance-wise Hierarchical Architecture Search framework, iHAS, which automates
neural architecture search at the instance level. Specifically, iHAS
incorporates three stages: searching, clustering, and retraining. The searching
stage identifies optimal instance-wise embedding dimensions across different
field features via carefully designed Bernoulli gates with stochastic selection
and regularizers. After obtaining these dimensions, the clustering stage
divides samples into distinct groups via a deterministic selection approach of
Bernoulli gates. The retraining stage then constructs different recommender
models, each one designed with optimal dimensions for the corresponding group.
We conduct extensive experiments to evaluate the proposed iHAS on two public
benchmark datasets from a real-world recommender system. The experimental
results demonstrate the effectiveness of iHAS and its outstanding
transferability to widely-used deep recommendation models. | Yakun Yu, Shi-ang Qi, Jiuding Yang, Liyao Jiang, Di Niu | 2023-09-14T18:03:30Z | http://arxiv.org/abs/2309.07967v1 | # iHAS: Instance-wise Hierarchical Architecture Search
###### Abstract.
Current recommender systems employ large-sized embedding tables with uniform dimensions for all features, leading to overfitting, high computational cost, and suboptimal generalizing performance. Many techniques aim to solve this issue by feature selection or embedding dimension search. However, these techniques typically select a fixed subset of features or embedding dimensions for all instances and feed all instances into one recommender model without considering heterogeneity between items or users. This paper proposes a novel instance-wise Hierarchical Architecture Search framework, iHAS, which automates neural architecture search at the instance level. Specifically, iHAS incorporates three stages: searching, clustering, and retraining. The searching stage identifies optimal instance-wise embedding dimensions across different field features via carefully designed Bernoulli gates with stochastic selection and regularizers. After obtaining these dimensions, the clustering stage divides samples into distinct groups via a deterministic selection approach of Bernoulli gates. The retraining stage then constructs different recommender models, each one designed with optimal dimensions for the corresponding group. We conduct extensive experiments to evaluate the proposed iHAS on two public benchmark datasets from a real-world recommender system. The experimental results demonstrate the effectiveness of iHAS and its outstanding transferability to widely-used deep recommendation models.
recommender system, instance-wise, embedding dimension search
methods can be applied for dimension selection, we have empirically observed that the probabilities learned are often indistinguishable. Therefore, selecting the top \(K\) features/dimensions based on these probabilities may inadvertently result in either the exclusion of critical features/dimensions or the inclusion of irrelevant ones.
Furthermore, prior approaches uniformly apply embedding dimension selection across all instances in the datasets, therefore disregarding the inherent variations between individual samples. This one-size-for-all approach can be inadequate in many scenarios, especially when dealing with highly heterogeneous populations where relevant features can significantly diverge across users or across items. For example, in a movie recommendation system, the feature "age" usually plays a crucial role in recommending Disney movies, thereby possibly necessitating a larger embedding dimension. Conversely, "age" is less relevant for comedy films, resulting in a smaller dimension size. Thus, it is evident that treating all instances identically may overlook these context-specific nuances. Intuitively speaking, when dimension selection is performed at the instance level, we can create neural architectures that are better suited to individual samples. Such an approach not only results in superior performance but also enables faster inference times by focusing on the most relevant dimensions of each sample.
In this paper, we propose an instance-wise Hierarchical Architecture Search framework, iHAS, which attempts to perform automatic architecture search on the instance level, using hierarchical training procedures for DLRMs. Specifically, iHAS includes three learning stages: searching, clustering, and retraining. The searching stage aims to find the optimal instance-wise embedding dimensions across different fields via a carefully designed "Bernoulli gate" with stochastic selection mode and a regularizer. After selecting instance-wise embedding dimensions, we separate samples into different groups based on a novel deterministic selection approach in the clustering stage. The retraining stage trains different recommender models, with optimal dimensions tailored to different groups. During inference time, each test sample will first be assigned to a suitable group, where predictions are made by the corresponding recommender model. We summarize our major contributions as:
* We propose a hierarchical training framework that uses instance-wise "Bernoulli gates" to facilitate effective dimension search for each sample.
* We apply a sparse and bi-polarization regularizer in the objective function to help the model learn distinguishable Bernoulli RVs, and use a threshold selector for downstream deterministic selection.
* To balance the trade-off between the one-size-for-all and full-customization (which is not applicable with finite data size) strategies, we propose to divide samples into clusters and develop tailored recommender models for each cluster.
We empirically evaluate the performance of our framework on two large-scale benchmark datasets. Our experimental results indicate a notable superiority of our approach over various state-of-the-art baseline models on both datasets. Furthermore, the transferability analysis demonstrates our framework can be effectively transferred to diverse deep recommender models, thereby enhancing their performance. Additionally, our framework offers an efficiency advantage as it requires less inference time than competing baseline models.
## 2. Related Work
This section introduces the main related works to our study, focusing on feature-based recommender models and AutoML approaches for recommendation systems.
### Feature-based Recommender Models
Feature-based recommender models take sparse, high-dimensional features from users and items as input and transform them into low-dimensional representations to capture user preferences for improved recommendations. For example, Cheng et al. (2017) propose Wide&Deep (W&D), a model composed of a linear module and a Multi-Layer Perceptron (MLP) layer to combine the benefits of memorization and generalization for recommender systems. Guo et al. (2017) propose DeepFM that further integrates the power of factorization machines based on W&D to learn high-order feature interactions for recommendations. Recently, advanced neural networks, such as attention-based models (Xiong et al., 2018), have been developed. However, these techniques apply a fixed embedding dimension for all features, which would downgrade the model performance and consume substantial computational resources.
### AutoML for Recommendations
Automated Machine Learning (AutoML) has recently become a research hotspot due to its potential to automate the design process for recommender systems, minimizing human involvement. The research directions include feature selection (Kipf and Welling, 2015; Wang et al., 2016), embedding dimension search (Wang et al., 2016; Wang et al., 2016), model architecture search (Chen et al., 2017), and other component search (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016). Feature selection involves selecting a subset of field features in recommendation systems. For example, AutoField (Wang et al., 2016) uses a simple controller based on differentiable architecture search (Kipf and Welling, 2015) to select the top \(K\) field features. AdaFS (Kipf and Welling, 2015) enhances AutoField by modifying the controller to assign feature weights to fields for different samples. The objective of embedding dimension search is to find mixed embedding sizes for each field. For example, AutoEmb (Wang et al., 2016) finds the optimal dimension for each feature using differentiable search (Kipf and Welling, 2015). AutoDim (Wang et al., 2016) selects the best dimension for each field from a group of candidate dimensions in the same way as AutoEmb. Model architecture search explores various network architectures and selects the optimal one (Chen et al., 2017).
Our method is in alignment with embedding dimension search. All instances in the above methods share a uniform dimension size for each field. In contrast, our approach adaptively selects dimensions for each instance via the proposed Bernoulli gates, thereby considering the difference between individuals. Moreover, we introduce a polarization regularizer to overcome the shortcomings of the commonly-used top \(K\) selection strategy. Furthermore, rather than processing all samples through a single model, we propose to divide samples into clusters and train different recommender models with optimal dimensions tailored to different clusters. These unique and innovative designs in our proposed method have been proven effective in terms of both performance enhancement and inference cost saving.
## 3. Method
This section introduces the technical specifications of the proposed iHAS framework, as visualized in Figure 1. We first provide a concise overview of the entire hierarchical training framework. Subsequently, the primary modules within our framework are described, as well as how to optimize them within each hierarchical stage.
### Overview
The methodology for the iHAS framework comprises three stages: searching, clustering, and retraining. It involves three principal modules: deep recommendation models (consisting of an embedding layer and an MLP component) for predicting user preferences, a Bernoulli gates layer responsible for dimension selection, and a K-means cluster algorithm that partitions the heterogeneous data.
In the searching stage, the key objective is to identify the optimal, instance-wise embedding dimensions across different fields, thus facilitating an accurate recommendation prediction. As shown in Figure 1, the categorical field features are directed to an embedding layer to generate embedding representations. These representations are then processed through the Bernoulli gates to produce embedding masks using a stochastic selection mode (see Section 3.3.1). Each embedding mask comprises a binary vector that serves as a gate on whether the corresponding dimension should be incorporated into the downstream architecture. Then the framework conducts an element-wise multiplication between a sample's embedding representations and embedding masks. This resultant masked embedding representation is then directed to the base MLP component to predict user preference.
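As a rough illustration of this forward pass, the following PyTorch-style sketch wires the three modules together; the class name, layer sizes, and exact wiring are assumptions for exposition only and do not reproduce the authors' released implementation.

```python
import torch
import torch.nn as nn

class SearchStageModel(nn.Module):
    """Illustrative searching-stage forward pass: embed -> Bernoulli gates -> mask -> MLP."""
    def __init__(self, field_dims, d=16, hidden=(16, 8)):
        super().__init__()
        self.embeddings = nn.ModuleList([nn.Embedding(v, d) for v in field_dims])
        n_star = len(field_dims) * d                      # total embedding length N* = N x d
        self.gate_fc = nn.Linear(n_star, n_star)          # FC layer of the Bernoulli gates
        self.mlp = nn.Sequential(
            nn.Linear(n_star, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], 1),
        )

    def forward(self, x, tau=0.1):
        # x: (batch, N) integer-encoded field features
        e = torch.cat([emb(x[:, n]) for n, emb in enumerate(self.embeddings)], dim=1)
        p = torch.sigmoid(self.gate_fc(e))                # Bernoulli probabilities p_{i,j}
        logits = torch.stack([p.log(), (1 - p).log()], dim=-1)
        # straight-through Gumbel-Softmax: hard binary mask forward, soft gradient backward
        m = nn.functional.gumbel_softmax(logits, tau=tau, hard=True)[..., 0]
        return torch.sigmoid(self.mlp(e * m)).squeeze(-1) # predicted click probability
```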
In the clustering stage, the main objective is to utilize K-Means algorithm (Kang et al., 2017) to cluster samples into groups. This stage mostly mirrors the procedure used in the searching stage: using the embedding layer and Bernoulli gates (but using deterministic selection mode, see Section 3.3.2) to calculate the masked embedding representations. These masked embedding representations are then used to train a mini-batch K-Means cluster (Kang et al., 2017). As shown in Figure 1, the K-means separates the red and yellow samples from the blue and green samples.
The retraining stage aims to develop cluster-customized DLRMs, considering the variation in dimension patterns across different clusters. In each cluster, we calculate the embedding masks (using deterministic selection mode) for all samples. The resultant masks are then averaged to obtain one vector, which is used to determine the final embedding dimensions of each DLRM.
### Deep Learning Recommender Models
In this subsection, we provide a brief introduction to the basic architecture of the DLRM. It typically comprises two primary components: an embedding layer and an MLP component.
#### 3.2.1. Embedding Layer
In classic DLRM, the embedding layer is commonly used to convert the categorical inputs into a dense vector of real numbers.
Let us denote the input of \(N\) categorical field features for sample \(i\) as \(\mathbf{X}_{i}=[\mathbf{x}_{i,1},\cdots,\mathbf{x}_{i,n},\cdots,\mathbf{x}_{i,N}]\), where \(\mathbf{x}_{i,n}\in\mathbb{Z}^{|\mathbf{n}|}\) represents the one-hot vector comprising sparse, high-dimensional binary values. The term \(|\mathbf{n}|\) denotes the number of unique values for \(n\)-th categorical field. For instance, a categorical field such as "gender" with unique values - male, female, and unknown - can be expressed through three-bit vectors \([1,0,0]\), \([0,1,0]\), and \([0,0,1]\), respectively. To process a numerical field feature, we will discretize it through custom-width binning, followed by applying a one-hot operation. Then, the operation of the embedding layer can be represented as:
\[\mathbf{e}_{i,n}=\mathbf{v}_{n}\,\mathbf{x}_{i,n},\]
where \(\mathbf{v}_{n}\in\mathbb{R}^{d\times|\mathbf{n}|}\) is the embedding table of the \(n\)-th field, \(d\) is the predefined embedding dimension (typically consistent across all fields), and \(\mathbf{e}_{i,n}\) is the low-dimensional embedding representation. Therefore, the final embedding of the input data \(\mathbf{X}_{i}\) through \(N\) embedding tables is \(\mathbf{E}_{i}=[\mathbf{e}_{i,1},\cdots,\mathbf{e}_{i,n},\cdots,\mathbf{e}_{i,N}]\).
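Concretely, multiplying an embedding table by a one-hot vector amounts to a column lookup; a small sketch in which the shapes and values are illustrative assumptions:

```python
import numpy as np

d, n_values = 16, 3                  # embedding size and |n| unique values of the field
V_n = np.random.randn(d, n_values)   # embedding table v_n of the n-th field

x_in = np.array([0, 1, 0])           # one-hot encoding, e.g. "gender = female"
e_dense = V_n @ x_in                 # e_{i,n} = v_n x_{i,n}
e_lookup = V_n[:, 1]                 # equivalent lookup of column 1

assert np.allclose(e_dense, e_lookup)
```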
Notably, the embedding dimension search techniques we discussed earlier in Section 2.2 (also the focus of this paper) aim at searching the optimal dimensions for embedding tables \(\mathbf{V}=[\mathbf{v}_{1},\cdots,\mathbf{v}_{n},\cdots,\mathbf{v}_{N}]\). Specifically, our goal is to discover the optimal individual embedding dimension for each field, given the inherent diversity in the heterogeneous dataset. This could potentially enhance prediction performance.
#### 3.2.2. MLP Component
The MLP component plays a crucial role in DLRMs, tasked with encoding embedding representations and predicting the recommendation. Empirically, it comprises multiple fully-connected (FC) layers (characterized by parameter \(\mathbf{\theta}\)) and is also equipped with non-linear activation functions such as ReLU (Bengio et al., 2017) or Sigmoid, thereby facilitating the nonlinear encoding process of these representations.
In the iHAS system, we will train three different DLRMs, as illustrated in Figure 1. Each DLRM consists of an embedding layer and an MLP component. These three models are named the base recommender model, recommender model 1, and recommender model 2, which are characterized by the parameter groups \(\{\mathbf{V},\mathbf{\theta}\}_{b}\), \(\{\mathbf{V},\mathbf{\theta}\}_{1}\), and \(\{\mathbf{V},\mathbf{\theta}\}_{2}\), respectively.
### Bernoulli Gates
Bernoulli gates operate as switches, facilitating the transmission of a sample's information from the embedding tables to the downstream MLP component. Analogous to the \(\ell_{0}\) norm, we want these "switches" to be capable of fully opening or closing, rather than compromising information integrity by merely shrinking the embedding representation. Inspired by the approach presented in (Kang et al., 2017; Wang et al., 2018), we use Bernoulli gates to predict each sample's relevant dimensions given its embedding representation. The detailed process of the Bernoulli gates is graphically depicted in Figure 2.
The Bernoulli gates operate in two distinct modes: stochastic selection and deterministic selection. Under the stochastic selection mode, the gates operate as independent Bernoulli distributions, to independently "open" or "close" dimensions given the probabilities. The principle behind stochastic selection rests on the assumption that, given a sufficiently large number of training iterations, the gates will stochastically and comprehensively traverse all potential combinations of dimensions. This prompts the Bernoulli parameters to increase for beneficial dimensions and penalize unhelpful ones.
Once the Bernoulli distributions (gates) have been fully explored, we capitalize on the learned distribution by deterministically opening the most advantageous dimensions in the deterministic selection mode. However, learned Bernoulli probabilities often exhibit heavy-tailedness, making it challenging to distinguish between important and unimportant dimensions. To mitigate this, we suggest employing a polarization regularizer and an automatic threshold searcher (both discussed in Section 3.3.3) inside Bernoulli gates.
#### 3.3.1. Stochastic Selection
In our previous discussion, we aim for Bernoulli gates to function as independent Bernoulli distributions in the stochastic selection mode during the searching stage. The first objective is to encode the embedding representations to the desired independent Bernoulli probabilities. To this end, we employ an FC layer (with parameter \(\mathbf{w}\)) and a Sigmoid activation layer (\(\sigma\)) to project these embedding representations of the \(i\)-th sample onto Bernoulli probabilities (upper left of Figure 2), denoted by \(\{p_{i,j}\}_{j=1}^{N^{*}}=\sigma(\mathbf{w}\,E_{i})\), where \(N^{*}=N\times d\) is the total length of the embedding representations. This enables us to initiate a combinatorial search process over the space of Bernoulli probabilities and FC parameters.
However, optimizing a loss function, which includes discrete RVs (Bernoulli distributions), incurs high variance (Kal
differentiable approximation of these operations. The softmax function uses a temperature parameter \(\tau\in\mathbb{R}^{+}\) to regulate the approximation degree (or the entropy of the distribution), as formalized:
\[\mathbf{z}_{i,j}=\frac{\left[\exp(\left(\log p_{i,j}+G_{i,j}\right)/\tau\right),\; \exp(\left(\log(1-p_{i,j})+G^{\prime}_{i,j}\right)/\tau)\right]}{\exp(\left( \log p_{i,j}+G_{i,j}\right)/\tau)\;+\;\exp(\left(\log(1-p_{i,j})+G^{\prime}_{i, j}\right)/\tau)}. \tag{2}\]
As \(\tau\) approaches 0, \(\mathbf{z}_{i,j}\) approaches a true binary vector, making the Gumbel-Softmax distribution identical to the desired Bernoulli distribution. The final embedding masks, \(\mathbf{m}_{i}\), are then created by concatenating the first bit of \(\{\mathbf{z}_{i,j}\}_{j=1}^{N^{*}}\).
However, our goal remains to produce true binary masks, which would effectively eliminate information from unimportant dimensions, as opposed to significantly shrinking them. The straight-through (ST) Gumbel-Softmax (Gumbel and Softmax, 2018; Gumbel and Softmax, 2018) serves well in this context. In the ST variant, the operation from Equation 1 is implemented in the forward pass while the continuous approximation from Equation 2 is used in the backward gradient descent. This approach enables sparse selection even when the temperature \(\tau\) is high, while still allowing the gradient to propagate and update the parameters.
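The ST variant can be written out explicitly; the sketch below is one standard way to implement it and is not taken from the authors' code (the sigmoid form is a numerically stable rewriting of the first component of Equation (2)).

```python
import torch

def st_gumbel_bernoulli_mask(p, tau=0.1):
    """Binary mask from Bernoulli probabilities p: hard (0/1) in the forward pass,
    differentiable through the Gumbel-Softmax relaxation of Eq. (2) in the backward pass."""
    eps = 1e-10
    g = -torch.log(-torch.log(torch.rand_like(p) + eps) + eps)        # Gumbel noise G
    g_prime = -torch.log(-torch.log(torch.rand_like(p) + eps) + eps)  # Gumbel noise G'
    # first component of Eq. (2), rewritten as a numerically stable sigmoid
    z_soft = torch.sigmoid(((torch.log(p + eps) + g) - (torch.log(1 - p + eps) + g_prime)) / tau)
    z_hard = (z_soft > 0.5).float()
    return z_hard + z_soft - z_soft.detach()   # straight-through estimator
```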
#### 3.3.2. Deterministic Selection
After training the Bernoulli probabilities (\(p_{i,j}\)) during the searching phase, we utilize these probabilities to determine which dimensions will contribute to the accuracy of the recommendation predictions. However, \(p_{i,j}\) are characterized by high variance and heavy-tailedness, as shown by the histogram in Figure 3 (left). These present two complications: (1) distinguishing important dimensions from unimportant ones becomes challenging; and (2) even the unimportant dimensions still possess a small probability of being selected. Moreover, masks created using Bernoulli gates introduce an element of randomness (Gumbel noise), which hinders their direct application during inference (where given the same data each time, consistent results should be generated).
To overcome these limitations, we propose a deterministic selection mode that directly selects the beneficial dimensions using the knowledge derived from the well-trained Bernoulli probabilities. This process is outlined in Figure 2 (bottom left). Firstly, we use the same FC layer and sigmoid layer to estimate the Bernoulli probabilities, \(p_{i,j}\), analogous to the first step in the stochastic selection mode.
Figure 3. Histogram of the Bernoulli probabilities for a sample from Avazu dataset, trained (a) with and (b) without polarization regularizer. Note that y-axes use log scales, and within the same range, to facilitate better visual comparison.
Figure 2. The detailed process of Bernoulli gates to generate embedding masks from the embedding representation. “\(\Box\)” represents the element-wise summation operation.
Then, for each sample \(i\), we search for a threshold among the probabilities \(\{p_{i,j}\}_{j=1}^{N^{*}}\) (see details in Section 3.3.3). The gates are automatically opened for probabilities exceeding this threshold and closed for those falling below it. The resulting embedding masks are utilized during the clustering and retraining phases (see Figure 1).
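As an illustration, once a per-sample threshold has been found, the deterministic mask reduces to a comparison; the threshold search itself is described in Section 3.3.3 and is passed in here as an assumed argument.

```python
import torch

def deterministic_mask(p, threshold):
    """Open the gates whose Bernoulli probability exceeds the per-sample threshold.
    p: (batch, N*) probabilities; threshold: (batch, 1) tensor or scalar."""
    return (p > threshold).float()
```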
#### 3.3.3. Polarization and Automatic Threshold Searcher
Let's consider an empirical optimization procedure with an \(\ell_{0}\) regularization on the embedding masks during the searching stage:
\[\mathcal{R}(\{\mathbf{V},\mathbf{\theta}\}_{\mathbf{b}},\mathbf{m})=\mathbb{E}_{i}\,\mathbb{E}_{\mathbf{m}_{i}}\left[\mathcal{L}(\,\mathbf{f}_{\mathbf{\theta}_{\mathbf{b}}}(\mathbf{V}_{\mathbf{b}}\cdot\mathbf{X}_{i}\odot\mathbf{m}_{i}\,),\,y_{i}\,)\right], \tag{3}\]
\[\text{with}\quad m_{i,j}\sim\mathrm{Bernoulli}(p_{i,j}),\qquad j=1,\ldots,N^{*}. \tag{4}\]
tentative reliable embedding representation, we initialize the parameters for the Bernoulli gates and start the stochastic selection. Later we adopt the bi-level optimization strategy (Han et al., 2017; Zhang et al., 2018) to disjointly update the parameters \(\mathbf{w}\) in Bernoulli gates and the parameters \(\{\mathbf{V},\mathbf{\theta}\}_{\text{b}}\) in the base recommender model.
#### 3.5.2. Clustering Stage
In the clustering stage, we obtain the masked embedding representations using the embedding tables and Bernoulli gates trained during the searching stage. Recall that the Bernoulli gates use the deterministic selection mode to generate embedding masks in this clustering stage. For each sample, we compute its Euclidean distance to each centroid using the masked embedding representation, assign it to the nearest centroid (group), and then update the centroids.
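A possible realization of this step uses scikit-learn's MiniBatchKMeans; the number of clusters (2, matching the two groups in Figure 1) and the toy data below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def cluster_masked_embeddings(masked_embeddings, n_clusters=2, batch_size=2048, seed=0):
    """Fit a mini-batch K-Means on the masked embedding representations and
    return the fitted model together with the cluster assignment of each sample."""
    km = MiniBatchKMeans(n_clusters=n_clusters, batch_size=batch_size, random_state=seed)
    labels = km.fit_predict(masked_embeddings)
    return km, labels

# toy usage: 10 000 samples with a masked embedding of length N* = 22 * 16
X = np.random.rand(10_000, 22 * 16)
km, labels = cluster_masked_embeddings(X)
```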
#### 3.5.3. Retraining Stage
As to the retraining stage, we first divide all samples via the trained K-Means cluster. Then we find the majority dimensions for each group from their embedding masks using the deterministic mode. After that, we initialize the deep recommender model 1 and 2 by their corresponding optimal dimensions and train them separately using the samples of each cluster.
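For illustration, the per-cluster dimensions can be obtained by a majority vote over the deterministic masks; the 0.5 cutoff below is our reading of "majority dimensions" and is an assumption.

```python
import numpy as np

def majority_dimensions(masks, labels, n_clusters=2):
    """For each cluster, keep the embedding dimensions that are opened by at
    least half of its samples' deterministic masks."""
    kept = {}
    for c in range(n_clusters):
        mean_mask = masks[labels == c].mean(axis=0)   # fraction of samples opening each dim
        kept[c] = np.flatnonzero(mean_mask >= 0.5)    # indices of the retained dimensions
    return kept
```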
## 4. Experiment
In this section, we conduct extensive experiments to evaluate our proposed framework. Specifically, the main research questions we care about are as follows:
* **RQ1**: How does iHAS perform compared with other mainstream selection methods?
* **RQ2**: Can the proposed iHAS be successfully transferred to more powerful recommender models?
* **RQ3**: How does each component contribute to the overall performance of the proposed iHAS?
* **RQ4**: Does the proposed iHAS demonstrate efficiency when compared to baseline models?
* **RQ5**: Does iHAS construct rational recommender model structures?
### Datasets
We conduct our experiments mainly on two commonly used public datasets, Avazu1 and Criteo2, which are both large-scale real-world datasets and serve as benchmarks in click-through rate (CTR) prediction tasks. Table 1 presents the detailed statistics of both datasets. Each dataset has been randomly segmented into training/validation/testing sets based on the proportions of 80%, 10%, and 10%.
Footnote 1: [https://www.kaggle.com/c/avazu-ctr-prediction/](https://www.kaggle.com/c/avazu-ctr-prediction/)
Footnote 2: [https://www.kaggle.com/c/criteo-display-ad-challenge/](https://www.kaggle.com/c/criteo-display-ad-challenge/)
* **Avazu** dataset consists of 40 million users' click records on ads over 11 days. Each record contains 22 categorical field features. Following the general preprocessing steps (Kang et al., 2017; Zhang et al., 2018), we group feature values whose frequency is less than ten into a single value "others".
* **Criteo** dataset consists of 46 million users' click records on display ads. Each record contains 26 categorical fields and 13 numerical fields. We apply the same preprocessing as for Avazu to the low-frequency feature values (fewer than ten occurrences) and transform each numerical value \(x\) by \(\log^{2}(x)\) if \(x>2\) (see the sketch after this list).
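A minimal sketch of the preprocessing described above; the use of the natural logarithm and the handling of missing values are assumptions, and the real pipeline follows the cited prior works.

```python
import math
from collections import Counter

def transform_numeric(x):
    """Criteo numerical fields: keep small values, otherwise compress with log^2 (natural log assumed)."""
    if x is None:
        return "missing"
    return int(x) if x <= 2 else int(math.log(x) ** 2)

def group_rare_values(column_values, min_freq=10):
    """Replace feature values appearing fewer than `min_freq` times by a single token."""
    counts = Counter(column_values)
    return ["others" if counts[v] < min_freq else v for v in column_values]
```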
### Evaluation Metrics
Following the previous works (Kang et al., 2017; Zhang et al., 2018), we evaluate the performance of our method using two common metrics: **AUC** and **Logloss**. AUC refers to the area under the ROC curve, i.e., the probability that a model will rank a randomly selected positive instance higher than a randomly selected negative one. A higher AUC value indicates superior model performance. Logloss, also known as binary cross-entropy loss, directly quantifies the model's performance, with a lower score denoting more accurate predictions. Note that a marginal **0.001-level** improvement in AUC (increase) or Logloss (decrease) is perceived as a significant enhancement in model performance (Kang et al., 2017; Zhang et al., 2018; Zhang et al., 2018).
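Both metrics are standard; for reference, they can be computed with scikit-learn as follows (the labels and scores below are toy values, illustrative only).

```python
from sklearn.metrics import roc_auc_score, log_loss

y_true = [1, 0, 0, 1, 1, 0]
y_pred = [0.81, 0.30, 0.45, 0.62, 0.90, 0.12]   # predicted click probabilities

auc = roc_auc_score(y_true, y_pred)              # higher is better
logloss = log_loss(y_true, y_pred)               # lower is better
print(f"AUC = {auc:.4f}, Logloss = {logloss:.4f}")
```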
### Baseline Methods
We compare our proposed method with the following state-of-the-art methods:
* **PEP** (Kang et al., 2017): It adopts trainable thresholds to prune redundant embedding dimensions.
* **AutoField** (Zhang et al., 2018): It utilizes neural architecture search techniques (Kang et al., 2017) to select important field features.
* **OptEmbed**(Kang et al., 2017): It trains a supernet with various selected embedding dimensions, then uses evolution search to find the optimal embedding dimensions based on the supernet.
* **AdaFS**(Kang et al., 2017): It assigns weights to different fields in a soft manner (AdaFS-soft) or masks unimportant fields in a hard manner (AdaFS-hard) via a novel controller network.
* **OptFS**(Kang et al., 2017): It simultaneously selects optimal field features and the optimal interactions between these features using "binary gates".
### Implementation Details
We implement our method based on a public library3 that involves sixteen commonly-used DLRMs. As our framework is model-agnostic, it can be seamlessly integrated with any of these models, see Section 4.6. For the embedding layer, we set the initial embedding size of all fields as 16 in accordance with the previous works (Kang et al., 2017; Zhang et al., 2018). For the MLP component, we adopt two fully-connected layers of size \((16,8)\) with the ReLU activation function. We use Adam optimizer (Kingmaa et al., 2014) with an initial learning rate of 0.001, and weight decay of 1e-6. The batch size is set to 2048. We sample one validation batch every 100 training batches for bi-level optimization. The temperature \(\tau\) for ST Gumbel-Softmax is set to 0.1.
Footnote 3: [https://github.com/rixwew/pytorch-fm](https://github.com/rixwew/pytorch-fm)
The baseline models are implemented by the codes provided by their authors. For a fair comparison, we set the initial embedding dimension as 16 for all baselines. All the experiments are run on a single machine with an Nvidia RTX 3090 GPU.
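The optimization setup described above corresponds roughly to the following configuration sketch; the stand-in model and the parameter-naming convention are assumptions, since the actual model classes come from the cited pytorch-fm library.

```python
import torch
import torch.nn as nn

BATCH_SIZE, TAU = 2048, 0.1    # batch size and ST Gumbel-Softmax temperature

# stand-in model: any module whose gate parameters carry a "gate" prefix in their names
model = nn.ModuleDict({"embed": nn.Linear(22 * 16, 16), "gate_fc": nn.Linear(22 * 16, 22 * 16)})

gate_params  = [p for n, p in model.named_parameters() if n.startswith("gate")]
model_params = [p for n, p in model.named_parameters() if not n.startswith("gate")]

opt_model = torch.optim.Adam(model_params, lr=1e-3, weight_decay=1e-6)
opt_gates = torch.optim.Adam(gate_params,  lr=1e-3, weight_decay=1e-6)
# bi-level optimization: every 100 training batches, one validation batch updates the gate
# parameters via opt_gates, while opt_model updates the recommender parameters on training batches
```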
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & \#Instances & \#Fields & \#Features \\ \hline Avazu & 40,400,000 & 22 & 645,394 \\ Criteo & 45,840,617 & 39 & 1,086,810 \\ \hline \hline \end{tabular}
\end{table}
Table 1. The statistics of Avazu and Criteo datasets.
### Overall Performance (RQ1)
Table 2 compares the overall performance of our proposed iHAS and other baseline models on the Avazu and Criteo datasets. We summarize our observations below.
First, our iHAS outperforms all the state-of-the-art baseline methods as it can achieve higher AUC and lower Logloss on both datasets, demonstrating the effectiveness of iHAS in deep recommendation systems. Specifically, iHAS outperforms the runner-ups by 0.0038 (AUC) and 0.0021 (Logloss) on the Avazu datasets, and by 0.0004 (AUC) and 0.0006 (Logloss) on the Criteo datasets.
Secondly, among all baselines, AdaFS-soft is the most effective model for the Avazu and Criteo datasets. However, it only shrinks the field features using a feature weighting layer and therefore does not completely eliminate the effect of unimportant fields. Although AdaFS-hard attempts to mask unimportant fields by uniformly keeping the top \(K\) features, the trained feature weights may still exhibit a high variance pattern (remember the non-distinguishable probabilities in Figure 3, left panel). Therefore, this top \(K\) selection manner may lead to selecting unimportant features or omitting the important feature in their final model, further compromising the model performance. Our polarization regularizer and threshold searcher can help with this issue, as detailed in Section 3.3.3 and evidenced by the empirical ablation study in Section 4.7.
Lastly, other baselines apply global feature/dimension selection across all samples, which fails to account for inherent variations among heterogeneous individuals and consequently leads to suboptimal performance. Additionally, PEP mainly emphasizes the model size, i.e., it stops searching once the embedding table reaches a predefined parameter size. This approach may result in a suboptimal embedding table because it overlooks model performance. AutoField4 also employs the top \(K\) selection manner, again leading to feature misselection and inferior model performance.
Footnote 4: The performance score for AutoField is borrowed from its original paper (Nagumura et al., 2019) as they use the same experimental settings and have not publicly released the codes.
### Transferability Analysis (RQ2)
In this subsection, we explore the transferability of iHAS. Specifically, we freeze the parameters of the well-trained Bernoulli gates and utilize them to help train other popular deep recommendation models, including FM (Zhou et al., 2019), W&D (Chen et al., 2019), and DeepFM (Chen et al., 2019).
Table 3 shows the experimental results on Avazu, where "original" refers to the corresponding model without any selection. We can observe that: (i) all the recommendation models have great improvement by adopting iHAS, which again demonstrates the importance of performing selection in the recommendations; (ii) The transferability of iHAS is better than the best baseline (by comparing the iHAS and AdaFS-soft in Table 3), which validates the effectiveness of our Bernoulli gates.
In summary, we conclude that iHAS has outstanding transferability across different recommendation models, which enables it to be leveraged in complicated real-world recommender systems.
### Ablation Study (RQ3)
In this subsection, we conduct the ablation study of key components in iHAS, as shown in Table 4. The Base model keeps all fields and the uniform embedding dimensions without any selections, and we derive four variants from iHAS: (i) iHAS-1: This variant is the model directly obtained in the searching stage, i.e., we remove the clustering and retraining stages; (ii) iHAS-2: This variant consists of a searching stage and a retraining stage. After we have the well-trained Bernoulli gates, we select dimensions across all samples to retrain one recommender model instead of separating samples into different clusters for retraining cluster-customized recommender models; (iii) iHAS-3: This variant is the standard iHAS without using the polarization regularizer described in Section 3.3.3; (iv) iHAS-4: This variant doesn't consider instance-wise differences by disconnecting the Bernoulli probabilities with the embedding representations of each sample. That means the Bernoulli probabilities become \(\{p_{j}\}_{j=1}^{N^{*}}=\sigma(w^{*})\) where \(w^{*}\) is simply a randomly
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Metric} & \multicolumn{5}{c}{Methods} \\ \cline{2-7} & Base & iHAS-1 & iHAS-2 & iHAS-3 & iHAS-4 & iHAS \\ \hline AUC \(\uparrow\) & 0.7765 & 0.7772 & 0.7767 & 0.7801 & 0.7768 & **0.7815** \\ Logloss \(\downarrow\) & 0.3818 & 0.3813 & 0.3816 & 0.3800 & 0.3817 & **0.3791** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation study on the Avazu datasets.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Dataset & Metric & \multicolumn{5}{c}{Methods} \\ \cline{3-8} & & PEP & AutoField & OptEmbed & AdaFS-soft & AdaFS-hard & OptFS & iHAS \\ \hline \multirow{2}{*}{Avazu} & AUC \(\uparrow\) & 0.7665 & 0.7773 & 0.7630 & 0.7777 & 0.7763 & 0.7724 & **0.7815** \\ & Logloss \(\downarrow\) & 0.3874 & 0.3813 & 0.3894 & 0.3812 & 0.3821 & 0.3840 & **0.3791** \\ \hline \multirow{2}{*}{Criteo} & AUC \(\uparrow\) & 0.8006 & 0.8029 & 0.7962 & 0.8039 & 0.8031 & 0.8015 & **0.8043** \\ & Logloss \(\downarrow\) & 0.4507 & 0.4490 & 0.4543 & 0.4484 & 0.4560 & 0.4504 & **0.4478** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance comparison between iHAS and baseline models.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{Transfer Type} \\ \cline{3-5} & & Original & AdaFS-soft & iHAS \\ \hline \multirow{2}{*}{FM} & AUC \(\uparrow\) & 0.7766 & 0.7799 & **0.7826** \\ & Logloss \(\downarrow\) & 0.3815 & 0.3797 & **0.3793** \\ \hline \multirow{2}{*}{W&8D} & AUC \(\uparrow\) & 0.7772 & 0.7790 & **0.7797** \\ & Logloss \(\downarrow\) & 0.3815 & 0.3802 & **0.3800** \\ \hline \multirow{2}{*}{DeepFM} & AUC \(\uparrow\) & 0.7806 & 0.7817 & **0.7840** \\ & Logloss \(\downarrow\) & 0.3795 & 0.3786 & **0.3784** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Transferability of iHAS on the Avazu dataset.
initialized vector of the dimension length, making every sample share the same probability for every dimension.
Based on the results in Table 4, we can find: (i) iHAS and its variants increase the AUC and decrease the Logloss compared with the Base model, which indicates the necessity of performing selection on embedding dimensions for boosting model performance; (ii) iHAS performs better than iHAS-1, which indicates the necessity of the subsequent clustering and retraining stages; (iii) iHAS-1 outperforms iHAS-2; therefore, separating instances into different groups, i.e., the clustering stage, is beneficial for boosting the performance; (iv) the polarization regularizer is vital for acquiring better Bernoulli gates, as seen by comparing iHAS with iHAS-3; (v) respecting the differences between instances further boosts the performance, as seen by comparing iHAS with iHAS-4.
### Efficiency Analysis (RQ4)
In addition to model performance, efficiency is vital when deploying a recommendation model into online systems, especially inference efficiency. We report the inference time on the whole test set for iHAS and the other baselines in Figure 4. We find that iHAS achieves the lowest inference time. This is because iHAS feeds each test sample into its preferred recommender model of smaller size, instead of feeding all test data into a single model, which may incur additional inference cost on some data. On the contrary, PEP requires the longest inference time because its embedding table is usually sparse and hardware-unfriendly.
### Case Study (RQ5)
In this subsection, we first use a case study to investigate the optimal embedding dimensions for each cluster from iHAS. We show the results on Avazu as an example and exclude all anonymous field features in Figure 5.
We can observe that: (i) Each field's optimal dimensions greatly vary from one to another (from 4 to 12), which highlights the necessity of dimension search in recommender systems; (ii) id-related features, e.g., site_id and app_id, typically possess more dimensions. This aligns with human intuition as the id-related features are the core of recommender systems; (iii) Samples within different clusters tend to select different dimensions for each field, which validates our claim that different clusters present different patterns and should be trained separately to enhance performance and reduce inference time in Section 3.4.
Furthermore, we use four samples to illustrate the effectiveness of the iHAS framework consisting of group-customized recommender models. Figure 6 shows four samples grouped into two clusters (two in pink and two in cyan). Each cluster has its customized recommender model. We can find that the predictions are more correct (lower Logloss) if we feed the sample into its corresponding model. However, if feeding all of them together into one of the recommender models, we will receive some wrong predictions.
## 5. Conclusion
This study proposes an instance-wise Hierarchical Architecture Search framework, iHAS, as an innovative solution to the challenges associated with identifying optimal embedding dimensions for DLRMs. iHAS employs a three-stage hierarchical training strategy comprising searching, clustering, and retraining. The searching stage identifies the optimal embedding dimensions for each sample across different fields. The subsequent clustering and retraining stages provide a mechanism for gathering similar samples into clusters and training cluster-customized DLRMs based on the individual optimal dimensions, thereby enhancing recommendation predictions. We conduct extensive experiments on two large-scale datasets to validate the efficacy of the proposed framework. The results demonstrate that iHAS boosts the performance of deep recommendations while reducing inference costs. Additionally, iHAS exhibits outstanding transferability to popular DLRMs.
Figure 4. Inference time (in log scale) of iHAS and other baselines on the Avazu dataset.
Figure 5. Case study of selected dimensions of each field for each DLRM in iHAS on the Avazu dataset.
Figure 6. Example data predictions on the Avazu dataset, where the samples with the same edge color belong to the same cluster. The ground truths and prediction scores are displayed in the diamonds. |
2301.00194 | Chordal graphs with bounded tree-width | Given $t\geq 2$ and $0\leq k\leq t$, we prove that the number of labelled
$k$-connected chordal graphs with $n$ vertices and tree-width at most $t$ is
asymptotically $c n^{-5/2} \gamma^n n!$, as $n\to\infty$, for some constants
$c,\gamma >0$ depending on $t$ and $k$. Additionally, we show that the number
of $i$-cliques ($2\leq i\leq t$) in a uniform random $k$-connected chordal
graph with tree-width at most $t$ is normally distributed as $n\to\infty$.
The asymptotic enumeration of graphs of tree-width at most $t$ is wide open
for $t\geq 3$. To the best of our knowledge, this is the first non-trivial
class of graphs with bounded tree-width where the asymptotic counting problem
is solved. Our starting point is the work of Wormald [Counting Labelled Chordal
Graphs, Graphs and Combinatorics (1985)], where an algorithm is developed to
obtain the exact number of labelled chordal graphs on $n$ vertices. | Jordi Castellví, Michael Drmota, Marc Noy, Clément Requilé | 2022-12-31T13:13:09Z | http://arxiv.org/abs/2301.00194v2 | # Chordal graphs with bounded tree-width
###### Abstract
Given \(t\geq 2\) and \(0\leq k\leq t\), we prove that the number of labelled \(k\)-connected chordal graphs with \(n\) vertices and tree-width at most \(t\) is asymptotically \(cn^{-5/2}\gamma^{n}n!\), as \(n\to\infty\), for some constants \(c,\gamma>0\) depending on \(t\) and \(k\). Additionally, we show that the number of \(i\)-cliques (\(2\leq i\leq t\)) in a uniform random \(k\)-connected chordal graph with tree-width at most \(t\) is normally distributed as \(n\to\infty\).
The asymptotic enumeration of graphs of tree-width at most \(t\) is wide open for \(t\geq 3\). To the best of our knowledge, this is the first non-trivial class of graphs with bounded tree-width where the asymptotic counting problem is solved. Our starting point is the work of Wormald [Counting Labelled Chordal Graphs, _Graphs and Combinatorics_ (1985)], were an algorithm is developed to obtain the exact number of labelled chordal graphs on \(n\) vertices.
## 1 Introduction
Tree-width is a fundamental parameter in structural and algorithmic graph theory, as illustrated for instance in [9]. It can be defined in terms of tree-decompositions or equivalently in terms of \(k\)-trees. A \(k\)-tree is defined recursively as either a complete graph on \(k+1\) vertices or a graph obtained by adjoining a new vertex adjacent to a \(k\)-clique of a \(k\)-tree. The tree-width of a graph \(\Gamma\) is the minimum \(k\) such that \(\Gamma\) is a subgraph of a \(k\)-tree. In particular, \(k\)-trees are the maximal graphs with tree-width at most \(k\). The number of \(k\)-trees on \(n\) labelled vertices was independently shown [3, 18] to be
\[\binom{n}{k}(k(n-k)+1)^{n-k-2}=\frac{1}{\sqrt{2\pi}\,k!\,k^{k+2}}\,n^{-5/2}\,( ek)^{n}\,n!\,(1+o(1)), \tag{1}\]
where the estimate holds for \(k\) fixed and \(n\to\infty\). However, there are relatively few results on the enumeration of graphs of given tree-width or on properties of random graphs with given tree-width. Graphs of tree-width one are forests (acyclic graphs) and their enumeration is a classical result, while graphs of tree-width at most two are series-parallel graphs and were first counted in [6]. The problem of counting graphs of tree-width three is still open. From now on, we will use \(t\) to denote the tree-width while \(k\) will denote the connectivity of a graph. All graphs we consider are labelled.
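As a quick numerical sanity check of the exact count (not of the asymptotic estimate), one can evaluate the closed formula for small parameters; the snippet below only uses formula (1) as stated above.

```python
from math import comb

def num_labelled_k_trees(n, k):
    """Exact number of labelled k-trees on n vertices, formula (1)."""
    return comb(n, k) * (k * (n - k) + 1) ** (n - k - 2)

# 1-trees are exactly the labelled trees, so k = 1 recovers Cayley's formula n^(n-2)
assert all(num_labelled_k_trees(n, 1) == n ** (n - 2) for n in range(3, 8))
print([num_labelled_k_trees(n, 2) for n in range(4, 9)])
```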
Given that tree-width is non-increasing under taking minors, the class of graphs with tree-width at most \(t\) is'small' when \(t\) is fixed, in the sense that the number \(g_{n,t}\) of labelled graphs with \(n\) vertices and tree-width at most \(t\) grows at most like \(c^{n}n!\) for some \(c>0\) depending on \(t\) (see [19, 14]). The best bounds known for \(g_{n,t}\) are, up to lower order terms,
\[\left(\frac{2^{t}tn}{\log t}\right)^{n}\leq g_{n,t}\leq(2^{t}tn)^{n}.\]
The upper bound follows by considering all possible subgraphs of \(t\)-trees, and the lower bound uses a suitable construction developed in [2]. In the present work we determine the asymptotic number of labelled chordal graphs with tree-width at most \(t\), following the approach in [15] and [12], and based on the analysis of systems of equations satisfied by generating functions.
A graph is chordal if every cycle of length greater than three contains at least one chord, that is, an edge connecting non-consecutive vertices of the cycle. Chordal graphs have been extensively studied in structural graph theory and graph algorithms (see for instance [16]), but not so much from the point of view of enumeration. Wormald [21] used generating functions to develop a method for finding the exact number of chordal graphs with \(n\) vertices for a given value of \(n\). It is based on decomposing chordal graphs into \(k\)-connected components for each \(k\geq 1\). As remarked in [21], it is difficult to define the \(k\)-connected components of arbitrary graphs for \(k>3\), but for chordal graphs they are well defined. Wormald introduces generating functions \(C_{k}(x)\) for \(k\)-connected chordal graphs and finds an equation linking \(C_{k}(x)\) and \(C_{k+1}(x)\) reflecting the decomposition into \(k\)-connected components. This is precisely the starting point of our work.
An important parameter in [21] is the size \(w\) of the largest clique. For chordal graphs one can show that the tree-width is equal to \(w-1\), hence bounding the tree-width by \(t\) is the same as bounding \(w\) by \(t+1\). The parameter \(w\) also plays a substantial role in showing that almost all chordal graphs are split graphs [4], which in turn implies that the number of chordal graphs with \(n\) vertices is of order \(2^{n^{2}/4\,(1+o(1))}\).
For fixed \(n,t\geq 1\) and \(0\leq k\leq t\), let \(\mathcal{G}_{t,k,n}\) be the set of \(k\)-connected chordal graphs with \(n\) labelled vertices and tree-width at most \(t\). Our two main results are the following.
**Theorem 1.1**.: _For \(t\geq 1\) and \(0\leq k\leq t\), there exist constants \(c_{t,k}>0\) and \(\gamma_{t,k}>1\) such that_
\[|\mathcal{G}_{t,k,n}|=c_{t,k}\,n^{-5/2}\,\gamma_{t,k}^{n}\,n!\,(1+o(1))\qquad \text{as $n\to\infty$}.\]
Remark that by setting \(t=k\) in Theorem 1.1 one recovers the general form of the asymptotic estimate of the number of \(k\)-trees (1). The fact that \(\gamma_{k,k}=ek\) is reproven in Section 3. In principle, for fixed \(t\) and \(k\) the constants \(c_{t,k}\) and \(\gamma_{t,k}\) can be computed, at least approximately. Table 1 in Section 4 displays approximations of \(1/\gamma_{t,k}\) for \(1\leq k\leq t\leq 7\).
**Theorem 1.2**.: _Let \(t\geq 1\), \(0\leq k\leq t\). For \(i\in\{2,\ldots,t\}\) let \(X_{n,i}\) denote the number of \(i\)-cliques in a uniform random graph in \(\mathcal{G}_{t,k,n}\), and set \(\mathbf{X_{n}}=(X_{n,2},\ldots,X_{n,t})\). Then \(\mathbf{X_{n}}\) satisfies a multivariate central limit theorem, that is, as \(n\to\infty\) we have_
\[\frac{1}{\sqrt{n}}\,(\mathbf{X}_{n}-\mathbb{E}\,\mathbf{X}_{n})\stackrel{d}{\to}N(0,\Sigma),\qquad\text{with}\quad\mathbb{E}\,\mathbf{X}_{n}\sim\alpha n\quad\text{and}\quad\mathrm{Cov}\,\mathbf{X}_{n}\sim\Sigma n,\]
_and where \(\alpha\) is a \((t-1)\)-dimensional vector of positive numbers and \(\Sigma\) is a \((t-1)\times(t-1)\)-dimensional positive semi-definite matrix._
Let us mention that more structural asymptotic results can be expected. Notably, the class of chordal graphs with tree-width at most \(t\) is _subcritical_ in the sense of [13], as further discussed in the concluding Section 4. It follows from [20] that the uniform random connected chordal graph with tree-width at most \(t\) with distances rescaled by \(1/\sqrt{n}\) admits the _Continuum Random Tree_ (CRT) [1] as a scaling limit, multiplied by a constant that depends on \(t\).
The proofs of Theorems 1.1 and 1.2 are based on a recursive decomposition of chordal graphs translated in the combinatorial language of generating functions, the so-called _symbolic method_[15],
into a system of functional equations made explicit in Section 2 and analysed in Section 3 by means of analytic methods [15]. More precisely, in Section 2.1 we show the unicity of the decomposition of chordal graphs into their \(k\)-connected components. This is translated in Section 2.2 into a system of functional equations satisfied by exponential generating functions encoding the numbers of graphs in each class. In Section 2.3 we prove an alternative equation that avoids the integral operator, which is instrumental for computing the numerical values in Table 1. The subsequent asymptotic analysis is rather delicate as it involves singularity functions depending on several variables. Our notions of _proper_ and _fully movable singularity functions_ are key ingredients, and are defined alongside several other technical notions in Section 3.1. Some useful properties are proven in Section 3.2, before embarking on the proofs of the main results in Section 3.3.
## 2 Decomposition of chordal graphs
All graphs considered in this work will be simple and labelled, that is, with vertex set \([n]\). Let \(\Gamma\) be a graph and \(k\geq 1\). A \(k\)-separator of \(\Gamma\) is a subset of \(k\) vertices whose removal disconnects \(\Gamma\). The graph \(\Gamma\) is said to be \(k\)-connected if it contains no \(i\)-separator for \(i\in[k-1]\). Notice that with this definition we consider the complete graph on \(k\) vertices to be \(k\)-connected, for any \(k\geq 1\), contrary to the usual definition of connectivity (see for instance [10]). A \(k\)-connected component of \(\Gamma\) is a \(k\)-connected subgraph that is maximal, in terms of subgraph containment, with that property.
An essential consequence of chordality is that \(k\)-connected chordal graphs admit a unique decomposition into \((k+1)\)-connected components through their \(k\)-separators. This is the subject of the next section.
### Unicity of the components
The following well-known result will play a central role in our definition of the decomposition of a chordal graph into \(k\)-connected components. For completeness we provide a short proof.
**Proposition 2.1** (Dirac [11]).: _In a chordal graph every minimal separator is a clique._
Proof.: Let \(\Gamma\) be a chordal graph and let \(S\) be a minimal separator with at least two vertices. Suppose for contradiction that there are \(u,v\in S\) such that \(uv\) is not an edge. Let \(A\) and \(B\) be different components of \(\Gamma-S\); by minimality of \(S\), both \(u\) and \(v\) have neighbours in \(A\) and in \(B\). Consider shortest paths \(P\) and \(Q\) between \(u\) and \(v\) whose inner vertices are, respectively, in \(A\) and \(B\). Then the concatenation of \(P\) and \(Q\) is a chordless cycle of length at least four, contradicting the fact that \(\Gamma\) is chordal.
For the rest of this section, we fix \(k\geq 1\).
**Definition 2.2**.: _Let \(\Gamma\) be a \(k\)-connected chordal graph with a \(k\)-separator \(S\), and, for \(m\geq 1\), let \(C_{1},\ldots,C_{m}\) be the (possibly empty) connected components of \(\Gamma-S\). Then, for \(i\in[m]\), the induced subgraphs \(\Gamma_{i}=\Gamma[V(C_{i})\cup S]\) will be called the slices of \(\Gamma\), obtained after cutting \(\Gamma\) through \(S\)._
Remark that by Proposition 2.1, \(S\) is a clique of \(\Gamma\). Furthermore, as each slice \(\Gamma_{i}\) (\(i\in[m]\)) contains a copy of \(S\), \(\Gamma\) can be obtained by identifying together these \(m\) copies of \(S\). This operation will be called _gluing_ through \(S\). The next Proposition 2.3 implies that the slices \(\Gamma_{i}\) are \(k\)-connected.
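To make Definition 2.2 concrete, here is a minimal sketch (plain Python; the adjacency-dictionary representation and the helper name `slices` are chosen only for illustration) that computes the slices obtained by cutting a graph through a separator \(S\).

```python
# Given a graph as an adjacency dict and a separator S, return the slices
# Gamma[V(C_i) + S] obtained by cutting through S (Definition 2.2).
def slices(adj, S):
    S = set(S)
    rest = set(adj) - S
    seen = set()
    out = []
    for start in rest:
        if start in seen:
            continue
        # collect one connected component C_i of Gamma - S by graph search
        comp, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in S and w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        verts = comp | S
        out.append({v: set(adj[v]) & verts for v in verts})  # induced subgraph
    return out

# Example: a 4-cycle 0-1-2-3 with the chord {1,3}; S = {1,3} is a 2-separator.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
for s in slices(adj, {1, 3}):
    print(sorted(s))   # prints [0, 1, 3] and [1, 2, 3]
```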
**Proposition 2.3**.: _Let \(\Gamma\) be a \(k\)-connected chordal graph with a \(k\)-separator \(S\) inducing the slices \(\Gamma_{1},\ldots,\Gamma_{m}\). If, for some \(i\in[m]\), \(\Gamma_{i}\) has a separator \(T\), then \(T\) is also a separator of \(\Gamma\). Furthermore, a set \(T\neq S\) of \(k\) vertices is a \(k\)-separator of \(\Gamma\) if and only if it is a separator of the slice \(\Gamma_{i}\) it belongs to._
Proof.: If \(S\subseteq T\), the first claim is direct. Suppose now that \(v\in S\setminus T\). Then, \(T\) separates \(v\) from some other vertex \(w\in\Gamma_{i}\), because otherwise every vertex would be reachable from \(v\) and \(T\) would not be a separator. Since the vertices in \(S\) form a clique, \(w\) is in fact separated from all vertices in \(S\setminus T\). Then, in \(\Gamma\), the same set \(T\) also separates \(w\) from \(S\setminus T\).
For the second claim, observe that since all the vertices and edges in \(\Gamma\) are also in some \(\Gamma_{i}\) and vice-versa, a set of vertices \(T\neq S\) is a subclique of \(\Gamma\) if and only if it is a subclique of some slice \(\Gamma_{i}\). Now suppose, for a contradiction, that \(\Gamma_{i}-T\) is connected. Since, for \(j\neq i\), \(T\) is not entirely contained in any other slice \(\Gamma_{j}\), and \(\Gamma_{j}\) is \(k\)-connected, it follows that \(\Gamma_{j}-T\) is also connected. In particular, every vertex in \(\Gamma-T\) is reachable from some vertex \(v\in S\setminus T\), a contradiction.
Consider now the set of slices obtained after cutting \(\Gamma\) through all its \(k\)-separators. Because they contain no \(k\)-separators, all these slices are \((k+1)\)-connected and form in fact the \((k+1)\)-connected components of \(\Gamma\). They are well defined thanks to the following result.
**Proposition 2.4**.: _Let \(\Gamma\) be a \(k\)-connected chordal graph with \(k\)-separators \(S\) and \(T\). Then the slices \(\Gamma_{1},\ldots,\Gamma_{m}\) obtained after cutting \(\Gamma\) first through \(S\) then through \(T\) can be characterised as follows:_
1. _Let_ \(i\in[m]\)_. For each connected component_ \(C_{i}\) _of_ \(\Gamma-(S\cup T)\) _there will be a slice_ \(\Gamma_{i}\) _with vertex set_ \(V_{i}\)_, in such a way that_ \(\Gamma_{i}=\Gamma[V_{i}]\)_. The set_ \(V_{i}\) _contains the vertices in_ \(C_{i}\)_, and also contains the vertices of_ \(S\) _(resp._ \(T\)_) if there is some vertex in_ \(S\setminus T\) _(resp._ \(T\setminus S\)_) that has a neighbour in_ \(C_{i}\)_._
2. _If none of the slices described in (i) contains the vertices in both_ \(S\) _and_ \(T\)_, there will be an additional slice_ \(\Gamma_{m+1}=\Gamma[S\cup T]\)_, and these are all the possible slices._
_In particular, doing the cuts first through \(T\) and then through \(S\) results in the same slices._
Proof.: We first consider the slices obtained after cutting only through \(S\). For \(1\leq i\leq m_{S}\), each of the slices \(\Gamma_{i}^{S}\) corresponds to a connected component \(C_{i}^{S}\) of \(\Gamma-S\). Among these slices there is only one containing \(T\) as a subclique, say \(\Gamma_{1}^{S}\), because the vertices in \(T\setminus S\) necessarily belong to the same component \(C_{1}^{S}\). Therefore \(\Gamma_{1}^{S}\) is the unique slice that will be cut through \(T\) while the others stay unchanged. For \(2\leq i\leq m_{S}\), each of the other slices \(\Gamma_{i}^{S}\) contains the vertices of \(C_{i}^{S}\), which is certainly a connected component of \(\Gamma-(S\cup T)\), and the vertices in \(S\), but contains no vertex in \(T\setminus S\). This agrees with \((i)\) because all the vertices in \(S\) have a neighbour in \(C_{i}^{S}\), since \(S\) is a minimal separator, while the vertices in \(T\setminus S\) have no neighbours in \(C_{i}^{S}\).
We now consider the slices obtained after cutting \(\Gamma_{1}^{S}\) through \(T\). Again, for \(1\leq i\leq m_{T}\) each of these slices corresponds to a connected component \(C_{i}^{T}\) of \(\Gamma_{1}^{S}-T\), denoted by \(\Gamma_{i}^{T}\). Among these slices, there is only one containing \(S\) as a subclique, say \(\Gamma_{1}^{T}\), because the vertices in \(T\setminus S\) necessarily belong to the same component \(C_{1}^{T}\). The rest of the slices contain no vertex in \(S\setminus T\). In fact, they are analogous to the slices \(\Gamma_{i}^{S}\), \(2\leq i\leq m_{S}\), and they agree with \((i)\) for the same reasons.
There are two possibilities for \(C_{1}^{T}\): either it contains vertices other than the ones in \(S\setminus T\) or it does not. In the first case, observe that \(C_{1}^{T}-S\) is connected. Indeed, if this was not the case \(\Gamma_{1}^{S}\) would have \(S\cup T\) as a separator, while neither \(S\) nor \(T\) are separators. But then there would be a minimal \(k\)-separator of \(\Gamma_{1}^{S}\) that is not a clique, which is not possible. Therefore, \(C_{1}^{T}-S\) is a connected component of \(\Gamma-(S\cup T)\), which has some neighbour in \(S\setminus T\) and also in \(T\setminus S\), since \(C_{1}^{S}\) is connected. This also agrees with \((i)\). On the other hand, if \(C_{1}^{T}\) contains no vertices other than the ones in \(S\setminus T\), then it is not a component of \(\Gamma-(S\cup T)\) and we are in the case \((ii)\).
Since this characterisation does not depend on the order of the cuts, the last claim follows.
The \(k\)-connected components of \(\Gamma\) are thus the maximal \(k\)-connected subgraphs, since, for \(i\in[k-1]\), there is a single way of cutting through all the \(i\)-separators. And we can uniquely define the 2-connected components of a connected chordal graph, the 3-connected components of these 2-connected components, the 4-connected components of these 3-connected components, and so on. An illustration is given in Figure 1.
This decomposition is the generalisation of the well-known decomposition of a connected graph into blocks (i.e. maximal 2-connected components). And it induces, as shown in the next section, a system of functional equations satisfied by the generating function counting chordal graphs of tree-width at most \(t\).
### Functional equations for the generating functions
For the rest of this section, we fix some \(t\geq 1\) and let \(\mathcal{G}\) be the family of chordal graphs with tree-width at most \(t\). For a graph \(\Gamma\in\mathcal{G}\) and \(j\in[t]\), let us denote by \(n_{j}(\Gamma)\) the number of \(j\)-cliques of \(\Gamma\). In the rest of the paper, we will write \(\mathbf{x}\) as a short-hand for \(x_{1},\ldots,x_{t}\), and define the multivariate (exponential) generating function associated to \(\mathcal{G}\) to be
\[G(\mathbf{x})=G(x_{1},\ldots,x_{t})=\sum_{\Gamma\in\mathcal{G}}\frac{1}{n_{1} (\Gamma)!}\prod_{j=1}^{t}x_{j}^{n_{j}(\Gamma)},\]
Figure 1: Decomposition of a connected graph with tree-width 3 into its 2, 3 and 4-connected components. Vertices with the same label are identified.
Let \(g_{n}\) denote the number of chordal graphs with \(n\) vertices and tree-width at most \(t\). Then,
\[G(x,1,\ldots,1)=\sum_{n\geq 1}\frac{g_{n}}{n!}x^{n}.\]
For \(0\leq k\leq t+1\), let \(\mathcal{G}_{k}\) be the family of \(k\)-connected chordal graphs with tree-width at most \(t\) and \(G_{k}(\mathbf{x})\) be the associated generating function. In particular, we have
\[G_{t+1}(\mathbf{x})=\frac{1}{(t+1)!}\prod_{j\in[t]}x_{j}^{\binom{t+1}{j}}. \tag{2}\]
For other values of \(k\), we need to consider graphs rooted at a clique. Rooting the graph \(\Gamma\in\mathcal{G}_{k}\) at an \(i\)-clique means distinguishing one \(i\)-clique \(K\) of \(\Gamma\) and choosing an ordering of (the labels of) the vertices of \(K\). In order to avoid over-counting, we will discount the subcliques of \(K\). Let \(i\in[k]\) and define \(\mathcal{G}_{k}^{(i)}\) to be the family of \(k\)-connected chordal graphs with tree-width at most \(t\) and rooted at an \(i\)-clique. Let then \(G_{k}^{(i)}(\mathbf{x})\) be the associated generating function, where now for \(1\leq j\leq i\) the variables \(x_{j}\) mark the number of \(j\)-cliques that are not subcliques of the root.
**Lemma 2.5**.: _Let \(k\in[t]\). Then the following equations hold:_
\[G_{k+1}^{(k)}(\mathbf{x}) =k!\prod_{j=1}^{k-1}x_{j}^{-\binom{k}{j}}\frac{\partial}{\partial x _{k}}G_{k+1}(\mathbf{x}), \tag{3}\] \[G_{k}^{(k)}(\mathbf{x}) =\exp\left(G_{k+1}^{(k)}\big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{( k)}(\mathbf{x}),x_{k+1},\ldots,x_{t}\big{)}\right),\] (4) \[G_{k}(\mathbf{x}) =\frac{1}{k!}\prod_{j=1}^{k-1}x_{j}^{\binom{k}{j}}\int G_{k}^{(k) }(\mathbf{x})\ dx_{k}. \tag{5}\]
Proof.: As per the symbolic method, taking the derivative of a generating function with respect to the variable \(x_{i}\) amounts to rooting a graph at an \(i\)-clique, while taking the integral will correspond to "unrooting". Thus, both Equations (3) and (5) follow from the definition.
Figure 2: Recursive decomposition of a \(k\)-connected chordal graph into \((k+1)\)-connected components.

On the other hand, Equation (4) is derived from the decomposition of \(k\)-connected chordal graphs into their \((k+1)\)-connected components, and is illustrated in Figure 2. Indeed, a \(k\)-connected chordal graph rooted at a \(k\)-clique \(K\) is obtained by gluing through \(K\) a set of \((k+1)\)-connected graphs \(\Gamma_{1},\ldots,\Gamma_{m}\) containing \(K\), and then further gluing some \(k\)-connected chordal graph at each \(k\)-clique of \(\Gamma_{i}\), other than \(K\), for all \(i\in[m]\). Recall that the \(k\)-clique is itself considered to be \(k\)-connected. The substitution of \(x_{k}\) by \(x_{k}G_{k}^{(k)}\) in Equation (4) reflects the recursive process of gluing through \(k\)-cliques. Finally, the fact that the graphs are rooted at ordered cliques ensures that the gluing process is made in all possible ways.
Finally, the fact that a graph is the set of its connected components can be translated as
\[G(\mathbf{x})=G_{0}(\mathbf{x})=\exp(G_{1}(\mathbf{x})),\]
Observe then that one can derive \(G_{0}(\mathbf{x})\) from \(G_{t+1}(\mathbf{x})\) by successively using Identities (3), (4) and (5) from Lemma 2.5, as illustrated in Figure 3. Furthermore, in the steps where Identity (5) is used, one needs to compute a formal integral. An alternative is to use the dissymmetry theorem for tree-decomposable classes, as presented in Proposition 2.6. This is the purpose of the next section.
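To illustrate how Identities (3), (4) and (5) can be iterated in practice, here is a small univariate sketch for the simplest case \(t=1\), where chordal graphs of tree-width at most one are exactly forests. It is an illustration only (the computations in this paper use the full multivariate series) and assumes Python with sympy.

```python
# Pipeline (3)-(4)-(5) for t = 1.  Here G2 = x^2/2 encodes the complete graph
# on two vertices (a single edge), and the recursion should reproduce the
# numbers of labelled forests 1, 2, 7, 38, 291, 2932, ...
from sympy import symbols, exp, integrate, series, factorial, Rational

x = symbols('x')
N = 8  # truncation order (number of vertices)

# Equation (3): G2^(1) = d/dx G2 = x
G2_rooted = x

# Equation (4): G1^(1) = exp(G2^(1)(x * G1^(1))) = exp(x * G1^(1)),
# solved by fixed-point iteration on truncated series.
G1_rooted = Rational(1)
for _ in range(N):
    G1_rooted = series(exp(x * G1_rooted), x, 0, N).removeO()

# Equation (5): G1 = integral of G1^(1)  (unrooted labelled trees)
G1 = integrate(G1_rooted, x)

# Connected components: G0 = exp(G1)  (forests)
G0 = series(exp(G1), x, 0, N).removeO()

print([G0.coeff(x, n) * factorial(n) for n in range(1, N)])
# expected: [1, 2, 7, 38, 291, 2932, 33101]
```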
### Combinatorial integration
In this section, we prove an alternative equation for (5) which does not involve an integral operator. It is useful when computing the numerical values in Table 1.
A class of graphs \(\mathcal{A}\) is said to be _tree-decomposable_ if to each graph \(\Gamma\in\mathcal{A}\) we can associate in a unique way a tree \(\tau(\Gamma)\) whose nodes are distinguishable, for instance by using the labels. Let \(\mathcal{A}_{\bullet}\) denote the class of graphs in \(\mathcal{A}\) where a node of \(\tau(\Gamma)\) is distinguished. Similarly, \(\mathcal{A}_{\bullet-\bullet}\) is the class of graphs in \(\mathcal{A}\) where an edge of \(\tau(\Gamma)\) is distinguished, and \(\mathcal{A}_{\bullet\to\bullet}\) those where an edge of \(\tau(\Gamma)\) is distinguished and given a direction. As presented in [8], the _dissymmetry theorem for tree-decomposable classes_ is a generalisation of the well-known _dissymmetry theorem for trees_ of [5], and allows one to express the class of unrooted graphs in \(\mathcal{A}\) in terms of the rooted classes.
**Proposition 2.6** (Dissymmetry Theorem [8]).: _Let \(\mathcal{A}\) be a tree-decomposable class, then_
\[\mathcal{A}+\mathcal{A}_{\bullet\to\bullet}\simeq\mathcal{A}_{\bullet}+\mathcal{ A}_{\bullet-\bullet},\]
_where \(\simeq\) is a bijection preserving the number of nodes. In particular, if the encoding trees have no adjacent nodes of the same type then we have_
\[\mathcal{A}\simeq\mathcal{A}_{\bullet}-\mathcal{A}_{\bullet-\bullet}.\]
An example of the decomposition of a chordal graph \(\Gamma\) of bounded tree-width and its associated tree \(\tau(\Gamma)\) is depicted in Figure 4. Next, we make use of this decomposition to obtain, via the above Proposition, the generating function of unrooted chordal graphs of bounded tree-width.
**Lemma 2.7**.: _Let \(k\in[t]\). Then the following equation holds:_
\[\begin{split} G_{k}(\mathbf{x})&=G_{k+1}\big{(}x_{ 1},\ldots,x_{k-1},x_{k}G_{k}^{(k)}(\mathbf{x}),x_{k+1},\ldots,x_{t}\big{)}\\ &\qquad+\frac{1}{k!}\prod_{j\in[k]}x_{j}^{\binom{k}{j}}G_{k}^{(k )}(\mathbf{x})\left(1-G_{k+1}^{(k)}\big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{(k )}(\mathbf{x}),x_{k+1},\ldots,x_{t}\big{)}\right).\end{split} \tag{6}\]
Proof.: To each \(\Gamma\in\mathcal{G}_{k}\) different from the complete graph on \(k\) vertices, we associate a unique tree \(\tau(\Gamma)\) as follows. The tree \(\tau(\Gamma)\) admits two different types of nodes, namely \(b\) and \(c\). Nodes of type \(b\) represent the \((k+1)\)-connected components of \(\Gamma\), while those of type \(c\) represent the \(k\)-cliques of \(\Gamma\) through which the \((k+1)\)-connected components are glued together.
Figure 4: Tree-decomposition (right) associated to a \(2\)-connected chordal graph (left) of tree-width \(3\).

Let \(B_{k}\) and \(C_{k}\) be the generating functions counting the trees \(\tau(\Gamma)\) (\(\Gamma\in\mathcal{G}_{k}\)) rooted at nodes of type \(b\) and \(c\), respectively, and \(E_{k}\) be the generating function of those trees rooted at an undirected edge between nodes of types \(b\) and \(c\). They can also be respectively seen as the generating functions counting \(k\)-connected graphs with a distinguished \((k+1)\)-connected component, a distinguished \(k\)-clique that belongs to more than one \((k+1)\)-connected component, or a distinguished \((k+1)\)-connected component \(C\) together with a distinguished \(k\)-clique in \(C\) that belongs to at least one other \((k+1)\)-connected component \(C^{\prime}\). They are specified next using the symbolic method:
\[B_{k}(\mathbf{x}) =G_{k+1}\big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{(k)}(\mathbf{x}),x_ {k+1},\ldots,x_{t}\big{)},\] \[C_{k}(\mathbf{x}) =\frac{1}{k!}\prod_{j\in[k]}x_{j}^{\binom{k}{j}}\left(G_{k}^{(k)} (\mathbf{x})-\Big{(}1+G_{k+1}^{(k)}\big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{(k)} (\mathbf{x}),x_{k+1},\ldots,x_{t}\big{)}\Big{)}\right),\] \[E_{k}(\mathbf{x}) =\frac{1}{k!}\prod_{j\in[k]}x_{j}^{\binom{k}{j}}G_{k+1}^{(k)} \big{(}x_{1},\ldots,x_{k-1},x_{k}G_{k}^{(k)}(\mathbf{x}),x_{k+1},\ldots,x_{t} \big{)}\left(G_{k}^{(k)}(\mathbf{x})-1\right).\]
The equation defining \(B_{k}(\mathbf{x})\) follows directly from the decomposition discussed in the previous section, while the equation for \(C_{k}(\mathbf{x})\) is obtained from Equation (4) by subtracting the first two terms of the exponential (because there are at least two \((k+1)\)-connected components glued through the \(k\)-clique). The equation for \(E_{k}(\mathbf{x})\) can be derived by considering a \((k+1)\)-connected chordal graph \(\Gamma\) rooted at a \(k\)-clique \(K\), gluing through \(K\) a \(k\)-connected chordal graph \(\Gamma^{\prime}\) rooted at \(K\) and containing at least one \((k+1)\)-connected component, and then further gluing some \(k\)-connected chordal graph at each other \(k\)-clique of \(\Gamma\). The correcting factors in the last two equations are there to mark all the subcliques of the root \(k\)-clique and forget the order of its vertices.
Finally, recall that we consider the complete graph on \(k\) vertices to be \(k\)-connected. Proposition 2.6 then directly implies that the unrooted graphs are counted by
\[G_{k}(\mathbf{x})=\frac{1}{k!}\prod_{j\in[k]}x_{j}^{\binom{k}{j}}+B_{k}( \mathbf{x})+C_{k}(\mathbf{x})-E_{k}(\mathbf{x}).\]
By translating this equation in light of the above three equations, one concludes the proof.
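As a small consistency check (again for the toy case \(t=k=1\), and assuming Python with sympy), Lemma 2.7 reduces to the classical dissymmetry identity for trees, \(G_{1}=R-R^{2}/2\) with \(R=x\,G_{1}^{(1)}\) the exponential generating function of rooted labelled trees; the sketch below recomputes \(G_{1}^{(1)}\) as in the earlier snippet and compares this with the integral form (5).

```python
from sympy import symbols, exp, integrate, series, Rational

x = symbols('x')
N = 8

# fixed-point computation of G1^(1) (rooted trees, root vertex discounted)
G1_rooted = Rational(1)
for _ in range(N):
    G1_rooted = series(exp(x * G1_rooted), x, 0, N).removeO()

via_integral = integrate(G1_rooted, x)                     # Equation (5)
R = series(x * G1_rooted, x, 0, N).removeO()
via_dissymmetry = series(R - R**2 / 2, x, 0, N).removeO()  # Lemma 2.7, t=k=1

print(series(via_integral - via_dissymmetry, x, 0, N))     # prints O(x**8):
# the two forms agree up to the truncation order
```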
## 3 Asymptotic analysis
Fix \(t\geq 1\). In this section we prove Theorems 1.1 and 1.2. We use rather classical methods from [15], which consist in deriving asymptotic estimates from local expansions of the generating functions from Section 2 at their singularities. Those expansions are in turn derived from successive applications of the implicit system of equations described in Lemma 2.5, to "transfer" the local expansion of \(G_{t+1}(\mathbf{x})\) to \(G_{0}(x_{1},1,\ldots,1)\), as illustrated by the schema in Figure 3.
We will follow the method developed in [12, Chapter 2], but will need to extend some of the tools and notions present there in order to deal with multivariate generating functions and the fact that the local expansions are with respect to different variables from one step to the next. This is the purpose of the next section.
### Proper singularity functions and singular expansions
Let \(\rho:U\to\mathbb{C}\) be an analytic function defined on an open set. For \(u\in U\) and \(\delta,\eta>0\), a \(\Delta\)_-domain at_\(\rho(u)\) is a complex region of the form
\[\Delta(\rho(u),\delta,\eta)=\Delta(\rho(u))=\{z\in\mathbb{C}:|z|<\rho(u)+\eta \text{ and }|\arg(z/\rho(u)-1)|>\delta\}.\]
Our main tool is a "transfer theorem". The proof can be found in [12] (see also [15, Chapter VI.3]).
**Proposition 3.1**.: _(Transfer Theorem [12, Lemma 2.18]). Let \(f(z,u)\) be a power series in \(z\) and a parameter \(u\in U\), and suppose that it admits an expansion of the form_
\[f(z,u)=C(u)\left(1-\frac{z}{\rho(u)}\right)^{-\alpha(u)}+O\left(\left(1-\frac{z }{\rho(u)}\right)^{-\beta(u)}\right),\]
_that is uniform for \(u\in U\) and \(z\in\Delta(\rho(u))\), and with functions \(C(u)\), \(\rho(u)\), \(\alpha(u)\) and \(\beta(u)\) that remain bounded and satisfy \(\beta(u)<\Re(\alpha(u))\) for all \(u\in U\)._
_Then the following estimate holds uniformly for \(u\in U\) and as \(n\to\infty\)_
\[[z^{n}]f(z,u)=C(u)\frac{n^{\alpha(u)-1}}{\Gamma(\alpha(u))}\rho(u)^{-n}+O \left(\rho(u)^{-n}\,n^{\max(\Re(\alpha(u))-2,\;\beta(u)-1)}\right).\]
By setting \(u=1\) in Proposition 3.1, one recovers the "classical" transfer theorem for univariate analytic functions, see for instance [12, Lemma 2.15].
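As a hands-on illustration of this univariate case (a toy example not taken from [12] or [15]; standard-library Python), consider \(f(z)=(1-z)^{-1/2}\), for which \(\rho=1\) and \(\alpha=1/2\):

```python
# [z^n] (1-z)^(-1/2) = binomial(2n, n) / 4^n; the transfer theorem predicts
# the behaviour n^(alpha-1)/Gamma(alpha) * rho^(-n) with alpha = 1/2, rho = 1.
import math

def coeff(n):
    # binomial(2n, n) / 4^n computed via log-gamma to avoid overflow
    return math.exp(math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1) - n * math.log(4))

for n in (10, 100, 1000, 10000):
    predicted = n ** (-0.5) / math.gamma(0.5)
    print(f"n={n}: coefficient = {coeff(n):.6f}, prediction = {predicted:.6f}")
```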
Next we introduce several definitions which will allow us to extend the notion of local expansion of an analytic function at an _algebraic singularity_ to our multivariate setting. First is the notion of _fully movable proper singularity function_.
**Definition 3.2**.: _We say that a function \(\rho(x_{2},\ldots,x_{t})\) is a **proper singularity function** if it satisfies the following conditions:_
1. _It is defined in a_ \((t-1)\)_-dimensional proper complex neighbourhood of_ \(\mathbb{R}_{+}^{t-1}\)_, where it is also analytic._
2. _It is positive and real if_ \(x_{2},\ldots,x_{t}\) _are positive and real, and it is strictly decreasing with negative derivatives in all_ \(t-1\) _positive real variables._
_Furthermore we say it is **fully movable** with respect to the variables \(x_{2},\ldots,x_{k}\) if the following condition holds:_
1. \(\rho(x_{2},\ldots,x_{t})\to 0\) _(resp._ \(\infty\)_) if one of the variables_ \(x_{2},\ldots,x_{k}\) _tends to_ \(\infty\) _(resp._ \(0\)_), whereas all the other variables including_ \(x_{k+1},\ldots,x_{t}\) _are fixed positive real numbers._
With this notion at hand, we can define that of a _positive function with a proper \(\alpha\)-singularity_.
**Definition 3.3**.: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\). We say that a function \(G(\mathbf{x})\) is a **positive function with a proper \(\alpha\)-singularity** if the following properties hold:_
1. \(G(\mathbf{x})\) _is a power series in_ \(\mathbf{x}=(x_{1},\ldots,x_{t})\) _with non-negative coefficients._
2. _There exists a proper singularity function_ \(\rho(x_{2},\ldots,x_{t})\) _such that for every fixed choice of_ \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\)_,_ \(\rho(x_{2},\ldots,x_{t})\) _is the radius of convergence of the power series_ \(x_{1}\mapsto G(\mathbf{x})\)_._
3. _For every choice of_ \(X_{0},X_{1}\in\mathbb{R}\) _with_ \(0<X_{0}<X_{1}\)_, there exist_ \(\delta>0\) _and analytic functions_ \(g_{1}(\mathbf{x})\)_,_ \(g_{2}(\mathbf{x})\)_, that are defined and non-zero for_ \(X_{0}<|x_{2}|,\ldots,|x_{t}|<X_{1}\) _and_ \(|x_{1}-\rho(x_{2},\ldots,x_{t})|<\delta\) _with_ \(x_{1},\ldots,x_{t}\) _sufficiently close to the positive real axis, such that in this range, provided that_ \(\arg(x_{1}-\rho(x_{2},\ldots,x_{t}))\neq 0\)_, we have_ \[G(\mathbf{x})=g_{1}(\mathbf{x})+g_{2}(\mathbf{x})\left(1-\frac{x_{1}}{\rho(x_{ 2},\ldots,x_{t})}\right)^{\alpha}.\] (7) _In this case we say that_ \(x_{1}\) _is the_ **leading variable** _of_ \(G(\mathbf{x})\)
Finally, in order to apply Proposition 3.1 to a positive function with a proper \(\alpha\)-singularity, we need some notion of analytic continuation to a \(\Delta\)-domain.
**Definition 3.4**.: _A positive function \(G(\mathbf{x})\) with a proper \(\alpha\)-singularity (\(\alpha\in\mathbb{R}\setminus\mathbb{Z}\)) and proper singularity function \(\rho(x_{2},\ldots,x_{t})\) is said to be **aperiodic and analytically continuable with respect to the variable**\(x_{1}\) if the following holds:_
1. _For every fixed choice of_ \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\)_,_ \(\rho(x_{2},\ldots,x_{t})\) _is the unique singularity of the function_ \(x_{1}\mapsto G(\mathbf{x})\) _on the circle_ \(|x_{1}|=\rho(x_{2},\ldots,x_{t})\)_._
2. _There exists_ \(\delta>0\) _such that_ \(x_{1}\mapsto G(\mathbf{x})\) _can be analytically continued to the region_ \[|x_{1}|<|\rho(x_{2},\ldots,x_{t})|+\delta/2\quad\text{and}\quad|x_{1}-\rho(x_{ 2},\ldots,x_{t})|>\delta.\] (8)
_In particular this function cannot be represented as a function of the form \(x_{1}^{a}f(x_{1}^{b})\) for some positive integers \(a,b\), where \(b>1\)._
Fix \(k\in\{2,\ldots,t\}\) and observe that setting \(x_{i}=1\) for \(i\neq k\) in Definition 3.4_(ii)_ implies that \(G(x_{1},1,\ldots,1,x_{k},1,\ldots,1)\) can be analytically continued to a domain of the form \(\Delta(x_{k})\).
### Transfer properties of proper singular expansions
We now prove certain "transfer properties" of proper \(\alpha\)-singular expansions in the neighbourhood of a proper singularity function. Our main tools here will be the _Implicit Function Theorem_ (IFT) for analytic functions, and its refinement known as the _Weierstrass Preparation Theorem_ (WPT). For a statement and a proof of those famous theorems, we refer the reader to [17].
The first property generalises [12, Lemma 2.28] to proper singularity functions. The proof follows the same line and we only sketch it here.
**Lemma 3.5**.: _Let \(k\in\{2,\ldots,t-1\}\) and suppose that \(\rho(x_{2},x_{3},\ldots,x_{t})\) is a proper singularity function that is fully movable with respect to the variables \(x_{2},\ldots,x_{k}\). Then there exists a proper singularity function \(\kappa(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})\) that is fully movable with respect to \(x_{1},\ldots,x_{k-1}\) such that_
\[x_{1}=\rho(x_{2},\ldots,x_{k-1},\kappa(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_ {t}),x_{k+1},\ldots,x_{t}). \tag{9}\]
_Furthermore there exists a function \(K(\mathbf{x})\) that is analytic and non-zero on a \(t\)-dimensional complex neighbourhood of \(\mathbb{R}_{+}^{t}\) such that_
\[x_{1}-\rho(x_{2},\ldots,x_{t})=K(\mathbf{x})\left(x_{k}-\kappa(x_{1},\ldots,x_ {k-1},x_{k+1},\ldots,x_{t})\right). \tag{10}\]
Proof.: Suppose first that \(x_{1},\ldots,x_{t}\) are positive real variables. Since \(\rho\) is strictly decreasing and tends to \(0\) (resp. \(\infty\)) if one of the variables \(x_{2},\ldots,x_{k}\) tends to \(\infty\) (resp. \(0\)), it immediately follows from the continuity of \(\rho\) and the IFT that a function \(\kappa=\kappa(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})\) satisfying (9) exists. Furthermore, \(\kappa\) is strictly decreasing and tends to \(0\) (resp. \(\infty\)) if one of the variables \(x_{1},\ldots,x_{k-1}\) tends to \(\infty\) (resp. \(0\)).
Next, fix \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\) and set \(x_{1}=\rho(x_{2},\ldots,x_{t})\). Since, by Definition 3.2, \(\rho\) is analytic and satisfies \(\frac{\partial}{\partial x_{k}}\rho<0\) for \(2\leq k\leq t\), it follows, by applying the IFT to (9), that the function \(\kappa\) can be (uniquely) analytically continued to a complex neighbourhood of \((x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})\). In fact, using the WPT in the degree one case, it can further be shown that there exists a function \(K(\mathbf{x})\) that is analytic and non-zero in a complex neighbourhood of \(\mathbf{x}\) such that (10) holds.
Finally, a standard analytic continuation argument shows that both \(\kappa\) and \(K\) can be _globally_ defined so that (10) holds in the proposed range, that is, a complex neighbourhood of \(\mathbb{R}_{+}^{t}\).
An important consequence of Lemma 3.5 is that for \(k\in[t]\) the representation (7) can be rewritten into
\[G(\mathbf{x})=g_{1}(\mathbf{x})+\overline{g}_{2}(\mathbf{x})\left(1-\frac{x_{k} }{\kappa}\right)^{\alpha},\]
with \(\kappa=\kappa(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})\) and where the analytic function
\[\overline{g}_{2}(\mathbf{x})=g_{2}(\mathbf{x})\left(\frac{K(\mathbf{x})\kappa }{\rho(x_{2},\ldots,x_{t})}\right)^{\alpha}\]
is defined and non-zero for \(X_{0}<|x_{2}|,\ldots,|x_{t}|<X_{1}\). This means that any of the variables \(x_{1},\ldots,x_{k}\) can be the leading one in the definition of a positive function with a proper \(\alpha\)-singularity, provided that the proper singularity function \(\rho(x_{2},\ldots,x_{t})\) is fully movable with respect to \(x_{2},\ldots,x_{k}\). Furthermore \(\kappa\) is certainly a singularity of the mapping \(x_{k}\mapsto G(\mathbf{x})\). And by the monotonicity property in Definition 3.2_(ii)_ there is no smaller positive real singularity. Thus, \(\kappa\) is the radius of convergence of the mapping \(x_{k}\mapsto G(\mathbf{x})\), provided that \(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t}\in\mathbb{R}_{+}\).
Next, we extend [12, Lemma 2.27] to the context of positive function with a proper \(\alpha\)-singularity.
**Lemma 3.6**.: _For \(k\in[t]\) and \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\), let \(G(\mathbf{x})\) be a positive function with a proper \(\alpha\)-singularity, aperiodic and analytically continuable with respect to \(x_{1}\), and with a proper singularity function \(\rho(x_{2},\ldots,x_{t})\) that is fully movable with respect to \(x_{2},\ldots,x_{k}\). Set_
\[D(\mathbf{x})=\frac{\partial}{\partial x_{k}}G(\mathbf{x})\quad\text{and} \quad H(\mathbf{x})=\int_{0}^{x_{k}}G(x_{1},\ldots,x_{k-1},y,x_{k+1},\ldots,x _{t})\,dy. \tag{11}\]
_Then \(D(\mathbf{x})\) is a positive function with a proper \((\alpha-1)\)-singularity, while \(H(\mathbf{x})\) is a positive function with a proper \((\alpha+1)\)-singularity, and both are aperiodic and analytically continuable with respect to \(x_{1}\). Furthermore the proper singularity functions of \(G(\mathbf{x})\), \(D(\mathbf{x})\) and \(H(\mathbf{x})\) coincide._
Proof.: Fix \(\delta>0\). First, the analytic continuability of both mappings \(x_{1}\mapsto D(\mathbf{x})\) and \(x_{1}\mapsto H(\mathbf{x})\) to a region of the form (8) is immediate by assumption on \(G(\mathbf{x})\) and properties of the derivative and the integral. Second, if \(|x_{1}-\rho(x_{2},\ldots,x_{t})|<\delta\) then Lemma 3.5 implies that there exist \(\delta^{\prime}>0\) and a proper singularity function \(\kappa=\kappa(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})\) such that for \(|x_{k}-\kappa|<\delta^{\prime}\), \(G(\mathbf{x})\) can be represented as
\[G(\mathbf{x})=g_{1}(\mathbf{x})+\overline{g}_{2}(\mathbf{x})\left(1-\frac{x_ {k}}{\kappa}\right)^{\alpha}. \tag{12}\]
Set \(d_{1}(\mathbf{x})=(\partial/\partial x_{k})g_{1}(\mathbf{x})\) and \(d_{2}(\mathbf{x})=-\alpha\overline{g}_{2}(\mathbf{x})/\kappa+(\partial/\partial x_{k})\overline{g}_{2}(\mathbf{x})\left(1-x_{k}/\kappa\right)\). Then taking the partial derivative of (12) with respect to \(x_{k}\) gives

\[D(\mathbf{x})=\frac{\partial}{\partial x_{k}}g_{1}(\mathbf{x})+\frac{\partial}{\partial x_{k}}\overline{g}_{2}(\mathbf{x})\left(1-\frac{x_{k}}{\kappa}\right)^{\alpha}-\frac{\alpha\,\overline{g}_{2}(\mathbf{x})}{\kappa}\left(1-\frac{x_{k}}{\kappa}\right)^{\alpha-1}=d_{1}(\mathbf{x})+d_{2}(\mathbf{x})\left(1-\frac{x_{k}}{\kappa}\right)^{\alpha-1}.\]
Now, in order to compute the integral of (12) we first compute the Taylor expansions of the functions \(g_{1}(\mathbf{x})\) and \(\overline{g}_{2}(\mathbf{x})\) at \(x_{k}\sim\kappa\) and obtain a representation of the form
\[G(\mathbf{x})=\sum_{\ell\geq 0}G_{\ell}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t}) \left(1-\frac{x_{k}}{\kappa}\right)^{\alpha\ell} \tag{13}\]
that is certainly convergent for \(|x_{k}-\kappa|<\delta^{\prime}\). Next we split up the integral in (11) into three parts
\[I_{1}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t}) :=\int_{0}^{(1-\eta)\kappa}G(x_{1},\ldots,x_{k-1},y,x_{k+1},\ldots,x_{t})\,dy,\] \[I_{2}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t}) :=\int_{(1-\eta)\kappa}^{\kappa}G(x_{1},\ldots,x_{k-1},y,x_{k+1},\ldots,x_{t})\,dy,\] \[I_{3}(\mathbf{x}) :=\int_{\kappa}^{x_{k}}G(x_{1},\ldots,x_{k-1},y,x_{k+1},\ldots,x_ {t})\,dy,\]
where \(\eta>0\) is chosen in such a way that \(\eta<\delta^{\prime}/|\kappa|\). The first integral is certainly an analytic function in \(x_{1},\ldots,x_{k-1},x_{k},\ldots,x_{t}\), as a definite integral with respect to \(y\) in a range where \(G\) is analytic. The second integral can be directly computed by the series expansion (13)
\[I_{2}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t}) =\int\limits_{(1-\eta)\kappa}^{\kappa}G(x_{1},\ldots,x_{k-1},y,x_ {k+1},\ldots,x_{t})\,dy\] \[=\sum\limits_{\ell\geq 0}G_{\ell}(x_{1},\ldots,x_{k-1},x_{k+1}, \ldots,x_{t})\int\limits_{(1-\eta)\kappa}^{\kappa}(1-y/\kappa)^{\alpha\ell}\,dy\] \[=\kappa\sum\limits_{\ell\geq 0}\frac{G_{\ell}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})}{\alpha\ell+1}\eta^{\alpha\ell+1}.\]
This series is absolutely convergent and represents an analytic function in \(x_{1},\ldots,x_{k-1}\), \(x_{k+1},\ldots,x_{t}\). Finally, for the third integral we use again the series expansion (13) and obtain
\[I_{3}(\mathbf{x}) =\int_{\kappa}^{x_{k}}G(x_{1},\ldots,x_{k-1},y,x_{k+1},\ldots,x_{ t})\,dy\] \[=\sum\limits_{\ell\geq 0}G_{\ell}(x_{1},\ldots,x_{k-1},x_{k+1}, \ldots,x_{t})\int\limits_{\kappa}^{x_{k}}(1-y/\kappa)^{\alpha\ell}\,dy\] \[=-\kappa\sum\limits_{\ell\geq 0}\frac{G_{\ell}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})}{\alpha\ell+1}(1-x_{k}/\kappa)^{\alpha\ell+1}.\]
This series representation can be rewritten into
\[I_{3}(\mathbf{x})=h_{1}(\mathbf{x})+h_{2}(\mathbf{x})\left(1-\frac{x_{k}}{ \kappa}\right)^{\alpha+1}.\]
Next, note that since
\[\overline{g}_{2}(x_{1},\ldots,x_{k-1},\kappa,x_{k+1},\ldots,x_{t})=-\frac{ \alpha+1}{\kappa}h_{2}(x_{1},\ldots,x_{k-1},\kappa,x_{k+1},\ldots,x_{t}),\]
both \(\overline{g}_{2}\) and \(h_{2}\) are non-zero, even when \(|x_{k}-\kappa|<\delta^{\prime\prime}\) for a sufficiently small \(\delta^{\prime\prime}>0\). Moreover, since the coefficients of \(G(\mathbf{x})\) and \(H(\mathbf{x})\) are non-negative we have \(g_{1}(\mathbf{x})>0\) for \(x_{j}\in\mathbb{R}_{+}\) and \(\overline{h}_{1}(\mathbf{x})=h_{1}(\mathbf{x})+I_{1}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})+I_{2}(x_{1},\ldots,x_{k-1},x_{k+1},\ldots,x_{t})>0\).
Finally, by another application of Lemma 3.5, we can rewrite \(D(\mathbf{x})\) and \(H(\mathbf{x})\) into
\[D(\mathbf{x})=d_{1}(\mathbf{x})+\overline{d}_{2}(\mathbf{x})\left(1-\frac{x_{1} }{\rho(x_{2},\ldots,x_{t})}\right)^{\alpha-1}\text{ and }H(\mathbf{x})=\overline{h}_{1}(\mathbf{x})+ \overline{h}_{2}(\mathbf{x})\left(1-\frac{x_{1}}{\rho(x_{2},\ldots,x_{t})} \right)^{\alpha+1}\]
with \(\overline{d}_{2},\overline{h}_{2}\neq 0\). This completes the proof.
Finally, we generalise [12, Theorem 2.21]. This theorem states that if \(G(x,u)\) is a univariate function, with parameter \(u=(x_{2},\ldots,x_{t})\), defined implicitly in terms of another function \(F(x,u,y)\) (i.e. such that \(F(x,u,G(x,u))=0\)) that admits a \(1/2\)-singular expansion at some \(R>0\), then \(G(x,u)\) also admits a \(1/2\)-singular expansion at some \(\rho<R\). The next result extends this to the case where \(F\) has a **proper** \(1/2\)-singularity. The proof follows the same lines and we sketch it next.
**Lemma 3.7**.: _Suppose that \(F(\mathbf{x},y)\) is a positive function in \(t+1\) variables with a proper \(1/2\)-singularity and singularity function \(R(x_{2},\ldots,x_{t};y)\) that is fully movable with respect to the variables \(x_{2},\ldots,x_{k}\) and \(y\) for some \(2\leq k\leq t\). Furthermore assume that \(F(x_{1},x_{2},\ldots,x_{t},y)=0\) if one of the variables \(x_{1},\ldots,x_{k}\) is zero. Then the functional equation_
\[G=\exp(F(\mathbf{x},G)) \tag{14}\]
_has a unique solution \(G=G(\mathbf{x})\) with \(G(\mathbf{0})=1\) which is a positive function with a proper \(1/2\)-singularity, too. Its singularity function \(\rho(x_{2},\ldots,x_{t})\) is fully movable with respect to the variables \(x_{2},\ldots,x_{k}\) and satisfies_
\[\rho(x_{2},\ldots,x_{t})<R\big{(}x_{2},\ldots,x_{t};G(\rho(x_{2},\ldots,x_{t}),x_{2},\ldots,x_{t})\big{)}.\]
_Moreover, if \(F(\mathbf{x},y)\) is aperiodic and analytically continuable with respect to the variable \(x_{1}\) then the same property holds for \(G(\mathbf{x})\)._
Proof.: First, by iteration (or by the IFT), Equation (14) admits a unique power series solution with \(G(\mathbf{0})=1\) and non-negative coefficients.
Next we define a singularity function \(\rho(x_{2},\ldots,x_{t})\). For this purpose we fix \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\) and vary \(x_{1}\). We claim that there exists a unique \(\overline{x}_{1}>0\) such that for \(\overline{\mathbf{x}}=(\overline{x}_{1},x_{2},\ldots,x_{t})\) we have \(\overline{x}_{1}<R(x_{2},\ldots,x_{t},G(\overline{\mathbf{x}}))\) and satisfying
\[G(\overline{\mathbf{x}})=\exp(F(\overline{\mathbf{x}},G(\overline{\mathbf{x}} ))), \tag{15}\]
and
\[1=\exp(F(\overline{\mathbf{x}},G(\overline{\mathbf{x}})))\frac{\partial F}{ \partial y}(\overline{\mathbf{x}},G(\overline{\mathbf{x}})). \tag{16}\]
Since all the coefficients of \(G\) are non-negative, the solution function \(G\) is strictly increasing in \(x_{1}\). Consequently, the factor \(\exp(F(\overline{\mathbf{x}},G(\overline{\mathbf{x}})))\) in (16) is also strictly increasing in \(x_{1}\).
Now we study the factor \((\partial F/\partial y)(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))\) in (16). By assumption we have
\[\frac{\partial F}{\partial y}(0,x_{2},\ldots,x_{t},G(0,x_{2},\ldots,x_{t}))=0.\]
Moreover, since \(F\) is a positive function with a proper \(1/2\)-singularity, by Lemma 3.6\(\partial F/\partial y\) is a positive function with a proper \((-1/2)\)-singularity, that is, when \(x_{1}\sim R(x_{2},\ldots,x_{t},y)\) it can be represented as
\[\frac{\partial F}{\partial y}(\mathbf{x},y)=\overline{f}_{1}(\mathbf{x},y)+ \overline{f}_{2}(\mathbf{x},y)\left(1-\frac{x_{1}}{R(x_{2},\ldots,x_{t},y)} \right)^{-1/2},\]
where \(\overline{f}_{1}\) and \(\overline{f}_{2}\) are analytic and \(\overline{f}_{2}\neq 0\). Since \(G\) is strictly increasing in \(x_{1}\) and \(R\) is a proper singularity function fully movable in \(y\), \(R(x_{2},\ldots,x_{t},G(\mathbf{x}))\) is strictly decreasing in \(x_{1}\) and goes to \(0\) as \(x_{1}\to\infty\). Therefore, \(\left(1-x_{1}/R(x_{2},\ldots,x_{t},G(\mathbf{x}))\right)^{-1/2}\) is strictly increasing in \(x_{1}\) and unbounded while \(x_{1}<R(x_{2},\ldots,x_{t},G(\mathbf{x}))\). The same is true for \(\left(\partial F/\partial y\right)(\mathbf{x},G(\mathbf{x}))\) because \(\overline{f}_{1}(\mathbf{x},G(\mathbf{x}))\) and \(\overline{f}_{2}(\mathbf{x},G(\mathbf{x}))\) are strictly increasing functions in \(x_{1}\) when \(x_{1}<R(x_{2},\ldots,x_{t},G(\mathbf{x}))\). Our claim follows.
From [12, Theorem 2.19], which amounts to evaluating the parameter \(u\) in \(\mathbb{R}_{+}\) in [12, Theorem 2.21], this implies that for \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\) the univariate function \(x_{1}\to G(\mathbf{x})\) has a \(1/2\)-singularity at \(x_{1}=\overline{x}_{1}\). Therefore we set
\[\rho(x_{2},\ldots,x_{t}):=\overline{x}_{1}.\]
The system formed by equations (15) and (16) can be used to get more information on \(\rho\). Notice first that, given \(x_{2},\ldots,x_{t}\), the system determines \(\overline{x}_{1}=\rho(x_{2},\ldots,x_{t})\) and \(G(\overline{\mathbf{x}})\). Then, since the determinant
\[\left|\begin{array}{cc}-e^{F}\frac{\partial F}{\partial x_{1} }&1-e^{F}\frac{\partial F}{\partial y}\\ -e^{F}\frac{\partial F}{\partial x_{1}}\frac{\partial F}{\partial y}-e^{F} \frac{\partial^{2}F}{\partial y\partial x_{1}}&-e^{F}\left(\frac{ \partial F}{\partial y}\right)^{2}-e^{F}\frac{\partial^{2}F}{\partial y^{2}} \end{array}\right| =e^{2F}\left|\begin{array}{cc}\frac{\partial F}{\partial x _{1}}&0\\ \frac{\partial F}{\partial x_{1}}\frac{\partial F}{\partial y}+\frac{ \partial^{2}F}{\partial y\partial x_{1}}&\left(\frac{\partial F}{\partial y} \right)^{2}+\frac{\partial^{2}F}{\partial y^{2}}\end{array}\right|\] \[=e^{2F}\frac{\partial F}{\partial x_{1}}\left(\left(\frac{ \partial F}{\partial y}\right)^{2}+\frac{\partial^{2}F}{\partial y^{2}}\right)\]
is positive, it follows by the IFT that the function \(\rho(x_{2},\ldots,x_{t})\) can be locally analytically continued. Fix now some \(2\leq j\leq k\). By differentiating (15) with respect to \(x_{j}\), one obtains
\[0 =\frac{\partial[G(\overline{\mathbf{x}})]}{\partial x_{j}}-\exp(F (\overline{\mathbf{x}},G(\overline{\mathbf{x}})))\left[\frac{\partial F}{ \partial x_{1}}(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))\frac{ \partial\rho}{\partial x_{j}}+\frac{\partial F}{\partial x_{j}}(\overline{ \mathbf{x}},G(\overline{\mathbf{x}}))+\frac{\partial F}{\partial y}(\overline{ \mathbf{x}},G(\overline{\mathbf{x}}))\frac{\partial[G(\overline{\mathbf{x}})] }{\partial x_{j}}\right]\] \[=-\exp(F(\overline{\mathbf{x}},G(\overline{\mathbf{x}})))\left[ \frac{\partial F}{\partial x_{1}}(\overline{\mathbf{x}},G(\overline{ \mathbf{x}}))\frac{\partial\rho}{\partial x_{j}}+\frac{\partial F}{\partial x _{j}}(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))\right].\]
In other words,
\[\frac{\partial\rho}{\partial x_{j}}=-\frac{\frac{\partial F}{ \partial x_{j}}(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))}{\frac{ \partial F}{\partial x_{1}}(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))}<0.\]
This means that \(\rho\) is strictly decreasing in all variables, provided they are real and positive. Let us finally consider the behaviour of \(\rho\) as \(x_{j}\) tends to \(0\) or \(+\infty\). Suppose first that \(\overline{x}_{1}=\rho\) is bounded away from \(0\) when \(x_{j}\to\infty\). Then \(G(\overline{\mathbf{x}})\to+\infty\), and \(R\to 0\). However, this is impossible since
\(\overline{x}_{1}<R\). Thus \(\rho\to 0\) as \(x_{j}\to\infty\). On the other hand, suppose that \(\overline{x}_{1}=\rho\) stays bounded when \(x_{j}\to 0\). In this case, \(G(\overline{\mathbf{x}})\) stays bounded and so, by the assumptions on the zeros of \(F\), \(F(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))\to 0\) and \((\partial F/\partial y)(\overline{\mathbf{x}},G(\overline{\mathbf{x}}))\to 0\) as \(x_{j}\to 0\). However, this is not possible by (16). Summing up, this means, in view of Definition 3.2, that the function \(\rho\) is a proper singularity function that is fully movable with respect to \(x_{2},\ldots,x_{k}\).
Furthermore, it follows from [12, Theorem 2.21] that we also get an expansion of the form (7) with \(\alpha=1/2\) for \(G(\mathbf{x})\). And it remains to check that for \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\), the mapping \(x_{1}\mapsto G(\mathbf{x})\) admits an analytic continuation away from \(\rho\), in a region of the form (8). To that end, we now consider (14) as a functional equation for the function \(x_{1}\mapsto G(\mathbf{x})\). Since \(G(\mathbf{x})\) can be written as \(G(\mathbf{x})=1+\tilde{G}(\mathbf{x})\), where \(\tilde{G}\) is a power series with non-negative coefficients, we have that
\[G(\mathbf{x})=\exp(F(\mathbf{x},G(\mathbf{x}))) =\exp(F(\mathbf{x},1+\tilde{G}(\mathbf{x})))\] \[=\exp(F(\mathbf{x},1)+\tilde{F}(\mathbf{x},\tilde{G}(\mathbf{x})))\] \[=1+F(\mathbf{x},1)+\tilde{F}(\mathbf{x},\tilde{G}(\mathbf{x}))+ \sum_{n\geq 2}\frac{(F(\mathbf{x},1)+\tilde{F}(\mathbf{x},\tilde{G}(\mathbf{x} )))^{n}}{n!},\]
where \(\tilde{F}(\mathbf{x},y)\) is a power series with non-negative coefficients. Now, since \(F(\mathbf{x},y)\) is aperiodic in \(x_{1}\), it follows that \(G(\mathbf{x})\) has to also be aperiodic with respect to \(x_{1}\). This implies that \(\rho(x_{2},\ldots,x_{t})\) is the unique dominant singularity of the function \(x_{1}\mapsto G(\mathbf{x})\) and we conclude by a standard compactness argument that it can be analytically continued to a region of the form (8).
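As a toy illustration of the characteristic system (15)-(16) (this example is not part of the original argument, and the chosen \(F\) is entire, so only the branch-point mechanism is being illustrated), take \(F(\mathbf{x},y)=x_{1}y^{t}\), so that (14) becomes \(G=\exp(x_{1}G^{t})\), the equation satisfied by \(G_{t}^{(t)}\) in the proof of Proposition 3.8 below with \(X\) in place of \(x_{1}\). The system then reads

\[G=\exp(\overline{x}_{1}G^{t})\qquad\text{and}\qquad 1=\exp(\overline{x}_{1}G^{t})\,t\,\overline{x}_{1}G^{t-1}=t\,\overline{x}_{1}G^{t},\]

whence \(\overline{x}_{1}G^{t}=1/t\), \(G=e^{1/t}\) and \(\rho=\overline{x}_{1}=1/(et)\), in accordance with the singularity of the tree function and the diagonal entries of Table 1.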
### Proofs of the main results
Fix \(t\geq 1\) and recall that starting with \(G_{t+1}\), which is an explicit monomial, one can recursively obtain the generating functions \(G_{t},G_{t-1},\ldots,G_{1}\), and finally \(G_{0}=\exp(G_{1})\). Let us next discuss the first step of this induction, from \(G_{t+1}\) to \(G_{t}\), since it is slightly different from the general step.
**Proposition 3.8**.: _Let \(t\geq 1\), \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\). Then there exist two functions \(h_{1}(x)\) and \(h_{2}(x)\), that are analytic and non-zero at \(x=1/e\), such that for \(tX\sim 1/e\) we have_
\[G_{t}(\mathbf{x})=\frac{\prod_{j=1}^{t}x_{j}^{\binom{t}{j}}}{t!}\left(h_{1}(tX )+h_{2}(tX)(1-etX)^{3/2}\right),\qquad\text{where }X=\prod_{j=1}^{t}x_{j}^{\binom{t}{j-1}}. \tag{17}\]
Proof.: From Equation (2) and by (3) we directly get
\[G_{t+1}^{(t)}(\mathbf{x})=\prod_{j=1}^{t}x_{j}^{\binom{t}{j-1}}.\]
Consequently, from the relation (4) the function \(G_{t}^{(t)}=G_{t}^{(t)}(\mathbf{x})\) satisfies the equation
\[G_{t}^{(t)}=\exp\left(\prod_{j=1}^{t}x_{j}^{\binom{t}{j-1}}[G_{t}^{(t)}]^{t} \right).\]
Let \(T(z)\) denote the _tree function_, i.e. that satisfies the equation \(T(z)=z\exp(T(z))\). Then, using the change of variable \(z=tX\) with \(X=\prod_{j=1}^{t}x_{j}^{\binom{t}{j-1}}\), we can represent \(G_{t}^{(t)}(\mathbf{x})\) as
\[G_{t}^{(t)}(\mathbf{x})=\left(\frac{T(tX)}{tX}\right)^{1/t}=\exp\left(T(tX)/t \right).\]
With the help of (6) and the relation \(T(x)=x\exp(T(x))\), this also leads to
\[G_{t}(\mathbf{x}) =\frac{1}{t!}\prod_{j=1}^{t}x_{j}^{\binom{t}{j}}G_{t}^{(t)}( \mathbf{x})-\frac{t}{(t+1)!}\prod_{j=1}^{t}x_{j}^{\binom{t+1}{j}}\left[G_{t}^ {(t)}(\mathbf{x})\right]^{t+1}\] \[=\frac{\prod_{j=1}^{t}x_{j}^{\binom{t}{j}}}{t!}\exp\left(\frac{T( tX)}{t}\right)\left(1-\frac{T(tX)}{t+1}\right). \tag{18}\]
It is well known (see for instance [15]) that \(T(z)\) has its dominant singularity at \(z_{0}=1/e\) and a local Puiseux expansion at \(z\sim z_{0}\) of the form
\[T(z)=1-\sqrt{2}\sqrt{1-ez}+\frac{2}{3}(1-ez)-\frac{11\sqrt{2}}{36}(1-ez)^{3/2 }+O\left((1-ez)^{2}\right).\]
Furthermore, \(z_{0}=1/e\) is the only singularity on the circle \(|z|=1/e\) and \(T(z)\) can be analytically continued to a region of the form \(|z|<1/e+\delta/2\), \(|z-1/e|>\delta\) for some \(\delta>0\).
Hence we get
\[\exp\left(\frac{T(x)}{t}\right)\left(1-\frac{T(x)}{t+1}\right) =\frac{te^{1/t}}{t+1}\left(1-\frac{1}{t^{2}}(1-ex)+\frac{2\sqrt{2 }(t+1)}{3t^{3}}(1-ex)^{3/2}+O\left((1-ex)^{2}\right)\right)\] \[=h_{1}(x)+h_{2}(x)(1-ex)^{3/2},\]
where \(h_{1}(x)\) and \(h_{2}(x)\) are functions that are analytic and non-zero at \(x\sim 1/e\). This directly leads to the claimed local representation of \(G_{t}(\mathbf{x})\).
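The first terms of the Puiseux expansion of the tree function used above can also be checked numerically; the following sketch (standard-library Python, for illustration only) solves \(T=ze^{T}\) on the principal branch by Newton iteration and compares with the truncated expansion at \(z=1/e\):

```python
# Numerical check of T(z) = 1 - sqrt(2)*s + (2/3)*s^2 - (11*sqrt(2)/36)*s^3 + ...
# with s = sqrt(1 - e*z), on the principal branch T < 1 for 0 < z < 1/e.
import math

def tree_T(z, iters=200):
    t = 0.5
    for _ in range(iters):          # Newton iteration for t - z*exp(t) = 0
        f, df = t - z * math.exp(t), 1 - z * math.exp(t)
        t -= f / df
    return t

for eps in (1e-2, 1e-4, 1e-6):
    z = (1 - eps) / math.e
    s = math.sqrt(eps)              # s = sqrt(1 - e*z)
    approx = 1 - math.sqrt(2)*s + (2/3)*eps - (11*math.sqrt(2)/36)*s**3
    print(f"eps={eps:.0e}: T = {tree_T(z):.10f}, expansion = {approx:.10f}")
```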
Note that the appearance of the dominant singularity \((1-etX)^{3/2}\) is not unexpected, since \(G_{t}^{(t)}(\mathbf{x})\) has a dominant singularity of the form \(\sqrt{1-etX}\) and \(G_{t}(\mathbf{x})\) is, as per (5), more or less the integral of \(G_{t}^{(t)}(\mathbf{x})\).
Furthermore, one can deduce from Proposition 3.8 the case \(k=t\) of Theorem 1.1 by a direct application of Proposition 3.1 (setting \(u=1\)). Similarly, a central limit theorem for the case \(k=t\) of Theorem 1.2 follows immediately from Proposition 3.8 by an application of [12, Theorem 2.25].
For \(k<t\), Theorems 1.1 and 1.2 can also be deduced from local representations of a form similar to (17), modulo some technical conditions. Thus the main step of the proofs is to show that the above representation for \(G_{t}(\mathbf{x})\) implies corresponding representations for \(G_{t-1}(\mathbf{x}),G_{t-2}(\mathbf{x}),\ldots,G_{1}(\mathbf{x})\). This is the object of the next proposition. Before stating it, let us remark the following.
**Remark 3.9**.: _For \(k\in[t+1]\), the function \(G_{k}(\mathbf{x})\) admits \(\prod_{j\in[k]}x_{j}^{\binom{k}{j}}\) as a factor. If \(k=t+1\), this is Equation (2). For \(k\leq t\), it follows from Equation (5). And by (3) this implies that the function \(G_{k}^{(k-1)}(\mathbf{x})\) has \(\prod_{j\in[k]}x_{j}^{\binom{k-1}{j-1}}\) as a factor. In particular, \(G_{k}^{(k-1)}(\mathbf{x})\) is zero if one of the variables \(x_{1},\ldots,x_{k}\) is zero._
**Proposition 3.10**.: _Suppose that \(2\leq k\leq t\) and let \(G_{k}(\mathbf{x})\) be a positive function with a proper \(3/2\)-singularity that is aperiodic and analytically continuable with respect to \(x_{1}\), and with a proper singularity function \(\rho_{k}(x_{2},\ldots,x_{t})\) that is fully movable with respect to \(x_{2},\ldots,x_{k}\)._
_Then the function \(G_{k-1}(\mathbf{x})\) is also a positive function with a proper \(3/2\)-singularity that is aperiodic and analytically continuable with respect to \(x_{1}\), where the singularity function \(\rho_{k-1}(x_{2},\ldots,x_{t})\) is again fully movable with respect to \(x_{2},\ldots,x_{k}\). Moreover, if \(x_{2},\ldots,x_{t}\in\mathbb{R}_{+}\) then_
\[\rho_{k-1}(x_{2},\ldots,x_{t})<\rho_{k}(x_{2},\ldots,x_{t}). \tag{19}\]
Proof.: In a first step, using Lemma 3.5 we replace \(\rho_{k}(x_{2},\ldots,x_{t})\) by the proper singularity function \(\kappa_{k}=\kappa_{k}(x_{1},\ldots,x_{k-2},x_{k},\ldots,x_{t})\), that is fully movable with respect to \(x_{1},\ldots,x_{k-2}\), so that we can represent \(G_{k}(\mathbf{x})\) as
\[G_{k}(\mathbf{x})=\overline{g}_{1}(\mathbf{x})+\overline{g}_{2}(\mathbf{x}) \left(1-\frac{x_{k-1}}{\kappa_{k}}\right)^{3/2}.\]
Next, we deduce from Lemma 3.6 and the relation (3) that \(G_{k}^{(k-1)}(\mathbf{x})\) is a positive function with a proper \(1/2\)-singularity and admits the same proper singularity function \(\kappa_{k}\) as \(G_{k}(\mathbf{x})\), in particular it is fully movable with respect to \(x_{1},\ldots,x_{k-2}\).
From there we apply Lemma 3.7 to the relation (4), noting that
\[F(\mathbf{x},y)=G_{k}^{(k-1)}(x_{1},\ldots,x_{k-2},x_{k-1}y,x_{k},\ldots,x_{t})\]
is a positive function with a proper \(1/2\)-singularity and that, by Remark 3.9, \(F(\mathbf{x},y)\) vanishes when one of the variables \(x_{1},\ldots,x_{k}\) is zero. Furthermore, it admits a proper singularity function given by
\[R(x_{1},\ldots,x_{k-2},x_{k},\ldots,x_{t},y)=\frac{1}{y}\kappa_{k}(x_{1}, \ldots,x_{k-2},x_{k},\ldots,x_{t}).\]
Clearly \(R\) is fully movable in \(x_{1},\ldots,x_{k-2},x_{k}\) and \(y\). Consequently, using Lemma 3.7 the solution function \(y=G_{k-1}^{(k-1)}(\mathbf{x})\) is a positive function with a proper \(1/2\)-singularity, and leading variable \(x_{k-1}\), for which the singularity function \(\kappa_{k-1}=\kappa_{k-1}(x_{1},\ldots,x_{k-2},x_{k},\ldots,x_{t})\) satisfies
\[\kappa_{k-1}(x_{1},\ldots,x_{k-2},x_{k},\ldots,x_{t})<\frac{\kappa_{k}(x_{1}, \ldots,x_{k-2},x_{k},\ldots,x_{t})}{G_{k-1}^{(k-1)}(\mathbf{x})}.\]
Note that \(G_{k-1}^{(k-1)}(\mathbf{0})=1\). Hence, \(G_{k-1}^{(k-1)}(\mathbf{x})>1\), and it follows that
\[\kappa_{k-1}(x_{1},\ldots,x_{k-2},x_{k},\ldots,x_{t})<\kappa_{k}(x_{1}, \ldots,x_{k-2},x_{k},\ldots,x_{t}).\]
Finally, we apply Lemma 3.6 to relation (5) and obtain that \(G_{k-1}(\mathbf{x})\) is a positive function with a proper \(3/2\)-singularity and leading variable \(x_{k-1}\). By another application of Lemma 3.5 we see that we can change it back to the leading variable \(x_{1}\), such that the corresponding proper singularity function \(\rho_{k-1}(x_{2},\ldots,x_{t})\) satisfies (19).
We are now in a position to prove the two main results of the paper.
Proof of Theorem 1.1. Proposition 3.8 implies that \(G_{t}(\mathbf{x})\) is a positive function with a proper \(3/2\)-singularity, and is aperiodic and analytically continuable with respect to \(x_{1}\). In particular, compare (7) with (17) and note that \(x_{1}\) appears in \(X\) only in the first power \(x_{1}^{\binom{t}{0}}\). In this case, the proper singularity function is explicitly given by
\[\rho_{t}(x_{2},\ldots,x_{t})=\frac{1}{et}\prod_{j=2}^{t}x_{j}^{-\binom{t}{j-1}}.\]
From there, successive applications of Proposition 3.10 imply that the function \(G_{k}(\mathbf{x})\) also has these properties for each \(k\in[t]\). Since the exponential is an entire function this also holds for \(G(\mathbf{x})=G_{0}(\mathbf{x})=\exp(G_{1}(\mathbf{x}))\). And we conclude the proof by setting \(x_{2}=\cdots=x_{t}=1\) then applying Proposition 3.1.
Proof of Theorem 1.2. Suppose that \((x_{2},\ldots,x_{t})\) is in a sufficiently small complex neighbourhood \(U\) of \((1,\ldots,1)\) in \(\mathbb{C}^{t-1}\). From Proposition 3.8, followed by \(t-k\) successive applications of Proposition 3.10, we derive that a local representation of the form (7) holds for \(G(\mathbf{x})=G_{k}(\mathbf{x})\), with \(\rho(x_{2},\ldots,x_{t})=\rho_{k}(x_{2},\ldots,x_{t})\) and \(\alpha=3/2\), when \(x_{1}\) is close to \(\rho_{k}(x_{2},\ldots,x_{t})\). Furthermore, by continuity there exists \(\delta>0\) such that the function \(x_{1}\mapsto G_{k}(\mathbf{x})\) is still analytically continuable to a region of the form (8), with \(\rho=\rho_{k}\). And we deduce from Proposition 3.1 the following asymptotic estimate for the coefficients of \(x_{1}\) in \(G_{k}(\mathbf{x})\)

\[[x_{1}^{n}]\,G_{k}(\mathbf{x})=C_{k}(x_{2},\ldots,x_{t})\,n^{-5/2}\,\rho_{k}(x_{2},\ldots,x_{t})^{-n}\,(1+o(1))\qquad\text{as $n\to\infty$},\]

for some non-zero function \(C_{k}(x_{2},\ldots,x_{t})\) analytic in \(U\). This leads to a _quasi-power_ situation for the probability generating function

\[\mathbb{E}\left[x_{2}^{X_{2}}\cdots x_{t}^{X_{t}}\right]=\frac{[x_{1}^{n}]\,G_{k}(\mathbf{x})}{[x_{1}^{n}]\,G_{k}(x_{1},1,\ldots,1)}\sim\frac{C_{k}(x_{2},\ldots,x_{t})}{C_{k}(1,\ldots,1)}\left(\frac{\rho_{k}(1,\ldots,1)}{\rho_{k}(x_{2},\ldots,x_{t})}\right)^{n}.\]
Finally, setting \(x_{j}=e^{u_{j}}\) and \(\lambda_{n}=n\) in [12, Theorem 2.22] implies the claimed joint central limit theorem. Note that one could alternatively apply [12, Theorem 2.25]. Furthermore, the relation \(G_{0}(\mathbf{x})=\exp\left(G_{1}(\mathbf{x})\right)\) implies, as above, that \(G(\mathbf{x})=G_{0}(\mathbf{x})\) has the same singularities and singular expansion as \(G_{1}(\mathbf{x})\), up to a multiplicative constant. This concludes the proof.
## 4 Concluding remarks
With the help of a computer algebra system, making use of the representation in Lemma 2.7, we have been able to compute the numerical values displayed in Table 1 for the singularities \(\rho_{t,k}\). We have stopped at \(t=7\) since the size of the system of functional equations needed to determine \(\rho_{t,k}\) grows too fast.
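As a quick sanity check on Table 1, the diagonal entries \(k=t\) should equal \(1/(et)\), the radius of convergence coming from the \(t\)-tree estimate (1) and Proposition 3.8; a two-line computation (standard-library Python) reproduces them:

```python
# Diagonal of Table 1: for k = t the radius of convergence is 1/(e*t).
import math

for t in range(1, 8):
    print(f"t = k = {t}:  1/(e*t) = {1 / (math.e * t):.5f}")
```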
Let us mention a recent result giving an estimate \(cn^{-5/2}\gamma^{n}n!\) for the number of labelled _planar_ chordal graphs with \(\gamma\approx 11.89\)[7]. It is easy to see that the class of chordal graphs with tree-width at most three is exactly the same as the class of chordal graphs not containing \(K_{5}\) as a minor, whose asymptotic growth is, according to Theorem 1.1 and Table 1, of the form \(cn^{-5/2}\delta^{n}n!\) with \(\delta=1/\rho_{3,1}\approx 12.98\).
Furthermore, notice that if we denote by \(C(x)\) and \(B(x)\), respectively, the generating functions of connected and \(2\)-connected graphs in \(\mathcal{G}_{t,0}\), then Equation 4 reads for \(k=1\)
\[C^{\prime}(x)=\exp\left(B^{\prime}(xC^{\prime}(x))\right).\]
If \(\rho_{C}\) and \(\rho_{B}\) are the singularities of \(C(x)\) and \(B(x)\), respectively, the condition for being subcritical is that \(\rho_{C}C^{\prime}(\rho_{C})<\rho_{B}\), so that the singularity of \(C(x)\) arises as a branch point in the former equation rather than being inherited from that of \(B(x)\); in our case this condition is satisfied because of Lemma 3.7.
Since the number of all chordal graphs grows like \(2^{n^{2}/4}\), we know that the singularity \(\rho_{t}=\rho_{t,1}\) of chordal graphs with tree-width at most \(t\) goes to \(0\) as \(t\to\infty\). The question is at which rate \(\rho_{t}\to 0\) as \(t\to\infty\). Since the exponential growth of \(t\)-trees is \((etn)^{n}\), we have \(\rho_{t}=O(1/t)\), and since the growth of all graphs of tree-width at most \(t\) is at most \((2^{t}tn)^{n}\), we also have \(\rho_{t}=\Omega(1/(t2^{t}))\). We leave as an open problem to narrow the gap between the upper and lower bound. Heuristic arguments suggest that \(\rho_{t}\) decreases exponentially in \(t\).
As a final question, we consider letting \(t=t(n)\) grow with \(n\). Recall that a class of labelled graphs is small when the number of graphs in the class grows at most like \(c^{n}n!\) for some \(c>0\), and large otherwise. We know that the class of all chordal graphs is large, while the class of chordal graphs with tree-width at most \(t\) is small for fixed \(t\). Let us see that if \(t=(1+\epsilon)(\log n)\) then the class is large for each \(\epsilon>0\). A graph is split if the vertex set can be partitioned into a clique and an independent set. It is well-known and easy to prove that split graphs are chordal. Consider split graphs with a clique of size \(t=(1+\epsilon)\log n\) and the complement an independent set, so that the largest clique is of size at most \(t+1\) and the tree-width at most \(t\). Every edge between the clique and the complement can be chosen independently, hence there are at least
\[2^{(1+\epsilon)\log n(n-(1+\epsilon)\log n)}\]
such graphs, a quantity that grows faster than \(c^{n}n!\) for every \(c>0\). We leave as an open problem to determine at which order of magnitude between \(t=O(1)\) and \(t=\log n\) the class ceases to be small.
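The comparison with \(c^{n}n!\) can be made explicit by comparing logarithms. The following sketch (ours; it assumes the logarithms in the exponent are binary and uses Stirling's approximation \(\log n!\approx n\log n-n\)) confirms that the ratio of the two logarithms tends to \(1+\epsilon>1\), so the split-graph count eventually dominates \(c^{n}n!\) for every fixed \(c\):

```python
import sympy as sp

n = sp.symbols('n', positive=True)
eps, c = sp.Rational(1, 10), sp.Integer(1000)      # illustrative epsilon and constant c
t = (1 + eps) * sp.log(n, 2)                       # clique size
split_log = t * (n - t) * sp.log(2)                # natural log of 2^{t(n - t)}
small_log = n * sp.log(c) + n * sp.log(n) - n      # natural log of c^n n! (Stirling)
print(sp.limit(split_log / small_log, n, sp.oo))   # 11/10, i.e. 1 + eps > 1
```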
### Acknowledgements
We gratefully acknowledge earlier discussions with Juanjo Rue and Dimitrios Thilikos on the problem of counting chordal graphs with bounded tree-width.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline & \(k=1\) & \(k=2\) & \(k=3\) & \(k=4\) & \(k=5\) & \(k=6\) & \(k=7\) \\ \hline \(t=1\) & \(0.36788\) & & & & & & \\ \(t=2\) & \(0.14665\) & \(0.18394\) & & & & & \\ \(t=3\) & \(0.07703\) & \(0.08421\) & \(0.12263\) & & & & \\ \(t=4\) & \(0.04444\) & \(0.04662\) & \(0.05664\) & \(0.09197\) & & & \\ \(t=5\) & \(0.02657\) & \(0.02732\) & \(0.03092\) & \(0.04152\) & \(0.07358\) & & \\ \(t=6\) & \(0.01608\) & \(0.01635\) & \(0.01773\) & \(0.02184\) & \(0.03214\) & \(0.06131\) & \\ \(t=7\) & \(0.00974\) & \(0.00984\) & \(0.01038\) & \(0.01204\) & \(0.01614\) & \(0.02583\) & \(0.05255\) \\ \hline \end{tabular}
\end{table}
Table 1: Approximations of the radii of convergence of the generating functions counting \(k\)-connected chordal graphs with tree-width at most \(t\) for small values of \(t\) and \(k\).
The authors acknowledge support from the Marie Curie RISE research network "RandNet" MSCA-RISE-2020-101007705. Moreover, M.D. was supported by the Special Research Program SFB F50-02 "Algorithmic and Enumerative Combinatorics", and by the project P35016 "Infinite Singular Systems and Random Discrete Objects" of the FWF (Austrian Science Fund). Additionally, M.N. and C.R. acknowledge the financial support of the Spanish State Research Agency through projects MTM2017-82166-P and PID2020-113082GB-I00, while M.N. acknowledges support from the Severo Ochoa and Maria de Maeztu Program for Centers and Units of Excellence (CEX2020-001084-M), and C.R. acknowledges support from the grant Beatriu de Pinos BP2019, funded by the H2020 COFUND project No 801370 and AGAUR (the Catalan agency for management of university and research grants).
|
2310.20505 | Thermodynamics and microscopic theory: an educational proposal for the
high school | We present an educational proposal which aims to illustrate the elegant,
refined and coherent physics contained in Thermodynamics, through a path which
assigns to the microscopic description of the physical systems a constantly
privileged role. This approach allows to reach a simple and, at the same time,
deep understanding of the laws of Thermodynamics, while still emphasizing their
great generality, which permits their application to all macroscopic systems,
from simple gases to black holes, arriving to characterize the evolution of the
entire Universe. | Alessandro Ercoli, Vittorio Lubicz | 2023-10-31T14:47:29Z | http://arxiv.org/abs/2310.20505v1 | # Thermodynamics and microscopic theory: an educational proposal for the high school
###### Abstract
We present an educational proposal which aims to illustrate the elegant, refined and coherent physics contained in Thermodynamics, through a path which assigns to the microscopic description of the physical systems a constantly privileged role. This approach allows to reach a simple and, at the same time, deep understanding of the laws of Thermodynamics, while still emphasizing their great generality, which permits their application to all macroscopic systems, from simple gases to black holes, arriving to characterize the evolution of the entire Universe.
## 1 Introduction
The history of physics shows us that Thermodynamics has always provided a solid reference for theoretical developments, even and especially during the major advances in scientific theories. For instance, in the historical and pioneering papers by Planck on the black body radiation and by Einstein on the photoelectric effect and on the Brownian motion, the arguments are based precisely on purely thermodynamic considerations, whose laws are so general that they can be applied in the most diverse contexts.
In spite of that, the teaching of Thermodynamics in Italian high schools has been increasingly reduced in recent years. One also finds that in many cases, at the end of their studies, students have reached quite a limited view of Thermodynamics, which too often seems to concern only ideal gases and heat engines. In this way, they lose the possibility of appreciating a theory which could provide, instead, completeness and new perspectives to what they have previously learned.
It is also not so rare to find that the physical contents of Thermodynamics are poorly understood by the students, with concepts that are often unclear if not, in some cases, even erroneous. A typical difficulty, for example, consists in properly distinguishing the quantities _heat_, _work_ and _energy_. Heat and work are not forms of energy possessed by the physical system; rather, they are ways through which energy is exchanged. This concept, if not properly assimilated, can easily hinder the understanding of the first law of Thermodynamics and of the principle of energy conservation.
Another difficulty concerns the definition of entropy. While the physical meaning of entropy, as explained to us by Boltzmann, is eventually understood by the students, it often remains obscure how the number of microscopic states is related to the Clausius definition of entropy, i.e. to the ratio \(\Delta Q/T\) between exchanged heat and temperature in a transformation, and why this ratio never decreases for an isolated system.
From an educational point of view, the presentation of the second law based on the two historical statements by Clausius and Kelvin is quite unsatisfactory. Besides the rather artificial proof of the equivalence of the two statements, what is their deep physical meaning, that also explains, in turn, their equivalence? In addition, both the Clausius and Kelvin statements, as well as the Carnot's theorem, are strongly linked to the historical context in which they have been derived. In this way, the idea is transmitted to the students that the second principle of Thermodynamics mainly concerns the efficiency of heat machines, without showing the vast generality of this law which establishes, for any physical process, the unique direction in time in which it can occur.
Starting from these considerations, we developed an educational proposal, published in a textbook (in Italian) [1], which aims to illustrate the elegant, refined and coherent physics contained in Thermodynamics.
The conceptual strength of Thermodynamics, and at the same time its simplicity, stems from the great generality of its physical laws, whose derivation can disregard completely the knowledge of the microscopic structure of the physical systems. We believe, however, that a real, deep and even simpler understanding of the laws of Thermodynamics can only be achieved by starting from the microscopic description of the systems. This belief has led us to assign to the microscopic theory in our proposal a constantly privileged role1. It is important, at the same time, that students are able to appreciate the great generality of the thermodynamic laws, which do not concern only ideal gases or heat machines, but apply to all macroscopic physical systems, from gases to black holes, arriving to characterize the evolution of the entire Universe.
## 2 Some illustrative examples
In this section, in order to present to the reader a more concrete idea of our proposal, we briefly discuss some of those which are, in our opinion, its most significant contents.
As already emphasized, the most characterizing aspect of our proposal is the assignment to the microscopic theory of a constantly privileged role. For the explanation of the main concepts, such as work or heat, as well as for several of the thermodynamic processes which are discussed, a microscopic description is also provided. Although this description certainly requires more details than the macroscopic treatment, it is nevertheless simple and clear for the students, since it only requires the application of the laws of motion, the knowledge of the general principles of mechanics and simple probabilistic arguments. As noted in [4], "the explanations proposed in Thermodynamics can be unsatisfactory for the student's need for understanding, since they often only show how things should or should not be, but not how things actually happen".
### The microscopic treatment of work and heat
Work is the energy exchanged between two systems when a displacement occurs under the action of a force. From the very definition of work, as the product of the force times the displacement, it follows that it is always the work done on the system by the _external forces_ (rather than the work done by the system) that is responsible for the variation of the kinetic energy of the molecules, and therefore of the temperature of the system. We find that this point is often not sufficiently emphasized in many thermodynamics textbooks.
Heat instead is the transfer of energy between two systems due to a difference in temperature. Since in turn the temperature is a measure of the thermal agitation of the molecules, one can equivalently state that heat is the transfer of energy between two systems due to the work done by the microscopic forces exerted in the collisions among the molecules in thermal agitation. We are thus also led to the conclusion that, at the microscopic level, the distinction between heat and work vanishes.
As an example of microscopic treatment of work in our textbook, we discuss here the case of the adiabatic expansion, or compression, of an ideal gas. When reversible, this transformation is described at the thermodynamic level by the Poisson equations
\[p\,V^{\gamma}=\mathrm{constant}\qquad\mbox{or}\qquad T\,V^{\gamma-1}=\mathrm{constant}\,, \tag{1}\]
which relate pressure, volume and temperature of the gas during the transformation.
The thermodynamic derivation of Eq. (1), based on the first law, is straightforward. However, it is only with the microscopic description that one arrives at a real understanding of the physical process. By using the laws of energy and momentum conservation in the collision between a molecule of the gas and the moving piston, we can compute the change of velocity of the molecule. One finds, as expected, that the molecule slows down in the expansion, when the piston moves away from it, while its speed increases in the compression, when the piston approaches it. Starting from this velocity change of the molecule, one can compute the variation of the average kinetic energy of the whole gas and, therefore, of its temperature. This detailed but physically intuitive discussion leads directly to the Poisson equations (1).
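As a minimal numerical illustration of this microscopic mechanism (our sketch, not taken from the textbook), one can simulate a one-dimensional gas of non-interacting molecules bouncing elastically between a fixed wall and a piston that recedes much more slowly than the thermal speeds. At each bounce off the receding piston the molecule comes back slower, \(v\to 2u-v\), and the accumulated effect reproduces the adiabatic law: for a one-dimensional monatomic gas \(\gamma=3\), and the product \(T\,V^{\gamma-1}\) (with the length \(L\) playing the role of \(V\)) stays approximately constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, u, dt, steps = 500, 1.0, 0.02, 1e-3, 50_000   # piston speed u << thermal speeds
x = rng.uniform(0.0, L, n)                           # positions of the molecules
v = rng.normal(0.0, 1.0, n)                          # thermal velocities (k_B = m = 1)

gamma = 3.0                                          # 1D monatomic ideal gas
T0, L0 = np.mean(v**2), L                            # temperature ~ mean kinetic energy
for _ in range(steps):
    x += v * dt
    L += u * dt                                      # the piston recedes slowly
    hit = x < 0.0                                    # elastic bounce off the fixed wall
    x[hit], v[hit] = -x[hit], -v[hit]
    hit = x > L                                      # bounce off the receding piston:
    x[hit], v[hit] = 2*L - x[hit], 2*u - v[hit]      # the molecule slows down

print(T0 * L0**(gamma - 1), np.mean(v**2) * L**(gamma - 1))   # approximately equal
```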
### The microscopic definition of entropy
A clear understanding of the intrinsic irreversibility we observe in nature for most of the physical processes, and their occurrence in only one direction in time (the egg becomes omelette, but the omelette never turns into an egg!) can only be reached through the microscopic description. As explained by Boltzmann, all physical processes occur in the direction in which macrostates with higher and higher probability are subsequently reached, corresponding to an increasing number of microscopic states.
Motivated by this observation, in our textbook we introduce entropy starting from the microscopic Boltzmann definition, \(S=k\log W\), preceded and followed by a number of examples and applications. It is only later, in the next chapter, that the Clausius thermodynamic definition \(\Delta S=\Delta Q_{rev}/T\) is presented. In this respect, therefore, we do not follow the historical order which is followed, instead, by the majority of textbooks.
A quite critical aspect in understanding entropy concerns the equivalence of the Boltzmann and Clausius definitions, which look so different from each other. Why is "\(S=k\log W\)" equivalent to "\(\Delta S=\Delta Q_{rev}/T\)"? In our textbook, we provide an argument for this equivalence for the ideal gas, based on the counting of microscopic states. To the best of our knowledge, the issue is never addressed in high school textbooks (or even in introductory physics courses at the undergraduate level).
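A minimal illustration of the kind of argument we have in mind (this specific example is ours and does not reproduce the general derivation given in the textbook): consider \(N\) molecules of an ideal gas whose volume doubles at constant temperature. Since each molecule can then independently occupy either half of the final volume, the number of accessible microstates is multiplied by \(2^{N}\), so that

\[\Delta S_{\rm Boltzmann}=k\log\frac{W_{f}}{W_{i}}=k\log 2^{N}=Nk\log 2.\]

The same two equilibrium states are connected by a reversible isothermal expansion, in which \(\Delta U=0\) and therefore \(Q_{rev}\) equals the work done by the gas:

\[\Delta S_{\rm Clausius}=\frac{Q_{rev}}{T}=\frac{1}{T}\int_{V}^{2V}\frac{NkT}{V^{\prime}}\,{\rm d}V^{\prime}=Nk\log 2,\]

in agreement with the microscopic definition.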
### The Clausius inequality and the second law of Thermodynamics
Historically, among the various (potentially infinite) statements of the second law, two have had particular importance, especially in connection with the study of heat engines: the Clausius and the Kelvin-Planck statements. As is known, the two statements are equivalent, in the sense that each one can be derived from the other. But the standard proof of this equivalence, when presented, looks rather involved and unintuitive.
In our proposal we start from the observation that the second law can always be expressed in terms of the Clausius inequality,
\[\oint\delta Q/T\leq 0\, \tag{2}\]
which therefore represents its general mathematical formulation. Both the Clausius and the Kelvin-Planck statements, as well as Carnot's theorem, are thus derived in our textbook starting from the Clausius inequality. Besides its greater simplicity, this approach has the remarkable advantage of providing a highly unified picture of the second principle of Thermodynamics, one of the physical laws with the greatest impact on our everyday life.
## 3 The educational proposal
As already mentioned, the educational path we are proposing has been published in the form of a textbook (in Italian), entitled _Thermodynamics and Microscopic Theory_[1]. The book is intended for high school students, but we believe that a similar educational path can also be adapted to undergraduate physics courses, where the more advanced mathematical skills possessed by the students can be profitably used, making the presentation of the various arguments even more effective.
After an _Introduction_, the book contains five chapters entitled: 1. _Thermodynamics and kinetic theory_; 2. _Energy transfers: work and heat_; 3. _Energy and the first law of Thermodynamics_; 4. _Entropy and probability_; 5. _Entropy and the second law of Thermodynamics_; and it concludes with a final section entitled _Past, future and the entropy of the Universe_.
A series of exercises are proposed at the end of each chapter, which mainly aim at testing the level of understanding reached by the students. In most of these exercises calculations play a marginal role, while correct arguments and good theoretical understanding are required for their resolution. In this spirit, for each exercise, ideas and suggestions for its resolution are provided at the end of the volume.
## Acknowledgments
The authors warmly thank the research group in Educational Physics of the Roma Tre University and, in particular, Ilaria De Angelis and Adriana Postiglione, for their precious and continuous support in the development of this project.
|
2301.13856 | Simplex Random Features | We present Simplex Random Features (SimRFs), a new random feature (RF)
mechanism for unbiased approximation of the softmax and Gaussian kernels by
geometrical correlation of random projection vectors. We prove that SimRFs
provide the smallest possible mean square error (MSE) on unbiased estimates of
these kernels among the class of weight-independent geometrically-coupled
positive random feature (PRF) mechanisms, substantially outperforming the
previously most accurate Orthogonal Random Features at no observable extra
cost. We present a more computationally expensive SimRFs+ variant, which we
prove is asymptotically optimal in the broader family of weight-dependent
geometrical coupling schemes (which permit correlations between random vector
directions and norms). In extensive empirical studies, we show consistent gains
provided by SimRFs in settings including pointwise kernel estimation,
nonparametric classification and scalable Transformers. | Isaac Reid, Krzysztof Choromanski, Valerii Likhosherstov, Adrian Weller | 2023-01-31T18:53:39Z | http://arxiv.org/abs/2301.13856v2 | # Simplex Random Features
###### Abstract
We present _Simplex Random Features_ (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels by geometrical correlation of random projection vectors. We prove that SimRFs provide the smallest possible mean square error (MSE) on unbiased estimates of these kernels among the class of weight-independent geometrically-coupled positive random feature (PRF) mechanisms, substantially outperforming the previously most accurate _Orthogonal Random Features_ (ORFs, Yu et al., 2016) at no observable extra cost. We present a more computationally expensive _SimRFs+_ variant, which we prove is asymptotically optimal in the broader family of weight-dependent geometrical coupling schemes (which permit correlations between random vector directions and norms). In extensive empirical studies, we show consistent gains provided by SimRFs in settings including pointwise kernel estimation, nonparametric classification and scalable Transformers (Choromanski et al., 2020).1
Footnote 1: We will make all code publicly available.
## 1 Introduction
Embedding methods, which project feature vectors into a new space, are ubiquitous in machine learning. The canonical example is the Johnson-Lindenstrauss Transform (JLT) (Johnson, 1984; Dasgupta et al., 2010; Kane and Nelson, 2014; Kar and Karnick, 2012), where a collection of high-dimensional points is embedded in a much lower dimensional space whilst (approximately) preserving their metric relationships, e.g. distances and dot-products. Another application is found in kernel approximation (Liu et al., 2022; Yang et al., 2014; Pennington et al., 2015; Li et al., 2010), where the nonlinear similarity measure (kernel) in the original space is translated to a linear kernel in the latent space. For example, a kernel \(K(\cdot,\cdot):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) can be approximated using so-called _random features_ (RFs): randomised nonlinear transformations \(\phi(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\) constructed such that
\[K(\mathbf{x},\mathbf{y})=\mathbb{E}[\widehat{K}(\mathbf{x},\mathbf{y})],\text{ where }\widehat{K}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}}\phi(\mathbf{x })^{\top}\phi(\mathbf{y}). \tag{1}\]
Provided \(K\) is stationary, meaning \(K(\mathbf{x},\mathbf{y})=K(\mathbf{x}-\mathbf{y})\), we can use Bochner's theorem to write
\[K(\mathbf{x}-\mathbf{y})=\int_{\mathbb{R}^{d}}p(\mathbf{w})e^{i\mathbf{w}^{\top}(\mathbf{x}-\mathbf{y} )}\mathrm{d}^{d}\mathbf{w}, \tag{2}\]
where \(p(\mathbf{w})\) is the Fourier transform of \(K\). If \(K\) is positive semidefinite, \(p(\mathbf{w})\) is non-negative so we can treat it as a probability density. This invites Monte Carlo (MC) sampling, yielding _Random Fourier Features_ (RFFs) of the following form, where vectors \(\mathbf{w}_{i}\) are sampled from \(p(\mathbf{w})\), \(m\) is their number and \(\odot\) denotes concatenation (Rahimi and Recht, 2007, 2008):
\[\phi_{\mathrm{RFF}}(\mathbf{z})\stackrel{{\mathrm{def}}}{{=}}\sqrt{ \frac{1}{m}}(\odot_{i=1}^{m}[\sin(\mathbf{w}_{i}^{\top}\mathbf{z}),\cos(\mathbf{w}_{i}^{ \top}\mathbf{z})])^{\top}. \tag{3}\]
Furthermore, if \(K\) is a Gaussian kernel, defined by
\[K_{\mathrm{gauss}}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}} \exp(-\frac{\|\mathbf{x}-\mathbf{y}\|_{2}^{2}}{2}), \tag{4}\]
random vectors \(\mathbf{w}_{i}\) are sampled from the multivariate Gaussian distribution \(\mathcal{N}(0,\mathbf{I}_{d})\). Another kernel, of key interest in Transformer architectures (Vaswani et al., 2017; Choromanski et al., 2020), is the so-called _softmax kernel_:
\[K_{\mathrm{smax}}(\mathbf{x},\mathbf{y})\stackrel{{\mathrm{def}}}{{=}} \exp(\mathbf{x}^{\top}\mathbf{y}). \tag{5}\]
Since \(K_{\mathrm{gauss}}(\mathbf{x},\mathbf{y})=K_{\mathrm{smax}}(\mathbf{x},\mathbf{y})\exp(-\frac{\|\mathbf{x}\|_{2}^{2}}{2}-\frac{\|\mathbf{y}\|_{2}^{2}}{2})\), RF mechanisms for the Gaussian kernel can be readily converted into the corresponding mechanism for softmax and vice versa (Likhosherstov et al., 2022). Our results will hence apply to both settings. For brevity, we will mostly refer to \(K_{\mathrm{gauss}}\).
However, as noted in (Choromanski et al., 2020), RFFs lead to unstable training of implicit linear-attention Transformers. The authors address this by proposing _Positive Random Features_ (PRFs), defined by
\[K_{\mathrm{gauss}}(\mathbf{x},\mathbf{y})=\mathbb{E}[\phi_{\mathrm{PRF}}(\mathbf{x})^{ \top}\phi_{\mathrm{PRF}}(\mathbf{y})], \tag{6}\]
where for \(\mathbf{w}_{1},...,\mathbf{w}_{m}\sim\mathcal{N}(0,\mathbf{I}_{d})\),
\[\phi_{\mathrm{PRF}}(\mathbf{z})\stackrel{{\mathrm{def}}}{{=}}\sqrt{ \frac{1}{m}}\exp(-\|\mathbf{z}\|_{2}^{2})(\odot_{i=1}^{m}[\exp(\mathbf{w}_{i}^{\top}\bm {z})])^{\top}. \tag{7}\]
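For concreteness, the following small numpy sketch (ours, not from the paper) implements both feature maps and checks unbiasedness by Monte Carlo, using i.i.d. Gaussian rows (the IIDRF baseline defined below):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 100_000                                   # data dimension, number of random vectors
x, y = rng.normal(size=d) / 3, rng.normal(size=d) / 3
K_exact = np.exp(-np.sum((x - y)**2) / 2)           # Gaussian kernel, Eq. (4)

W = rng.normal(size=(m, d))                         # i.i.d. rows w_i ~ N(0, I_d)

def phi_rff(z):                                     # RFFs, Eq. (3)
    return np.concatenate([np.sin(W @ z), np.cos(W @ z)]) / np.sqrt(m)

def phi_prf(z):                                     # PRFs, Eq. (7)
    return np.exp(W @ z - np.dot(z, z)) / np.sqrt(m)

print(K_exact, phi_rff(x) @ phi_rff(y), phi_prf(x) @ phi_prf(y))   # all approximately equal
```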
The straightforward implementation of PRFs (and RFFs) draws \(\mathbf{w}_{i}\) independently - a strategy we refer to as _IIDRFs_. However, the isotropy of the Gaussian distribution permits us to entangle different \(\mathbf{w}_{i}\) to be exactly orthogonal2 whilst preserving the Gaussian marginal distributions \(\mathbf{w}_{i}\sim\mathcal{N}(0,\mathbf{I}_{d})\)(Yu et al., 2016). This mechanism is referred to as _orthogonal random features_ (ORFs), and is an example of a _weight-independent geometrically-coupled RF mechanism_.
Footnote 2: All \(\mathbf{w}_{i}\) can be orthogonal if \(m\leq d\). If \(m>d\) we construct ensembles of independent orthogonal blocks.
**Definition 1.1**.: Consider the random vectors \(\{(\mathbf{w}_{i})_{i=1}^{m}\}\subset\mathbb{R}^{d}\), which can be described by norms \(w_{i}=\|\mathbf{w}_{i}\|_{2}\) and directions \(\widehat{\mathbf{w}}_{i}=\frac{\mathbf{w}_{i}}{\|\mathbf{w}_{i}\|_{2}}\). An RF mechanism is described as _geometrically-coupled_ if the norms of random vectors \(\{(w_{i})_{i=1}^{m}\}\) are independent, but the directions \(\{(\widehat{\mathbf{w}}_{i})_{i=1}^{m}\}\) are permitted to be correlated with one another and with the norms \(\{(w_{i})_{i=1}^{m}\}\). Such a coupling is _weight-independent_ under the further restriction that directions \(\{(\widehat{\mathbf{w}}_{i})_{i=1}^{m}\}\) are independent of the norms \(\{(w_{i})_{i=1}^{m}\}\).
Unless otherwise stated, all coupling mechanisms considered in this work will be geometrical. ORFs provide a lower mean squared error (MSE) on Gaussian kernel approximation than IIDRFs (Yu et al., 2016; Choromanski et al., 2020), though for RFFs only at asymptotically large \(d\). ORFs are used in a broad range of applications including kernel ridge regression and Transformers. In the latter case, they offer linear (cf. quadratic) space- and time-complexity of the attention module, enabling efficient long-range attention modelling as part of the so-called Performer architecture (Choromanski et al., 2020). Sec. 2 details further applications beyond Gaussian and softmax kernel estimation. Recently Likhosherstov et al. (2022) showed that further MSE reduction (for fixed \(m\) and preserving unbiasedness) can be achieved by collecting light data statistics. RFs can also be applied with more computationally expensive preprocessing to improve accuracy in downstream tasks (Trockic and Todorovic, 2019), but they no longer approximate the Gaussian kernel.
However, the following question remains open: _do ORFs provide the lowest possible MSE on unbiased estimates of the Gaussian kernel among the set of weight-independent geometrically-coupled PRF mechanisms?_
Here, we comprehensively answer this question, finding that ORFs are _not_ optimal. We derive the optimal mechanism, coined _Simplex Random Features_ (SimRFs), and show that it substantially outperforms ORFs at close to no extra computational cost. We also consider the broader family of weight-_dependent_ geometrically-coupled PRFs, where random vector directions \(\{\widehat{\mathbf{w}}_{i}\}\) can be correlated with norms \(\{w_{i}\}\), and present a _SimRFs+_ variant which we prove is asymptotically optimal in this more general class. Our empirical studies demonstrate the consistent gains provided by SimRFs in diverse settings, including pointwise kernel estimation, nonparametric classification and scalable Transformers (Choromanski et al., 2020).
In more detail, our principal contributions are as follows:
1. In Sec. 3, we introduce SimRFs and prove that they provide the lowest kernel estimator MSE of any weight-independent geometrically-coupled PRF mechanism, outperforming the previously most accurate ORFs. We demonstrate that a fast, simple scheme applying minor alterations to SimRFs yields SimRFs+: an even better weight-dependent mechanism.
2. In Sec. 4, we provide novel theoretical results to add insight to the discussion in Sec. 3, which may be of independent interest. We derive the first non-asymptotic **closed-form** formulae for the MSE for PRFs in the IIDRF, ORF and SimRF settings, and show how it is straightforward to generalise some of these forms to RFFs. This allows us to precisely quantify how much the kernel estimator MSE can be suppressed by geometrical coupling. We also compare the time- and space-complexities of the different PRF mechanisms and describe a faster, approximate implementation.
3. In Sec. 5, we support our theoretical results with comprehensive experiments, demonstrating the superiority of SimRFs over ORFs and IIDRFs. We empirically confirm that they offer lower kernel estimator MSE with a variety of data-types, and find that this translates to better downstream performance in **(a)** nonparametric classification tasks (Sec. 5.2), and **(b)** scalable Transformers (Sec. 5.3).
Proofs not provided in the main body are in Appendix A.
## 2 Related work
The literature on _structured_ RFs, where random vectors are conditionally dependent, is voluminous (Ailon and Chazelle, 2009; Liberty et al., 2011; Ailon and Liberty, 2013; Le et al., 2013; Yu et al., 2017). ORFs were first proposed for nonlinear kernel estimation in (Yu et al., 2016), where the authors derived strict asymptotic gains from ORFs compared to IIDRFs when using RFFs for Gaussian kernel approximation. We refer to this phenomenon - namely, the suppression of kernel estimator MSE when random features are conditioned to be orthogonal - as _the orthogonality gap_.
Further progress towards an understanding of the orthogonality gap was provided in (Choromanski et al., 2018),
where the authors introduced and studied the so-called _charm property_ of stationary kernels. However, a rigorous mathematical analysis in the non-asymptotic setting remained out of reach. In (Choromanski et al., 2017), the authors showed the superiority of ORFs over IIDRFs for angular kernel estimation in any \(d\) (not just asymptotic) and conducted an extensive analysis of the linear (dot-product) kernel, but they did not address stationary kernels. The authors of (Lin et al., 2020) used the lens of _determinantal point processes_ and the negative dependence property (Kulesza and Taskar, 2012) to explore the efficacy of ORFs.
ORFs are used with PRFs in Performers (Choromanski et al., 2020; Schlag et al., 2021; Luo et al., 2021; Likhosherstov et al., 2021; Chowdhury et al., 2021; Xiao et al., 2022): a recently-proposed class of efficient Transformers (Kitaev et al., 2020; Roy et al., 2021) that can be applied to ultra-long sequences or to expedite inference on regular-size sequences.
## 3 Simplex Random Features (SimRFs)
We begin by presenting Simplex Random Features (SimRFs). In analogy to the square orthogonal block, we define the so-called _simplex block_, consisting of \(d\) \(d\)-dimensional random vectors \(\{(\mathbf{w}_{i})_{i=1}^{d}\}\). In practical applications where \(m>d\) random features are needed, multiple simplex blocks are constructed independently.
Instead of being orthogonal, the rows of the simplex block point towards the vertices of a \(d-1\)-dimensional simplex embedded in \(d\)-dimensional space, subtending angles \(\theta=\arccos(-\frac{1}{d-1})\). The entire simplex (or, equivalently, the vector it operates on) is randomly rotated to preserve isotropy, and the rows are independently renormalised by weights \(w_{i}\sim\chi_{d}\) such that they are marginally Gaussian. Explicitly, we define the simplex block \(\mathbf{W}_{\text{simp}}\in\mathbb{R}^{d\times d}\) by
\[\mathbf{W}_{\text{simp}}=\mathbf{D}\mathbf{S}\mathbf{R} \tag{8}\]
where \(\mathbf{D}\in\mathbb{R}^{d\times d}=\operatorname{diag}(w_{i})\) with \(w_{i}\) sampled from a \(\chi_{d}\)-distribution. \(\mathbf{R}\in\mathbb{R}^{d\times d}\) is a random orthogonal matrix drawn from Haar measure on \(\operatorname{O}(d)\), the group of orthogonal matrices in \(\mathbb{R}^{d\times d}\), constructed e.g. by Gram-Schmidt orthogonalisation of an unstructured Gaussian matrix (Yu et al., 2016). The rows \(\mathbf{s}_{i}\) of the simplex projection matrix \(\mathbf{S}\in\mathbb{R}^{d\times d}\) are given by the unit vectors
\[\mathbf{s}_{i}=\begin{cases}\sqrt{\frac{d}{d-1}}\mathbf{\mathsf{e}}_{i}-\frac{\sqrt{d }+1}{(d-1)^{3/2}}(1,...,1,0)^{\top}&\text{for }1\leq i<d\\ \frac{1}{\sqrt{d-1}}(1,1,...,1,0)^{\top}&\text{for }i=d\end{cases} \tag{9}\]
which are manifestly normalised and subtend obtuse angles. Fig. 1 visualises the different geometrical couplings of IIDRFs, ORFs and SimRFs in low data dimensionality \(d\).
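A minimal numpy sketch (ours) of one simplex block, following Eqs. 8-9; the final check confirms that the normalised rows pairwise subtend the angle \(\arccos(-\frac{1}{d-1})\):

```python
import numpy as np

def simplex_block(d, rng):
    """One SimRF block W_simp = D S R (Eq. 8), with S as in Eq. 9."""
    S = np.zeros((d, d))
    S[:d-1, :d-1] = np.sqrt(d / (d - 1)) * np.eye(d - 1)
    S[:d-1, :d-1] -= (np.sqrt(d) + 1) / (d - 1)**1.5      # constant shift on the first d-1 rows
    S[d-1, :d-1] = 1 / np.sqrt(d - 1)                     # last simplex vertex
    Q, r = np.linalg.qr(rng.normal(size=(d, d)))          # Haar-random rotation R
    R = Q * np.sign(np.diag(r))                           # sign fix so R is Haar-distributed
    D = np.diag(np.sqrt(rng.chisquare(df=d, size=d)))     # independent chi_d norms
    return D @ S @ R

rng = np.random.default_rng(0)
W = simplex_block(6, rng)
Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
print(np.round(Wn @ Wn.T, 3))    # off-diagonal entries all equal -1/(d-1) = -0.2
```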
### RF-conformity function and SimRFs vs ORFs
Recalling again that the Gaussian and softmax kernels are readily interchanged, we focus on \(K_{\text{gauss}}\) without loss of generality. We begin by defining the so-called _RF-conformity_.
**Definition 3.1**.: The RF-conformity, \(\rho(\mathbf{x},\mathbf{y})\), is given by
\[\rho(\mathbf{x},\mathbf{y})\stackrel{{\text{def}}}{{=}}\frac{\Gamma( \frac{d}{2})}{m(m-1)}\sum_{i,j\neq i}\mathbb{E}_{w_{ij}}\left(\sum_{k=0}^{ \infty}\frac{v^{2k}w_{ij}^{2k}}{2^{2k}k!\Gamma(k+\frac{d}{2})}\right), \tag{10}\]
with \(w_{ij}=\|\mathbf{w}_{i}+\mathbf{w}_{j}\|_{2}\), \(v=\|\mathbf{x}+\mathbf{y}\|_{2}\) for \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), \(\Gamma\) the Gamma-function and \(m\) the number of random vectors \(\mathbf{w}_{i}\).
\(\rho(\mathbf{x},\mathbf{y})\) depends on correlations induced between random vector directions. It is bigger when random vectors point in similar directions, 'exploring' \(\mathbb{R}^{d}\) less effectively. In Appendix A.1, we prove the following important result.
**Theorem 3.2** (MSE depends on RF-conformity).: _For PRFs, the \(\operatorname{MSE}\) of the unbiased estimator \(\widehat{K}(\mathbf{x},\mathbf{y})\) is given by_
\[\begin{split}\operatorname{MSE}(\widehat{K})&=\frac{e ^{-2x^{2}-2y^{2}}}{m}\left((e^{2v^{2}}-e^{v^{2}})\right.\\ &\left.+(m-1)(\rho(\mathbf{x},\mathbf{y})-e^{v^{2}})\right).\end{split} \tag{11}\]
_That is, the MSE is an increasing function of the RF-conformity._
For any \(w_{i},w_{j}\), SimRFs give strictly smaller values of \(w_{ij}\) than ORFs because the random vectors subtend a bigger angle. Explicitly, \(w_{ij}=(w_{i}^{2}+w_{j}^{2}+2w_{i}w_{j}\cos\theta)^{1/2}\) is smaller when \(\cos\theta=-\frac{1}{d-1}\) (SimRFs) compared to when \(\cos\theta=0\) (ORFs). This leads to smaller values of \(\rho(\mathbf{x},\mathbf{y})\), which immediately implies the following important result.
**Corollary 3.3** (SimRFs outperform ORFs).: _For PRFs, the kernel estimator MSE achieved with SimRFs is strictly lower than with ORFs for arbitrary data dimensionality \(d\)._
In fact, we are able to make the following substantially stronger statement, proved in Appendix A.2.
Figure 1: Schematic of different geometrical couplings for small \(d\). Dotted lines have a component into the plane of the paper, thick lines have a component out, and \(\odot\) is purely out (i.e. perpendicular to the paper’s plane). With IIDRFs, the respective orientations of vectors are chosen independently. With ORFs, we condition the vectors to be perpendicular. With SimRFs, they subtend angles \(\theta=\arccos(-\frac{1}{d-1})\), which suppresses the kernel estimator MSE. All vector norms are drawn independently from a \(\chi_{d}\)-distribution.
**Theorem 3.4** (SimRFs optimal for weight-independent geometrical coupling).: _Supposing that \(d\) random vector norms \(\{(w_{i})_{i=1}^{d}\}\) are i.i.d., SimRFs constitute the **best possible weight-independent geometrical coupling mechanism**, giving the lowest possible PRF kernel estimator MSE._
### SimRFs+
Now we consider the broader family of weight-_dependent_ geometrical coupling mechanisms, where random vector directions \(\{\hat{\mathbf{w}}_{i}\}\) are permitted to be correlated with norms \(\{w_{i}\}\). In particular, given \(d\) vectors \(\{\mathbf{w}_{i}\}\) of known norms (from \(d\) draws of \(\chi_{d}\)), we would like to arrange them in \(d\)-dimensional space in order to minimise the sum3
Footnote 3: We remove the expectation value because, given a fixed set of norms, assigning any probability mass to suboptimal configurations will increase the RF-conformity in expectation – that is, the best geometrical coupling between vectors \(\{\mathbf{w}_{i}\}\) is deterministic.
\[\rho(\mathbf{x},\mathbf{y})=\frac{\Gamma(\frac{d}{2})}{m(m-1)}\sum_{i,j\neq i}\left( \sum_{k=0}^{\infty}\frac{v^{2k}w_{ij}^{2k}}{2^{2k}k!\Gamma(k+\frac{d}{2})} \right). \tag{12}\]
One brute-force approach is to parameterise each of the \(d\) random vectors in hyperspherical coordinates and use an off-the-shelf numerical optimiser (e.g. \(\mathrm{scipy.optimize}\)). This is prohibitively slow, and moreover the solution has data-dependence via \(v=\|\mathbf{x}+\mathbf{y}\|_{2}\) which frustrates the method's scalability: the optimisation needs to be carried out pairwise for every \((\mathbf{x},\mathbf{y})\), which undermines our ability to quickly evaluate \(\widehat{K}(\mathbf{x},\mathbf{y})=\phi(\mathbf{x})^{\top}\phi(\mathbf{y})\) for any given pair of input vectors. However, the numerical approach does benchmark the lowest possible RF-conformity that can be achieved with weight-dependent geometrical coupling.
The generic analytic minimisation of Eq. 12 is challenging, and solutions will suffer the same \(v\)-dependence described above, so we instead consider a tractable approximation. Dropping constant prefactors for clarity, the first few terms from Eq. 10 are given by:
\[\sum_{i,j\neq i}\mathbb{E}_{w_{ij}}\left(\frac{1}{\Gamma(\frac{d }{2})}+\frac{v^{2}w_{ij}^{2}}{4\Gamma(\frac{d}{2}+1)}+\frac{v^{4}w_{ij}^{4}}{ 32\Gamma(\frac{d}{2}+2)}+...\right) \tag{13}\] \[=\frac{1}{\Gamma(\frac{d}{2})}\sum_{i,j\neq i}1+\tau\left(1+\frac {v^{2}}{8}\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{d}{2}+2)}\frac{\mathbb{E} (w_{ij}^{4})}{\mathbb{E}(w_{ij}^{2})}+...\right)\]
with \(\tau=\frac{\Gamma(\frac{d}{2})v^{2}\mathbb{E}(w_{ij}^{2})}{4\Gamma(\frac{d}{2}+1)}\). The precise value of \(\frac{\mathbb{E}(w_{ij}^{4})}{\mathbb{E}(w_{ij}^{2})}\) will depend on the geometrical coupling scheme employed, but for the types we have considered we generally expect it to scale as \(\sim d\), with some constant prefactor4. Therefore the sum in Eq. 10 can be approximated by:
Footnote 4: For example, with orthogonal coupling \(\frac{\mathbb{E}(w_{ij}^{4})}{\mathbb{E}(w_{ij}^{2})}=\frac{\mathbb{E}(w_{i}^{4}+w_{j}^{4}+2w_{i}^{2}w_{j}^{2})}{\mathbb{E}(w_{i}^{2}+w_{j}^{2})}=\frac{\mathbb{E}(w_{i}^{4})}{\mathbb{E}(w_{i}^{2})}+\mathbb{E}(w_{i}^{2})=2\frac{\Gamma(\frac{d}{2}+2)}{\Gamma(\frac{d}{2}+1)}+2\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{d}{2})}\sim d\), where we took moments of the \(\chi_{d}\) distribution. We can perform similar analyses in the i.i.d. and simplex cases.
\[\frac{1}{\Gamma(\frac{d}{2})}\sum_{i,j\neq i}1+\frac{\Gamma(\frac{d}{2})v^{2} \mathbb{E}(w_{ij}^{2})}{4\Gamma(\frac{d}{2}+1)}\left(1+\mathcal{O}(v^{2})+... \right). \tag{14}\]
In the limit of small \(v\), this invites us to truncate the sum at \(k=1\), dropping the \(\mathcal{O}(v^{2})\) terms. Omitting additive constants, we are left with the approximate objective
\[\tilde{\rho}(\mathbf{x},\mathbf{y})=\frac{\Gamma(d/2)v^{2}}{4m(m-1)\Gamma(1+d/2)}\sum_ {i,j\neq i}w_{ij}^{2}, \tag{15}\]
the physical analogue of which is the Heisenberg Hamiltonian with different coupling constants between different spin pairs. This is exactly minimised by
\[\mathbf{w}_{i}=-\frac{\sum_{j\neq i}\mathbf{w}_{j}}{\|\sum_{j\neq i}\mathbf{w}_{j}\|_{2}}w _{i}\qquad\ i=1,...,d \tag{16}\]
where each random vector points away from the resultant of all the others (see Appendix A.3 for details). Empirically, we find that the iterative update scheme
\[\mathbf{w}_{i}\leftarrow-\frac{\sum_{j\neq i}\mathbf{w}_{j}}{\|\sum_{j\neq i}\mathbf{w}_{ j}\|_{2}}w_{i} \tag{17}\]
converges to Eq. 16 quickly (after a small number of passes through the set of \(d\) vectors), especially if we initialise in the near-optimal simplex geometry. Conveniently, the solution has no \(v\)-dependence and is therefore scalable: the optimisation needs to be carried out for every draw of weights \(\{w_{i}\}\) but _not_ every pair of data points \((\mathbf{x},\mathbf{y})\). We refer to this mechanism of weight-dependent geometrical coupling as _SimRFs+_, and emphasise that it is asymptotically optimal (in the sense of minimising \(\rho(\mathbf{x},\mathbf{y})\)) in the \(v\ll 1\) limit.
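A small sketch (ours) of this update, applied to the rows of a simplex block (e.g. from the `simplex_block` sketch above); the \(\chi_{d}\) norms are preserved while the directions are iteratively re-pointed away from the resultant of the other vectors:

```python
import numpy as np

def simrfs_plus(W, n_passes=3):
    """Weight-dependent coupling of Eq. 17, keeping each row's norm fixed."""
    W = W.copy()
    norms = np.linalg.norm(W, axis=1)
    for _ in range(n_passes):                    # a few passes suffice empirically
        for i in range(W.shape[0]):
            resultant = W.sum(axis=0) - W[i]     # sum over all j != i
            W[i] = -resultant / np.linalg.norm(resultant) * norms[i]
    return W
```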
Fig. 2 captures the essential difference between SimRFs and SimRFs+: in the latter case, vectors with larger norms subtend bigger angles. Fig. 3 compares the RF-conformity of the mechanisms we have considered, as well as the outcome of the inefficient numerical optimisation. Note that the additional benefits of weight-dependent coupling are marginal: SimRFs are already close to optimal, and SimRFs+ require an extra optimisation step of time-complexity \(\mathcal{O}(d^{3})\).
## 4 From ORFs to SimRFs: the theory
In this section, we derive analytic expressions for the RF-conformity \(\rho(\mathbf{x},\mathbf{y})\), and therefore the kernel estimator MSE, for IIDRFs, ORFs and SimRFs. This allows us to quantitatively compare the performance of different coupling mechanisms. As before, we specialise to \(K_{\mathrm{gauss}}\). Detailed proofs are provided in Appendix A.
We have seen that RF-conformity depends on an expectation value over \(w_{ij}=\|\mathbf{w}_{i}+\mathbf{w}_{j}\|_{2}\). This motivates us to begin with the following auxiliary lemma.
**Lemma 4.1** (IIDRF conformity).: _When random vectors \(\mathbf{w}_{i}\in\mathbb{R}^{d}\) are i.i.d. (IIDRFs), the probability distribution \(p(w_{ij})\) with \(w_{ij}=\|\mathbf{w}_{i}+\mathbf{w}_{j}\|_{2}\) is given by_
\[p_{i.d.}(w_{ij})=\frac{w_{ij}^{d-1}e^{-w_{ij}^{2}/4}}{2^{d-1}\Gamma(\frac{d}{2 })} \tag{18}\]
_which induces an RF-conformity_
\[\rho_{\mathrm{IIDRF}}(\mathbf{x},\mathbf{y})=e^{v^{2}} \tag{19}\]
_where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) and \(v=\|\mathbf{x}+\mathbf{y}\|_{2}\)._
Now we make the following important observation.
**Lemma 4.2** (PDF for vectors subtending \(\theta\)).: _Supposing random vectors \(\mathbf{w}_{i},\mathbf{w}_{j}\) are marginally Gaussian but are conditioned to subtend a fixed angle \(\theta\), the probability distribution \(p_{\theta}(w_{ij})\), is given by_
\[\frac{w^{2d-1}}{2^{d-2}\Gamma(\frac{d}{2})^{2}}\int_{\phi=0}^{ \pi/2}d\phi(\sin\phi\cos\phi)^{d-1}\frac{e^{-\frac{w^{2}}{2(1+\sin 2\phi\cos\theta) }}}{(1+\sin 2\phi\cos\theta)^{d}}. \tag{20}\]
ORFs and SimRFs correspond to special instances of this with \(\cos\theta=0\) and \(\cos\theta=-\frac{1}{d-1}\), respectively. It is instructive to observe that, in the orthogonal case, the distribution reduces to the \(\chi_{2d}\)-distribution. The probability distribution \(p_{\theta}(w_{ij})\) induces an RF-conformity
\[\begin{split}\rho_{\theta}(\mathbf{x},\mathbf{y})&=\frac{1 }{2^{d-1}\Gamma(\frac{d}{2})}\int_{0}^{\pi}\mathrm{d}\phi(\sin\phi)^{d-1}\\ &\cdot\sum_{k=0}^{\infty}\frac{v^{2k}(1+\sin\phi\cos\theta)^{k} }{2^{k}k!\Gamma(k+\frac{d}{2})}\Gamma(k+d).\end{split} \tag{21}\]
Inspecting the form closely, we see that every term in the sum over \(k\) is proportional to the integral
\[\int_{0}^{\pi}\mathrm{d}\phi(\sin\phi)^{d-1}(1+\sin\phi\cos\theta)^{k} \tag{22}\]
which is strictly smaller for \(\cos\theta<0\) compared to \(\cos\theta=0\) (since \(\sin\phi\) is nonnegative everywhere in the domain). Since every term in the sum is positive, we immediately conclude that for PRFs the conformity of SimRFs is strictly smaller than ORFs, and hence the MSE is smaller. We already derived this in Sec. 3, but are now also able to provide the following closed forms.
**Theorem 4.3** (ORF and SimRF conformity closed forms).: _For PRFs with \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), RF-conformity of ORFs is_
\[\rho_{\mathrm{ORF}}(\mathbf{x},\mathbf{y})=\frac{\Gamma(\frac{d}{2})}{ \Gamma(d)}\sum_{k=0}^{\infty}\frac{v^{2k}}{2^{k}k!}\frac{\Gamma(k+d)}{\Gamma(k +\frac{d}{2})} \tag{23}\]
_whereas the RF-conformity of SimRFs is_
\[\begin{split}\rho_{\mathrm{SimRF}}(\mathbf{x},\mathbf{y})&= \frac{\sqrt{\pi}}{\Gamma(\frac{d}{2})2^{d-1}}\sum_{k=0}^{\infty}\frac{\Gamma( k+d)}{\Gamma(k+\frac{d}{2})}\frac{v^{2k}}{2^{k}}\\ &\cdot\sum_{p=0}^{k}\left(-\frac{1}{d-1}\right)^{p}\frac{\Gamma( \frac{d+p}{2})}{\Gamma(\frac{d+p+1}{2})}\frac{1}{(k-p)!p!}.\end{split} \tag{24}\]
Figure 3: Comparison of the RF-conformity defined in Eq. 10 (lower is better) for a _single random draw_ of norms \(\{w_{i}\}\), \(v=\|\mathbf{x}+\mathbf{y}\|_{2}=1\) and \(d=6\). IIDRFs, ORFs, SimRFs and SimRFs+ are implemented as described in the main text. ‘Numerically optimised’ uses an off-the-shelf numerical optimiser to arrange vectors to minimise the RF-conformity: a scheme which is too computationally inefficient to be practical but benchmarks the lowest possible value. Any improvements above SimRFs using weight-dependent geometrical coupling are marginal. The IIDRF value is averaged over 100 random couplings of fixed weights, and the shaded region gives 1 standard deviation.
Figure 2: With SimRFs, random vectors are geometrically correlated such that they all subtend an equal angle \(\theta=\arccos(-\frac{1}{d-1})\). With SimRFs+, random vectors with bigger norms subtend bigger angles, guaranteeing smaller kernel estimator MSE when \(v\) is sufficiently small.
These results are novel, and permit the **first analytic characterisation of the difference in kernel estimator MSE between IIDRFs, ORFs and SimRFs**. We make one further observation.
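As a quick numerical sanity check (ours, not part of the paper), the closed forms of Theorem 4.3 can be evaluated by truncating the series, using `scipy.special.gammaln` for stability; one recovers \(\rho_{\mathrm{SimRF}}<\rho_{\mathrm{ORF}}<\rho_{\mathrm{IIDRF}}=e^{v^{2}}\), as predicted:

```python
import numpy as np
from scipy.special import gammaln

def rho_orf(v, d, kmax=200):                     # Eq. 23, truncated at kmax terms
    k = np.arange(kmax)
    logt = (gammaln(d/2) - gammaln(d) + 2*k*np.log(v) - k*np.log(2)
            - gammaln(k + 1) + gammaln(k + d) - gammaln(k + d/2))
    return np.exp(logt).sum()

def rho_simrf(v, d, kmax=200):                   # Eq. 24, truncated at kmax terms
    total = 0.0
    for k in range(kmax):
        p = np.arange(k + 1)
        inner = ((-1 / (d - 1))**p * np.exp(gammaln((d + p)/2) - gammaln((d + p + 1)/2)
                 - gammaln(k - p + 1) - gammaln(p + 1))).sum()
        total += np.exp(gammaln(k + d) - gammaln(k + d/2) + 2*k*np.log(v) - k*np.log(2)) * inner
    return np.sqrt(np.pi) / 2**(d - 1) * np.exp(-gammaln(d/2)) * total

v, d = 1.0, 8
print(np.exp(v**2), rho_orf(v, d), rho_simrf(v, d))   # IIDRF > ORF > SimRF
```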
**Corollary 4.4** (ORFs always outperform IIDRFs).: _In the PRF setting, orthogonality gap (difference in kernel estimator MSE between IIDRFs and ORFs) is given by_
\[\begin{split}&\Delta\mathrm{MSE}(\widehat{K}(\mathbf{x},\mathbf{y}))=e^{-2x^{2}-2y^{2}}\frac{m-1}{m}\\ &\cdot\left(e^{v^{2}}-\frac{\Gamma(d/2)}{\Gamma(d)}\sum_{k=0}^{\infty}\frac{v^{2k}}{2^{k}k!}\frac{\Gamma(k+d)}{\Gamma(k+d/2)}\right)\end{split} \tag{25}\]
_where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), \(v=\|\mathbf{x}+\mathbf{y}\|_{2}\) and \(m\leq d\) is the number of random vectors. This is positive everywhere._
The sign of this orthogonality gap was first reported in (Choromanski et al., 2020) but without an accompanying closed form.
Plotting each of the derived probability distributions \(p(w_{ij})\) (Eq. 18 and Eq. 20, taking \(\cos\theta=0\) and \(\cos\theta=-\frac{1}{d-1}\)) and noting from Eq. 10 that the RF-conformity depends on the expectation value over the monotonically increasing function \(f(w_{ij},v)=\Gamma(\frac{d}{2})\sum_{k=0}^{\infty}\frac{v^{2k}w_{ij}^{2k}}{2^{2k}k!\Gamma(k+\frac{d}{2})}\), the intuitive reason for the relative efficacy of SimRFs, ORFs and IIDRFs becomes clear: conformity is penalised by tails at large \(w_{ij}\), which we suppress with geometrical coupling (Fig. 4).
### Extension to RFFs
We briefly note that, with minimal work, the preceding results for PRFs can be modified to consider RFFs. For example, the following is true.
**Theorem 4.5** (RFF orthogonality gap).: _In the RFF setting, the orthogonality gap (difference in kernel estimator MSE between IIDRFs and ORFs) is given by_
\[\begin{split}&\Delta\mathrm{MSE}(\widehat{K}(\mathbf{x},\mathbf{y}))= \frac{m-1}{m}\left(e^{-z^{2}}-\right.\\ &\left.\frac{\Gamma(d/2)}{\Gamma(d)}\sum_{k=0}^{\infty}\frac{(-z^ {2})^{k}}{2^{k}k!}\frac{\Gamma(k+d)}{\Gamma(k+d/2)}\right)\end{split} \tag{26}\]
_where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), \(z=\|\mathbf{x}-\mathbf{y}\|_{2}\) and \(m\leq d\) is the number of random vectors._
To the best of our knowledge, this result is also novel. The expression does not admit the same simple analysis as the PRF form (25) because successive terms in the sum oscillate in sign, but a cursory numerical analysis reveals that the MSE of ORFs is smaller than IIDRFs up to some threshold \(z_{\text{crit}}(d)\), the value of which diverges as \(d\to\infty\). Taylor expanding our exact result in \(\frac{1}{d}\) reproduces the following.
**Corollary 4.6** (RFF asymptotic MSE ratio).: _(Yu et al., 2016) The ratio of ORF to IIDRF kernel estimator MSE is given by_
\[\frac{\text{MSE}(\widehat{K}_{\text{ORF}})}{\text{MSE}(\widehat{K}_{\text{ IIDRF}})}=1-(m-1)\left(\frac{e^{-z^{2}}z^{4}}{d(1-e^{-z^{2}})^{2}}+\mathcal{O} \left(\frac{1}{d^{2}}\right)\right), \tag{27}\]
_where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), \(z=\|\mathbf{x}-\mathbf{y}\|_{2}\) and \(m\leq d\) is the number of random features._
The negative subleading term shows that the RFF orthogonality gap is positive everywhere when \(d\to\infty\).
### Implementation, complexity and fast SimRFs
The replacement of ORFs with SimRFs is straightforward: instead of calculating random projections \(\mathbf{W}\mathbf{x}\) using the _orthogonal block_\(\mathbf{W}_{\text{ort}}=\mathbf{DR}\), we use the _simplex block_\(\mathbf{W}_{\text{simp}}=\mathbf{DSR}\), with the matrices \(\mathbf{D},\mathbf{S},\mathbf{R}\in\mathbb{R}^{d\times d}\) and the object \(\mathbf{x}\in\mathbb{R}^{d}\) defined at the beginning of Sec. 3. By choosing the order of computation \(\mathbf{D}(\mathbf{S}(\mathbf{R}\mathbf{x}))\), we can avoid the \(\mathcal{O}(d^{3})\) time complexity of computing matrix-matrix products. Both \(\mathbf{D}\) and \(\mathbf{S}\) support matrix-vector multiplication of time complexity \(\mathcal{O}(d)\) (see Appendix B.2.1). Generically, the time complexity to sample the random orthogonal matrix \(\mathbf{R}\) is \(\mathcal{O}(d^{3})\) and the matrix-vector multiplication \(\mathbf{R}\mathbf{x}\) is \(\mathcal{O}(d^{2})\). However, following exactly the same tricks as with ORFs, it is possible to replace \(\mathbf{R}\) with a proxy \(\widehat{\mathbf{R}}\) which is _approximately_ sampled from the orthogonal group according to Haar measure and which supports fast matrix-vector multiplication: for example, \(\mathbf{HD}\)-product matrices (Choromanski et al., 2017) or products of Givens random rotations (Dao et al., 2019). Then the time-complexity will be limited by the computation \(\widetilde{\mathbf{R}}\mathbf{x}\) which is subquadratic by construction (e.g. \(\mathcal{O}(d\log d)\) for the examples above). We refer to this mechanism as _fast SimRFs_, and show their excellent experimental performance in Appendix B.2.2.
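The claim that \(\mathbf{D}\) and \(\mathbf{S}\) admit \(\mathcal{O}(d)\) matrix-vector products follows directly from Eq. 9: every row of \(\mathbf{S}\) involves the same partial sum of the first \(d-1\) coordinates, and \(\mathbf{D}\mathbf{x}\) is an elementwise rescaling. A sketch (ours, assuming 0-indexed numpy arrays):

```python
import numpy as np

def simplex_matvec(x):
    """O(d) evaluation of S @ x for the simplex matrix of Eq. 9, without forming S."""
    d = x.shape[0]
    s = x[:d-1].sum()                                   # shared partial sum, computed once
    out = np.empty(d)
    out[:d-1] = np.sqrt(d / (d - 1)) * x[:d-1] - (np.sqrt(d) + 1) / (d - 1)**1.5 * s
    out[d-1] = s / np.sqrt(d - 1)
    return out                                          # agrees with the dense product S @ x
```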
Figure 4: Probability distributions over the random variable \(w_{ij}=\|\mathbf{w}_{i}+\mathbf{w}_{j}\|_{2}\) for IIDRFs, ORFs and SimRFs. The RF-conformity depends on the expectation of a monotonically increasing function \(f\), and geometrical coupling decreases this by reducing the probability mass at large \(w_{ij}\).
SimRFs+ are implemented by \(\mathbf{W}_{\text{simp}}=\mathbf{DS}^{\prime}\mathbf{R}\), where \(\mathbf{S}^{\prime}\) is obtained from \(\mathbf{S}\) according to the \(\mathcal{O}(d^{3})\) iterative optimisation scheme defined in Eq. 17. This will dominate the scaling of time-complexity if we apply fast SimRFs+.
The space complexity of all regular schemes is \(\mathcal{O}(d^{2})\) to store \(\mathbf{R}\). For fast ORFs and fast SimRFs, the space complexity becomes \(\mathcal{O}(d)\) because we no longer need to explicitly store \(\widehat{\mathbf{R}}\), just the \(d\) weights \(\{w_{i}\}\) from \(\chi_{d}\). But the space complexity of fast SimRFs+ is still \(\mathcal{O}(d^{2})\) since all vectors must be stored during the optimisation step.
It is clear that **SimRFs are essentially equal in computational cost to ORFs**, and in Sec. 5 we will see that they often perform substantially better in downstream tasks. Meanwhile, SimRFs+ are mostly of academic interest.
## 5 Experiments
Here we report the outcomes of an extensive empirical evaluation of SimRFs for PRFs, demonstrating their superiority over IIDRFs and ORFs in a variety of settings. Technical details are reported in Appendix B. The section is organised as follows: (a) in Sec. 5.1 we evaluate the derived MSE expressions for IIDRFs, ORFs and SimRFs with a variety of data distributions; (b) in Sec. 5.2 we compare the performance of the different RF mechanisms on nonparametric classification tasks using kernel regression; (c) in Sec. 5.3 we compare the RF mechanisms for approximation of the attention module in vision Performer-Transformers.
### Comparison of MSE between RF mechanisms
We begin by evaluating the MSE of the PRF estimator \(\widehat{K}\) with IIDRFs, ORFs and SimRFs, using a variety of synthetic pairs of vectors \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{R}^{d}\). We use \(d=64\) (standard in Transformer applications) and take \(L=1024\) independent copies of each. We use four distributions to generate \(\boldsymbol{x},\boldsymbol{y}\): normal draws \(\boldsymbol{x},\boldsymbol{y}\sim\mathcal{N}(\mathbf{0}_{d},\sigma^{2} \boldsymbol{I}_{d})\); sphere takes uniform samples on a sphere \(\sigma\mathcal{S}_{d-1}\); heterogen draws \(\boldsymbol{x}\) and \(\boldsymbol{y}\) from \(\mathcal{N}(\mathbf{0}_{d},\sigma^{2}\boldsymbol{I}_{d})\) and \(\mathcal{N}(\sigma\mathbf{1}_{d},\sigma^{2}\boldsymbol{I}_{d})\) respectively; and mnist takes random samples from the MNIST dataset, downsamples them to \(8\times 8\), normalises and rescales then by \(\sigma\), and flattens them into a vector of length \(64\).
Fig. 5 presents the kernel estimator MSE of the ORF and SimRF mechanisms relative to IIDRFs for different values of \(\sigma\). As proved analytically for PRFs, SimRFs offer the smallest MSE and IIDRFs give the greatest. It is interesting that the size of the improvement accrued by geometrical coupling depends sensitively on \(\boldsymbol{x},\boldsymbol{y}\), so the benefit from using SimRFs will depend on the particular data distribution.
### Nonparametric classification using kernel regression
Here we demonstrate how reduced kernel estimator MSE translates to better performance in downstream classification tasks. We use \(8\) different datasets retrieved from the UCI Machine Learning Repository (Dua & Graff, 2017a), each consisting of \(L\) training data \(\{(\boldsymbol{x},\boldsymbol{y})\}\) and test data \(\{(\boldsymbol{x}^{\prime},\boldsymbol{y}^{\prime})\}\). The objects are \(d\)-dimensional vectors \(\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{R}^{d}\) and their labels are one-hot encoded \(\boldsymbol{y},\boldsymbol{y}^{\prime}\in\mathbb{R}^{n}\). We predict the label distribution of a test object using kernel regression with the Gaussian kernel, \(\boldsymbol{y}^{\prime}_{\text{pred}}=\sum_{i=1}^{L}K(\sigma\boldsymbol{x}^{ \prime},\sigma\boldsymbol{x}^{(i)})\boldsymbol{y}^{(i)}/\sum_{i=1}^{L}K(\sigma \boldsymbol{x}^{\prime},\sigma\boldsymbol{x}^{(i)})\), then predict a class by taking the greatest argument of \(\boldsymbol{y}^{\prime}_{\text{pred}}\). We measure accuracy by the proportion of correct label predictions across the test-set. The \(\sigma>0\) hyperparameter is tuned for good PRF performance on a validation dataset; see Appendix B.1 for detailed discussion. Fig. 6 presents the results, plotting classification accuracy against the number of random features used. The size of the benefit accrued from using SimRFs depends on the data (as we noted in Sec. 5.1) and in the limit of large \(m\) performance converges towards the exact kernel result. SimRFs consistently perform best.
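A compact sketch (ours, with hypothetical helper names) of this classifier using the RF decomposition \(\widehat{K}(\boldsymbol{x}^{\prime},\boldsymbol{x})=\phi(\boldsymbol{x}^{\prime})^{\top}\phi(\boldsymbol{x})\): the per-query sums over the training set collapse into two precomputed \(m\)-dimensional summaries, so prediction is linear in \(m\) rather than in the training-set size.

```python
import numpy as np

def phi_prf(X, W):
    """PRF feature map (Eq. 7) applied row-wise; W holds m projection rows."""
    return np.exp(X @ W.T - np.sum(X**2, axis=1, keepdims=True)) / np.sqrt(W.shape[0])

def rf_kernel_regression(X_train, Y_onehot, X_test, W, sigma=1.0):
    """Nonparametric classification: y_pred = sum_i K_hat(x', x_i) y_i / sum_i K_hat(x', x_i)."""
    P_train = phi_prf(sigma * X_train, W)       # (L, m)
    P_test = phi_prf(sigma * X_test, W)         # (L', m)
    A = P_train.T @ Y_onehot                    # (m, n_classes), precomputed once
    b = P_train.sum(axis=0)                     # (m,)
    Y_pred = (P_test @ A) / (P_test @ b)[:, None]
    return Y_pred.argmax(axis=1)                # predicted class labels
```

Here `W` can stack i.i.d. Gaussian rows, orthogonal blocks or simplex blocks; only the coupling of the rows changes.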
#### 5.2.1 SimRFs+ for nonparametric classification
Table 2 compares the classification accuracies achieved with SimRFs and SimRFs+ on the tasks detailed above, using \(m=d\) random features. As suggested in Sec. 3 (see in particular Fig. 3), SimRFs are already close to optimal and any gain provided by using SimRFs+ is marginal. Moreover,
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & \multicolumn{3}{c}{Time-complexity} \\ & ORFs & SimRFs & SimRFs+ \\ \hline Regular & \(\mathcal{O}(d^{3})\) & \(\mathcal{O}(d^{3})\) & \(\mathcal{O}(d^{3})\) \\ Fast & \(\mathcal{O}(d\log d)\) & \(\mathcal{O}(d\log d)\) & \(\mathcal{O}(d^{3})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Time complexities of RF-mechanisms and their fast variants.
Figure 5: Analytic form of the MSE ratio of the PRF kernel estimator \(\widehat{K}\) with respect to IIDRFs over all possible pairs of the \(L=1024\) vectors \(\{\boldsymbol{x}\}\) and the \(1024\) vectors \(\{\boldsymbol{y}\}\), generated with different distributions at a range of lengthscales parameterised by \(\sigma\). Smaller values indicate lower MSE and are hence better. Shaded regions correspond to one standard deviation. Note that SimRFs always perform the best, followed by ORFs then IIDRFs. The size of the improvement depends on the particular \((\boldsymbol{x},\boldsymbol{y})\).
improvements tend to occur where \(v\) is small so truncating the objective series expansion at \(k=1\) is reasonable.
### SimRFs-Performers: scalable attention for Transformers
PRFs were first introduced in (Choromanski et al., 2020) in order to accurately approximate the softmax attention module of Transformers - an architecture coined the _Performer_. This technique for kernelising the attention mechanism, which identifies complex dependencies between the elements of an input sequence, permits linear (c.f. quadratic) space- and time-complexity without assuming restrictive priors such as sparsity and low-rankness. Performers offer competitive results across a range of tasks (Tay et al., 2021), including vision modeling (Yuan et al., 2021; Horn et al., 2021) and speech (Liutkus et al., 2021).
Since Performers apply the ORF variant of PRFs, it is natural to expect that the SimRFs mechanism, which gives provably lower kernel estimator MSE, will be more effective. We refer to this architecture as the _SimRFs-Performer_, and show that it outperforms the regular ORFs-Performer.
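To make the connection concrete, here is a minimal sketch (ours) of the kernelised attention used by Performers, with the softmax kernel approximated by PRFs; the projection matrix `W` can hold ORF rows or the SimRF rows of Eq. 8, and we assume the usual \(1/\sqrt{d}\) attention temperature has already been folded into the queries and keys:

```python
import numpy as np

def prf_attention(Q, K, V, W):
    """Linear-complexity approximation of softmax attention via PRFs."""
    m = W.shape[0]
    def feats(Z):
        # PRF map for the softmax kernel: E[phi(q)^T phi(k)] = exp(q^T k)
        return np.exp(Z @ W.T - 0.5 * np.sum(Z**2, axis=1, keepdims=True)) / np.sqrt(m)
    Qp, Kp = feats(Q), feats(K)                  # (L, m) feature matrices
    numerator = Qp @ (Kp.T @ V)                  # O(L m d) instead of O(L^2 d)
    denominator = Qp @ Kp.sum(axis=0)            # row-wise normalisation
    return numerator / denominator[:, None]
```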
We focus on the 'performised' versions of Vision Transformers (ViTs) (Dosovitskiy et al., 2021) and consider four datasets: (a) ImageNet2012 (Deng et al., 2009) (1K classes, 1.2M training images, 100K test set); (b) Fashion-MNIST (Xiao et al., 2017) (10 classes, 60K training images, 10K test set); (c) iNaturalist2021 (Horn et al., 2018) (10K classes, 2.7M training images, 500K test set) and (d) Places365 (Zhou et al., 2018) (365 classes, 1.8M training images, 328K test set). These are often used to benchmark ViTs.
In all four experiments, we use a ViT with **12** layers, **12** heads, mlp_dim equal to **3072**, a dropout rate of **0.1** and no attention dropout. We use the adam optimiser with weight decay equal to **0.1** and batch size \(\mathrm{bs}=\textbf{4096}\), trained for **300** epochs on the \(\mathrm{TPU}\) architecture. We apply **130** random vectors to approximate the softmax attention kernel with PRFs, testing both the ORF and SimRF coupling mechanisms.
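For readability, the training hyperparameters listed above can be collected into a single configuration object; the sketch below is illustrative only, and the key names are ours rather than those of any particular codebase.

```python
# Illustrative only: the hyperparameters quoted in the text, gathered into a plain dict.
vit_performer_config = dict(
    num_layers=12,
    num_heads=12,
    mlp_dim=3072,
    dropout_rate=0.1,
    attention_dropout_rate=0.0,
    optimizer="adam",
    weight_decay=0.1,
    batch_size=4096,
    epochs=300,
    num_random_features=130,    # random vectors for the PRF softmax-attention estimator
    rf_coupling="simrf",        # or "orf" for the regular Performer baseline
)
```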
The results, comparing the ORFs and SimRFs for approximating attention, are presented in Fig. 7. The SimRFs-Performer often achieves gains over the regular (ORFs-) Performer - and is certainly never worse - for no observable extra cost. The exact difference depends on the data distribution (see Sec. 5.1) and the importance of MSE reduction for that particular task. For some of the tested datasets the difference is substantial: for instance, on \(\mathrm{ImageNet2012}\) the SimRFs-Performer saturates at an accuracy greater than that of the regular Performer by **0.5%**.
| Data set | \(\bar{v}\) | Accuracy (SimRFs) | Accuracy (SimRFs+) |
| --- | --- | --- | --- |
| abalone | 1.7 | **0.1421\(\pm\)0.0002** | **0.1419\(\pm\)0.0002** |
| banknote | 2.6 | **0.7229\(\pm\)0.0012** | 0.7132\(\pm\)0.0012 |
| car | 5.0 | **0.6754\(\pm\)0.0004** | **0.6751\(\pm\)0.0004** |
| yeast | 3.1 | **0.3202\(\pm\)0.0004** | **0.3208\(\pm\)0.0004** |
| cmc | 2.0 | 0.4047\(\pm\)0.0005 | **0.4065\(\pm\)0.0005** |
| nursery | 1.4 | 0.6874\(\pm\)0.0005 | **0.6917\(\pm\)0.0004** |
| wifi | 0.8 | 0.6314\(\pm\)0.0018 | **0.6473\(\pm\)0.0018** |
| chess | 2.3 | **0.2000\(\pm\)0.0001** | **0.2000\(\pm\)0.0001** |

Table 2: Classification accuracies from kernel regression with SimRFs and SimRFs+, using random features of length \(m=d\). \(\bar{v}\) records the mean (\(\sigma\)-scaled) value of \(v\) in each dataset. Note that both variants substantially outperform ORFs on every dataset.
Figure 6: Nonparametric classification using kernel regression for a variety of datasets (Dua and Graff, 2017; Nash et al., 1994; Dua and Graff, 2017; Bohanec and Rajkovic, 1988; Horton and Nakai, 1996; Lim et al., 2000; Olave et al., 1989; Dua and Graff, 2017), where the Gaussian kernel is approximated with different RFs. Plots show mean classification accuracy vs. the number of random features used to approximate the kernel (\(d\), the dimensionality of the objects \(\boldsymbol{x}\)). Shading gives the standard deviation on the estimates of the mean. SimRFs consistently perform best.
Figure 7: Accuracy comparison (higher is better) of the SimRFs-Performer and the regular ORFs-Performer. Tests on four image classification tasks: (a) ImageNet2012, (b) Fashion-MNIST, (c) iNaturalist2021, (d) Places365. The \(x\)-axis shows training epochs.
## 6 Conclusion
We have introduced _Simplex Random Features_ (SimRFs), a new mechanism for unbiased approximation of the Gaussian and softmax kernels. By correlating the directions of random vectors in the ensemble, we access lower kernel estimator MSE than the previously predominant _Orthogonal Random Features_ (ORFs): a fact we have verified both theoretically and empirically via extensive experiments. We have shown that the suppressed MSE of SimRFs compared to ORFs permits better performance in downstream applications, including in nonparametric classification and scalable Transformer training. We have proved that SimRFs constitute the best weight-independent geometrically-coupled PRF mechanism, with further marginal improvements available in some regimes from a weight-dependent SimRFs+ variant. Finally, through our detailed quantitative analysis of the different RF mechanisms, we have derived novel closed-form results for ORFs, precisely formalising qualitative and asymptotic findings previously reported in the literature.
## 7 Relative contributions and acknowledgements
IR developed the SimRF and SimRF+ mechanisms, proved all theoretical results, and ran the pointwise kernel evaluation and nonparametric classification experiments. KC designed and ran the Performer experiments, and was crucially involved in all aspects of the work throughout. AW and VL provided helpful discussion and feedback on drafts.
IR acknowledges support from the Trinity College External Studentship. VL acknowledges support from the Cambridge Trust and DeepMind. AW acknowledges support from a Turing AI Fellowship under grant EP/V025279/1 and the Leverhulme Trust via CFI.
|
2305.00463 | Quantitative Schauder estimates for hypoelliptic equations | We derive Schauder estimates using ideas from Campanato's approach for a
general class of local hypoelliptic operators and non-local kinetic equations.
The method covers equations in divergence and non-divergence form. In
particular our results are applicable to the inhomogeneous Landau and Boltzmann
equation without cut-off. The paper is self-contained. | Amélie Loher | 2023-04-30T12:23:22Z | http://arxiv.org/abs/2305.00463v1 | # Quantitative Schauder estimates for hypoelliptic equations
###### Abstract.
We derive Schauder estimates using ideas from Campanato's approach for a general class of local hypoelliptic operators and non-local kinetic equations. The method covers equations in divergence and non-divergence form. In particular our results are applicable to the inhomogeneous Landau and Boltzmann equation without cut-off. The paper is self-contained.
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Toolbox
* 4 Campanato's inequality
* 5 Campanato's approach: the local (non-fractional) case
* 6 Campanato's approach: the non-local (fractional) case
* A Hypoelliptic Operators
* B Relation between Holder and Campanato spaces
* C Interpolation Inequality for Holder spaces
* D Proof of Bouchut's Proposition
## 1. Introduction
### Problem Formulation
We consider functions \(f:\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) solving either a kinetic Fokker-Planck-type equation in divergence form
\[\partial_{t}f+v\cdot\nabla_{x}f=\sum_{1\leq i,j\leq d}\partial_{v_{i}}\big{(}a ^{ij}\partial_{v_{j}}f\big{)}+\sum_{1\leq i\leq d}b^{i}\partial_{v_{i}}f+cf+h, \tag{1.1}\]
or in non-divergence form
\[\partial_{t}f+v\cdot\nabla_{x}f=\sum_{1\leq i,j\leq d}a^{ij}\partial_{v_{i}v_{ j}}^{2}f+\sum_{1\leq i\leq d}b^{i}\partial_{v_{i}}f+cf+h, \tag{1.2}\]
or a non-local (fractional) kinetic equation
\[\partial_{t}f+v\cdot\nabla_{x}f=\mathcal{L}f+h, \tag{1.3}\]
where \(\mathcal{L}\) is the integro-differential operator
\[\mathcal{L}f(t,x,v)=\mathrm{PV}\int_{\mathbb{R}^{d}}\big{[}f(t,x,v^{\prime})-f(t,x,v)\big{]}K(t,x,v,v^{\prime})\,\mathrm{d}v^{\prime}, \tag{1.4}\]
associated to a non-negative kernel \(K\) of fractional order \(s\in(0,1)\), specified below.
### Assumptions and results: the local (non-fractional) case
**Theorem 1.1** (Schauder estimates for local kinetic equations).: _Let \(m\geq 3\) and \(\alpha\in(0,1)\). Assume that \(A\) satisfies the ellipticity condition (1.8) and that \(A,B,c\in C_{\ell}^{m-3+\alpha}(Q_{1})\). Let \(f\) solve (1.1) or (1.2) in \(Q_{1}\). In the former case, we further assume \(\nabla_{v}A\in C_{\ell}^{m-3+\alpha}(Q_{1})\). Then there holds_
\[\|f\|_{C_{\ell}^{m-1+\alpha}(Q_{1/4})}\leq C\Big{(}\|f\|_{L^{\infty}(Q_{1})}+\| h\|_{C_{\ell}^{m-3+\alpha}(Q_{1})}\Big{)},\]
_for some \(C\) depending on \(d,\lambda_{0},\alpha,\|A\|_{C_{\ell}^{m-3+\alpha}},\|B\|_{C_{\ell}^{m-3+\alpha }},\|c\|_{C_{\ell}^{m-3+\alpha}}\), and for the divergence form case also on \(\|\nabla_{v}A\|_{C_{\ell}^{m-3+\alpha}}\)._
In particular, we recover Theorem 3.9 of Imbert and Mouhot [16] when \(m=3\) and Theorem 2.12 of Henderson-Snelson [12] when \(m\in\{3,4\}\). More generally, our approach is robust enough to cover higher order hypoelliptic equations, or also Dini-regular coefficients; we refer to Theorem A.1 and Theorem A.2 in Appendix A.
### Assumptions and results: the non-local (fractional) case
For the non-local equation (1.3), we specify the following notion of ellipticity and Holder continuity. Let \(s\in(0,1)\) and \(0<\lambda_{0}<\Lambda_{0}\) be given. To be consistent with the previous work of Imbert-Silvestre [19], we consider a non-negative kernel \(K=K(t,x,v,v^{\prime})\) that maps \((t,x,v)\) into a non-negative Radon measure \(K_{(t,x,v)}\) in \(\mathbb{R}^{d}\setminus\{0\}\) with
\[K_{(t,x,v)}(w):=K(t,x,v,v+w).\]
Then, for all \(r>0\), we assume the upper bound
\[\int_{B_{r}}|w|^{2}K(w)\,\mathrm{d}w\leq\Lambda_{0}r^{2-2s}. \tag{1.9}\]
We further require a coercivity condition for any \(r>0\) and any \(\varphi\in C^{2}(B_{2r})\)
\[\lambda_{0}\int_{B_{r}}\int_{B_{r}}\frac{|\varphi(v)-\varphi(v^{\prime})|^{2} }{|v-v^{\prime}|^{d+2s}}\,\mathrm{d}v\,\mathrm{d}v^{\prime}\leq\int_{B_{2r}} \int_{B_{2r}}\big{[}\varphi(v)-\varphi(v^{\prime})\big{]}K(v^{\prime}-v) \varphi(v)\,\mathrm{d}v^{\prime}\,\mathrm{d}v+\Lambda_{0}\|\varphi\|_{L^{2}(B_ {2r})}. \tag{1.10}\]
Moreover, we will impose a certain notion of symmetry on the kernel, which can be understood as the distinction between divergence and non-divergence form equations in the fractional case. We either work with the following symmetry condition, which is the non-local analogue of _non-divergence form_ equations
\[K(w)=K(-w). \tag{1.11}\]
Or else, if we consider the _divergence form_ analogue instead, we require
\[\forall v\in\mathbb{R}^{d}\quad\left|\mathrm{PV}\int_{\mathbb{R}^{d}}\big{(}K( v,v^{\prime})-K(v^{\prime},v)\big{)}\,\mathrm{d}v^{\prime}\right|\leq\Lambda_{0}, \tag{1.12}\]
and if \(s\geq\frac{1}{2}\) we assume that for all \(r>0\)
\[\forall v\in\mathbb{R}^{d}\quad\left|\mathrm{PV}\int_{B_{r}(v)}(v-v^{\prime}) K(v,v^{\prime})\,\mathrm{d}v^{\prime}\right|\leq\Lambda_{0}r^{1-2s}. \tag{1.13}\]
Finally we want \(K\) to be Holder continuous with exponent \(\alpha\in(0,+\infty)\): given \(z_{1}=(t_{1},x_{1},v_{1})\) and \(z_{2}=(t_{2},x_{2},v_{2})\) we have for any \(r>0\)
\[\int_{B_{r}}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}|w|^{2}\,\mathrm{d}w\leq A_{ 0}r^{2-2s}d_{\ell}(z_{1},z_{2})^{\alpha}, \tag{1.14}\]
where \(d_{\ell}\) denotes the kinetic distance defined below in Definition 2.1. In the divergence form case, we require in addition to (1.14) for any \(r>0\)
\[\left|\mathrm{PV}\int_{B_{r}}w\big{(}K_{z_{1}}(w)-K_{z_{2}}(w)\big{)}\,\mathrm{d }w\right|\leq A_{0}r^{1-2s}d_{\ell}(z_{1},z_{2})^{\alpha}. \tag{1.15}\]
_Remark 1.2_.: We observe that, as a consequence of (1.9) and (1.14), we obtain for all \(r>0\) and some \(C>0\)
\[\int_{B_{r}\setminus B_{r/2}}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w \leq CA_{0}r^{-2s}d_{\ell}(z_{1},z_{2})^{\alpha},\]
which in turn implies
\[\begin{split}&\int_{B_{1}}|w|^{2s+\alpha}\big{|}K_{z_{1}}(w)-K_{z_{ 2}}(w)\big{|}\,\mathrm{d}w\leq CA_{0}d_{\ell}(z_{1},z_{2})^{\alpha},\\ &\int_{\mathbb{R}^{d}\setminus B_{1}}\big{|}K_{z_{1}}(w)-K_{z_{2} }(w)\big{|}\,\mathrm{d}w\leq CA_{0}d_{\ell}(z_{1},z_{2})^{\alpha}.\end{split} \tag{1.16}\]
For integro-differential equations in non-divergence form we recover Theorem 1.6 of [19].
**Theorem 1.3** (Imbert-Silvestre [19, Theorem 1.6]).: _Let \(0<\gamma<\min(1,2s)\). Assume \(K\) is a non-negative kernel that is elliptic and Holder continuous in the sense that it satisfies (1.9)-(1.11) and (1.14) for \(\alpha=\frac{2s}{1+2s}\gamma\) and each \(z\in Q_{1}\). Then any solution \(f\in C_{l}^{\gamma}([-1,0]\times B_{1}\times\mathbb{R}^{d})\) of (1.3) in \(Q_{1}\) satisfies_
\[\|f\|_{C_{\ell}^{2s+\alpha}(Q_{1/4})}\leq C\big{(}\|f\|_{C_{\ell}^{\gamma}([-1,0]\times B_{1}\times\mathbb{R}^{d})}+\|h\|_{C_{\ell}^{\alpha}(Q_{1})}\big{)},\]
_for some constant \(C=C(d,s,\lambda_{0},\Lambda_{0},A_{0})\)._
For divergence form kinetic integro-differential equations we establish the following result.
**Theorem 1.4** (Schauder estimates for kinetic integro-differential equations in divergence form).: _Let \(0<\gamma<\min(1,2s)\). Assume \(K\) is a non-negative kernel that satisfies the ellipticity conditions (1.9) and (1.10), the (weak) divergence form symmetry conditions (1.12) and (1.13), and the Holder continuity conditions (1.14) and (1.15) for \(\alpha=\frac{2s}{1+2s}\gamma\) and each \(z\in Q_{1}\). Then any solution \(f\in C_{l}^{\gamma}([-1,0]\times B_{1}\times\mathbb{R}^{d})\) of (1.3) in \(Q_{1}\) satisfies_
\[\|f\|_{C_{\ell}^{2s+\alpha}(Q_{1/4})}\leq C\big{(}\|f\|_{C_{\ell}^{\gamma}([- 1,0]\times B_{1}\times\mathbb{R}^{d})}+\|h\|_{C_{\ell}^{\alpha}(Q_{1})}\big{)},\]
_for some constant \(C=C(d,s,\lambda_{0},\Lambda_{0},A_{0})\)._
_Remark 1.5_.: We want to emphasise that Theorems 1.1 and 1.3 are applicable to the inhomogeneous Landau and the Boltzmann equation without cut-off, respectively. On the one hand, the Landau equation is given by
\[\partial_{t}f+v\cdot\nabla_{x}f=\nabla_{v}\cdot\Bigg{(}\int_{\mathbb{R}^{d}}a (v-w)\big{[}f(w)\nabla f(v)-f(v)\nabla f(w)\big{]}\,\mathrm{d}w\Bigg{)}, \tag{1.17}\]
where
\[a(z)=a_{d,\gamma}|z|^{\gamma+2}\Big{(}I-\frac{z\otimes z}{|z|^{2}}\Big{)},\]
for \(\gamma\geq-d\), \(a_{d,\gamma}>0\). It can be rewritten in divergence (1.1) or non-divergence form (1.2) for suitable coefficients \(A,B,c\), as stated, for example, on page one in [12]. The Boltzmann equation, on the other hand, is given by
\[\partial_{t}f+v\cdot\nabla_{x}f=\int_{\mathbb{R}^{d}}\int_{\mathbb{S}^{d-1}} \big{[}f(w_{*})f(w)-f(v_{*})f(v)\big{]}B(|v-v_{*}|,\cos\theta)\,\mathrm{d}v_{* }\,\mathrm{d}\sigma, \tag{1.18}\]
where
\[w=\frac{v+v_{*}}{2}+\frac{|v-v_{*}|}{2}\sigma,\qquad w_{*}=\frac{v+v_{*}}{2}- \frac{|v-v_{*}|}{2}\sigma,\]
and where \(\theta\) is the deviation angle between \(v\) and \(w\). The non-cutoff kernels \(B\) are given by
\[B(r,\cos\theta)=r^{\gamma}b(\cos\theta),\qquad b(\cos\theta)\sim|\sin(\theta/ 2)|^{-d+1-2s},\]
for \(\gamma>-d\) and \(s\in(0,1)\). Using Carleman coordinates and the cancellation lemma, we can rewrite this as (1.3), for some specific kernel \(K\).
Under certain macroscopic assumptions, we can check that the coefficients for Landau and the kernel for Boltzmann satisfy the ellipticity assumptions made in 1.2 and 1.3, respectively. In particular, any Holder continuous solution \(f\) of (1.17) or (1.18) with mass, energy and entropy bounded above, and mass bounded below, satisfies the Schauder estimate in Theorem 1.1 or 1.3, respectively. We refer the reader to [12, Theorem 1.2] for the Landau equation, and [20, Section 4] for the Boltzmann equation.
### Contribution
Our contribution consists of a quantified and unified approach to Schauder estimates for kinetic equations with either non-fractional or fractional coefficients, in either non-divergence or divergence form. In this respect it improves upon the previous results on kinetic Schauder estimates in the local case by Imbert-Mouhot [16] and Henderson-Snelson [12], and in the non-local case by Imbert-Silvestre [19]. Indeed, on the one hand, in the non-fractional case we manage to gain two orders of Holder regularity at any stage \(m\geq 3\). On the other hand, we establish Schauder estimates for _divergence form equations_ in Theorem 1.1 and 1.4, which, to the best of our knowledge, is a novelty in the fractional case. Moreover, our approach is fully quantitative, which, in the fractional case, is in contrast to the blow-up argument used in [19]. Finally, in the non-fractional case, the method is robust enough to deal with any hypoelliptic operator, and it works even more generally for Dini-regular coefficients, see Theorem A.1 and Theorem A.2, respectively.
### Previous Literature Results
All the works on Schauder estimates have to be classified according to the notion of Holder continuity that is used and the assumptions on the coefficients that are made.
In the local case, there is the work by Imbert and Mouhot [16], which adapts Krylov's approach [22] to the kinetic setting. Furthermore, in [12], Henderson-Snelson discuss how to derive a \(C^{\infty}\)-smoothing estimate for the Landau equation by iteratively applying their Schauder estimates. There are also two articles [7, 13] for kinetic Fokker-Planck equations, which assume less regularity in time, and deduce partial Schauder estimates for space and velocity only. Their goal is to reduce the regularity assumptions needed on time. Note, however, that the Holder norms defined in [7, 13] differ from our notion of Holder continuity.
In the non-local case, the work that inspired us most is that of Imbert and Silvestre [19]. In particular, the definition of kinetic Holder spaces, the notion of distance and degree of a kinetic polynomial all stem from their seminal contribution on regularity for the non-cutoff Boltzmann equation [17, 18, 19, 20]. Their approach to Schauder estimates consists of first proving a Liouville-type theorem, then using a blow-up argument. Their work is inspired by Ros-Oton-Serra [29], who have used these techniques for non-local operators that are generators of stable, symmetric Levy processes. Note, however, that this method is non-constructive, as it relies on a compactness argument. The structure of this argument comes from Simon [30], who used a scaling argument to derive a Liouville theorem for general hypo-elliptic operators, from which he deduces the Schauder estimate by a compactness argument.
We follow Campanato's approach. This method was first established for elliptic equations. A nice reference is the book by Giaquinta and Martinazzi [9, Chapter 5]. The idea is to use the scaling stemming from a combination of a Poincare inequality, Sobolev and regularity estimates on the constant coefficient equation. In contrast, Simon's scaling argument [30, Lemma 1] replaces the Sobolev inequality and regularity estimates by a reasoning of Hormander [15, Theorem 3.7] based on the closed graph theorem and the homogeneity of the operator. Through the characterisation of Holder norms by Campanato norms, we replace the blow-up argument of Simon by a constructive method.
### Strategy
We consider a solution of either the local or non-local equation, and freeze coefficients: the part which solves a constant coefficient equation with zero source term is considered separately from the rest. The latter can be viewed as a _lower order source term_ with the expected bounds due to the Holder continuity of the coefficients. For the constant coefficient solution, we subtract a certain polynomial
constructed from the vector fields of the equation of degree up to the order of our equation, such that we have a zero-averaged function. We then apply Poincare's inequality repeatedly as long as the zero-average condition is satisfied and the integrand is orthogonal to the kernel of the Poincare inequality, that is one order higher than the equation itself. We then use an \(L^{\infty}\)-bound and Sobolev's embedding. But then, since we consider a solution to a constant coefficient equation, regularity estimates yield a bound uniform in the Holder norm of the coefficients. The left hand side will be a higher order Campanato norm, which characterises Holder norms. The gain of regularity arises from the scaling of the equation.
Note that \(\|\cdot\|\) is not a norm in the mathematical sense.
_Remark 2.2_.: This notion of distance should not be confused with the distance function towards the grazing set as introduced in [11, Def. 1], which apart from the name does not have any connection to this distance here.
Let us observe that this distance is left invariant in the sense that \(d_{\ell}(z\circ z_{1},z\circ z_{2})=d_{\ell}(z_{1},z_{2})\) for any \(z,z_{1},z_{2}\in\mathbb{R}^{1+2d}\). We can also reformulate it as \(d_{\ell}\) being the infimum value of \(r>0\) such that both \(z_{1},z_{2}\) belong to \(Q_{r}(z_{0})\) for some \(z_{0}\in\mathbb{R}^{1+2d}\). Other equivalent formulations are
\[d_{\ell}(z_{1},z_{2})\sim\|z_{2}^{-1}\circ z_{1}\|\sim\|z_{1}^{-1}\circ z_{2}\| \sim\inf_{w\in\mathbb{R}^{d}}|t_{2}-t_{1}|^{\frac{1}{2s}}+|x_{2}-x_{1}-(t_{2}- t_{1})w|^{\frac{1}{1+2s}}+|v_{1}-w|+|v_{2}-w|.\]
For more remarks on this distance we refer the reader to [19, Section 2].
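In the formulation through \(\|\cdot\|\), the left invariance is a one-line computation using only the group structure:
\[\big{\|}(z\circ z_{2})^{-1}\circ(z\circ z_{1})\big{\|}=\big{\|}z_{2}^{-1}\circ z^{-1}\circ z\circ z_{1}\big{\|}=\big{\|}z_{2}^{-1}\circ z_{1}\big{\|},\]
so the quantity \(\|z_{2}^{-1}\circ z_{1}\|\), which is equivalent to \(d_{\ell}(z_{1},z_{2})\), is exactly invariant under left translation.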
In addition to the kinetic distance, we use the notion of kinetic degree of a monomial \(m_{j}\in\mathbb{R}[t,x,v]\) introduced in [19, Subsection 2.2] as
\[\deg_{\mathrm{kin}}m_{j}=2s\cdot j_{0}+(1+2s)\Bigg{(}\sum_{i=1}^{d}j_{i}\Bigg{)} +\sum_{i=d+1}^{2d}j_{i}=2s\cdot j_{0}+(1+2s)\cdot|J_{1}|+|J_{2}|=:|J|,\]
where we denote a multi-index \(j\in\mathbb{N}^{1+2d}\) with \(j=(j_{0},J_{1},J_{2})\) where \(J_{1}=(j_{1},\ldots,j_{d})\) and \(J_{2}=(j_{d+1},\ldots,j_{2d})\). Under scaling a monomial \(m_{j}\) behaves as
\[m_{j}(z_{R})=R^{2sj_{0}}t^{j_{0}}R^{(1+2s)|J_{1}|}x^{J_{1}}R^{|J_{2}|}v^{J_{2} }=R^{|J|}z^{j},\quad R>0,\]
and its degree is precisely \(|J|=2sj_{0}+(1+2s)|J_{1}|+|J_{2}|\). We denote with \(\mathcal{P}_{k}\) the space of \(k\) degree polynomials. Note that in the non-local case \(k\) is in the discrete set \(k\in\mathbb{N}+2s\mathbb{N}\), and we will write \(k=2s\cdot k_{0}+(1+2s)\cdot k_{1}+k_{2}\) for \(k_{0},k_{1},k_{2}\in\mathbb{N}\). An element \(p\in\mathcal{P}_{k}\) is written as
\[p(t,x,v)=\sum_{\begin{subarray}{c}j\in\mathbb{N}^{1+2d}\\ |J|\leq k\end{subarray}}a_{j}m_{j}(z). \tag{2.1}\]
The sum is taken over \(j_{0}\in[0,k_{0}],|J_{1}|\in[0,k_{1}],|J_{2}|\in[0,k_{2}]\). We will abbreviate this and write \(|J|\leq k\). In the local case there is no ambiguity.
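As a concrete illustration, take \(d=1\) and the monomial \(m_{j}(z)=t\,x\,v^{2}\), i.e. \(j=(1,1,2)\); then
\[\deg_{\mathrm{kin}}(t\,x\,v^{2})=2s+(1+2s)+2=4s+3,\qquad m_{j}(z_{R})=R^{2s}t\cdot R^{1+2s}x\cdot R^{2}v^{2}=R^{4s+3}\,m_{j}(z),\]
so for \(s=1\) (the local case) this monomial has kinetic degree \(7\).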
Our notion of Holder continuity leans on [16, Def. 2.2] and [19, Def. 2.3].
**Definition 2.3** (Holder spaces).: Given an open set \(\Omega\subset\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\) and \(\beta\in(0,\infty)\) we say that \(f:\Omega\to\mathbb{R}\) is \(C^{\beta}_{\ell}(\Omega)\) at a point \(z_{0}\in\mathbb{R}^{1+2d}\) if there is a polynomial \(p\in\mathbb{R}[t,x,v]\) with kinetic degree \(\deg_{\mathrm{kin}}p<\beta\) and a constant \(C>0\) such that
\[\forall r>0\quad\|f-p\|_{L^{\infty}(Q_{r}(z_{0})\cap\Omega)}\leq Cr^{\beta}. \tag{2.2}\]
When this property holds at every point \(z_{0}\in\Omega\) we say that \(f\in C^{\beta}_{\ell}(\Omega)\). The semi-norm \([f]_{C^{\beta}_{\ell}(\Omega)}\) is the smallest \(C\) such that (2.2) holds for all \(z_{0}\in\Omega\). We equip \(C^{\beta}_{\ell}(\Omega)\) with the norm
\[\|f\|_{C^{\beta}_{\ell}(\Omega)}=\|f\|_{L^{\infty}(\Omega)}+[f]_{C^{\beta}_{ \ell}(\Omega)}.\]
_Remark 2.4_.: This definition coincides with the definition of [16, Def. 2.2]. As the authors point out, it is equivalent to ask that for any \(z\in\Omega\)
\[|f(z)-p(z)|\leq Cd_{l}(z,z_{0})^{\beta}.\]
We can further rephrase Holder regularity of \(f\) at \(z_{0}\) due to the left-invariance as follows [19]. For any \(z\in\mathbb{R}^{1+2d}\) such that \(z_{0}\circ z\in\Omega\) we have
\[|f(z_{0}\circ z)-p_{z_{0}}(z)|\leq C\|z\|^{\beta},\]
where \(p_{z_{0}}(z)=p(z_{0}\circ z)\). The polynomial \(p_{z_{0}}\) will be the expansion of \(f\) at \(z_{0}\).
Holder spaces can also be characterised in terms of Campanato spaces. These have been introduced by Campanato himself [4, 5, 6] in the elliptic context. We adapt his notion to the kinetic setting.
**Definition 2.5** (Higher order Campanato spaces).: Let \(\Omega\subset\mathbb{R}^{1+2d}\) be an open subset. For \(1\leq p\leq\infty,\ \lambda\geq 0,\ k\geq 0\) we define the Campanato space \(\mathcal{L}_{k}^{p,\lambda}\big{(}\Omega\big{)}\) as
\[\mathcal{L}_{k}^{p,\lambda}\big{(}\Omega\big{)}:=\left\{f\in L^{p}\big{(} \Omega\big{)}:\sup_{z\in\Omega,r>0}r^{-\lambda}\inf_{P\in\mathcal{P}_{k}}\int _{Q_{r}(z)\cap\Omega}|f-P|^{p}\,\mathrm{d}z<+\infty\right\} \tag{2.3}\]
where \(\mathcal{P}_{k}\) is the space of polynomials of kinetic degree less or equal \(k\). We endow it with the seminorm
\[[f]^{p}_{\mathcal{L}_{k}^{p,\lambda}}:=\sup_{z\in\Omega,r>0}r^{-\lambda}\inf _{P\in\mathcal{P}_{k}}\int_{Q_{r}(z)\cap\Omega}|f-P|^{p}\,\mathrm{d}z \tag{2.4}\]
and the norm
\[\|f\|_{\mathcal{L}_{k}^{p,\lambda}}=[f]_{\mathcal{L}_{k}^{p,\lambda}}+\|f\|_{ L^{p}}. \tag{2.5}\]
_Remark 2.6_.:
1. We observe that for the local case \(k\in\mathbb{N}\), whereas in the non-local case \(k\in\mathbb{N}+2s\mathbb{N}\).
2. Campanato's spaces are most commonly known for \(k=0\). Such spaces have been used for Schauder estimates in the elliptic context [9]. To gain higher Holder continuity (\(k\geq 1\)) the equation was just differentiated. This would not work as easily for our equations. Note that a method inspired from Campanato's approach with \(k=0\) has been developed for partial Schauder estimates in the kinetic setting in [7]. However, we have not seen any Schauder estimates that use the higher order Campanato spaces directly, neither in the elliptic nor the parabolic case, and certainly not in the hypoelliptic case.
The next subsection states a characterisation of Holder continuity in terms of Campanato's norms.
### Relation between Holder and Campanato spaces
Holder spaces can be characterised through Campanato spaces, and vice versa. This equivalence has been established by Campanato himself in [4] for the lowest order Campanato space, and in [6] for higher order Campanato spaces. Following Campanato's arguments, we can show the following relation between Campanato and Holder spaces defined in 2.3 and 2.5. We refer the reader to the proof in Appendix B.
**Theorem 2.7** (Campanato).: _Let \(\tilde{z}_{0}\in\mathbb{R}^{1+2d}\) and \(R>0\), and write \(\Omega=Q_{R}(\tilde{z}_{0})\). Then, for \(n+kp<\lambda\leq n+(k+1)p\) and \(\beta=\frac{\lambda-n}{p}\) we have \(\mathcal{L}_{k}^{p,\lambda}(\Omega)\cong C_{\ell}^{\beta}(\bar{\Omega})\), where \(n=2s+2d(s+1)\)._
_Remark 2.8_.: For the local case, setting \(s=1\) yields the same result.
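For orientation, the lowest-order instance \(k=0\), \(p=2\), \(\lambda=n+2\beta\) with \(\beta\in(0,1]\) is the kinetic analogue of Campanato's classical characterisation:
\[\mathcal{L}_{0}^{2,\,n+2\beta}\big{(}\Omega\big{)}\cong C_{\ell}^{\beta}(\bar{\Omega}),\]
that is, controlling the mean-square oscillation of \(f\) around constants at every scale \(r\) by \(r^{n+2\beta}\) is equivalent to kinetic \(\beta\)-Holder continuity.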
### Differential operators
In this section, we show how to relate Holder norms to kinetic differential operators. We reprove Lemma 2.7 of [19] to make our paper self-contained.
**Lemma 2.9** (Imbert-Silvestre [19, Lemma 2.7]).: _Let \(D=\mathcal{T},D=\nabla_{x}\) or \(D=\nabla_{v}\). Let \(f\in C^{\beta}_{\ell}(Q)\) for \(\beta\in(0,\infty)\) and \(Q\) some kinetic cylinder. Then \(D^{l}f\in C^{\beta-k}_{\ell}(Q)\) where \(k\) is the kinetic degree of \(D^{l}\), \(l\in\mathbb{N}\), and_
\[[D^{l}f]_{C^{\beta-k}_{\ell}(Q)}\leq C[f]_{C^{\beta}_{\ell}(Q)}.\]
Proof.: Let \(z_{1},z_{2}\in Q\). Since \(f\in C^{\beta}_{\ell}(Q)\) there exists a polynomial \(p\) with degree \(k=\deg_{\text{kin}}\text{p}<\beta\) so that for \(z\in Q\) with \(\|z\|\leq d_{l}(z_{1},z_{2})=r\)
\[\begin{split}|f(z_{1}\circ z)-p(z_{1}\circ z)|& \leq Cr^{\beta},\\ |f(z_{2}\circ z)-p(z_{2}\circ z)|&\leq Cr^{\beta}. \end{split} \tag{2.6}\]
We can compute that
\[p(z_{1}\circ z)=f(z_{1})+\mathcal{T}f(z_{1})t+\nabla_{x}f(z_{1})\cdot x+ \nabla_{v}f(z_{1})\cdot v+\ldots\]
By equivalence of norms in finite dimensional spaces, we know that if \(\sup_{|z|\leq 1}|p(z)|\leq C_{0}\) then the coefficients of \(p\) denoted by \(a_{j}\) will satisfy \(\sup_{j}|a_{j}|\leq CC_{0}\) for some constant \(C\) depending on \(k\) and \(n\). Scaling this argument yields together with (2.6)
\[\big{|}D^{l}f(z_{1})-D^{l}f(z_{2})\big{|}r^{k}\leq Cr^{\beta},\]
where \(D^{l}\) is the differential operator of degree \(k\).
We will need a similar estimate for the fractional operator (1.4). We start with a global bound, see [19, Lemma 3.6] for kernels in non-divergence form (1.11).
**Lemma 2.10**.: _Assume \(0<\alpha<\min(1,2s)\). Let \(K\) be a non-negative kernel satisfying (1.9) and either (1.11) or (1.12)-(1.13). Then for \(f\in C^{2s+\alpha}_{\ell}(\mathbb{R}^{1+2d})\) there holds_
\[[\mathcal{L}f]_{C^{\alpha}_{\ell}(\mathbb{R}^{2d+1})}\leq C[f]_{C^{2s+\alpha} _{\ell}(\mathbb{R}^{2d+1})}.\]
Proof.: Let \(z,\xi\in\mathbb{R}^{1+2d}\). We know that
\[\big{|}f(z\circ\xi)-p_{z}(\xi)\big{|}\leq[f]_{C^{2s+\alpha}_{\ell}}\|\xi\|^{2s +\alpha}. \tag{2.7}\]
We need to estimate
\[\mathcal{L}f(z\circ\xi)-\mathcal{L}f(z) =\int_{\mathbb{R}^{d}}\big{[}f(z\circ\xi\circ(0,0,v^{\prime}-v- \xi_{v}))-f(z\circ\xi)\big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\quad-\int_{\mathbb{R}^{d}}\big{[}f\big{(}z\circ(0,0,v^{\prime}-v )\big{)}-f(z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}.\]
We distinguish the close and the far part. Let \(R>0\) and write for ease of notation \(\phi=(0,0,v^{\prime}-v-\xi_{v})\) and \(\psi=(0,0,v^{\prime}-v)\) for \(\xi=(\xi_{t},\xi_{x},\xi_{v})\).
If we assume symmetry in the non-divergence form (1.11), then we can symmetrise the integral and remove the principal value. We find
\[\mathrm{PV}\int_{B_{R}(v)} \big{[}f(z\circ\psi)-f(z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[=\frac{1}{2}\int_{B_{R}(v)}\big{[}f(z\circ\psi)+f(z\circ-\psi)-2f (z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\leq\frac{1}{2}\int_{B_{R}(v)}|f(z\circ\psi))-p_{z}(\psi)\big{|}K( z,v^{\prime})\,\mathrm{d}v^{\prime}+\frac{1}{2}\int_{B_{R}(v)}|p_{z}(\psi)-f(z) \big{|}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\leq[f]_{C^{2s+\alpha}_{\ell}}\int_{B_{R}(v)}|v^{\prime}-v|^{2s+ \alpha}K(z,v^{\prime})\,\mathrm{d}v^{\prime}+\big{|}\nabla^{2}_{v}f(z)\big{|} \int_{B_{R}(v)}|v^{\prime}-v|^{2}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\big{|} \nabla^{2}_{v}f(z)\big{|}R^{2-2s}.\]
We used that \(p_{z}(\psi)=f(z)+\nabla_{v}f(z)\cdot(v^{\prime}-v)+(v^{\prime}-v)^{T}\cdot \nabla^{2}_{v}f(z)\cdot(v^{\prime}-v)\). Any higher order terms vanish since \(\deg\,p<2s+\alpha\). The first order term in \(v\) vanishes due to (1.11). For the second order term, we use (1.9). All estimates are independent of \(z\in\mathbb{R}^{1+2d}\) so that we similarly obtain
\[\mathrm{PV}\int_{B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi\circ\phi)-f(z\circ\xi) \big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}\lesssim_{\Lambda}[f]_{C^ {2s+\alpha}_{\ell}}R^{\alpha}+\big{|}\nabla^{2}_{v}f(z\circ\xi)\big{|}R^{2-2s}.\]
Therefore
\[\int_{B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi\circ\phi)-f(z\circ\xi) \big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}-\int_{B_{R}(v)}\big{[}f( z\circ\psi)-f(z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\|\nabla^{ 2}_{v}f(z\circ\xi)-\nabla^{2}_{v}f(z)\big{|}R^{2-2s}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\|\xi\|^{2 s+\alpha-2}R^{2-2s}[f]_{C^{2s+\alpha}_{\ell}}.\]
We used Lemma 2.9 for the last inequality. Choosing \(R=\|\xi\|\) therefore yields
\[\int_{B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi\circ\phi)-f(z\circ\xi)\big{]}K(z \circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}-\int_{B_{R}(v)}\big{[}f(z\circ \psi)-f(z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\lesssim_{\Lambda}[f]_{ C^{2s+\alpha}_{\ell}}R^{\alpha}.\]
If, instead of (1.11), we assume (1.12) and (1.13), then we bound
\[\left|\mathrm{PV}\int_{B_{R}(v)}\big{[}f(z\circ\psi)-f(z)\big{]}K (z,v^{\prime})\,\mathrm{d}v^{\prime}\right| =\left|\mathrm{PV}\int_{B_{R}(v)}\big{[}f(z\circ\psi)-p_{z}(\psi) -\big{(}f(z)-p_{z}(\psi)\big{)}\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\right|\] \[\leq\left|\mathrm{PV}\int_{B_{R}(v)}|f(z\circ\psi)-f(z)|K(z,v^{ \prime})\,\mathrm{d}v^{\prime}\right|\] \[\quad+\left|\mathrm{PV}\int_{B_{R}(v)}D_{v}f(z)\cdot\big{(}v-v^{ \prime}\big{)}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\right|\] \[\quad+\left|\mathrm{PV}\int_{B_{R}(v)}\big{|}D_{v}^{2}f(z)\big{|} |v-v^{\prime}|^{2}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\right|\] \[\leq[f]_{C^{2s+\alpha}_{\ell}}\int_{B_{R}(v)}|v^{\prime}-v|^{2s+ \alpha}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\quad+C\Lambda\big{|}D_{v}f(z)\big{|}R^{1-2s}+C\Lambda\big{|}D_{v} ^{2}f(z)\big{|}R^{2-2s}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\big{|}D_{ v}f(z)\big{|}R^{1-2s}+\big{|}D_{v}^{2}f(z)\big{|}R^{2-2s}.\]
We again used (1.9) and (2.7). The same computations yield
\[\left|\operatorname{PV}\int_{B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi \circ\phi)-f(z\circ\xi)\big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}\right|\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\big{|}D_ {v}f(z\circ\xi)\big{|}R^{1-2s}+\big{|}D_{v}^{2}f(z\circ\xi)\big{|}R^{2-2s},\]
so that as before, we obtain with Lemma 2.9
\[\int_{B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi\circ\phi) -f(z\circ\xi)\big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}- \int_{B_{R}(v)}\big{[}f(z\circ\psi)-f(z)\big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\big{|} \nabla_{v}f(z\circ\xi)-\nabla_{v}f(z)\big{|}R^{1-2s}+\big{|}\nabla_{v}^{2}f(z \circ\xi)-\nabla_{v}^{2}f(z)\big{|}R^{2-2s}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha}+\|\xi\|^ {2s+\alpha-1}R^{1-2s}[f]_{C^{2s+\alpha}_{\ell}}+\|\xi\|^{2s+\alpha-2}R^{2-2s}[ f]_{C^{2s+\alpha}_{\ell}}\] \[\lesssim_{\Lambda}[f]_{C^{2s+\alpha}_{\ell}}R^{\alpha},\]
since \(\|\xi\|=R\).
For the far part we do not need to distinguish non-divergence form from divergence form. In both cases we separate the integral into different terms
\[\int_{\mathbb{R}^{d}\setminus B_{R}(v+\xi_{v})}\big{[}f(z\circ\xi\circ\phi)-f (z\circ\xi)\big{]}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime}-\int_{\mathbb{ R}^{d}\setminus B_{R}(v)}\big{[}f(z\circ\psi)-f(z)\big{]}K(z,v^{\prime})\, \mathrm{d}v^{\prime}\leq\sum_{i=1}^{5}I_{i},\]
with
\[I_{1} =\int_{\mathbb{R}^{d}\setminus B_{R}(v+\xi_{v})}\big{|}f(z\circ \xi\circ\phi)-p_{z\circ\phi}(\phi^{-1}\circ\xi\circ\phi)\big{|}K(z\circ\xi,v ^{\prime})\,\mathrm{d}v^{\prime},\] \[I_{2} =\int_{\mathbb{R}^{d}\setminus B_{R}(v+\xi_{v})}\big{|}f(z\circ \xi)-p_{z}(\xi)\big{|}K(z\circ\xi,v^{\prime})\,\mathrm{d}v^{\prime},\] \[I_{3} =\int_{\mathbb{R}^{d}\setminus B_{R}(v+\xi_{v})}\big{|}p_{z\circ \phi}(\phi^{-1}\circ\xi\circ\phi)-p_{z}(\xi)\big{|}K(z\circ\xi,v^{\prime})\, \mathrm{d}v^{\prime},\] \[I_{4} =\int_{\mathbb{R}^{d}\setminus B_{R}(v)}\big{|}p_{z\circ\psi}(\xi )-p_{z}(\xi)-f(z\circ\psi)+f(z)\big{|}K(z,v^{\prime})\,\mathrm{d}v^{\prime},\] \[I_{5} =\int_{\mathbb{R}^{d}\setminus B_{R}(v)}\big{|}p_{z\circ\psi}(\xi )-p_{z}(\xi)\big{|}K(z,v^{\prime})\,\mathrm{d}v^{\prime}.\]
Using that
\[\big{|}f(z\circ\xi\circ\phi)-p_{z\circ\phi}(\phi^{-1}\circ\xi \circ\phi)\big{|} \leq[f]_{C^{2s+\alpha}_{\ell}}\|\phi^{-1}\circ\xi\circ\phi\|^{2s+\alpha}\] \[\leq[f]_{C^{2s+\alpha}_{\ell}}\Big{(}\|\xi\|+|v^{\prime}-v-\xi_{v }|^{1/(1+2s)}\|\xi_{\ell}\|^{2s/(1+2s)}\Big{)}^{2s+\alpha},\]
we bound the first term with (1.9) by
\[I_{1} \leq C[f]_{C^{2s+\alpha}_{\ell}}\Bigg{(}\|\xi\|^{2s+\alpha}R^{-2 s}+\|\xi\|^{\frac{2s(2+\alpha)}{1+2s}}\int_{\mathbb{R}^{d}\setminus B_{R}(v+\xi_{v})}|v^{ \prime}-v-\xi_{v}|^{\frac{2s+\alpha}{1+2s}}K(z\circ\xi,v^{\prime})\,\mathrm{d }v^{\prime}\Bigg{)}\] \[\leq C[f]_{C^{2s+\alpha}_{\ell}}R^{-2s}\Big{(}\|\xi\|^{2s+\alpha}+ \|\xi\|^{\frac{2s(2+\alpha)}{1+2s}}R^{\frac{2s+\alpha}{1+2s}}\Big{)}.\]
For \(I_{2}\) we get
\[I_{2}\leq C[f]_{C^{2s+\alpha}_{\ell}}\|\xi\|^{2s+\alpha}R^{-2s}.\]
We further notice that \(I_{4}\) is the same as \(I_{5}\) without the lowest order term of \(p_{z\circ\psi}-p_{z}\). To estimate \(I_{5}\) we write \(p_{z}(\xi)=\sum a_{j}(z)m_{j}(\xi)\). Note that by Lemma 2.9 the coefficients \(a_{j}\) satisfy
\[[a_{j}]_{C^{2s-j+\alpha}_{\ell}}\leq C[f]_{C^{2s+\alpha}_{\ell}},\]
where \(j\) is the degree of the corresponding monomial. Thus
\[I_{5}\leq C[f]_{C^{2s+\alpha}_{\ell}}\Big{(}R^{\alpha}+R^{\alpha-1}\|\xi\|+R^{ \alpha-2s}\|\xi\|^{2s}+R^{\alpha-2}\|\xi\|^{2}\Big{)},\]
and
\[I_{4}\leq C[f]_{C^{2s+\alpha}_{\ell}}\Big{(}R^{\alpha-1}\|\xi\|+R^{\alpha-2s}\| \xi\|^{2s}+R^{\alpha-2}\|\xi\|^{2}\Big{)}.\]
For \(I_{3}\) we notice that \(\phi^{-1}\circ\xi\circ\phi=\big{(}\xi_{t},\xi_{x}+\xi_{t}(v^{\prime}-v-\xi_{v}),\xi_{v}\big{)}\). Apart from the space variable this coincides with \(\xi\). But since we only consider polynomial expansions up to order \(2s+\alpha<2s+1\), the space variable does not appear, so that \(I_{3}\) satisfies the same bound as \(I_{5}\). We now choose \(R=\|\xi\|\) so that all terms are bounded by \(I_{i}\leq C[f]_{C^{2s+\alpha}_{\ell}}\|\xi\|^{\alpha}\) for all \(i=1,\ldots,5\).
To localise Lemma 2.10 we follow the proof of Imbert and Silvestre in [19, Lemma 3.7]. Note, however, that we also cover the divergence form symmetry (1.12)-(1.13).
**Lemma 2.11** (Imbert-Silvestre [19, Lemma 3.7]).: _Let \(0<\alpha\leq\gamma<\min(1,2s)\) and let \(K\) satisfy (1.9) and either (1.11) or (1.12),(1.13). Then_
\[[\mathcal{L}f]_{C^{\alpha}_{\ell}(Q_{\frac{1}{2}})}\leq C\Big{(}[f]_{C^{2s+ \alpha}_{\ell}(Q_{\frac{1}{2}})}+[f]_{C^{\gamma}_{\ell}((-1,0]\times B_{1} \times\mathbb{R}^{d})}\Big{)},\]
_for some \(C\) depending on \(n,s,\Lambda_{0}\) and \(A_{0}\)._
Proof.: We write \(\mathcal{L}f(z)=\tilde{\mathcal{L}}f(z)+C(z)\) where \(\tilde{\mathcal{L}}f(z)\) corresponds to the non-local operator as in (1.4) with kernel \(\tilde{K}(v,v^{\prime})=\mathbb{1}_{B_{\rho}(v)}(v^{\prime})K(v,v^{\prime})\) and \(C(z)\) corresponds to \(\mathcal{L}f\) with kernel \(\big{[}1-\mathbb{1}_{B_{\rho}(v)}(v^{\prime})\big{]}K(v,v^{\prime})\) for some small \(\rho>0\). Then by Lemma 2.10 we have
\[[\tilde{\mathcal{L}}f]_{C^{\alpha}_{\ell}(Q_{\frac{1}{2}})}\leq C[f]_{C^{2s+ \alpha}_{\ell}(Q_{1})}.\]
Now we consider \(z_{0},z\in Q_{\frac{1}{2}}\) such that \(z_{0}\circ z\in Q_{\frac{1}{2}}\). If we write \(\phi=(0,0,v^{\prime}-v-v_{0})\) and \(\psi=(0,0,v^{\prime}-v)\) we have for \(K(w)=K(v,v+w)\)
\[C(z_{0}\circ z) -C(z)\] \[=\int_{\mathbb{R}^{d}\setminus B_{\rho}(v+v_{0})}\big{[}f(z_{0} \circ z\circ\phi)-f(z_{0}\circ z)\big{]}K(z_{0}\circ z,v^{\prime})\,\mathrm{d} v^{\prime}-\int_{\mathbb{R}^{d}\setminus B_{\rho}(v)}\big{[}f(z\circ\psi)-f(z) \big{]}K(z,v^{\prime})\,\mathrm{d}v^{\prime}\] \[=\int_{\mathbb{R}^{d}\setminus B_{\rho}}\big{[}f(z)-f(z_{0}\circ z )\big{]}K(w)\,\mathrm{d}w-\int_{\mathbb{R}^{d}\setminus B_{\rho}}\big{[}f \big{(}z\circ(0,0,w)\big{)}-f\big{(}z_{0}\circ z\circ(0,0,w)\big{)}\big{]}K(w) \,\mathrm{d}w\] \[\leq C\Lambda_{0}\rho^{-2s}[f]_{C^{\gamma}_{\ell}}d_{\ell}(z,z_{0 }\circ z)^{\alpha}+C[f]_{C^{\gamma}_{\ell}}\int_{\mathbb{R}^{d}\setminus B_{ \rho}}d_{\ell}\big{(}z\circ(0,0,w),z_{0}\circ z\circ(0,0,w)\big{)}^{\gamma}K( w)\,\mathrm{d}w,\]
since \(\alpha\leq\gamma\). But now we compute
\[d_{\ell}(z\circ(0,0,w),z_{0}\circ z\circ(0,0,w)) =\big{\|}(0,0,w)^{-1}\circ z^{-1}\circ z_{0}^{-1}\circ z\circ(0,0,w)\big{\|}\] \[=\big{\|}(z_{0}\circ z)^{-1}\circ z\circ(0,t_{0}w,0)\big{\|}\] \[\lesssim d_{\ell}(z,z_{0}\circ z)+|t_{0}|^{\frac{1}{1+2s}}|w|^{\frac{1}{1+2s}}\] \[\lesssim d_{\ell}(z,z_{0}\circ z)^{\frac{2s}{1+2s}}\big{(}1+|w|^{\frac{1}{1+2s}}\big{)}.\]
Therefore, since \(\alpha\leq\frac{2s}{1+2s}\) and since \(K\) satisfies (1.9) we find
\[C(z_{0}\circ z)-C(z)\leq C\Lambda_{0}[f]_{C^{\gamma}_{\ell}}\rho^{-2s}d_{\ell}( z,z_{0}\circ z)^{\alpha}.\]
This concludes the proof.
### Interpolation
We also have an interpolation inequality, see [19, Prop. 2.10]. Unlike the other preliminary results that we have stated in Subsection 2.3, the proof of the following proposition is verbatim the same as in [19, Prop. 2.10]. For the sake of self-containment we recall it in Appendix C.
**Proposition 2.12** (Imbert-Silvestre [19, Prop. 2.10]).: _Given \(\beta_{1}<\beta_{2}<\beta_{3}\) so that \(\beta_{2}=\theta\beta_{1}+(1-\theta)\beta_{3}\), then for any \(f\in C_{\ell}^{\beta_{3}}(Q_{1})\) there holds_
\[[f]_{C_{\ell}^{\beta_{2}}(Q_{1})}\leq[f]_{C_{\ell}^{\beta_{1}}(Q_{1})}^{\theta} [f]_{C_{\ell}^{\beta_{3}}(Q_{1})}^{1-\theta}+[f]_{C_{\ell}^{\beta_{1}}(Q_{1})}.\]
_In particular for all \(\varepsilon>0\)_
\[[f]_{C_{\ell}^{\beta_{2}}(Q_{1})}\leq C(\varepsilon)[f]_{C_{\ell}^{\beta_{1}} (Q_{1})}+\varepsilon[f]_{C_{\ell}^{\beta_{3}}(Q_{1})}.\]
## 3. Toolbox
Campanato's approach is a scaling argument, consisting of a clever combination of several tools that permit one to gain as much regularity as can be gained from the equation. In short, we combine Poincare's inequality with Sobolev embedding, and close the argument with regularity estimates. In this section we assemble the tools that are used in both the non-fractional and the fractional case.
### Functional inequalities
We observe, similar to the elliptic case in [8], that for \(f\in W^{m,p}(Q_{R}(z_{0}))\) there exists a unique polynomial \(p_{m-1}=p_{m-1}(z_{0},R,f,z)\) of degree less or equal to \(m-1\) so that
\[\fint_{Q_{R}(z_{0})}D^{\phi}(f-p_{m-1})\,\mathrm{d}z=0\qquad\forall\alpha\text { with }|\Phi|\leq m-1. \tag{3.1}\]
Note that \(m\in\mathbb{N}+2s\mathbb{N}\) and \(D^{\phi}\) is a kinetic differential whose order is in the discrete set \(\mathbb{N}+2s\mathbb{N}\) as well. Indeed, the polynomial is given by
\[p_{m-1}(z)=\sum_{\psi\in\mathbb{N}^{1+2d},|\Psi|\leq m-1}\frac{c_{\psi}}{ \psi!}(z-z_{0})^{\psi}\]
with
\[c_{\psi}=\sum_{\begin{subarray}{c}\phi\in\mathbb{N}^{1+2d},\\ 2|\Phi|\leq m-1-|\Psi|\end{subarray}}c_{\psi,\phi}R^{-n+2|\phi|}\int_{Q_{R}(z _{0})}D^{\psi+2\phi}f\,\mathrm{d}z,\]
where \(n=2s+2d(s+1)\). Recall that for \(\psi=(\psi_{0},\Psi_{1},\Psi_{2})\in\mathbb{N}^{1+2d}\) we denote by \(|\Psi|\) the size of \(\psi\) respecting the scaling, i.e. \(|\Psi|=2s\cdot\psi_{0}+(1+2s)|\Psi_{1}|+|\Psi_{2}|\). Here \(\psi!\) denotes the element-wise operation \(\psi!=\psi_{0}!\psi_{1}!\cdots\psi_{2d}!\).
The idea is to use (3.1) in order to apply the standard Poincare-inequality [9, Prop 3.12] to \(D^{\phi}(f-p_{m-1})\) for \(|\Phi|=0,\ldots,m-1\). Moreover, we have for any non-negative function \(f\in L^{2}(Q_{r}(z_{0}))\)
\[\int_{Q_{r}(z_{0})}f^{2}\,\mathrm{d}z\leq Cr^{n}\|f\|_{L^{\infty}(Q_{r}(z_{0} ))}^{2}, \tag{3.2}\]
where \(r>0\) and \(n=2s+2d(s+1)\). Combined with Sobolev's embedding and regularity estimates, we obtain an estimate commonly referred to as Campanato's (first) inequality, which will be the first tool to tackle the Schauder estimates. For reference, in the elliptic case, Campanato's first inequality reads
\[\int_{B_{r}}|u|^{2}\,\mathrm{d}x\leq C\Big{(}\frac{r}{R}\Big{)}^{d}\int_{B_{R }}|u|^{2}\,\mathrm{d}x,\]
for a solution \(u:\mathbb{R}^{d}\to\mathbb{R}\) of a second order elliptic equation, see [9, Section 5].
### Regularity estimates
The second key step are regularity estimates for the constant coefficient equation. We consider solutions \(f\) of the constant coefficient Kolmogorov equation
\[\partial_{t}f+v\cdot\nabla_{x}f-A^{0}\Delta_{v}f=h \tag{3.3}\]
in \(Q_{R}(z_{0})\) for some \(z_{0}\in\mathbb{R}^{1+2d}\) and \(R>0\). Here \(A^{0}\) is some constant such that \(A^{0}\geq\lambda_{0}\) with \(\lambda_{0}\) from (1.8). The fractional analogue reads
\[\partial_{t}f+v\cdot\nabla_{x}f+\mathcal{L}_{0}f=h, \tag{3.4}\]
where \(\mathcal{L}_{0}\) is the non-local operator (1.4) associated to a non-negative, translation-invariant kernel \(K_{0}\) such that
\[\frac{\lambda_{0}}{|w|^{d+2s}}\leq K_{0}(w)\leq\frac{\Lambda_{0}}{|w|^{d+2s}}, \tag{3.5}\]
and \(K_{0}(w)=K_{0}(-w)\) is independent of \(z\). We derive inductive regularity estimates relying on Bouchut's Proposition 3.4, which captures the regularising effect of the transport operator in the space variable. For the sake of brevity we will introduce the notation \(|D|^{\gamma}:=(-\Delta)^{\frac{\gamma}{2}}\) for any \(\gamma\geq 0\).
**Proposition 3.1** (Local (non-fractional) regularity estimates).: _Let \(f\) be a non-negative solution in \(Q_{R}(z_{0})\) of (3.3) with \(s=1\). Let \(l\in\mathbb{N}_{0}\), \(0<r<R\leq 1\) and write \(\delta:=R-r>0\). Then there holds_
\[\big{\|}D^{l+2}f\big{\|}_{L^{2}(Q_{r}(z_{0}))}\leq C\delta^{-2(l+2)}\Big{(} \|f\|_{L^{2}(Q_{R}(z_{0}))}+\big{\|}D^{l}h\big{\|}_{L^{2}(Q_{R}(z_{0}))}\Big{)},\]
_where \(D^{l}\) is a pseudo-differential of order \(l\geq 0\), and \(C=C(n,\lambda_{0})\). In particular, if \(h=0\), then_
\[\big{\|}|D_{v}|^{l+2}f\big{\|}_{L^{2}(Q_{r}(z_{0}))}+\big{\|}|D_{t}|^{\frac{l+ 2}{2}}f\big{\|}_{L^{2}(Q_{r}(z_{0}))}+\big{\|}|D_{x}|^{\frac{l+2}{3}}f\big{\|} _{L^{2}(Q_{r}(z_{0}))}\lesssim\delta^{-2(l+2)}\|f\|_{L^{2}(Q_{R}(z_{0}))}.\]
For the fractional case, the right hand side involves a norm on the whole velocity space.
**Proposition 3.2** (Non-local (fractional) regularity estimates).: _Let \(\gamma>0\) be such that \(\gamma<\min(1,2s)\). Suppose \(f\in C^{\gamma}_{\ell}(Q^{\gamma}_{R}(z_{0})\times\mathbb{R}^{d})\) is a non-negative solution in \(Q_{R}(z_{0})\) of (3.4) with \(s\in(0,1)\). Let \(l\in\mathbb{N}_{0}\), \(0<r<R\leq 1\) and write \(\delta=R-r>0\). Then there holds_
\[\big{\|}D^{(l+2)s}f\big{\|}_{L^{2}(Q_{r}(z_{0}))}\leq C\delta^{-2(l+2)}\Big{(} \|f\|_{C^{\gamma}_{\ell}(Q^{\gamma}_{R}(z_{0})\times\mathbb{R}^{d})}+\big{\|} D^{ls}h\big{\|}_{L^{2}(Q_{R}(z_{0}))}\Big{)}, \tag{3.6}\]
_where \(D^{ls}\) is a pseudo-differential of order \(ls\geq 0\) and \(C=C(n,s,\Lambda_{0},\lambda_{0})\)._
_Remark 3.3_.:
1. The proof of Proposition 3.1 is similar to the proof of Proposition 3.2. In fact, for Step 1 in the proof of Proposition 3.2, we can just set \(s=1\) and obtain the global version of the energy estimate for the non-fractional case. Steps 2, 3 and 4 are much simpler for the non-fractional case: it suffices to localise with some smooth cut-off \(\theta\in C^{\infty}_{c}(Q_{R}(z_{0}))\), and then consider the equation solved by \(g:=f\theta\). Since the equation solved by \(f\) is non-fractional, \(g\) solves an equation with a right hand side that can be bounded by \(\|f\|_{L^{2}(Q_{R}(z_{0}))}\) using the induction hypothesis. Since this case is comparatively simpler, we will focus on the proof of Proposition 3.2.
2. With slightly more work, we would possibly also be able to deduce a similar result for a general diffusion coefficient that is uniformly elliptic and satisfies \(D^{l}A\in L^{2}(Q_{R}(z_{0}))\) with \(l\in\mathbb{N}_{0}\) as in the statement. For our purposes, the constant coefficient case suffices.
The proof builds upon the work of Alexandre and Bouchut [1, 3]. In particular, we will make use of the following proposition [3, Proposition 1.1].
**Proposition 3.4** (Bouchut).: _Assume that \(f,S\in L^{2}(\mathbb{R}^{1+2d})\) satisfy_
\[\partial_{t}f+v\cdot\nabla_{x}f=S, \tag{3.7}\]
_and \(|D_{v}|^{\beta}f\in L^{2}(\mathbb{R}^{1+2d})\) for some \(\beta\geq 0\). Then \(|D_{x}|^{\frac{\beta}{1+\beta}}f\in L^{2}(\mathbb{R}^{1+2d})\), and_
\[\big{\|}|D_{x}|^{\frac{\beta}{1+\beta}}f\big{\|}\leq C\big{\|}|D_{v}|^{\beta}f \big{\|}^{\frac{1}{1+\beta}}\big{\|}S\big{\|}^{\frac{\beta}{1+\beta}}, \tag{3.8}\]
_for some universal constant \(C>0\)._
We recall the proof of Proposition 3.4 in Appendix D.
Proof of Proposition 3.2.: With no loss of generality, we assume \(A^{0}=1\) and \(K_{0}(w)=\frac{1}{|w|^{d+2s}}\) (otherwise we can either perform a constant change of variable or just use the pointwise bounds on the kernel). We start with global estimates, and then we localise the result.
_Step 1: Global estimate_. Assume for now that \(f\) solves (3.4) on \(\mathbb{R}^{1+2d}\) with a source term \(h\in L^{2}(\mathbb{R}^{1+2d})\), that is
\[\mathcal{T}f+|D_{v}|^{2s}f=h. \tag{3.9}\]
To prove the global statement (3.6) in its full generality, we will need to assume that \(|D_{v}|^{ls}h,|D_{x}|^{\frac{ls}{1+2s}}h,|D_{t}|^{\frac{ls}{2s}}h\in L^{2}( \mathbb{R}^{1+2d})\).
First note that testing (3.9) with \(\bar{f}\) yields
\[\big{\|}|D_{v}|^{s}f\big{\|}^{2}\leq\|h\|\|f\|.\]
Second, we note that any solution \(f\) of (3.9) satisfies
\[\mathcal{T}\big{(}|D_{x}|^{\frac{ls}{1+2s}}f\big{)}=-|D_{v}|^{2s}|D_{x}|^{ \frac{ls}{1+2s}}f+|D_{x}|^{\frac{ls}{1+2s}}h.\]
Then Bouchut's Proposition 3.4 applied to \(|D_{x}|^{\frac{ls}{1+2s}}f\) yields for \(\beta=2s\geq 0\)
\[\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}\leq C\big{\|}|D_{v}|^{2s}|D_{x}|^{\frac{ls}{1+2s}}f\big{\|}^{\frac{1}{1+2s}}\big{\|}-|D_{v}|^{2s}|D_{x}|^{\frac{ls}{1+2s}}f+|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2s}{1+2s}}.\]
Now we use Holder's inequality in Fourier variables \((k,\xi)\) of \((x,v)\) to bound
\[\big{\|}|D_{v}|^{2s}|D_{x}|^{\frac{ls}{1+2s}}f\big{\|}=\big{\|}|D_{x}|^{\frac{\theta(l+2)s}{1+2s}}|D_{v}|^{(1-\theta)\cdot(l+2)s}f\big{\|}\leq\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\theta}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{1-\theta},\]
where \(\theta=\frac{l}{l+2}\). Thus
\[\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}\lesssim\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\theta}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{1-\theta}+\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\frac{\theta}{1+2s}}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{1-\theta}{1+2s}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2s}{1+2s}},\]
from which we deduce by dividing by \(\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\frac{\theta}{1+2s}}\) and using Holder for some \(\varepsilon\in(0,1)\)
\[\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|} \lesssim\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\frac{2s\theta}{1+2s-\theta}}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{(1-\theta)(1+2s)}{1+2s-\theta}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{1-\theta}{1+2s-\theta}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2s}{1+2s-\theta}}\] \[\leq\varepsilon^{\frac{1+2s-\theta}{2s\theta}}\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}+C_{\varepsilon}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{1-\theta}{1+2s-\theta}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2s}{1+2s-\theta}}.\]
Finally, absorbing the first term on the right hand side to the left hand side and using \(\theta=\frac{l}{l+2}\) we have
\[\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}\lesssim\big{\|}|D_{v}|^{(l+2)s}f\big{\|}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{1}{1+(l+2)s}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{(l+2)s}{1+(l+2)s}}. \tag{3.10}\]
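As a quick sanity check on the exponents (our own verification, not part of the original argument): with \(\theta=\frac{l}{l+2}\),
\[\frac{\theta\,(l+2)s}{1+2s}=\frac{ls}{1+2s},\qquad(1-\theta)(l+2)s=2s,\qquad\frac{1-\theta}{1+2s-\theta}=\frac{1}{1+(l+2)s},\qquad\frac{2s}{1+2s-\theta}=\frac{(l+2)s}{1+(l+2)s},\]
so the interpolated quantity above is exactly \(\big{\|}|D_{v}|^{2s}|D_{x}|^{\frac{ls}{1+2s}}f\big{\|}\), and the two exponents on the last term of (3.10) sum to one, consistent with rescaling \(f\) and \(h\) by the same constant.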
Third, we test (3.9) with \(|D_{x}|^{\frac{(l+1)2s}{1+2s}}f\). Then
\[\big{\|}|D_{v}|^{s}D_{x}^{\frac{(l+1)s}{1+2s}}f\big{\|}\leq\big{\|}|D_{x}|^{ \frac{ls}{1+2s}}h\big{\|}^{\frac{1}{2}}\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f \big{\|}^{\frac{1}{2}}. \tag{3.11}\]
Finally, we test (3.9) with
\[\Big{(}\delta+|D_{v}|^{2(l+1)}+|D_{x}|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|D_{v} |^{2j}|D_{x}|^{\frac{2l+2-2j}{1+2s}}\Big{)}^{s}\bar{f}+|D_{t}|^{l}\partial_{t} \bar{f}\]
for some small \(\delta\in(0,1)\). We get
\[\int\Big{\{}\Big{(}\delta+|D_{v}|^{2(l+1)}+|D_{x}|^{\frac{2(l+1)}{ 1+2s}}+\sum_{j=1}^{l}|D_{v}|^{2j}|D_{x}|^{\frac{2l+2-2j}{1+2s}}\Big{)}^{s}+|D_{t }|^{l}\partial_{t}\Big{\}}\bar{f}\cdot\Big{(}|D_{v}|^{2s}f+\partial_{t}f\Big{)} \,\mathrm{d}z\] \[=-\int\Big{\{}\Big{(}\delta+|D_{v}|^{2(l+1)}+|D_{x}|^{\frac{2(l+1 )}{1+2s}}+\sum_{j=1}^{l}|D_{v}|^{2j}|D_{x}|^{\frac{2l+2-2j}{1+2s}}\Big{)}^{s}+| D_{t}|^{l}\partial_{t}\Big{\}}\bar{f}v\cdot\nabla_{x}f\,\mathrm{d}z\] \[\qquad+\int\Big{\{}\Big{(}\delta+|D_{v}|^{2(l+1)}+|D_{x}|^{\frac {2(l+1)}{1+2s}}+\sum_{j=1}^{l}|D_{v}|^{2j}|D_{x}|^{\frac{2l+2-2j}{1+2s}}\Big{)} ^{s}+|D_{t}|^{l}\partial_{t}\Big{\}}\bar{f}h\,\mathrm{d}z\] \[=:I_{1}+I_{2}. \tag{3.12}\]
For the left hand side we find
\[\int\Big{\{}\Big{(}\delta+|D_{v}|^{2(l+1)}+|D_{x}|^{\frac{2(l+1)}{ 1+2s}}+\sum_{j=1}^{l}|D_{v}|^{2j}|D_{x}|^{\frac{2l+2-2j}{1+2s}}\Big{)}^{s}+|D_{ t}|^{l}\partial_{t}\Big{\}}\bar{f}\cdot\Big{(}|D_{v}|^{2s}f+\partial_{t}f \Big{)}\,\mathrm{d}z\] \[\quad\gtrsim\Big{\|}|D_{v}|^{(l+2)s}f\|_{L^{2}}^{2}+\Big{\|}|D_{t }|^{\frac{l}{2}}\partial_{t}f\Big{\|}_{L^{2}}^{2}+\sum_{j=1}^{l}\Big{\|}|D_{v} |^{(j+1)s}|D_{x}|^{\frac{(l+1-j)s}{1+2s}}f\Big{\|}_{L^{2}}^{2}+\Big{\|}|D_{v} |^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}f\Big{\|}_{L^{2}}^{2}.\]
On the other hand we get with (3.10)
\[I_{2} \lesssim\|f\|_{L^{2}}\|h\|_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}+\big{\|}|D_{x}|^{\frac{(l+2)s}{1 +2s}}f\big{\|}_{L^{2}}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}\] \[\lesssim\|f\|_{L^{2}}\|h\|_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}\] \[\quad+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}_{L^{2}}^{\frac{1}{1+2s+l }}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}^{1+\frac{(l+2)s}{1+2s} }+\big{\|}|D_{t}|^{\frac{l_{x}}{2s}}\partial_{t}f\big{\|}_{L^{2}}\big{\|}|D_{ t}|^{\frac{l_{x}}{2s}}h\big{\|}_{L^{2}}\] \[\quad+\sum_{j=1}^{l}\Big{\|}|D_{v}|^{(j+1)s}|D_{x}|^{\frac{(l+1-j) s}{1+2s}}f\big{\|}_{L^{2}}\big{\|}|D_{v}|^{(j-1)s}|D_{x}|^{\frac{(l+1-j)s}{1+2s}}h \big{\|}_{L^{2}}\] \[\lesssim\|f\|_{L^{2}}\|h\|_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}\] \[\quad+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}_{L^{2}}^{\frac{1}{1+2s+l }}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}^{1+\frac{(l+2)s}{1+2s+ l}}+\big{\|}|D_{t}|^{\frac{l_{x}}{2s}}\partial_{t}f\big{\|}_{L^{2}}\big{\|}|D_{t}|^{ \frac{l_{x}}{2s}}h\big{\|}_{L^{2}}^{2}\] \[\quad+\sum_{j=1}^{l}\big{\|}|D_{v}|^{(j+1)s}|D_{x}|^{\frac{(l+1-j) s}{1+2s}}f\big{\|}_{L^{2}}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}^{\frac{j-1}{1 }}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}^{\frac{l+1-j}{1+2s}}\] \[\lesssim\|f\|_{L^{2}}\|h\|_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} _{L^{2}}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}_{L^ {2}}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}\] \[\quad+\big{\|}|D_{v}|^{(l+2)s}f\big{\|}_{L^{2}}^{\frac{1}{1+2s+l }}\big{\|}|D_{x}|^{\frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}^{1+\frac{(l+2)s}{1+2s+ l}}+\big{\|}|D_{t}|^{\frac{l_{x}}{2s}}\partial_{t}f\big{\|}_{L^{2}}\big{\|}|D_{t}|^{ \frac{l_{x}}{2s}}h\big{\|}_{L^{2}}^{2}\] \[\quad+\Big{(}\big{\|}|D_{v}|^{ls}h\big{\|}_{L^{2}}+\big{\|}|D_{x}|^{ \frac{l_{x}}{1+2s}}h\big{\|}_{L^{2}}\Big{)}\sum_{j=1}^{l}\big{\|}|D_{v}|^{(j+1)s}|D_{x}|^{ \frac{(l+1-j)s}{1+2s}}f\big{\|}_{L^{2}}, \tag{3.13}\]
where in the second last inequality we again used Holder in Fourier and Young's inequality. Note that the last sum can be absorbed on the left hand side of (3.12) eventually.
For \(I_{1}\) we Fourier-transform \((t,x,v)\to(\eta,k,\xi)\) so that we get
\[I_{1} =-\mathrm{Re}\Big{\langle}\Big{\{}\Big{(}\delta+|D_{v}|^{2(l+1)}+ |D_{x}|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|D_{v}|^{2j}|D_{x}|^{\frac{2l+2-2j }{1+2s}}\Big{)}^{s}+|D_{t}|^{l}\partial_{t}\Big{\}}\bar{f},v\cdot\nabla_{x}f \Big{\rangle}\] \[=-\mathrm{Re}\Big{\langle}\Big{\{}\Big{(}\hat{\delta}+|\xi|^{2(l+ 1)}+|k|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|\xi|^{2j}|k|^{\frac{2l+2-2j}{1+2s }}\Big{)}^{s}+|\eta|^{l+1}\Big{\}}\bar{\bar{f}},k_{i}\partial_{\xi_{i}}\hat{f} \Big{\rangle}\] \[=2s\ \mathrm{Re}\Bigg{\langle}\Big{(}\hat{\delta}+|\xi|^{2(l+1)}+ |k|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|\xi|^{2j}|k|^{\frac{2l+2-2j}{1+2s}} \Big{)}^{s-1}\] \[\quad\times\Big{(}(l+1)|\xi|^{2l}+\sum_{j=1}^{l}j|k|^{\frac{2l+2- 2j}{1+2s}}|\xi|^{2j-2}\Big{)}\xi_{i}\bar{\bar{f}},k_{i}\hat{f}\Bigg{\rangle}\] \[\quad+\mathrm{Re}\Big{\langle}\Big{\{}\Big{(}\hat{\delta}+|\xi|^ {2(l+1)}+|k|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|\xi|^{2j}|k|^{\frac{2l+2-2j} {1+2s}}\Big{)}^{s}+|\eta|^{l+1}\Big{\}}\partial_{\xi_{i}}\bar{\bar{f}},k_{i} \hat{f}\Big{\rangle}.\]
Thus
\[I_{1} =s\ \mathrm{Re}\Bigg{\langle}\Big{(}\hat{\delta}+|\xi|^{2(l+1)}+ |k|^{\frac{2(l+1)}{1+2s}}+\sum_{j=1}^{l}|\xi|^{2j}|k|^{\frac{2l+2-2j}{1+2s}} \Big{)}^{s-1}\Big{(}(l+1)|\xi|^{2l}+\sum_{j=1}^{l}j|k|^{\frac{2l+2-2j}{1+2s}} |\xi|^{2j-2}\Big{)}\xi_{i}\bar{\bar{f}},k_{i}\hat{f}\Bigg{\rangle}\] \[\lesssim\int\bar{\bar{f}}\hat{f}\cdot\Big{(}|k|^{\frac{2(l+1)}{1 +2s}}+\sum_{j=1}^{l}|\xi|^{2j}|k|^{\frac{2l+2-2j}{1+2s}}\Big{)}^{s-1}\Big{(}| \xi|^{2l}+\sum_{j=1}^{l}|k|^{\frac{2l+2-2j}{1+2s}}|\xi|^{2j-2}\Big{)}|\xi||k| \,\mathrm{d}z.\]
We claim that we can bound
\[I_{1}\lesssim\int\bar{\hat{f}}\hat{f}\cdot\sum_{j=1}^{l}|\xi|^{2(j-1)s+s}|k|^{\frac{2ls+3s-2(j-1)s}{1+2s}}\,\mathrm{d}z+\int\bar{\hat{f}}\hat{f}\cdot|\xi|^{2ls+s}|k|^{\frac{3s}{1+2s}}\,\mathrm{d}z. \tag{3.14}\]
Indeed, if \(|\xi|\sim|k|\) then one can check that the homogeneity is kept. Else assume first that \(|\xi|\ll|k|\). Then we have
\[I_{1}\lesssim\int\bar{\hat{f}}\hat{f}\cdot|\xi|\sum_{j=1}^{l}|k|^{\frac{2l+2-2j}{1+2s}}|\xi|^{2j-2}|k|^{\frac{2(l+1)(s-1)}{1+2s}+1}\,\mathrm{d}z=\int\bar{\hat{f}}\hat{f}\cdot\sum_{j=1}^{l}|k|^{\frac{2ls-2j+4s+1}{1+2s}}|\xi|^{2j-1}\,\mathrm{d}z.\]
Comparing the exponents of \(|\xi|\) and \(|k|\) gives \(2l\) conditions that need to be satisfied,
\[2j-1\geq 2(j-1)s+s,\qquad 2ls+4s-2j+1\leq 2ls-2(j-1)s+3s,\quad\forall j\in\{1, \ldots,l\},\]
which holds since \(s\leq 1\). Now assume on the other hand that \(|k|\ll|\xi|\). Then we have
\[I_{1}\lesssim\int\bar{\hat{f}}\hat{f}\cdot|\xi|^{2l+1+2(l+1)(s-1)}|k|\,\mathrm{d}z=\int\bar{\hat{f}}\hat{f}\cdot|\xi|^{2ls-1+2s}|k|\,\mathrm{d}z.\]
Thus we need
\[2ls+2s-1\leq 2ls+s,\qquad 1\geq\frac{3s}{1+2s},\]
both of which clearly hold for \(s\leq 1\).
Thus we estimate
\[\begin{split} I_{1}&\lesssim\int\bar{\hat{f}}\hat{f}\cdot\sum_{j=1}^{l}|\xi|^{2(j-1)s+s}|k|^{\frac{2ls+3s-2(j-1)s}{1+2s}}\,\mathrm{d}z+\int\bar{\hat{f}}\hat{f}\cdot|\xi|^{2ls+s}|k|^{\frac{3s}{1+2s}}\,\mathrm{d}z\\ &\lesssim\big{\|}|D_{v}|^{ls}|D_{x}|^{\frac{2s}{1+2s}}f\big{\|}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}\\ &\quad+\sum_{j=1}^{l}\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f\big{\|}\big{\|}|D_{v}|^{(j-1)s}|D_{x}|^{\frac{ls+3s-js}{1+2s}}f\big{\|}.\end{split} \tag{3.15}\]
Now we use Holder in Fourier variables again. Eventually we want to use (3.11) and (3.10) in order to get a right hand side in terms of our source term. Thus, for each \(j\in\{1,\dots,l\}\) we will look for parameters \(\theta_{j}\in(0,1)\) such that we can express the right hand side of (3.15) in terms of
\[\big\|\big(|D_{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}\big)^{1-\theta_{j}}\big(|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}\big)^{\theta_{j}}f\big\|\leq\big\||D_{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}f\big\|^{1-\theta_{j}}\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|^{\theta_{j}}.\]
Indeed, then we get with (3.11) and (3.10)
\[\begin{split}\big{\|}\big{(}|D_{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2 s}}\big{)}^{1-\theta_{j}}&\big{(}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}} \big{)}^{\theta_{j}}f\big{\|}\\ &\leq\big{\|}|D_{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}f\big{\|}^{1 -\theta_{j}}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}^{\theta_ {j}}\\ &\lesssim\big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\frac{1 -\theta_{j}}{2}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{1-\theta_{j} }{2}}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}^{\theta_{j}}\\ &\lesssim\Big{(}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}+\big{\|}|D_{v}| ^{(l+2)s}f\big{\|}^{\frac{1}{1+(l+2)s}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|} ^{\frac{(l+2)s}{1+(l+2)s}}\Big{)}^{\frac{1-\theta_{j}}{2}}\\ &\quad\quad\times\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{ \frac{1-\theta_{j}}{2}}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|} ^{\theta_{j}}.\end{split} \tag{3.16}\]
We start by using (3.16) for the first term on the right hand side of (3.15). We can write
\[\big{\|}|D_{v}|^{ls}|D_{x}|^{\frac{2s}{1+2s}}f\big{\|}=\big{\|}\big{(}|D_{v}|^ {s}|D_{x}|^{\frac{(l+1)s}{1+2s}}\big{)}^{1-\theta_{l+1}}\big{(}|D_{v}|^{(l+1)s} |D_{x}|^{\frac{s}{1+2s}}\big{)}^{\theta_{l+1}}f\big{\|}\]
for \(\theta_{l+1}=\frac{l-1}{l}\). Thus with Holder's inequality
\[\begin{split}\big\||D_{v}|^{ls}&|D_{x}|^{\frac{2s}{1+2s}}f\big\|\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|\\
&\lesssim\Big(\big\||D_{v}|^{(l+2)s}f\big\|+\big\||D_{v}|^{(l+2)s}f\big\|^{\frac{1}{1+(l+2)s}}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{(l+2)s}{1+(l+2)s}}\Big)^{\frac{1-\theta_{l+1}}{2}}\\
&\qquad\times\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{1-\theta_{l+1}}{2}}\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|^{\theta_{l+1}+1}\\
&\lesssim\big\||D_{v}|^{(l+2)s}f\big\|^{\frac{4}{3}}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{2}{3}}+\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|^{\frac{8(\theta_{l+1}+1)}{5+3\theta_{l+1}}}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{2(1-\theta_{l+1})}{5+3\theta_{l+1}}}\\
&\qquad+\big\||D_{v}|^{(l+2)s}f\big\|^{\frac{2}{1+(l+2)s}}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{2(l+2)s}{1+(l+2)s}}\\
&\qquad+\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|^{\frac{4(\theta_{l+1}+1)}{3+\theta_{l+1}}}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{\frac{2(1-\theta_{l+1})}{3+\theta_{l+1}}}\\
&\lesssim\varepsilon\big\||D_{v}|^{(l+2)s}f\big\|^{2}+\varepsilon\big\||D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big\|^{2}+C_{\varepsilon}\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{2},\end{split}\]
for any \(\varepsilon\in(0,1)\) and a corresponding constant \(C_{\varepsilon}>0\).
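Here and in the following estimates, the passage to the \(\varepsilon\)-terms uses the weighted form of Young's inequality, which we recall for convenience (a standard fact, stated in the form in which it is used):

\[ab\leq\varepsilon a^{p}+C_{\varepsilon}b^{q},\qquad a,b\geq 0,\quad\frac{1}{p}+\frac{1}{q}=1,\quad\varepsilon>0,\]

with a constant \(C_{\varepsilon}=C(\varepsilon,p,q)\). The exponents are chosen so that the \(a\)-factor is one of the quantities appearing on the left hand side of (3.12), which allows these terms to be absorbed for \(\varepsilon\) small enough.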
Similarly we can proceed with the remaining terms on the right hand side of (3.15). Note that for each \(j\in\{1,\dots,l\}\) we can absorb the term \(\big{\|}|D_{v}|^{(j+1)s}|D_{x}|^{\frac{ls+s-js}{1+2s}}f\big{\|}\) on the left hand side. Thus we write for each \(j\in\{2,\dots,l\}\)
\[\big{\|}|D_{v}|^{(j-1)s}|D_{x}|^{\frac{ls+3s-js}{1+2s}}f\big{\|}=\big{\|}\big{(}|D _{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}\big{)}^{1-\theta_{j}}\big{(}|D_{v}|^{(l+1) s}|D_{x}|^{\frac{s}{1+2s}}\big{)}^{\theta_{j}}f\big{\|},\]
where \(\theta_{j}=\frac{j-2}{l}\). Then by (3.16) and Young's inequality
\[\sum_{j=2}^{l} \big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f\big{\|}\big{\|} |D_{v}|^{(j-1)s}|D_{x}|^{\frac{ls+3s-js}{1+2s}}f\big{\|}\] \[\lesssim\sum_{j=2}^{l}\Big{(}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}+ \big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{1}{1+(l+2)s}}\big{\|}|D_{x}|^{\frac{ls} {1+2s}}h\big{\|}^{\frac{(l+2)s}{1+(l+2)s}}\] \[\qquad\times\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{1- \theta_{j}}{2}}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}^{ \theta_{j}}\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f\big{\|}\] \[\lesssim\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{4}{3}}\big{\|}|D _{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2}{3}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} ^{\frac{2}{1+(l+2)s}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2(l+2)s} {1+(l+2)s}}\] \[\quad+\sum_{j=2}^{l}\Bigg{[}\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac {s}{1+2s}}f\big{\|}^{\frac{8\theta_{j}}{5+3\theta_{j}}}\big{\|}|D_{x}|^{\frac{ ls}{1+2s}}h\big{\|}^{\frac{2(1-\theta_{j})}{5+3\theta_{j}}}\big{\|}|D_{v}|^{js}|D_{x} |^{\frac{ls+2s-js}{1+2s}}f\Big{\|}^{\frac{8}{5+3\theta_{j}}}\] \[\quad+\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|} ^{\frac{4\theta_{j}}{3+\theta_{j}}}\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js} {1+2s}}f\big{\|}^{\frac{4}{3+\theta_{j}}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h \big{\|}^{\frac{2(1-\theta_{j})}{3+\theta_{j}}}\Bigg{]}\] \[\lesssim\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{4}{3}}\big{\|}|D _{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2}{3}}+\big{\|}|D_{v}|^{(l+2)s}f\big{\|} ^{\frac{2}{1+(l+2)s}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2(l+2)s} {1+(l+2)s}}\] \[\quad+\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}h\big{\|}^{ \frac{4\theta_{j}}{3+\theta_{j}}}\big{\|}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f\big{\|} ^{\frac{4}{3-\theta_{j}}}+\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f \big{\|}^{\frac{8\theta_{j}}{1+3\theta_{j}}}\big{\|}|D_{x}|^{\frac{ls}{1+3 \theta_{j}}}h\big{\|}^{\frac{2(1-\theta_{j})}{1+3\theta_{j}}}\] \[\quad+\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f\big{\|} ^{\frac{8}{5-\theta_{j}}}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{2(1- \theta_{j})}{5-\theta_{j}}}\Bigg{]}\] \[\lesssim\varepsilon\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{2}+ \varepsilon\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}^{2}+ \varepsilon\sum_{j=2}^{l}\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f \big{\|}^{2}+C_{\varepsilon}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{2},\]
for any \(\varepsilon\in(0,1)\) and a corresponding constant \(C_{\varepsilon}>0\).
Finally, the only remaining term is when \(j=1\) in (3.15), which we estimate using (3.11) and (3.10)
\[\big{\|}|D_{v}|^{s}|D_{x}|^{\frac{(l+1)s}{1+2s}}f\big{\|}\big{\|} |D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|} \lesssim\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{1}{2}} \big{\|}|D_{x}|^{\frac{(l+2)s}{1+2s}}f\big{\|}^{\frac{3}{2}}\] \[\lesssim\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{1}{2}} \big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{3}{2}}\] \[\quad+\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{\frac{1}{2}+ \frac{3(l+2)s}{2(1+(l+2)s)}}\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{\frac{3}{2(1+(l+2) s)}}\] \[\lesssim\varepsilon\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{2}+C_{ \varepsilon}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{2}.\]
Therefore, we have shown
\[I_{1}\lesssim\varepsilon\big{\|}|D_{v}|^{(l+2)s}f\big{\|}^{2}+ \varepsilon\big{\|}|D_{v}|^{(l+1)s}|D_{x}|^{\frac{s}{1+2s}}f\big{\|}^{2}+ \varepsilon\sum_{j=2}^{l}\big{\|}|D_{v}|^{js}|D_{x}|^{\frac{ls+2s-js}{1+2s}}f \big{\|}^{2}+C_{\varepsilon}\big{\|}|D_{x}|^{\frac{ls}{1+2s}}h\big{\|}^{2}. \tag{3.17}\]
We combine (3.12), (3.13) and (3.17) to get
\[\big\||D_{v}|^{(l+2)s}f\big\|^{2}_{L^{2}}+\big\||D_{t}|^{\frac{ls}{2s}}\partial_{t}f\big\|^{2}_{L^{2}}+\sum_{j=1}^{l}\big\||D_{v}|^{(j+1)s}|D_{x}|^{\frac{ls+s-js}{1+2s}}f\big\|^{2}_{L^{2}}+\big\||D_{v}|^{s}|D_{x}|^{\frac{ls+s}{1+2s}}f\big\|^{2}_{L^{2}}\lesssim\|f\|^{2}_{L^{2}}+\|h\|^{2}_{L^{2}}+\big\||D_{v}|^{ls}h\big\|^{2}_{L^{2}}+\big\||D_{x}|^{\frac{ls}{1+2s}}h\big\|^{2}_{L^{2}}+\big\||D_{t}|^{\frac{ls}{2s}}h\big\|^{2}_{L^{2}}.\]
Thus, by (3.10) we have
\[\left\||D_{x}|^{\frac{(l+2)s}{1+2s}}f\right\|_{L^{2}}^{2}\lesssim\left\|f\right\|_ {L^{2}}^{2}+\left\|h\right\|_{L^{2}}^{2}+\left\||D_{v}|^{ls}h\right\|_{L^{2}}^{ 2}+\left\||D_{x}|^{\frac{ls}{1+2s}}h\right\|_{L^{2}}^{2}+\left\||D_{t}|^{\frac{ ls}{2s}}h\right\|_{L^{2}}^{2}.\]
We conclude
\[\left\||D_{v}|^{(l+2)s}f\right\|_{L^{2}}^{2}+\left\||D_{t}|^{\frac {ls}{2s}}\partial_{t}f\right\|_{L^{2}}^{2}+\left\||D_{x}|^{\frac{(l+2)s}{1+2s} }f\right\|_{L^{2}}^{2}\] \[\lesssim\left\|f\right\|_{L^{2}}^{2}+\left\|h\right\|_{L^{2}}^{2} +\left\||D_{v}|^{ls}h\right\|_{L^{2}}^{2}+\left\||D_{x}|^{\frac{ls}{1+2s}}h \right\|_{L^{2}}^{2}+\left\||D_{t}|^{\frac{ls}{2s}}h\right\|_{L^{2}}^{2}.\]
_Step 2: Local estimates._ Let \(0<r<R\) and let \(\delta=R-r>0\) be from the statement of the theorem. With no loss in generality set \(z_{0}=(0,0,0)\) and assume \(f\) solves (3.4) in \(Q_{R}(z_{0})\). We introduce a smooth function \(\theta=\theta(t,x,v)\in C_{c}^{\infty}(\mathbb{R}^{1+2d})\) such that \(\theta=1\) in \(Q_{r}\) and \(\theta=0\) outside \(Q_{R}\), and so that \(|D_{v}|\theta\lesssim\delta^{-1},|D_{x}|^{\frac{1}{1+2s}}\theta\lesssim\delta^ {-1},|D_{t}|^{\frac{1}{2s}}\theta\lesssim\delta^{-1}\). Then we let
\[g=f\theta, \tag{3.18}\]
so that \(g\) satisfies
\[\mathcal{T}g+|D_{v}|^{2s}g=f\big{(}\mathcal{T}\theta\big{)}+|D_{v}|^{2s}g- \big{(}|D_{v}|^{2s}f\big{)}\theta=f\big{(}\mathcal{T}\theta\big{)}+h\theta-h_ {1}-h_{2}-|D_{v}|^{s}h_{3}=:H \tag{3.19}\]
in \(\mathbb{R}^{1+2d}\), where \(h_{1},h_{2},h_{3}\) are as in the commutator estimates of [18, Lemma 4.10, 4.11].
_Step 3-(i): The base case._ We start with \(l=0\). The global case for \(l=0\) gives
\[\left\||D_{v}|^{2s}g\right\|_{L^{2}}+\left\|\partial_{t}g\right\|_{L^{2}}+ \left\||D_{x}|^{\frac{2s}{1+2s}}g\right\|_{L^{2}}+\left\||D_{v}|^{s}|D_{x}|^{ \frac{ls+s}{1+2s}}g\right\|_{L^{2}}^{2}\lesssim\left\|H\right\|_{L^{2}}.\]
Therefore it remains to bound the right hand side. We have by the standard energy estimate (see [10, Proposition 9] for \(s=1\) and [18, Lemma 6.2] or [25, Proposition 3.3] for the fractional case \(s\in(0,1)\))
\[\left\||D_{v}|^{s}f\right\|_{L^{2}(Q_{r})}\lesssim\|h\|_{L^{2}(Q_{R})}+\delta ^{-1}\|f\|_{L^{2}_{t,x}L_{v}^{\infty}(Q_{R}^{v}\times\mathbb{R}^{d})}.\]
Moreover, we see
\[\|f\mathcal{T}\theta\|_{L^{2}}\lesssim\delta^{-2}\|f\|_{L^{2}(Q_{R})}.\]
The remaining part is a commutator of the form
\[|D_{v}|^{2s}g-\big{(}|D_{v}|^{2s}f\big{)}\theta=\int_{\mathbb{R}^{d}}f(w) \frac{\big{(}\theta(v)-\theta(w)\big{)}}{|v-w|^{d+2s}}\,\mathrm{d}w.\]
Just as in [18, Lemma 4.10, 4.11] we write
\[h_{1}=\int_{\mathbb{R}^{d}\setminus B_{r}(v)}f(w)\frac{\big{(}\theta(v)- \theta(w)\big{)}}{|v-w|^{d+2s}}\,\mathrm{d}w,\]
and
\[\tilde{h}_{2}=\int_{B_{r}(v)}f(w)\frac{\big{(}\theta(v)-\theta(w)\big{)}}{|v-w |^{d+2s}}\,\mathrm{d}w.\]
Then we get for any \(v\in\mathbb{R}^{d}\)
\[\|h_{1}\|_{L^{2}}\lesssim\delta^{-2}\|f\|_{L^{2}_{t,x}L_{v}^{\infty}(Q_{R}^{v} \times\mathbb{R}^{d})}.\]
Moreover, as the proof of [18, Lemma 4.11] shows, we can write \(\tilde{h}_{2}=h_{2}+|D_{v}|^{s}h_{3}\) for some \(h_{2},h_{3}\) that satisfy
\[\|h_{2}\|_{L^{2}}\lesssim\delta^{-2}\|f\|_{L^{2}_{t,x}H_{v}^{s}(Q_{R})},\]
and
\[\|h_{3}\|_{L^{2}}\lesssim\delta^{-2}\|f\|_{L^{2}_{t,x,v}(Q_{R})}.\]
Thus
\[\|H\|_{L^{2}}\lesssim\delta^{-2}\Big{(}\|f\|_{L^{2}_{t,x}H_{v}^{s}(Q_{R})}+\|f \|_{L^{2}_{t,x}L_{v}^{\infty}(Q_{R}^{v}\times\mathbb{R}^{d})}\Big{)}+\|h\|_{L^{2} (Q_{R})}\lesssim\delta^{-3}\|f\|_{L^{2}_{t,x}L_{v}^{\infty}(Q_{R}^{v}\times \mathbb{R}^{d})}+\|h\|_{L^{2}(Q_{R})}.\]
Finally, since \(g=f\) in \(Q_{r}\) we conclude
\[\big\|\partial_{t}f\big\|_{L^{2}(Q_{r})}+\big\||D_{v}|^{2s}f\big\|_{L^{2}(Q_{r})}+\big\||D_{x}|^{\frac{2s}{1+2s}}f\big\|_{L^{2}(Q_{r})}+\big\||D_{v}|^{s}|D_{x}|^{\frac{ls+s}{1+2s}}f\big\|_{L^{2}(Q_{r})}\lesssim\delta^{-3}\|f\|_{L^{2}_{t,x}L^{\infty}_{v}(Q^{v}_{R}\times\mathbb{R}^{d})}+\|h\|_{L^{2}(Q_{R})}.\]
_Step 3-(ii): The general case._ Now let \(l\in\mathbb{N}_{0}\). We proceed by induction. Let \(l\geq 1\) and assume the conclusion holds for \(l-1\), that is we have
\[\big\||D_{t}|^{\frac{(l+1)s}{2s}}f\big\|_{L^{2}(Q_{r})}+\big\||D_{v}|^{(l+1)s}f\big\|_{L^{2}(Q_{r})}+\big\||D_{x}|^{\frac{(l+1)s}{1+2s}}f\big\|_{L^{2}(Q_{r})}+\sum_{j=0}^{l-1}\big\||D_{v}|^{(j+1)s}|D_{x}|^{\frac{ls-js}{1+2s}}f\big\|_{L^{2}(Q_{r})}\lesssim\delta^{-2(l+1)}\|f\|_{C^{\gamma}_{\ell}(Q^{v}_{R}\times\mathbb{R}^{d})}+\delta^{-(l+1)s}\big\||D|^{(l-1)s}h\big\|_{L^{2}(Q_{R})},\]
where \(\delta=R-r\).
The same procedure as for \(f\) on \(\mathbb{R}^{1+2d}\) in Step 1 will yield
\[\big\||D_{v}|^{(l+2)s}g\big\|_{L^{2}}+\big\||D_{t}|^{\frac{l}{2}}\partial_{t}g\big\|_{L^{2}}+\big\||D_{x}|^{\frac{(l+2)s}{1+2s}}g\big\|_{L^{2}}+\sum_{j=0}^{l}\big\||D_{v}|^{(j+1)s}|D_{x}|^{\frac{ls+s-js}{1+2s}}g\big\|_{L^{2}}\lesssim\big\||D_{v}|^{ls}H\big\|_{L^{2}}+\big\||D_{x}|^{\frac{ls}{1+2s}}H\big\|_{L^{2}}+\big\||D_{t}|^{\frac{l}{2}}H\big\|_{L^{2}}.\]
To estimate the right hand side of (3.18), we first notice by induction hypothesis
\[\big{\|}|D_{v}|^{ls}(f\mathcal{T}\theta)\big{\|}_{L^{2}}+\big{\|}| D_{x}|^{\frac{ls}{1+2s}}(f\mathcal{T}\theta)\big{\|}_{L^{2}}+\big{\|}|D_{t}|^{ \frac{l}{2}}(f\mathcal{T}\theta)\big{\|}_{L^{2}}\] \[\lesssim\delta^{-2s}\Big{(}\big{\|}|D_{v}|^{ls}f\big{\|}_{L^{2}(Q _{2r})}+\big{\|}|D_{x}|^{\frac{ls}{1+2s}}f\big{\|}_{L^{2}(Q_{2r})}+\big{\|}|D_ {t}|^{\frac{l}{2}}f\big{\|}_{L^{2}(Q_{2r})}\Big{)}\] \[\lesssim\delta^{-(2l+2)}\big{\|}f\big{\|}_{C^{\gamma}_{\ell}(Q_{R }\times\mathbb{R}^{d})}\] \[\qquad+\delta^{-(l+2)s}\Big{(}\big{\|}|D_{v}|^{(l-2)s}h\big{\|}_{L^ {2}(Q_{R})}+\big{\|}|D_{x}|^{\frac{(l-2)s}{1+2s}}h\big{\|}_{L^{2}(Q_{R})}+ \big{\|}|D_{t}|^{\frac{l-2}{2}}h\big{\|}_{L^{2}(Q_{R})}\Big{)}.\]
Second, we have
\[|D_{v}|^{ls}\int_{\mathbb{R}^{d}}f(w)\frac{\theta(v)-\theta(w)}{|v-w|^{d+2s}} \,\mathrm{d}w\leq C\|f\|_{L^{\infty}(\mathbb{R}^{d})}|D_{v}|^{ls}\int_{\mathbb{ R}^{d}}\frac{\theta(v)-\theta(w)}{|v-w|^{d+2s}}\,\mathrm{d}w.\]
Thus we can estimate
\[\big{\|}|D_{v}|^{ls}(h_{1}+\tilde{h}_{2})\big{\|}\lesssim\delta^{-(l+2)s}\|f\| _{L^{\infty}(Q^{v}_{R}\times\mathbb{R}^{d})}.\]
Moreover, we get
\[\big{|}|D_{x}|^{\frac{ls}{1+2s}}h_{1}\big{|} =\left|\left|D_{x}\right|^{\frac{l}{1+2s}}\int_{\mathbb{R}^{d} \setminus B_{r}(v)}f(w)\frac{\theta(t,x,v)-\theta(t,x,w)}{|v-w|^{d+2s}}\, \mathrm{d}w\right|\] \[\lesssim[f]_{C^{\gamma}_{\ell}(Q^{v}_{R}\times\mathbb{R}^{d})} \int_{\mathbb{R}^{d}\setminus B_{r}(v)}\int_{\mathbb{R}^{d}}\frac{\theta(t,x,v)- \theta(t,x,w)-\theta(t,y,v)+\theta(t,y,w)}{|v-w|^{d+2s}|x-y|^{d+\frac{l-s- \gamma}{1+2s}}}\,\mathrm{d}y\,\mathrm{d}w\] \[\lesssim\delta^{-(l+2)s}\|f\|_{C^{\gamma}_{\ell}(Q^{v}_{R}\times \mathbb{R}^{d})}.\]
Similarly,
\[\big{|}|D_{t}|^{\frac{ls}{2s}}h_{1}\big{|} =\left|\left|D_{t}\right|^{\frac{ls}{2s}}\int_{\mathbb{R}^{d} \setminus B_{r}(v)}f(w)\frac{\theta(t,x,v)-\theta(t,x,w)}{|v-w|^{d+2s}}\, \mathrm{d}w\right|\] \[\lesssim[f]_{C^{\gamma}_{\ell}(Q^{v}_{R}\times\mathbb{R}^{d})} \int_{\mathbb{R}^{d}\setminus B_{r}(v)}\int_{\mathbb{R}^{d}}\frac{\theta(t,x,v)- \theta(t,x,w)-\theta(s,x,v)+\theta(s,x,w)}{|v-w|^{d+2s}|t-s|^{d+\frac{ls- \gamma}{2s}}}\,\mathrm{d}s\,\mathrm{d}w\] \[\lesssim\delta^{-(l+2)s}\|f\|_{C^{\gamma}_{\ell}(Q^{v}_{R}\times \mathbb{R}^{d})}.\]
Finally using the induction hypothesis
\[\big\||D_{x}|^{\frac{ls}{1+2s}}h_{2}\big\|+\big\||D_{t}|^{\frac{ls}{2s}}h_{2}\big\|\lesssim\delta^{-2s}\Big(\big\||D_{x}|^{\frac{ls}{1+2s}}f\big\|_{L^{2}_{t,x}H^{s}_{v}(Q_{R})}+\big\||D_{t}|^{\frac{ls}{2s}}f\big\|_{L^{2}_{t,x}H^{s}_{v}(Q_{R})}\Big)\lesssim\delta^{-2(l+2)}\|f\|_{C^{\gamma}_{\ell}(Q_{R}^{v}\times\mathbb{R}^{d})}+\delta^{-(l+2)s}\big\||D|^{ls}h\big\|_{L^{2}(Q_{R})},\]
as well as
\[\big\||D_{x}|^{\frac{ls}{1+2s}}h_{3}\big\|+\big\||D_{t}|^{\frac{ls}{2s}}h_{3}\big\|\lesssim\delta^{-2s}\Big(\big\||D_{x}|^{\frac{ls}{1+2s}}f\big\|_{L^{2}(Q_{R})}+\big\||D_{t}|^{\frac{ls}{2s}}f\big\|_{L^{2}(Q_{R})}\Big)\lesssim\delta^{-2(l+2)}\|f\|_{C^{\gamma}_{\ell}(Q_{R}^{v}\times\mathbb{R}^{d})}+\delta^{-(l+2)s}\big\||D|^{ls}h\big\|_{L^{2}(Q_{R})}.\]
We combine all these estimates for the right hand side and use that \(f=g\) in \(Q_{r}\) so that
\[\big\||D_{v}|^{(l+2)s}f\big\|_{L^{2}(Q_{r})}+\big\||D_{t}|^{\frac{l}{2}}\partial_{t}f\big\|_{L^{2}(Q_{r})}+\big\||D_{x}|^{\frac{(l+2)s}{1+2s}}f\big\|_{L^{2}(Q_{r})}\lesssim\delta^{-2(l+2)}\|f\|_{C_{\ell}^{\gamma}(Q_{R}^{v}\times\mathbb{R}^{d})}+\delta^{-(l+2)s}\big\||D|^{ls}h\big\|_{L^{2}(Q_{R})}.\]
This concludes the proof.
### Kolmogorov equation: Fundamental solution
Lastly, for the lower order perturbation arising from the freezing of coefficients, we will make use of the fundamental solution of the (fractional) Kolmogorov equation
\[\mathcal{T}f=-(-\Delta_{v})^{s}f+h,\quad(t,x,v)\in\mathbb{R}^{1+2d} \tag{3.19}\]
for some source term \(h\in L^{\infty}\). In the non-fractional case set \(s=1\). This equation preserves the same Lie group structure as (1.1), (1.2) and (1.3) and it admits the following fundamental solution [21] in case that \(s=1\):
\[J(t,x,v)=\Big(\frac{\sqrt{3}}{2\pi t^{2}}\Big)^{d}\exp\Big(-\frac{3|x+\frac{tv}{2}|^{2}}{t^{3}}-\frac{|v|^{2}}{4t}\Big),\quad t>0,\]
and \(J=0\) for \(t\leq 0\). In case that \(s\in(0,1)\) the fundamental solution is given by
\[J(t,x,v)=ct^{-d(1+\frac{1}{s})}\mathcal{J}\Bigg{(}\frac{x}{t^{1+\frac{1}{2s}} },\frac{v}{t^{\frac{1}{2s}}}\Bigg{)},\]
where \(\mathcal{J}\) is given in Fourier variables as
\[\hat{\mathcal{J}}(\varphi,\xi)=\exp\Bigg(-\int_{0}^{1}|\xi-\tau\varphi|^{2s}\,\mathrm{d}\tau\Bigg).\]
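As a quick consistency check (added here) for the case \(s=1\): for every \(t>0\) the kernel \(J(t,\cdot,\cdot)\) above is a Gaussian probability density on \(\mathbb{R}^{2d}\). Indeed, integrating first in \(x\) and then in \(v\),

\[\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}J(t,x,v)\,\mathrm{d}x\,\mathrm{d}v=\Big(\frac{\sqrt{3}}{2\pi t^{2}}\Big)^{d}\Big(\frac{\pi t^{3}}{3}\Big)^{\frac{d}{2}}\big(4\pi t\big)^{\frac{d}{2}}=1,\]

which in particular confirms the quadratic dependence on \(x+\frac{tv}{2}\) in the exponential.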
Similarly to Proposition 2.1 of [16] we have
**Lemma 3.5**.: _Given \(h\in L^{\infty}(\mathbb{R}\times\mathbb{R}^{2d})\) with compact support in time, the function_
\[f(t,x,v)=\int_{\mathbb{R}\times\mathbb{R}^{2d}}h(\tilde{t},\tilde{x},\tilde{v })J(t-\tilde{t},x-\tilde{x}-(t-\tilde{t})v,v-\tilde{v})\,\mathrm{d}\tilde{t} \,\mathrm{d}\tilde{x}\,\mathrm{d}\tilde{v}=:J*_{\text{kin}}h(z)\]
_solves (3.19) in \(\mathbb{R}\times\mathbb{R}^{2d}\). Moreover, for all \(z_{0}\in\mathbb{R}\times\mathbb{R}^{2d}\) and \(r>0\) there holds_
\[\|J*_{\text{kin}}\mathbb{1}_{Q_{r}(z_{0})}\|_{L^{\infty}(Q_{r}(z_{0}))}\leq Cr ^{2s},\]
_for some constant \(C\) depending only on \(d\)._
Proof.: Given \(z=(t,x,v)\in Q_{r}(z_{0})\) we compute the scaling of the fundamental solution stemming from the parabolicity of the equation
\[J\ast_{\mathrm{kin}}\mathbb{1}_{Q_{r}(z_{0})}(t,x,v) =\int_{Q_{r}(z_{0})}J(t-\tilde{t},x-\tilde{x}-(t-\tilde{t})v,v- \tilde{v})\,\mathrm{d}\tilde{z}\] \[=r^{2s}\int_{Q_{1}(z_{0})}J\!\left(\frac{t}{r^{2s}}-\bar{t},\frac {x}{r^{1+2s}}-\bar{x}-\Big{(}\frac{t}{r^{2s}}-\bar{t}\Big{)}v,\frac{v}{r}-\bar{ v}\right)\mathrm{d}\bar{z}\] \[=r^{2s}J\ast_{\mathrm{kin}}\mathbb{1}_{Q_{1}(z_{0})}\Bigg{(}\frac {t}{r^{2s}},\frac{x}{r^{1+2s}},\frac{v}{r}\Bigg{)},\]
and conclude.
## 4. Campanato's inequality
### Local (non-fractional) Campanato's inequality
Let \(0<r<R\leq 1\) and \(z_{0}\in\mathbb{R}^{1+2d}\) such that \(Q_{R}(z_{0})\subset Q_{1}\). Assume \(f\) solves (3.3) for some constant diffusion coefficient \(A\) satisfying (1.8) and zero source term \(h=0\). As the coefficients \(A\) are constant, there is no distinction between the non-divergence and divergence form. Moreover, note that in this case we can assume \(f\in C^{\infty}\) as we can approximate \(f\) with a smooth solution by mollification respecting the Lie group structure. We want to combine (3.2) with the regularity estimates in Proposition 3.1 to infer Campanato's inequality. Together with Poincare's inequality this constitutes Campanato's approach to Schauder estimates.
We have by combining the estimate (3.1) and the Poincare inequality [9, Proposition 3.12] applied \(m\)-times, and then a fractional Poincare inequality in the final step, see for example [14, Equation 1.2] or [28, page 241]
\[\int_{Q_{r}(z_{0})}|f-p_{m-1}|^{2}\,\mathrm{d}z \leq Cr^{2m}\int_{Q_{r}(z_{0})}|D_{v}^{m}f|^{2}\,\mathrm{d}z+Cr^{ 6\lfloor\frac{m}{3}\rfloor}\int_{Q_{r}(z_{0})}\big{|}D_{x}^{\lfloor\frac{m}{3 }\rfloor}(f-p_{m-1})\big{|}^{2}\,\mathrm{d}z\] \[\qquad+Cr^{4\lfloor\frac{m}{2}\rfloor}\int_{Q_{r}(z_{0})}\big{|} D_{t}^{\lfloor\frac{m}{2}\rfloor}(f-p_{m-1})\big{|}^{2}\,\mathrm{d}z\] \[\qquad+Cr^{2\left(\lfloor\frac{z}{2}\rfloor+3\lfloor\frac{z}{2} \rfloor+k\right)}\int_{Q_{r}(z_{0})}\sum_{\begin{subarray}{c}i,j,k\geq 0\\ i+j+k=m\end{subarray}}\big{|}D_{t}^{\lfloor\frac{i}{2}\rfloor}D_{x}^{ \lfloor\frac{j}{3}\rfloor}D_{v}^{k}(f-p_{m-1})\big{|}^{2}\,\mathrm{d}z\] \[\leq Cr^{2m}\int_{Q_{2r}(z_{0})}|D^{m}f|^{2}\,\mathrm{d}z, \tag{4.1}\]
where \(D^{m}\) is a derivative in time, space or velocity of order \(m\). To control the right hand side, we use (3.2), Sobolev's embedding for some \(k\) sufficiently large depending on \(n\), and the regularity estimates of Proposition 3.1 to get
\[\int_{Q_{2r}(z_{0})}|D^{m}f|^{2}\,\mathrm{d}z\leq Cr^{n}\big{\|}D^{m}f\big{\|} _{L^{\infty}(Q_{2r}(z_{0}))}^{2}\leq Cr^{n}\big{\|}f\big{\|}_{H^{k}(Q_{R/2}(z_ {0}))}^{2}\leq C(n,R,k)r^{n}\|f\|_{L^{2}(Q_{R}(z_{0}))}^{2}. \tag{4.2}\]
Thus we deduce
\[\big{\|}f-p_{m-1}\big{\|}_{L^{2}(Q_{r}(z_{0}))}^{2}\leq Cr^{n+2m}\|f\|_{L^{2}( Q_{R}(z_{0}))}^{2},\]
where \(C=C(n,s,R)\). This inequality is Campanato's (second) inequality. Now, dividing by \(r^{n+2m}\) yields the Campanato norm on the left hand side:
\[[f]^{2}_{\mathcal{L}^{2,\lambda}_{m-1}(Q_{r}(z_{0}))}=r^{-\lambda}\big\|f-p_{m-1}\big\|_{L^{2}(Q_{r}(z_{0}))}^{2}\leq C\|f\|_{L^{2}(Q_{R}(z_{0}))}^{2},\]
where
\[\lambda=n+2m.\]
_Remark 4.1_.: As a consequence of (4.2), we deduce that the only smooth solutions of (3.3) with constant coefficients that grow at most polynomially at infinity are kinetic polynomials. Indeed, if we assume that a solution \(f\) of (3.3) in \(\mathbb{R}^{1+2d}\) satisfies
\[\sup_{Q_{R}}f(z)\leq MR^{m-1},\qquad\forall R\geq 1,\]
for some constant \(M>0\) and \(m\geq 1\), then as before we get with Poincare's inequality, Sobolev embedding, and the regularity estimates for any \(r>0\)
\[\int_{Q_{r}}\left|f-p_{m-1}\right|^{2}\mathrm{d}z\leq Cr^{2m}\int_{Q_{2r}} \left|D^{m}f\right|^{2}\mathrm{d}z\leq Cr^{n+2m}\big{\|}D^{m}f\big{\|}^{2}_{H^ {k}(Q_{2r})}\leq C\Big{(}\frac{r}{R}\Big{)}^{n+2m}\big{\|}f\big{\|}^{2}_{L^{2}( Q_{R})},\]
where \(p_{m-1}\) is some kinetic polynomial of degree \(m-1\). Due to the growth assumption on \(f\), we thus find
\[\int_{Q_{r}}\left|f-p_{m-1}\right|^{2}\mathrm{d}z\leq C(r,n)R^{-n-2m}R^{2m-2+ n},\]
which tends to \(0\) as \(R\to\infty\). Thus \(f=p_{m-1}\) in \(Q_{r}\). Since \(r>0\) was arbitrary, we deduce \(f\) is a polynomial of degree at most \(m-1\) in \(\mathbb{R}^{1+2d}\). In other words, a generalisation of Liouville's theorem holds. Note that a Liouville-type theorem has been used to derive Schauder estimates in the elliptic case by [30, Lemma 1] and in the hypoelliptic case by [19, Theorem 4.1].
### Non-local (fractional) Campanato's inequality
As before, we want to combine the observation in (3.2) with the energy estimates derived in the last subsection to infer Campanato's inequality. Let \(0<r<R\leq 1\) and \(z_{0}\in\mathbb{R}^{1+2d}\) such that \(Q_{R}(z_{0})\subset Q_{1}\). We consider the constant coefficient equation (3.4) with zero source term in \(Q_{R}(z_{0})\). We have by combining (3.1) and the fractional Poincare inequality, see [28, page 241], [14, equation (1.2)], or [26, Section 1],
\[\begin{split}&\int_{Q_{r}(z_{0})}\left|f-p_{2s}\right|^{2}\mathrm{d}z\\
&\leq Cr^{2s}\Bigg(\int_{Q_{r}(z_{0})}\big|D^{\frac{s}{2s}}_{t}(f-p_{2s})\big|^{2}\,\mathrm{d}z+\int_{Q_{r}(z_{0})}\big|D^{\frac{s}{1+2s}}_{x}(f-p_{2s})\big|^{2}\,\mathrm{d}z+\int_{Q_{r}(z_{0})}\big|D^{s}_{v}(f-p_{2s})\big|^{2}\,\mathrm{d}z\Bigg)\\
&\leq Cr^{4s}\Bigg(\int_{Q_{r}(z_{0})}\big|D^{\frac{2s}{2s}}_{t}(f-p_{2s})\big|^{2}\,\mathrm{d}z+\int_{Q_{r}(z_{0})}\big|D^{\frac{2s}{1+2s}}_{x}(f-p_{2s})\big|^{2}\,\mathrm{d}z+\int_{Q_{r}(z_{0})}\big|D^{2s}_{v}(f-p_{2s})\big|^{2}\,\mathrm{d}z\\
&\qquad+\int_{Q_{r}(z_{0})}\big|D^{s}_{v}D^{\frac{s}{1+2s}}_{x}(f-p_{2s})\big|^{2}\,\mathrm{d}z\Bigg)\\
&\leq Cr^{6s}\int_{Q_{r}(z_{0})}\left|D^{3s}f\right|^{2}\mathrm{d}z,\end{split} \tag{4.3}\]
where \(D^{3s}\) is a differential of order \(3s\) in time, space, or velocity. We use (3.2), Sobolev's embedding for some \(k\) sufficiently large depending on \(s\) and \(n\), and the energy estimates of Proposition 3.2 to get
\[\int_{Q_{r}(z_{0})}\left|D^{3s}f\right|^{2}\mathrm{d}z\leq r^{n}\big{\|}D^{3s }f\big{\|}^{2}_{L^{\infty}(Q_{r}(z_{0}))}\leq Cr^{n}\big{\|}f\big{\|}^{2}_{H^{ k}(Q_{R/2}(z_{0}))}\leq C(n,s,R,k)r^{n}\|f\|^{2}_{C^{s}_{\ell}(Q^{v}_{R}(z_{0}) \times\mathbb{R}^{d})}.\]
This can be seen as a non-local analogue of Campanto's inequality. Thus we deduce
\[\big{\|}f-p_{2s}\big{\|}^{2}_{L^{2}(Q_{r}(z_{0}))}\leq Cr^{n+6s}\|f\|^{2}_{C^{ \gamma}_{\ell}(Q^{v}_{R}(z_{0})\times\mathbb{R}^{d})},\]
with \(C=C(n,s,R)\). Therefore, dividing by \(r^{n+6s}\) yields the Campanato norm on the left hand side:
\[[f]^{2}_{\mathcal{L}^{2,\lambda}_{2s}(Q_{r}(z_{0}))}=r^{-\lambda}\big\|f-p_{2s}\big\|^{2}_{L^{2}(Q_{r}(z_{0}))}\leq C\|f\|^{2}_{C^{\gamma}_{\ell}(Q^{v}_{R}(z_{0})\times\mathbb{R}^{d})},\]
where
\[\lambda=n+6s.\]
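For orientation, we note (a consistency check added here) that, through the characterisation of Campanato norms by Holder norms in Theorem 2.7 as it is invoked in Sections 5 and 6 below, an exponent \(\lambda\) corresponds to kinetic Holder regularity of order \(\frac{\lambda-n}{2}\). In the two cases treated above this gives

\[\frac{(n+2m)-n}{2}=m\qquad\text{and}\qquad\frac{(n+6s)-n}{2}=3s,\]

which dominate the orders \(m-1+\alpha\) (with \(\alpha<1\)) and \(2s+\alpha\) (with \(\alpha<s\)) that the Schauder estimates of Sections 5 and 6 eventually produce once the errors from freezing the coefficients are taken into account.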
## 5. Campanato's approach: the local (non-fractional) case
We freeze coefficients (also known as Korn's trick) to derive Schauder estimates for the general case. Let \(f\) classically solve (1.1) or (1.2). Suppose the diffusion coefficient \(A=A(t,x,v)\) satisfies (1.8), and assume \(A,B,c,h\in C_{\ell}^{m-3+\alpha}(Q_{1})\). For the divergence form equation (1.1) we additionally require \(\nabla_{v}A\in C_{\ell}^{m-3+\alpha}(Q_{1})\).
Similarly to [16] we consider \(0<\rho\leq\frac{1}{2}\) to be determined and we pick \(z_{0},z_{1}\in Q_{1}\) and \(0<r\leq 1\) such that \(z_{1}\in Q_{r}(z_{0})\) and
\[[f]_{C_{\ell}^{m-1+\alpha}(Q_{\frac{1}{2}})}\leq 2\frac{|f(z_{1})-p_{m-1}^{z_{0}}[f](z_{1})|}{r^{m-1+\alpha}}.\]
We recall that the Taylor expansion of \(f\) at \(z_{0}\) of kinetic degree \(m-1\) is given by
\[p_{m-1}^{z_{0}}[f](z)=\sum_{j}\frac{a_{j}(z_{0})}{j!}\big{(}t-t_{0}\big{)}^{j_ {0}}\big{(}x_{1}-(x_{0})_{1}\big{)}^{j_{1}}\cdots\big{(}x_{d}-(x_{0})_{d} \big{)}^{j_{d}}\big{(}v_{1}-(v_{0})_{1}\big{)}^{j_{d+1}}\cdots\big{(}v_{d}-(v_ {0})_{d}\big{)}^{j_{2d}}\]
where we require \(j_{0}\leq\lfloor\frac{m-1}{2}\rfloor\), \(j_{1}+\cdots+j_{d}\leq\lfloor\frac{m-1}{3}\rfloor\) and \(j_{d+1}+\cdots+j_{2d}\leq m-1\). The coefficients can be computed and are given by
\[a_{j}(z_{0})=(\partial_{t}+v\cdot\nabla_{x})^{j_{0}}\partial_{x_{1}}^{j_{1}} \cdots\partial_{x_{d}}^{j_{d}}\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{v_{d}} ^{j_{2d}}f(z_{0}).\]
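For instance, for \(m-1=2\), keeping the monomials of total kinetic degree at most two (the convention also used for the general operator in (A.9) below), the expansion reduces to

\[p_{2}^{z_{0}}[f](z)=f(z_{0})+\nabla_{v}f(z_{0})\cdot(v-v_{0})+\tfrac{1}{2}(v-v_{0})^{\top}D_{v}^{2}f(z_{0})\,(v-v_{0})+(t-t_{0})\,(\partial_{t}+v\cdot\nabla_{x})f(z_{0}),\]

that is, a full second order Taylor expansion in the velocity variable together with a first order correction along the kinetic transport direction; increments in \(x\) only enter at kinetic degree three.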
If \(r\geq\rho\), we have, using Lemma 2.9,
\[\begin{split}[f]_{C_{\ell}^{m-1+\alpha}(Q_{1/4})}&\leq 2\rho^{-(m-1+\alpha)}\Bigg\{2\|f\|_{L^{\infty}(Q_{r}(z_{0}))}+\sum_{j}\Big[\rho^{2j_{0}}\|(\partial_{t}+v\cdot\nabla_{x})^{j_{0}}f\|_{L^{\infty}}\\
&\qquad\qquad+\rho^{3(j_{1}+\cdots+j_{d})}\|\partial_{x_{1}}^{j_{1}}\cdots\partial_{x_{d}}^{j_{d}}f\|_{L^{\infty}}+\rho^{(j_{d+1}+\cdots+j_{2d})}\|\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{v_{d}}^{j_{2d}}f\|_{L^{\infty}}\Big]\Bigg\}\\
&\leq\frac{1}{4}[f]_{C_{\ell}^{m-1+\alpha}(Q_{1})}+C(\rho)\|f\|_{L^{\infty}(Q_{1})}.\end{split}\]
### Non-divergence form
Now we consider \(r\leq\rho\) and a solution \(f\) of (1.2). Let \(\eta\in C_{c}^{\infty}(\mathbb{R}^{1+2d})\) be a cut-off with \(0\leq\eta\leq 1\), such that \(\eta=1\) in \(Q_{\rho}(z_{0})\) and \(\eta=0\) outside \(Q_{2\rho}(z_{0})\). Let \(\tilde{f}=f\eta\). Without loss of generality we set \(z_{0}=(0,0,0)\). We denote by \(p_{2}^{(z_{0})}[f]\) the Taylor polynomial of \(f\) at \(z_{0}\) of kinetic degree less than or equal to \(2\). To approximate the general case by the constant coefficient case we split
\[\tilde{f}-p_{2}^{(0)}[\tilde{f}]=g_{1}+g_{2},\]
where \(g_{1}\) solves
\[\partial_{t}g_{1}+v\cdot\nabla_{x}g_{1}-\sum_{i,j}a_{(0)}^{i,j}\partial_{v_{i }v_{j}}^{2}g_{1}=0,\]
for \(a_{(0)}^{i,j}=a^{i,j}(z_{1})\). Then \(g_{2}\) solves
\[\partial_{t}g_{2}+v\cdot\nabla_{x}g_{2}-\sum_{i,j}a_{(0)}^{i,j}\partial_{v_{i }v_{j}}^{2}g_{2}=\tilde{h}-\tilde{h}(0,0,0),\]
where
\[\tilde{h}:=\Bigg{[}\sum_{i,j}\Big{(}a^{i,j}-a_{(0)}^{i,j}\Big{)}\partial_{v_{i }v_{j}}^{2}f\Bigg{]}\eta+\sum_{i}\big{(}b^{i}\eta-2a_{(0)}^{i,j}\partial_{v_{i }}\eta\big{)}\partial_{v_{i}}f+\sum_{i,j}\big{(}c\eta+\mathcal{T}\eta-a_{(0)}^ {i,j}\partial_{v_{i}v_{j}}^{2}\eta\big{)}f+h\eta. \tag{5.1}\]
For \(g_{1}\) we have by Subsection 4.1
\[\int_{Q_{r}}\left|g_{1}-p_{m-1}^{(0)}[g_{1}]\right|^{2}\mathrm{d}z\leq C\Big{(} \frac{r}{R}\Big{)}^{n+2m}\int_{Q_{R}}\left|g_{1}\right|^{2}\mathrm{d}z\leq C \Big{(}\frac{r}{R}\Big{)}^{n+2m}\int_{Q_{R}}\left|f\right|^{2}\mathrm{d}z.\]
For \(g_{2}\) we first perform a change of variables \(g_{2}^{(0)}(t,x,v):=g_{2}\big{(}t,(A^{(0)})^{-\frac{1}{2}}x,(A^{(0)})^{-\frac{ 1}{2}}v\big{)}\) where \(A^{(0)}\) is the constant diffusion coefficient \(A^{(0)}=\big{(}a_{(0)}^{i,j}\big{)}_{i,j}\). Then \(g_{2}^{(0)}\) solves
\[\begin{split}\Bigg{(}\partial_{t}+v\cdot\nabla_{x}-\sum_{i,j} \partial_{v_{i}v_{j}}^{2}\Bigg{)}g_{2}^{(0)}(t,x,v)&=\Bigg{(} \partial_{t}+v\cdot\nabla_{x}-\sum_{i,j}a_{(0)}^{i,j}\partial_{v_{i}v_{j}}^{2 }\Bigg{)}g_{2}\Big{(}t,(A^{(0)})^{-\frac{1}{2}}x,(A^{(0)})^{-\frac{1}{2}}v \Big{)}\\ &=\Big{(}\tilde{h}-\tilde{h}(0,0,0)\Big{)}\Big{(}t,(A^{(0)})^{- \frac{1}{2}}x,(A^{(0)})^{-\frac{1}{2}}v\Big{)}\\ &=:\Big{(}\tilde{h}^{(0)}-\tilde{h}^{(0)}(0,0,0)\Big{)}(t,x,v). \end{split} \tag{5.2}\]
Thus, using the scaling of the fundamental solution as stated in Lemma 3.5,
\[\int_{Q_{r}}|g_{2}^{(0)}|^{2}\,\mathrm{d}z\leq Cr^{n}\|g_{2}^{(0)}\|_{L^{ \infty}(Q_{r})}^{2}\leq Cr^{n+2m+2\alpha-2}[\tilde{h}^{(0)}]_{C_{\ell}^{m-3+ \alpha}(Q_{r})}^{2}.\]
Since \(\|g_{2}^{(0)}\|_{L^{2}}\sim\|g_{2}\|_{L^{2}}\) and \([\tilde{h}^{(0)}]_{C_{\ell}^{m-3+\alpha}}^{2}\sim[\tilde{h}]_{C_{\ell}^{m-3+ \alpha}}^{2}\) up to a constant depending on \(A^{(0)}\), we thus find
\[\begin{split}\inf_{p\in\mathcal{P}_{m-1}}\int_{Q_{r}}\left|f-p \right|^{2}\mathrm{d}z&\leq\int_{Q_{r}}\left|f-p_{2}^{(0)}[f]-p_{ m-1}^{(0)}[g_{1}]\right|^{2}\mathrm{d}z\\ &\leq C\Big{(}\frac{r}{R}\Big{)}^{n+2m}\int_{Q_{R}}\left|f\right| ^{2}\mathrm{d}z+r^{n+2m+2\alpha-2}[\tilde{h}]_{C_{\ell}^{m-3+\alpha}(Q_{r})}^ {2}\\ &\leq C\Big{(}\frac{r}{R}\Big{)}^{n+2m+2\alpha-2}\Bigg{(}\int_{Q_ {R}}\left|f\right|^{2}\mathrm{d}z+[\tilde{h}]_{C_{\ell}^{m-3+\alpha}(Q_{r})} ^{2}\Bigg{)}.\end{split}\]
Equivalently,
\[[f]_{\mathcal{L}^{2,n+2m+2\alpha-2}_{m-1}(Q_{r})}\leq C\|f\|_{L^{2}(Q_{R})}+C[\tilde{h}]_{C_{\ell}^{m-3+\alpha}(Q_{r})}.\]
Thus by the characterisation of Campanato-norms with Holder-norms in Theorem 2.7 we have
\[[f]_{C_{\ell}^{m-1+\alpha}(Q_{r})}\leq C\|f\|_{L^{2}(Q_{R})}+C[\tilde{h}]_{C_{\ell}^{m-3+\alpha}(Q_{r})}.\]
Since \(A,B,c,h\in C_{\ell}^{m-3+\alpha}(Q_{1})\) we therefore obtain
\[\begin{split}[f]_{C_{\ell}^{m-1+\alpha}(Q_{1/4})}& \leq[f]_{C_{\ell}^{m-1+\alpha}(Q_{r})}\\ &\leq C\|f\|_{L^{2}(Q_{R})}+C[\tilde{h}]_{C_{\ell}^{m-3+\alpha}(Q_ {r})}\\ &\leq C\|f\|_{L^{2}(Q_{\rho}(z_{0}))}+\Big{[}\sum_{i,j}\big{(}a_{ (0)}^{i,j}-a^{i,j}\big{)}\partial_{v_{i}v_{j}}^{2}\tilde{f}\Big{]}_{C_{\ell}^ {m-3+\alpha}(Q_{\rho}(z_{0}))}\\ &\quad+C[b^{i}\partial_{v_{i}}f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}( z_{0}))}+C(\rho)[\partial_{v_{i}}f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}\\ &\quad+C[cf]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}+C(\rho)[f] _{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}+[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_ {0}))}\\ &\leq C\|f\|_{L^{2}(Q_{\rho}(z_{0}))}+C\rho^{m-3+\alpha}[D_{v}^{2} f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}\\ &\quad+C\rho^{m-3+\alpha}[D_{v}f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}( z_{0}))}+C\rho^{m-3+\alpha}[f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}\\ &\quad+C(\rho)[f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}+C(\rho)[ D_{v}f]_{C_{\ell}^{m-3+\alpha}(Q_{\rho}(z_{0}))}+[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho})} \\ &\leq C\|f\|_{L^{2}(Q_{1})}+\frac{1}{4}[f]_{C_{\ell}^{m-1+\alpha}(Q _{2\rho}(z_{0}))}+C(\rho)\|f\|_{L^{\infty}(Q_{2\rho}(z_{0}))}\\ &\quad+C_{1}\rho^{m-1+\alpha}[f]_{C_{\ell}^{m-1+\alpha}(Q_{2\rho}( z_{0}))}+[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho})},\end{split} \tag{5.3}\]
where we have used Lemma 2.9 and Proposition 2.12. Choosing \(\rho\) such that \(C_{1}\rho^{m-1+\alpha}+\frac{1}{4}\leq\frac{1}{2}\), we find for some \(\beta>0\)
\[[f]_{C_{\ell}^{m-1+\alpha}(Q_{\rho/4})}\leq C\rho^{-\beta}\|f\|_{L^{\infty}(Q_{ \rho})}+C[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho})}+\frac{1}{2}[f]_{C_{\ell}^{m-1+ \alpha}(Q_{2\rho}(z_{0}))}. \tag{5.4}\]
A standard covering argument implies
\[[f]_{C_{\ell}^{m-1+\alpha}(Q_{\rho/4})}\leq C\|f\|_{L^{\infty}(Q_{\rho})}+C[h]_ {C_{\ell}^{m-3+\alpha}(Q_{\rho})}, \tag{5.5}\]
where \(C\) depends on \(n,m,\alpha,\lambda_{0}\), and the Holder norms of all coefficients. Indeed, if we define \(\Psi(r):=[f]_{C_{\ell}^{m-1+\alpha}(Q_{r}(z_{0}))}\), then (5.4) yields
\[\Psi\Big{(}\frac{\rho}{4}\Big{)}\leq C_{0}\Big{(}\frac{7\rho}{4}\Big{)}^{- \beta}\big{(}\|f\|_{L^{\infty}(Q_{\rho})}+[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho} )}\big{)}+\varepsilon\Psi(2\rho)\]
for \(0\leq\varepsilon<1\), \(C_{0}>0\) and \(\beta>0\). For some \(0<\tau<1\) we then introduce
\[\begin{cases}r_{0}:=\frac{\rho}{4},\\ r_{i+1}:=r_{i}+(1-\tau)\tau^{i}\frac{7\rho}{4},\quad i\geq 0.\end{cases}\]
Since
\[\sum_{i=1}^{\infty}\tau^{i}=\frac{\tau}{1-\tau},\]
we have that \(r_{i}<2\rho\) and inductively we prove that
\[\Psi(r_{0})\leq\varepsilon^{k}\Psi(r_{k})+C_{0}\big{(}\|f\|_{L^{\infty}(Q_{ \rho})}+[h]_{C_{\ell}^{m-3+\alpha}(Q_{\rho})}\big{)}(1-\tau)^{-\beta}\Big{(} \frac{7\rho}{4}\Big{)}^{-\beta}\sum_{i=0}^{k-1}\varepsilon^{i}\tau^{-i\beta}.\]
We choose \(\tau\) such that \(\varepsilon\tau^{-\beta}<1\) so that letting \(k\to\infty\) we deduce (5.5).
### Divergence form
The case of divergence form equations is similar and follows by modifying the \(\tilde{h}\) in (5.1) as follows
\[\tilde{h}:=\Bigg{[}\sum_{i,j}\partial_{v_{i}}\Big{(}a^{i,j}-a^{i,j}_{(0)} \Big{)}\partial_{v_{j}}f\Bigg{]}\eta+\sum_{i}\big{(}b^{i}\eta-2a^{i,j}_{(0)} \partial_{i}\eta\big{)}\partial_{v_{i}}f+\sum_{i,j}\big{(}c\eta+\mathcal{T} \eta-a^{i,j}_{(0)}\partial^{2}_{v_{i}v_{j}}\eta\big{)}f+h\eta.\]
Note that for (5.3) we will require \(\nabla_{v}A\in C^{m-3+\alpha}(Q_{1})\).
## 6. Campanato's approach: the non-local (fractional) case
We consider a solution \(f\) of (1.3) in \(C_{\ell}^{\gamma}([-1,0]\times B_{1}\times\mathbb{R}^{d})\) and assume that the non-negative kernel satisfies the ellipticity assumptions (1.9), (1.10) and the Holder condition (1.14). Moreover, we further assume that it either satisfies the non-divergence form symmetry (1.11), or that it verifies the divergence form symmetry (1.12), (1.13), and the additional Holder condition (1.15).
Let \(\eta\in C_{c}^{\infty}((-1,0]\times B_{1}\times\mathbb{R}^{d})\) so that \(\eta=1\) in \(Q_{\frac{3}{4}}\) and \(\eta=0\) outside \(Q_{1}\). Let \(\tilde{f}=f\eta\). We freeze coefficients and write \(K_{0}(w)=K(0,0,0,w)\) for the constant coefficient kernel; its corresponding operator \(\mathcal{L}_{0}\) satisfies (3.5). We compute for any \(z\in Q_{\frac{1}{2}}\)
\[\mathcal{T}\tilde{f}-\mathcal{L}_{0}\tilde{f}=h\eta+A\cdot\eta+B\]
with
\[A(z):=\int_{\mathbb{R}^{d}}\big{(}f(w)-f(v)\big{)}\big{[}K(t,x,v,w)-K_{0}(w) \big{]}\,\mathrm{d}w\]
and
\[B(z):=\int_{\mathbb{R}^{d}}\big{(}\eta(v)-\eta(w)\big{)}f(w)K_{0}(w)\,\mathrm{ d}w.\]
We write
\[\tilde{f}-p[\tilde{f}]=g_{1}+g_{2},\]
where \(g_{1}\) solves
\[\mathcal{T}g_{1}-\mathcal{L}_{0}g_{1}=0,\]
and with
\[p[f]:=f(z_{0})+(t-t_{0})\big{(}\mathcal{T}f(z_{0})-\mathcal{L}_{0}f(z_{0}) \big{)}\]
for some \(z_{0}\in\mathbb{R}^{1+2d}\). With no loss of generality set \(z_{0}=(0,0,0)\). In particular, \(g_{2}\) solves
\[\mathcal{T}g_{2}-\mathcal{L}_{0}g_{2} =\mathcal{T}(\tilde{f}-p[\tilde{f}]-g_{1})-\mathcal{L}_{0}(\tilde{ f}-p[\tilde{f}]-g_{1})\] \[=h\cdot\eta+A\cdot\eta+B-\mathcal{T}\tilde{f}(z_{0})+\mathcal{L} _{0}\tilde{f}(z_{0})\] \[=h\cdot\eta+A\cdot\eta+B-\big{(}h\cdot\eta+A\cdot\eta+B\big{)}(z _{0})\] \[=\tilde{h}-\tilde{h}(z_{0}),\]
where \(\tilde{h}:=h\eta+A\eta+B\). For \(g_{1}\) we find with Subsection 4.2
\[\int_{Q_{r}}\big{|}g_{1}-p_{2s}^{(0)}[g_{1}]\big{|}^{2}\,\mathrm{d}z\leq Cr^{ n+6s}\|g_{1}\|_{C_{\ell}^{\gamma}(Q_{1}^{\mathrm{u}}\times\mathbb{R}^{d})}^{2} \leq Cr^{n+6s}\|f\|_{C_{\ell}^{\gamma}(Q_{1}^{\mathrm{u}}\times\mathbb{R}^{d}) }^{2},\]
where \(r>0\) is such that \(Q_{r}\subset Q_{1/2}\). For \(g_{2}\) we first perform a change of variables \(g_{2}^{(0)}(t,x,v):=g_{2}\big{(}t,\kappa_{0}^{-\frac{1}{2s}}x,\kappa_{0}^{- \frac{1}{2s}}v\big{)}\) where \(\kappa_{0}\) is such that \(K_{0}(w)=\frac{\kappa_{0}}{|w|^{d+2s}}\). Then \(g_{2}^{(0)}\) solves
\[\Big{(}\partial_{t}+v\cdot\nabla_{x}+(-\Delta_{v})^{s}\Big{)}g_{2 }^{(0)}(t,x,v) =\Big{(}\partial_{t}+v\cdot\nabla_{x}+\mathcal{L}_{0}\Big{)}g_{2 }\Big{(}t,\kappa_{0}^{-\frac{1}{2s}}x,\kappa_{0}^{-\frac{1}{2s}}v\Big{)}\] \[=\Big{(}\tilde{h}-\tilde{h}(0,0,0)\Big{)}\Big{(}t,\kappa_{0}^{- \frac{1}{2}}x,\kappa_{0}^{-\frac{1}{2}}v\Big{)}\] \[=:\Big{(}\tilde{h}^{(0)}-\tilde{h}^{(0)}(0,0,0)\Big{)}(t,x,v).\]
Thus by Lemma 3.5
\[\int_{Q_{r}}|g_{2}^{(0)}|^{2}\,\mathrm{d}z\leq Cr^{n}\big{\|}g_{2}^{(0)}\big{\|} _{L^{\infty}(Q_{r})}^{2}\leq Cr^{n+4s+2\alpha}\big{[}\tilde{h}^{(0)}\big{]}_{ C_{\ell}^{\alpha}(Q_{r})}^{2}.\]
Since \(\big{\|}g_{2}^{(0)}\big{\|}_{L^{2}}\sim\|g_{2}\|_{L^{2}}\) and \(\big{[}\tilde{h}^{(0)}\big{]}_{C_{\ell}^{\alpha}}^{2}\sim[\tilde{h}]_{C_{\ell} ^{\alpha}}^{2}\) up to a constant depending on \(\kappa_{0}\), we thus find
\[\inf_{p\in\mathcal{P}_{m-1}}\int_{Q_{r}}|\tilde{f}-p|^{2}\,\mathrm{ d}z \leq\int_{Q_{r}}\big{|}\tilde{f}-p[\tilde{f}]-p_{2s}^{(0)}[g_{1}] \big{|}^{2}\,\mathrm{d}z\] \[\leq Cr^{n+6s}\|f\|_{C_{\ell}^{\gamma}(Q_{1}^{\mathrm{u}}\times \mathbb{R}^{d})}^{2}+Cr^{n+4s+2\alpha}[\tilde{h}]_{C_{\ell}^{\alpha}(Q_{r})}^{2}\] \[\leq Cr^{n+4s+2\alpha}\Bigg{(}\|f\|_{C_{\ell}^{\gamma}(Q_{1}^{ \mathrm{u}}\times\mathbb{R}^{d})}^{2}+[\tilde{h}]_{C_{\ell}^{\alpha}(Q_{r})}^{2} \Bigg{)}.\]
In the last inequality we used \(\alpha<s\): indeed, \(\alpha=\frac{2s}{1+2s}\gamma<\frac{2s}{1+2s}\min(1,2s)\leq s\), since \(2s\leq s(1+2s)\) for \(s\geq\frac{1}{2}\) and \(4s^{2}\leq s(1+2s)\) for \(s\leq\frac{1}{2}\). Equivalently,
\[[f]_{\mathcal{L}_{2s}^{2,n+4s+2\alpha}(Q_{r})}\leq C\|f\|_{C_{\ell}^{\gamma}(Q_{1}^{\mathrm{u}}\times\mathbb{R}^{d})}+C[\tilde{h}]_{C_{\ell}^{\alpha}(Q_{r})}.\]
Thus by the characterisation of Campanato norms with Holder norms in Theorem 2.7 we have
\[[f]_{C_{\ell}^{2s+\alpha}(Q_{r})}\leq C\|f\|_{C_{\ell}^{\gamma}(Q_{1}^{\mathrm{u}}\times\mathbb{R}^{d})}+C[\tilde{h}]_{C_{\ell}^{\alpha}(Q_{r})}.\]
It remains to bound the \(C_{\ell}^{\alpha}\)-norm of \(\tilde{h}=h\eta+A\eta+B\). We claim
\[[A]_{C_{\ell}^{\alpha}(Q_{\frac{1}{2}})}\lesssim A_{0}\big(\|f\|_{C_{\ell}^{2s+\alpha}(Q_{1})}+\|f\|_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R}^{d})}\big). \tag{6.1}\]
Indeed we write \(A(z_{1})-A(z_{2})=I_{1}+I_{2}\) with
\[I_{1} =\int\big{(}f(z_{2}\circ(0,0,w))-f(z_{2})\big{)}\big{[}K_{z_{1}}(w) -K_{z_{2}}(w)\big{]}\,\mathrm{d}w,\] \[I_{2} =\int\big{(}f(z_{1}\circ(0,0,w))-f(z_{1})-f(z_{2}\circ(0,0,w))+f(z _{2})\big{)}\big{[}K_{z_{1}}(w)-K_{0}(w)\big{]}\,\mathrm{d}w.\]
For \(I_{1}\) we distinguish the far and the close part and write \(I_{11}\) and \(I_{12}\) respectively. Then for the far part there holds with (1.16)
\[|I_{11}|\leq\|f\|_{L^{\infty}((-1,0]\times B_{1}\times\mathbb{R}^{d})}\int_{|w |\geq 1}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w\lesssim A_{0}\|f \|_{L^{\infty}((-1,0]\times B_{1}\times\mathbb{R}^{d})}d_{l}(z_{1},z_{2})^{ \alpha}.\]
For the close part we have in case of the _non-divergence form symmetry_ (1.11) and Lemma 2.9
\[|I_{12}| \leq\int_{|w|\leq 1}\big{|}f(z_{2}\circ(0,0,w))-p_{2s}^{z_{2}\circ(0,0,w)}[f]\big{|}|K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w\] \[\lesssim\big{|}f\big{|}_{C_{\ell}^{2s+\alpha}}\int_{|w|\leq 1}|w|^{2s +\alpha}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w+\big{|}D_{v}^{2}f \big{|}\int_{|w|\leq 1}|w|^{2}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\, \mathrm{d}w\] \[\lesssim A_{0}\|f\|_{C_{\ell}^{2s+\alpha}}d_{\ell}(z_{1},z_{2})^{ \alpha}.\]
If instead we assume the _divergence form symmetry_ (1.12) and (1.13) we get
\[|I_{12}| \leq\mathrm{PV}\int_{|w|\leq 1}\big{|}f(z_{2}\circ(0,0,w))-p_{2s}^{ z_{2}\circ(0,0,w)}[f]\big{|}\big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\, \mathrm{d}w\] \[\lesssim[f]_{C_{\ell}^{2s+\alpha}}\int_{|w|\leq 1}|w|^{2s+\alpha} \big{|}K_{z_{1}}(w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w\] \[\quad+\big{|}D_{v}f\big{|}\Big{|}\,\mathrm{PV}\int_{|w|\leq 1}w \big{(}K_{z_{1}}(w)-K_{z_{2}}(w)\big{)}\,\mathrm{d}w\Big{|}\] \[\quad+\big{|}D_{v}^{2}f\big{|}\int_{|w|\leq 1}|w|^{2}\big{|}K_{z_{1}}( w)-K_{z_{2}}(w)\big{|}\,\mathrm{d}w\] \[\lesssim A_{0}\|f\|_{C_{\ell}^{2s+\alpha}}d_{\ell}(z_{1},z_{2})^{ \alpha},\]
by assumption (1.14) and (1.15).
To estimate \(I_{2}\) we can use Lemma 2.11. This proves the claim.
We further claim
\[[B]_{C_{\ell}^{\alpha}(Q_{\frac{1}{2}})}\lesssim\|f\|_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R}^{d})}. \tag{6.2}\]
For \(z_{1},z_{2}\in Q_{r}\) we compute \(B(z_{2})-B(z_{1})=J_{1}+J_{2}\) with
\[J_{1} =\int_{|w|>r/4}\big{[}\eta(z_{1}\circ(0,0,w))-\eta(z_{1})-\eta(z_ {2}\circ(0,0,w))+\eta(z_{2})\big{]}f\big{(}z_{1}\circ(0,0,w)\big{)}K_{0}(w)\, \mathrm{d}w,\] \[J_{2} =\int_{|w|>r/4}\big{[}\eta(z_{2}\circ(0,0,w))-\eta(z_{2})\big{]} \big{[}f(z_{1}\circ(0,0,w))-f(z_{2}\circ(0,0,w))\big{]}K_{0}(w)\,\mathrm{d}w\]
Since \(\eta\) is smooth we can apply Lemma 2.11 and get
\[|J_{1}|\leq\|f\|_{L^{\infty}((-1,0]\times B_{1}\times\mathbb{R}^{d})}d_{l}(z_{1 },z_{2})^{\alpha}.\]
For \(J_{2}\) we have
\[|J_{2}|\leq 2\|\eta\|_{L^{\infty}}[f]_{C_{\ell}^{\gamma}}\int_{|w|>r/4}d_{l}(z_ {1}\circ(0,0,w),z_{2}\circ(0,0,w))^{\gamma}K_{0}(w)\,\mathrm{d}w.\]
Since \(\alpha=\frac{2s\gamma}{1+2s}\) we have
\[|J_{2}| \lesssim[f]_{C_{\ell}^{\gamma}}\int_{|w|>r/4}d_{l}(z_{1}\circ(0,0, w),z_{2}\circ(0,0,w))^{\gamma}K_{0}(w)\,\mathrm{d}w\] \[\lesssim[f]_{C_{\ell}^{\gamma}}\int_{|w|>r/4}\big{(}d_{l}(z_{1}, z_{2})+|t_{1}-t_{2}|^{\frac{1}{1+2s}}|w|^{\frac{1}{1+2s}}\big{)}^{\gamma}K_{0}(w)\, \mathrm{d}w\] \[\lesssim[f]_{C_{\ell}^{\gamma}}\int_{|w|>r/4}\big{(}1+|w|^{\frac{ \gamma}{1+2s}}\big{)}d_{l}(z_{1},z_{2})^{\frac{2s\gamma}{1+2s}}K_{0}(w)\, \mathrm{d}w\] \[\lesssim_{\Lambda}[f]_{C_{\ell}^{\gamma}}d_{l}(z_{1},z_{2})^{ \frac{2s\gamma}{1+2s}}\leq[f]_{C_{\ell}^{\gamma}}d_{l}(z_{1},z_{2})^{\alpha},\]
where we used the upper bound on \(K_{0}\) (1.9). This proves the second claim (6.2).
Thus we deduce by combining (6.1) and (6.2) that
\[\|f\|_{C_{\ell}^{2s+\alpha}(Q_{\frac{1}{4}})}\lesssim(1+A_{0})\|f\|_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R}^{d})}+A_{0}\|f\|_{C_{\ell}^{2s+\alpha}(Q_{1})}+\|h\|_{C_{\ell}^{\alpha}(Q_{1})}. \tag{6.3}\]
Without loss of generality we can assume that \(A_{0}<1\); otherwise we scale the equation initially. We then translate and scale the estimate (6.3) and get for any \(r>0\) and \(z_{0}\in Q_{1}\) such that \(Q_{r}(z_{0})\subset Q_{1}\)
\[r^{2s+\alpha}[f]_{C_{\ell}^{2s+\alpha}(Q_{\frac{r}{4}}(z_{0}))} \lesssim r^{\alpha}A_{0}r^{2s+\alpha}[f]_{C_{\ell}^{2s+\alpha}(Q_{r }(z_{0}))}+r^{\gamma}[f]_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R }^{d})}+\|f\|_{L^{\infty}((-1,0]\times B_{1}\times\mathbb{R}^{d})}\] \[+r^{2s+\alpha}[h]_{C_{\ell}^{\alpha}(Q_{r}(z_{0}))}+r^{2s}\|h\|_{L ^{\infty}(Q_{r}(z_{0}))}\] \[\leq A_{0}r^{2s+\alpha}[f]_{C_{\ell}^{2s+\alpha}(Q_{r}(z_{0}))}+\| f\|_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R}^{d})}\] \[+r^{2s+\alpha}[h]_{C_{\ell}^{\alpha}(Q_{r}(z_{0}))}+r^{2s}\|h\|_{L ^{\infty}(Q_{r}(z_{0}))}.\]
The same covering argument as outlined for (5.5) in Subsection 5.1 then implies
\[\|f\|_{C_{\ell}^{2s+\alpha}(Q_{\frac{1}{4}})}\leq C\Big{(}\|h\|_{C_{\ell}^{ \alpha}(Q_{1})}+\|f\|_{C_{\ell}^{\gamma}((-1,0]\times B_{1}\times\mathbb{R}^ {d})}\Big{)},\]
where \(C\) depends on \(s,d,\lambda_{0},\Lambda_{0},A_{0}\).
## Appendix A Hypoelliptic Operators
### Toolbox
In this section, we briefly outline that our approach is robust enough to deal with general second order Kolmogorov equations of the form
(A.1) \[\begin{split}\mathscr{L}f(t,x):=\sum_{N-d\leq i,j\leq N}a_{i,j}(t,x)\partial_{x_{i}x_{j}}f(t,x)+&\sum_{1\leq i,j\leq N}\tilde{b}_{ i,j}x_{j}\partial_{x_{i}}f(t,x)-\partial_{t}f(t,x)\\ +&\sum_{N-d\leq i\leq N}b_{i}(t,x)\partial_{i}f(t,x)+c (t,x)f(t,x)=h,\end{split}\]
where \(z=(t,x)=(t,x_{0},x_{1},\ldots,x_{\kappa})\in\mathbb{R}^{1+N}\), \(\kappa\geq 1\) is the number of commutators, and \(1\leq d\leq N\). The velocity variable corresponds to the last entry \(x_{\kappa}\in\mathbb{R}^{d}\). The diffusion matrix \(A(z)=\big{(}a_{i,j}(z)\big{)}_{N-d\leq i,j\leq N}\) is symmetric with real measurable entries, and uniformly elliptic (1.8). The matrix \(\tilde{B}=\big{(}\tilde{b}_{i,j}\big{)}_{1\leq i,j\leq N}\) has constant entries and satisfies suitable assumptions such that the _principal part operator_\(\mathscr{K}\) of \(\mathscr{L}\) with respect to the kinetic degree, given by
(A.2) \[\mathscr{K}f(t,x)=\sum_{N-d\leq i,j\leq N}\partial_{x_{i}x_{j}}f(t,x)+\sum_{1 \leq i,j\leq N}\tilde{b}_{i,j}x_{j}\partial_{x_{i}}f(t,x)-\partial_{t}f(t,x),\]
is hypoelliptic, i.e. any distributional solution of \(\mathscr{K}f=h\) is smooth whenever \(h\in C^{\infty}\). In particular, this assumption coincides with \(\tilde{B}\) having constant real entries and taking the form
(A.3) \[\tilde{B}=\begin{pmatrix}*&\tilde{B}_{1}&0&\dots&0\\ *&*&\tilde{B}_{2}&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ *&*&*&\dots&\tilde{B}_{\kappa}\\ *&*&*&\dots&*\end{pmatrix},\]
where each \(\tilde{B}_{i}\) is a \(d_{i-1}\times d_{i}\) matrix of rank \(d_{i}\) with \(d:=d_{\kappa}\geq d_{\kappa-1}\geq\dots\geq d_{0}\geq 1\) and \(\sum_{i=0}^{\kappa}d_{i}=N\). For further discussion on this operator, we refer the reader to [24, Section 1 and 2]. We remark that the principal part operator \(\mathscr{K}\) is still invariant under Galilean transformation (1.7). Moreover, \(\mathscr{K}\) is invariant under the scaling given by
(A.4) \[(t,x_{0},\dots,x_{\kappa})\to(r^{2}t,r^{3}x_{0},\dots,r^{2\kappa+1}x_{\kappa-1 },rx_{\kappa})=:z_{r},\]
for \(r>0\), where \(\kappa\geq 1\) is the number of commutators, _if and only if_ all the \(*\)-blocks in \(\tilde{B}\) are zero [24, Proposition 2.2]. We denote the scaling invariant principal part by \(\mathscr{K}_{0}\), and emphasise that it is of the form (A.2) with the matrix \(\tilde{B}\) as in (A.3) where all the \(*\)-entries are zero. The cylinders will be defined respecting the scaling invariance, similar as above (1.5).
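As an illustration of this setup (an example added for concreteness): for \(\kappa=1\), \(d_{0}=d_{1}=d\), \(N=2d\), and \(\tilde{B}_{1}=\mathrm{Id}_{d}\) with all \(*\)-blocks equal to zero, the drift term in (A.2) becomes

\[\sum_{1\leq i,j\leq N}\tilde{b}_{i,j}x_{j}\partial_{x_{i}}f=x_{1}\cdot\nabla_{x_{0}}f,\]

i.e. the kinetic transport term (with \(x_{1}\) playing the role of the velocity), and the scaling (A.4) reduces to \((t,x_{0},x_{1})\mapsto(r^{2}t,r^{3}x_{0},rx_{1})\), the kinetic scaling used in the main part of the paper.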
We briefly sketch how to obtain Schauder estimates for a solution \(f\) of (A.1) in \(Q_{1}\). Note that the kinetic distance and the corresponding Holder norms have to be defined more generally taking into account the scaling (A.4).
First, the regularity estimates will be replaced by an argument of Hormander [15, Theorem 3.7] as follows. Any solution \(f\) of \(\mathscr{K}f=0\) satisfies for \(l\geq 1\)
(A.5) \[\|D^{l}f\|_{L^{\infty}(Q_{r}(z_{0}))}\leq C(l,N)\|f\|_{L^{2}(Q_{R}(z_{0}))},\]
where \(D^{l}\) is a differential of order \(l\). Indeed, let \(\delta\) be a multi-index such that \(|\delta|=l\geq 1\). Let \(G\subset L^{2}(Q_{R}(z_{0}))\) be defined as
\[G:=\big{\{}g\in L^{2}(Q_{R}(z_{0}))\cap C^{\infty}(Q_{R}(z_{0})):\mathscr{K}g =0\text{ in }Q_{R}(z_{0})\big{\}}.\]
Due to the hypoellipticity of \(\mathscr{K}\) the subspace \(G\) is closed in \(L^{2}(Q_{R}(z_{0}))\). Define \(\mathcal{B}:G\to C^{0}(Q_{r}(z_{0}))\) by \(\mathcal{B}g=D^{\delta}g|_{Q_{r}(z_{0})}\) for \(\delta\) such that \(|\delta|=l\). Then \(\mathcal{B}\) has closed graph in \(G\times C^{0}(Q_{r}(z_{0}))\), and thus, by virtue of the closed graph theorem we conclude (A.5). Then we derive Campanato's inequality (4.2) just as above in Subsection 4.1.
Second, the principal part operator \(\mathscr{K}_{0}\) admits an explicit fundamental solution given in [2, Equation (2.7)]. In particular, it satisfies for \(r>0\)
(A.6) \[\mathscr{K}_{0}f_{r}=r^{2}\big{(}\mathscr{K}_{0}f\big{)}_{r},\]
where \(f_{r}\) denotes the rescaled function \(f_{r}(z):=f(z_{r})\). Note that we do not require the scaling invariance to deduce the Schauder estimates. To see this, we denote the fundamental solution of \(\mathscr{K}\) by \(\Gamma\) and the fundamental solution of \(\mathscr{K}_{0}\) by \(\Gamma_{0}\), respectively. Then we can use the upper bound on \(\Gamma\) by \(\Gamma_{0}\), stated in [24, Theorem 3.1],
(A.7) \[\Gamma(z)\leq a\Gamma_{0}(z),\]
for some \(a>0\). Due to (A.6) we then have the good scaling for \(g_{2}\), where \(g_{2}\) comes from the splitting of our solution \(f-p_{2}^{(0)}[f]=g_{1}+g_{2}\) as done in Section 5 above, with the polynomial \(p_{2}^{(0)}[f]\) given in (A.9) below. Alternatively, we can directly consider the scaling of the full matrix \(\tilde{B}\) in (A.3). According to [24, Remark 3.2] and [23, Remark 2.4], the \(*\)-blocks in (A.3) scale to some higher power of \(r\) than the superdiagonal
blocks. Thus, using \(\tilde{B}=\tilde{B}_{0}+\tilde{B}-\tilde{B}_{0}\), where \(\tilde{B}_{0}\) corresponds to \(\tilde{B}\) with all \(*\)-blocks equal to zero, we rewrite
\[\mathscr{K}=\mathscr{K}_{0}+\sum_{1\leq i,j\leq N}\big{(}\tilde{b}_{i,j}-\tilde{ b}_{i,j}^{0}\big{)}x_{j}\partial_{x_{i}}f,\]
so that
\[\mathscr{K}_{0}^{a}g_{2}=\tilde{h}+\sum_{1\leq i,j\leq N}\big{(}\tilde{b}_{i,j} ^{0}-\tilde{b}_{i,j}\big{)}x_{j}\partial_{x_{i}}f\]
with \(\mathscr{K}_{0}^{a}\) as in (A.12) but where \(\tilde{B}_{0}\) replaces \(\tilde{B}\), and with \(\tilde{h}\) given in (A.13). The right hand side can be bounded as in Section 5 above, since the term \(\sum_{1\leq i,j\leq N}\big{(}\tilde{b}_{i,j}-\tilde{b}_{i,j}^{0}\big{)}x_{j} \partial_{x_{i}}f\) scales like a lower order term due to [24, Remark 3.2] and [23, Remark 2.4]. The details of the splitting are done for Dini-continuous coefficients in Subsection A.3 below.
### Holder coefficients
We have assembled the toolbox required for the Schauder estimates, and the argument of Section 5 goes through (with suitable modifications as outlined above in Subsection A.1), so that we derive
**Theorem A.1** (Schauder estimate for Kolmogorov operators).: _Let \(\alpha\in(0,1)\) be given. Let \(m\geq 3\) be some integer. Let \(f\) solve (A.1) in \(Q_{1}\). Suppose \(A\in C_{\ell}^{m-3+\alpha}(Q_{1})\) satisfies (1.8) for some \(\lambda_{0}>0\), where \(d=d_{0}\), and assume \(B,c,h\in C_{\ell}^{m-3+\alpha}(Q_{1})\). We further assume that the principal part operator \(\mathscr{K}\) defined in (A.2) is hypoelliptic, i.e. \(\tilde{B}\) is of the form (A.3). Then there holds_
\[\|f\|_{C_{\ell}^{m-1+\alpha}(Q_{1/4})}\leq C\Big{(}\|f\|_{L^{\infty}(Q_{1})}+ \|h\|_{C_{\ell}^{m-3+\alpha}(Q_{1})}\Big{)},\]
_for some \(C\) depending on \(N,\lambda_{0},\alpha,\|A\|_{C_{\ell}^{m-3+\alpha}},\|B\|_{C_{\ell}^{m-3+\alpha }},\|c\|_{C_{\ell}^{m-3+\alpha}}\)._
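As a sanity check (added here), Theorem A.1 contains the kinetic case of Section 5: for \(\kappa=1\), \(\tilde{B}_{1}=\mathrm{Id}_{d}\) and \(m=3\) it reads

\[\|f\|_{C_{\ell}^{2+\alpha}(Q_{1/4})}\leq C\Big(\|f\|_{L^{\infty}(Q_{1})}+\|h\|_{C_{\ell}^{\alpha}(Q_{1})}\Big),\]

which corresponds, up to the normalisation of the cylinders, to the estimate (5.5) obtained for (1.2) in Section 5.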
Similarly to Subsection 5.2, the divergence form case follows by realising that any divergence form equation can be written in non-divergence form plus an additional lower order term, provided that \(\nabla_{x_{\kappa}}A\in C_{\ell}^{m-3+\alpha}(Q_{1})\). Finally, we can derive a Schauder-type estimate under less stringent assumptions, assuming Dini regularity instead of Holder regularity, inspired by [27].
### Dini Coefficients
We point out a structural peculiarity when we consider more generally Dini-regular coefficients \(A,B,c\) and source \(h\). We denote by \(\omega_{g}\) the modulus of continuity of a function \(g\) on a subset \(Q\subset\mathbb{R}^{1+N}\), given by
\[\omega_{g}(\ln r):=\sup_{z_{1},z_{2}\in Q\atop d_{\ell}(z_{1},z_{2})<r}\big{|} g(z_{1})-g(z_{2})\big{|}.\]
A function \(g\) is said to be Dini-continuous in \(Q\) if
\[\int_{0}^{1}\frac{\omega_{g}(\ln r)}{r}\,\mathrm{d}r=\int_{-\infty}^{0} \omega_{g}(\rho)\,\mathrm{d}\rho<+\infty.\]
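To fix ideas, two examples (added for the reader's convenience), written in the variable \(\rho=\ln r\leq 0\):

\[\omega_{g}(\rho)=Ce^{\alpha\rho}:\ \int_{-\infty}^{0}Ce^{\alpha\rho}\,\mathrm{d}\rho=\frac{C}{\alpha}<\infty,\qquad\omega_{g}(\rho)=(1+|\rho|)^{-2}:\ \int_{-\infty}^{0}\frac{\mathrm{d}\rho}{(1+|\rho|)^{2}}=1<\infty.\]

Thus Holder moduli \(\omega_{g}(\ln r)=Cr^{\alpha}\) and the logarithmic modulus \(\omega_{g}(\ln r)=\big(1+\ln\frac{1}{r}\big)^{-2}\) are Dini-continuous, whereas \(\omega_{g}(\ln r)=\big(1+\ln\frac{1}{r}\big)^{-1}\) fails the Dini condition.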
We aim to show:
**Theorem A.2**.: _Let \(f\) solve (A.1) in \(Q_{1}\) such that \(A\) is a symmetric, uniformly elliptic matrix with real measurable entries, and suppose \(\tilde{B}\) has constant entries. Assume that the principal part operator \(\mathscr{K}\) (A.2) is hypoelliptic, i.e. \(\tilde{B}\) is of the form (A.3). Suppose that the coefficients \(A,B,c\) and the source \(h\) are
_Dini-regular. Then, for any \(z,z_{0}\in\mathbb{R}^{1+N}\) such that \(d_{\ell}(z,z_{0})<1/2\), \(f\) satisfies_
(A.8) \[\begin{split}\big{|}D^{2}f(z)&-D^{2}f(z_{0})\big{|} \\ &\leq C\Bigg{(}\int_{-\infty}^{\ln d_{\ell}(z,z_{0})}\omega_{A}( \xi)\,\mathrm{d}\xi+d_{\ell}(z,z_{0})\int_{\ln d_{\ell}(z,z_{0})}^{0}\omega_{A} (\xi)e^{-\xi}\,\mathrm{d}\xi\Bigg{)}\sum_{i,j}\sup_{Q_{1}}\big{|}\partial_{v_{i }v_{j}}^{2}f\big{|}\\ &\quad+C\Bigg{(}d_{\ell}(z,z_{0})+\int_{-\infty}^{\ln d_{\ell}(z, z_{0})}\omega_{c}(\xi)\,\mathrm{d}\xi+d_{\ell}(z,z_{0})\int_{\ln d_{\ell}(z,z_{0})}^{0} \omega_{c}(\xi)e^{-\xi}\,\mathrm{d}\xi\Bigg{)}\sup_{Q_{1}}|f\big{|}\\ &\quad+C\Bigg{(}\int_{-\infty}^{\ln d_{\ell}(z,z_{0})}\omega_{B} (\xi)\,\mathrm{d}\xi+d_{\ell}(z,z_{0})\int_{\ln d_{\ell}(z,z_{0})}^{0}\omega_{ B}(\xi)e^{-\xi}\,\mathrm{d}\xi\Bigg{)}\sum_{i}\sup_{Q_{1}}\big{|}\partial_{v_{i }}f\big{|}\\ &\quad+C\int_{-\infty}^{\ln d_{\ell}(z,z_{0})}\omega_{h}(\xi)\, \mathrm{d}\xi+Cd_{\ell}(z,z_{0})\int_{\ln d_{\ell}(z,z_{0})}^{0}\omega_{h}( \xi)e^{-\xi}\,\mathrm{d}\xi+Cd_{\ell}(z,z_{0})\sup_{Q_{1}}|h|.\end{split}\]
_Here \(D^{2}\) is a differential of order \(2\), and \(C=C(N,\lambda_{0})\)._
In particular we recover Theorem 1.6 of [27].
_Remark A.3_.: Theorem A.2 suggests that Dini continuity is the suitable notion of regularity for Schauder estimates. In particular, in Theorem A.1, we see that Holder regular solutions \(f\) are _fixed points_ of the Schauder estimates.
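To substantiate the remark, assume (a computation we add) that all the moduli in (A.8) are of Holder type, \(\omega(\xi)=Ce^{\alpha\xi}\) with \(\alpha\in(0,1)\). Then the two kinds of integrals appearing on the right hand side of (A.8) evaluate to

\[\int_{-\infty}^{\ln d_{\ell}(z,z_{0})}Ce^{\alpha\xi}\,\mathrm{d}\xi=\frac{C}{\alpha}\,d_{\ell}(z,z_{0})^{\alpha},\qquad d_{\ell}(z,z_{0})\int_{\ln d_{\ell}(z,z_{0})}^{0}Ce^{(\alpha-1)\xi}\,\mathrm{d}\xi\leq\frac{C}{1-\alpha}\,d_{\ell}(z,z_{0})^{\alpha},\]

so that the right hand side of (A.8) is of order \(d_{\ell}(z,z_{0})^{\alpha}\); for Holder continuous data one thus recovers a bound of \(C_{\ell}^{2+\alpha}\)-type, consistent with Theorem A.1.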
For this purpose, we consider \(0<\rho\leq 1\) to be determined and a solution \(f\) of (A.1) in \(Q_{1}\). Let \(\eta\in C_{c}^{\infty}(\mathbb{R}^{1+N})\) be a cut-off with \(0\leq\eta\leq 1\), such that \(\eta=1\) in \(Q_{\rho}\) and \(\eta=0\) outside \(Q_{2\rho}\). Let \(\tilde{f}=f\cdot\eta\). Without loss of generality we set \(z_{0}=(0,0,0)\). We denote by \(p_{2}^{(z_{0})}[f]\) the Taylor polynomial of \(f\) at \(z_{0}\) of kinetic degree less than or equal to \(2\), given by
(A.9) \[\begin{split} p_{2}^{z_{0}}[f](z)=f(z_{0})&+\sum_{N -d\leq i\leq N}\partial_{x_{i}}f(z_{0})\big{(}z^{(i)}-z_{0}^{(i)}\big{)}+\frac{ 1}{2}\sum_{N-d\leq i,j\leq N}\partial_{x_{i}x_{j}}^{2}f(z_{0})\big{(}z^{(i)}-z_{ 0}^{(i)}\big{)}(z^{(j)}-z_{0}^{(j)}\big{)}\\ &+\Bigg{[}\sum_{1\leq i,j\leq N}\tilde{b}_{i,j}x_{j}\partial_{x_ {i}}f(z_{0})-\partial_{t}f(z_{0})\Bigg{]}(t-t_{0}),\end{split}\]
where \(z^{(i)}\) denotes the \(i\)-th component of \(z\). We then write
(A.10) \[\tilde{f}-p_{2}^{(0)}[\tilde{f}]=\tilde{f}-\tilde{f}_{k}+\tilde{f}_{k}-p_{2}^{ (0)}[\tilde{f}],\]
where each \(\tilde{f}_{k}\) solves
(A.11) \[\mathscr{K}^{a}\tilde{f}_{k}=\tilde{h}(0,0,0),\]
in \(\mathcal{Q}_{k}:=Q_{\rho^{k}}\), with the constant coefficient operator \(\mathscr{K}^{a}\) given by
(A.12) \[\mathscr{K}^{a}:=\sum_{N-d\leq i,j\leq N}a_{i,j}^{(0)}\partial_{x_{i}x_{j}}^{2 }+\sum_{1\leq i,j\leq N}\tilde{b}_{i,j}x_{j}\partial_{x_{i}}-\partial_{t}\]
for \(a_{(0)}^{i,j}=a^{i,j}(z_{0})\), and the right hand side \(\tilde{h}\) given by
(A.13) \[\tilde{h}:=\sum_{N-d\leq i,j\leq N}\Big{(}a_{i,j}^{(0)}-a_{i,j}\Big{)}\partial_{ x_{i}x_{j}}^{2}f\cdot\eta+\sum_{N-d\leq i\leq N}\big{(}2a_{i,j}^{(0)}\partial_{x_{i}} \eta-b_{i}\eta\big{)}\partial_{x_{i}}f+(-c\eta+\mathscr{K}_{a}\eta)f+h\cdot\eta.\]
In particular, there holds
(A.14) \[\mathscr{K}^{a}\big{(}\tilde{f}_{k}-\tilde{f}_{k+1}\big{)}=0,\qquad\text{in } \mathcal{Q}_{k+1},\]
and
(A.15) \[\mathscr{K}^{a}\big{(}\tilde{f}-\tilde{f}_{k}\big{)}=\tilde{h}-\tilde{h}(0,0,0),\qquad\text{in }\mathcal{Q}_{k}.\]
On the one hand, we first perform a constant change of variables to rewrite \(\mathscr{K}^{a}\) in terms of \(\mathscr{K}\), as was done in (5.2). Then, due to (A.15), the upper bound of the fundamental solution (A.7) and the scaling (A.6), which extends Lemma 3.5, we bound for any \(k\geq 1\)
\[\int_{\mathcal{Q}_{k+1}}\left|\tilde{f}-\tilde{f}_{k}\right|^{2}\mathrm{d}z\leq C\rho^{(n+4)(k+1)}\sup_{\mathcal{Q}_{k+1}}\left|\tilde{h}-\tilde{h}(0,0,0)\right|^{2}\leq C\rho^{(n+4)(k+1)}\omega_{\tilde{h}}^{2}(\ln\rho^{k+1}).\]
Since \(\tilde{f}_{k}=\tilde{f}_{0}+\sum_{l=0}^{k-1}\tilde{f}_{l+1}-\tilde{f}_{l}\) we thus find
(A.16) \[\begin{split}\left(\rho^{-(n+6)(k+1)}\int_{\mathcal{Q}_{k+1}} \left|\tilde{f}-\tilde{f}_{k}\right|^{2}\mathrm{d}z\right)^{\frac{1}{2}}& \leq\Bigg{(}\sum_{l=0}^{k-1}\rho^{-(n+6)(l+1)}\int_{\mathcal{Q}_{l+1}} \left|\tilde{f}_{l+1}-\tilde{f}_{l}\right|^{2}\mathrm{d}z\Bigg{)}^{\frac{1}{2} }\\ &\leq\Bigg{\{}\sum_{l=0}^{k-1}\rho^{-(n+6)(l+1)}\Bigg{(}\int_{ \mathcal{Q}_{l+1}}\left|\tilde{f}_{l+1}-\tilde{f}\right|^{2}+\left|\tilde{f}- \tilde{f}_{l}\right|^{2}\mathrm{d}z\Bigg{)}\Bigg{\}}^{\frac{1}{2}}\\ &\leq C\sum_{l=0}^{k-1}\frac{\omega_{\tilde{h}}(\ln\rho^{l+1})}{ \rho^{l+1}}\\ &\leq C\int_{\ln\rho}^{0}\omega_{\tilde{h}}(\xi)e^{-\xi}\, \mathrm{d}\xi.\end{split}\]
On the other hand, we note that \(p_{2}^{(0)}[\tilde{f}]=\lim_{k\to\infty}\tilde{f}_{k}\). Indeed, \(p_{2}^{(0)}[\tilde{f}]\) is the Taylor polynomial of \(\tilde{f}\), so that
\[\sup_{\mathcal{Q}_{k}}\left(\tilde{f}-p_{2}^{(0)}[\tilde{f}]\right)=o(\rho^{2 k}),\]
and we also refer to [27, Equation (5.16)]. Moreover, due to (A.15) we can use (A.6) so that overall we find
\[\begin{split}\left|\tilde{f}_{k}(z)-p_{2}^{(0)}[\tilde{f}](z) \right|&\leq\sup_{\mathcal{Q}_{k}}\left|\tilde{f}_{k}-\tilde{f} \right|+\sup_{\mathcal{Q}_{k}}\left|\tilde{f}-p_{2}^{(0)}[\tilde{f}]\right|\\ &\leq C\rho^{2k}\sup_{\mathcal{Q}_{k}}\left|\tilde{h}-\tilde{h}(0,0,0)\right|+o(\rho^{2k})\\ &\leq C\rho^{2k}\omega_{\tilde{h}}(\ln\rho^{k})+o(\rho^{2k})\\ &\leq o(\rho^{2k}).\end{split}\]
Therefore, we may write
(A.17) \[\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]=\sum_{l=k}^{\infty}\tilde{f}_{l}-\tilde{ f}_{l+1}.\]
Due to (A.11), Subsection 4.1 (suitably making the replacements for the more general equation as outlined in Subsection A.1), (A.17), (A.15) and (A.6), we then find for \(\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]\)
\[\int_{\mathcal{Q}_{k+1}}\big{|}\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]-p_{2}^{(0)}[\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]]\big{|}^{2}\,\mathrm{d}z \leq C\Big{(}\frac{\rho^{k+1}}{\rho^{k}}\Big{)}^{n+6}\int_{\mathcal{Q}_{k}}\big{|}\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]\big{|}^{2}\,\mathrm{d}z\] \[=C\Big{(}\frac{\rho^{k+1}}{\rho^{k}}\Big{)}^{n+6}\sum_{l=k}^{\infty}\int_{\mathcal{Q}_{k}}\big{|}\tilde{f}_{l}-\tilde{f}_{l+1}\big{|}^{2}\,\mathrm{d}z\] \[\leq C\Big{(}\frac{\rho^{k+1}}{\rho^{k}}\Big{)}^{n+6}\sum_{l=k}^{\infty}\Bigg{(}\int_{\mathcal{Q}_{l}}\big{|}\tilde{f}_{l}-\tilde{f}\big{|}^{2}\,\mathrm{d}z+\int_{\mathcal{Q}_{l}}\big{|}\tilde{f}-\tilde{f}_{l+1}\big{|}^{2}\,\mathrm{d}z\Bigg{)}\] \[\leq C\Big{(}\frac{\rho^{k+1}}{\rho^{k}}\Big{)}^{n+6}\sum_{l=k}^{\infty}\rho^{l(n+4)}\omega_{h}^{2}(\ln\rho^{l})\] \[\leq C\rho^{(k+1)(n+6)}\rho^{-2k}\sum_{l=k}^{\infty}\omega_{h}^{2}(\ln\rho^{l}),\]
or equivalently
(A.18) \[\begin{split}\Bigg{(}\rho^{-(n+6)(k+1)}\int_{\mathcal{Q}_{k+1}} \big{|}\tilde{f}_{k}-p_{2}^{(0)}[\tilde{f}]-p_{2}^{(0)}[\tilde{f}_{k}-p_{2}^{ (0)}[\tilde{f}]]\big{|}^{2}\,\mathrm{d}z\Bigg{)}^{\frac{1}{2}}& \leq C\rho^{-k}\sum_{l=k}^{\infty}\omega_{\tilde{h}}(\ln\rho^{l}) \\ &\leq C\rho^{-k}\int_{-\infty}^{\ln\rho}\omega_{\tilde{h}}(\xi) \,\mathrm{d}\xi.\end{split}\]
Thus due to (A.10), (A.16) and (A.18) we conclude
\[\Bigg{(}\rho^{-(n+6)(k+1)}\int_{\mathcal{Q}_{k+1}}\big{|}\tilde{f}-p_{2}^{(0) }[\tilde{f}]\big{|}^{2}\,\mathrm{d}z\Bigg{)}^{\frac{1}{2}}\leq C\rho^{-(k+1)} \int_{-\infty}^{\ln\rho}\omega_{\tilde{h}}(\xi)\,\mathrm{d}\xi+C\int_{\ln\rho }^{0}\omega_{\tilde{h}}(\xi)e^{-\xi}\,\mathrm{d}\xi.\]
The right hand side will further be bounded using the explicit form of \(\tilde{h}\) in (A.13):
\[\rho^{-(k+1)}\int_{-\infty}^{\ln\rho} \omega_{\tilde{h}}(\xi)\,\mathrm{d}\xi+\int_{\ln\rho}^{0}\omega_{ \tilde{h}}(\xi)e^{-\xi}\,\mathrm{d}\xi\] \[\lesssim\Bigg{(}\rho^{-(k+1)}\int_{-\infty}^{\ln\rho}\omega_{A}( \xi)\,\mathrm{d}\xi+\int_{\ln\rho}^{0}\omega_{A}(\xi)e^{\xi}\,\mathrm{d}\xi \Bigg{)}\sum_{1\leq i,j\leq d_{0}}\sup_{Q_{1}}\big{|}\partial_{x_{i}x_{j}}^{2} f\big{|}\] \[\quad+\Bigg{(}1+\rho^{-(k+1)}\int_{-\infty}^{\ln\rho}\omega_{c}( \xi)\,\mathrm{d}\xi+\int_{\ln\rho}^{0}\omega_{c}(\xi)e^{-\xi}\,\mathrm{d}\xi \Bigg{)}\sup_{Q_{1}}\big{|}f\big{|}\] \[\quad+\Bigg{(}\rho^{-(k+1)}\int_{-\infty}^{\ln\rho}\omega_{B}( \xi)\,\mathrm{d}\xi+\int_{\ln\rho}^{0}\omega_{B}(\xi)e^{-\xi}\,\mathrm{d}\xi \Bigg{)}\sum_{1\leq i\leq d_{0}}\sup_{Q_{1}}\big{|}\partial_{x_{i}}f\big{|}\] \[\quad+\rho^{-(k+1)}\int_{-\infty}^{\ln\rho}\omega_{h}(\xi)\, \mathrm{d}\xi+\int_{\ln\rho}^{0}\omega_{h}(\xi)e^{-\xi}\,\mathrm{d}\xi+\sup_{Q _{1}}|h|.\]
For the left hand side we find for \(z,z_{0}\) such that \(d_{\ell}(z,z_{0})\leq 1/2\) upon choosing \(\rho=d_{\ell}(z,z_{0})\)
\[\frac{\big{|}D^{2}f(z)-D^{2}f(z_{0})\big{|}^{2}}{d_{\ell}(z,z_{0})^{2}}\leq C[f]_{C_{\ell}^{2+1}(Q_{\rho})}^{2}\leq C\inf_{p\in\mathcal{P}_{2}}\rho^{-(n+6)}\int_{Q_{\rho}}\big{|}\tilde{f}-p\big{|}^{2}\,\mathrm{d}z\leq C\rho^{-(n+6)}\int_{Q_{\rho}}\big{|}\tilde{f}-p_{2}^{(0)}[\tilde{f}]\big{|}^{2}\,\mathrm{d}z,\]
where we used Lemma 2.9 and the characterisation of Campanato norms in Theorem 2.7. This concludes the proof of (A.8).
## Appendix B Relation between Holder and Campanato spaces
This section is devoted to the proof of the equivalence between kinetic Campanato and Holder spaces, as stated in Theorem 2.7. We follow Campanato's arguments from [6]. We recall the notation \(\Omega(z_{0},r):=\Omega\cap Q_{r}(z_{0})\) for any subset \(\Omega\subset\mathbb{R}^{n}\). Throughout this section we will denote \(\Omega=Q_{R}(\bar{z}_{0})\) as in the statement of Theorem 2.7.
### Auxiliary Result
We start with a preliminary lemma, which in the elliptic case was first derived by De Giorgi [6, Lemma 2.1].
**Lemma B.1**.: _For a polynomial \(P\in\mathcal{P}_{k}\), a real number \(q\geq 1\), \(z_{0}\in\mathbb{R}^{1+2d}\), and \(\rho>0\) there exists a constant \(c\) such that_
\[\Big{|}(\partial_{t}+v\cdot\nabla_{x})^{j_{0}}\partial_{x_{1}}^{j_{1}}\cdots \partial_{x_{d}}^{j_{d}}\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{v_{d}}^{j_{ 2d}}P(z)\big{|}_{z=z_{0}}\Big{|}^{q}\leq\frac{c}{\rho^{n+|J|q}}\int_{Q_{\rho}( z_{0})}|P(z)|^{q}\,\mathrm{d}z\]
_where \(|J|=2s\cdot j_{0}+(1+2s)|(j_{1},\ldots,j_{d})|+|(j_{d+1},\ldots,j_{2d})|\)._
Proof.: Let \(\mathcal{T}_{k}\subset\mathcal{P}_{k}\) be the subset of \(k\)-degree polynomials such that
(B.1) \[\sum_{|J|\leq k}|a_{j}|^{2}=1,\]
where we recall that \(a_{j}\) are the coefficients of an element \(p\in\mathcal{P}_{k}\), which can be written as in (2.1). Let \(\mathcal{F}\) denote the class of measurable functions \(f:\mathbb{R}^{n}\to[0,1]\) compactly supported on \(Q_{1}\) such that \(\int_{\mathbb{R}^{n}}f(z)\,\mathrm{d}z\geq A\), where \(A=|Q_{\rho}(z_{0})|\rho^{-n}\). Let \(\gamma(A)=\inf_{P\in\mathcal{T}_{k},f\in\mathcal{F}}\int_{Q_{1}}|P(z)|^{q}f(z )\,\mathrm{d}z\). We want to show that
(B.2) \[\gamma(A)=\min_{P\in\mathcal{T}_{k},f\in\mathcal{F}}\int_{Q_{1}}|P(z)|^{q}f(z )\,\mathrm{d}z.\]
For any integer \(m\) there exists \(P_{m}\in\mathcal{T}_{k}\) and \(f_{m}\in\mathcal{F}\) such that
(B.3) \[\gamma(A)\leq\int_{Q_{1}}|P_{m}(z)|^{q}f_{m}(z)\,\mathrm{d}z<\gamma(A)+\frac{1 }{m}.\]
Due to the normalisation (B.1) we can extract a subsequence \(\{P_{\nu}\}\) of \(\{P_{m}\}\) converging uniformly on compact subsets of \(\mathbb{R}^{n}\) to \(P^{*}\in\mathcal{T}_{k}\). Similarly, since \(0\leq f\leq 1\) we can extract another subsequence \(\{f_{\mu}\}\) of \(\{f_{\nu}\}\) converging weakly in \(L^{2}(Q_{1})\) to some \(f^{*}\in\mathcal{F}\). The subsequence will still satisfy (B.3), so that taking the limit yields
\[\gamma(A)=\int_{Q_{1}}|P^{*}(z)|^{q}f^{*}(z)\,\mathrm{d}z.\]
This proves the claim (B.2). It follows that \(\gamma(A)>0\). Moreover, for \(z_{0}\) and \(\rho\) such that \(Q_{\rho}(z_{0})\subset Q_{1}\), and for \(P\in\mathcal{T}_{k}\) there holds
\[\gamma(A)\leq\int_{Q_{\rho}(z_{0})}|P(z)|^{q}\,\mathrm{d}z,\]
since \(|Q_{\rho}(z_{0})|\geq A\rho^{n}\). If \(P\in\mathcal{P}_{k}\) then \(P(z)\cdot\Big{\{}\sum_{|J|\leq k}|a_{j}|^{2}\Big{\}}^{-\frac{1}{2}}\in \mathcal{T}_{k}\) and thus \(\bigg{\{}\sum_{|J|\leq k}|a_{j}|^{2}\bigg{\}}^{\frac{q}{2}}\leq\frac{1}{\gamma (A)}\int_{Q_{\rho}(z_{0})}|P(z)|^{q}\,\mathrm{d}z\), or also
(B.4) \[|a_{j}|^{q}\leq\frac{1}{\gamma(A)}\int_{Q_{\rho}(z_{0})}|P(z)|^{q}\,\mathrm{d} z,\quad\forall|J|\leq k.\]
Now let \(P\in\mathcal{P}_{k}\). Denote with \((s,y,w)=T(t,x,v)\) the transformation respecting the Lie group structure
\[\bar{z}:=(s,y,w)=\left(\frac{t-t_{0}}{\rho^{2s}},\frac{x-x_{0}-(t-t_{0})v_{0}}{ \rho^{1+2s}},\frac{v-v_{0}}{\rho}\right)=\left(z_{0}^{-1}\circ z\right)_{ \frac{1}{\rho}}.\]
Then
(B.5) \[\begin{split}\int_{Q_{\rho}(z_{0})}|P(z)|^{q}\,\mathrm{d}z& =\rho^{n}\int_{T(Q_{\rho}(z_{0}))}\big{|}P(\rho^{2s}s+t_{0},\rho^{1+2s}y+x_{0}+( t-t_{0})v_{0},\rho w+v_{0})\big{|}^{q}\,\mathrm{d}\bar{z}\\ &=\rho^{n}\int_{T(Q_{\rho}(z_{0}))}\big{|}P\big{(}z_{0}\circ\bar{z }_{\rho}\big{)}\big{|}^{q}\,\mathrm{d}\bar{z}.\end{split}\]
We note that \(T(Q_{\rho}(z_{0}))\subset Q_{1}\), \(|T(Q_{\rho}(z_{0}))|=\rho^{-n}\int_{Q_{\rho}(z_{0})}\,\mathrm{d}z\geq A\) and for \(J_{1}:=(j_{1},\ldots,j_{d})\), \(J_{2}:=(j_{d+1},\ldots,j_{2d})\)
\[P\big{(}z_{0}\circ\bar{z}_{\rho}\big{)}=\sum_{|J|\leq k}\frac{(\partial_{t}+v \cdot\nabla_{x})^{j_{0}}\partial_{x_{1}}^{j_{1}}\cdots\partial_{x_{d}}^{j_{d} }\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{v_{d}}^{j_{2d}}P(z)|_{z=z_{0}}}{j!}\rho^{2s\cdot j_{0}}\rho^{(1+2s)\cdot|J_{1}|}\rho^{|J_{2}|}\bar{z}^{j}.\]
Equations (B.4) and (B.5) then imply
\[\Big{|}(\partial_{t}+v\cdot\nabla_{x})^{j_{0}}\partial_{x_{1}}^{j_{1}}\cdots \partial_{x_{d}}^{j_{d}}\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{v_{d}}^{j_{ 2d}}P(z)|_{z=z_{0}}\Big{|}^{q}\leq\frac{(j!)^{q}}{\rho^{n+q[2sj_{0}+(1+2s)|J_{ 1}|+|J_{2}|]}\gamma(A)}\int_{Q_{\rho}(z_{0})}|P(z)|^{q}\,\mathrm{d}z\quad \forall j.\]
### Expansion of \(f\)
We let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\). For all \(z_{0}\in\bar{\Omega}\) and for all \(\rho\in[0,\mathrm{diam}\ \Omega]\) we show the existence of a unique polynomial \(P_{k}(z,z_{0},\rho,f)\) such that
(B.6) \[\inf_{p\in\mathcal{P}_{k}}\int_{\Omega(z_{0},\rho)}|f(z)-p(z)|^{q}\,\mathrm{d}z=\int_{\Omega(z_{0},\rho)}|f(z)-P_{k}(z,z_{0},\rho,f)|^{q}\,\mathrm{d}z\]
Indeed, \(P_{k}(z,z_{0},\rho,f)\) is the kinetic Taylor expansion of \(f\) at \(z_{0}\). Let \(P\in\mathcal{P}_{k}\) and write
\[P(z)=\sum_{j\in\mathbb{N}^{1+2d},|J|\leq k}\frac{a_{j}(z_{0})}{j!}(z-z_{0})^{j}.\]
We denote
\[h(\{a_{j}\})=\|f-P\|_{L^{q}(\Omega(z_{0},\rho))},\]
where \(\Omega(z_{0},\rho)=Q_{R}(\bar{z}_{0})\cap Q_{\rho}(z_{0})\) with \(Q_{R}(\bar{z}_{0})\) as in the statement of Theorem 2.7. Note that \(h\) is a non-negative continuous real function of the coefficients of \(P\). The infimum of \(h\) is attained in a compact set containing the origin, so that the existence of \(P_{k}\) follows by standard arguments. The uniqueness of \(P_{k}\) follows from the uniform convexity of the Lebesgue spaces \(L^{q}\). We denote the coefficients of \(P_{k}(z,z_{0},\rho,f)\) by \(a_{j}(z_{0},\rho)\). Note that they are given by
(B.7) \[a_{j}(z_{0},\rho,f)=(\partial_{t}+v\cdot\nabla_{x})^{j_{0}}\partial_{x_{1}}^{j _{1}}\cdots\partial_{x_{d}}^{j_{d}}\partial_{v_{1}}^{j_{d+1}}\cdots\partial_{ v_{d}}^{j_{2d}}P_{k}(z,z_{0},\rho,f)\big{|}_{z=z_{0}}.\]
**Lemma B.2**.: _For \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) there exists a constant \(c(q,\lambda)>0\) such that for any \(z_{0}\in\Omega\) and \(0<\rho\leq diam\ \Omega\) and \(l\in\mathbb{N}_{0}\) there holds_
\[\int_{\Omega(z_{0},\rho 2^{-(l-1)})}\big{|}P_{k}(z,z_{0},\rho 2^{-l},f)-P_{k}(z,z_{0}, \rho 2^{-l-1},f)\big{|}^{q}\,\mathrm{d}z\leq c2^{-l\lambda}\rho^{\lambda}[f]_{ \mathcal{L}_{k}^{q,\lambda}}^{q}\]
Proof.: For all \(z\in\Omega\big{(}z_{0},\rho 2^{-(l-1)}\big{)}\) there holds
\[|P_{k}(z,z_{0},\rho 2^{-l},f)-P_{k}(z,z_{0},\rho 2^{-l-1},f)|^{q}\leq 2^{q}|P_{k}(z,z _{0},\rho 2^{-l},f)-f(z)|^{q}+2^{q}|P_{k}(z,z_{0},\rho 2^{-(l-1)},f)-f(z)|^{q}\]
Thus
\[\int_{\Omega(z_{0},\rho 2^{-(l-1)})}|P_{k}(z,z_{0},\rho 2^{-l},f)-P_{k}(z,z _{0},\rho 2^{-l-1},f)|^{q}\,\mathrm{d}z \leq 2^{q}[f]_{\mathcal{L}_{k}^{q,\lambda}}^{q}\big{(}2^{-l\lambda }\rho^{\lambda}+2^{(-l-1)\lambda}\rho^{\lambda}\big{)}\] \[=2^{q}(1+2^{-\lambda})2^{-l\lambda}\rho^{\lambda}[f]_{\mathcal{L}_ {k}^{q,\lambda}}^{q}.\]
**Lemma B.3**.: _Suppose \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\). Then for any \(z_{0},z_{1}\in\bar{\Omega}\) and for any multi-index \(l\) such that \(|L|=k\) with \(|L|=2s\cdot l_{0}+(1+2s)|L_{1}|+|L_{2}|\) there holds_
(B.8) \[\big{|}a_{l}(z_{0},2d_{\ell}(z_{0},z_{1}),f)-a_{l}(z_{1},2d_{\ell}(z_{0},z_{1}),f)\big{|}^{q}\leq c2^{q+1+\lambda}[f]_{\mathcal{L}_{k}^{q,\lambda}}^{q}d_{ \ell}(z_{0},z_{1})^{\lambda-n-kq},\]
_where \(d_{\ell}\) is the kinetic distance defined in 2.1._
Proof.: Let \(z_{0},z_{1}\in\bar{\Omega}\). We write \(\rho=d_{\ell}(z_{0},z_{1})\) and \(I_{\rho}=\Omega(z_{0},2\rho)\cap\Omega(z_{1},2\rho)\). Then we have
\[|P_{k}(z,z_{0},2\rho,f)-P_{k}(z,z_{1},2\rho,f)|^{q}\leq 2^{q}|P_{k}(z,z_{0},2 \rho,f)-f(z)|^{q}+2^{q}|P_{k}(z,z_{1},2\rho,f)-f(z)|^{q}.\]
Integrating over \(\Omega(z_{0},\rho)\subset I_{\rho}\) we obtain
(B.9) \[\begin{split}\int_{\Omega(z_{0},\rho)}&|P_{k}(z,z_ {0},2\rho,f)-P_{k}(z,z_{1},2\rho,f)|^{q}\,\mathrm{d}z\\ &\leq 2^{q}\int_{\Omega(z_{0},\rho)}|P_{k}(z,z_{0},2\rho,f)-f(z)|^{q }\,\mathrm{d}z+2^{q}\int_{\Omega(z_{0},\rho)}|P_{k}(z,z_{1},2\rho,f)-f(z)|^{q} \,\mathrm{d}z\\ &\leq 2^{q+\lambda+1}\rho^{\lambda}[f]_{\mathcal{L}_{k}^{q, \lambda}}^{q}.\end{split}\]
On the other hand, by (B.7), and Lemma B.1 applied to \(P(z)=P_{k}(z,z_{0},2\rho,f)-P_{k}(z,z_{1},2\rho,f)\) and since the \(k\)-th derivative of a polynomial of degree \(k\) is constant, we have
(B.10) \[\begin{split}\big{|}a_{l}\big{(}z_{0},2d_{\ell}(z_{0},z_{1}),f \big{)}&-a_{l}\big{(}z_{1},2d_{\ell}(z_{0},z_{1}),f\big{)}\big{|} ^{q}\\ &\leq c\rho^{-(n+kq)}\int_{\Omega(z_{0},\rho)}\big{|}P_{k}(z,z_{0},2\rho,f)-P_{k}(z,z_{1},2\rho,f)\big{|}^{q}\,\mathrm{d}z.\end{split}\]
Finally, the combination of (B.9) and (B.10) implies (B.8) and concludes the proof.
**Lemma B.4**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\). Then there exists a constant \(c\) such that for all \(z_{0}\in\bar{\Omega},0<\rho\leq\text{diam }\Omega\), \(i\in\mathbb{N}\) and multi-index \(l\in\mathbb{N}^{1+2d}\) with \(|L|\leq k\) there holds_
\[\big{|}a_{l}(z_{0},\rho,f)-a_{l}(z_{0},\rho 2^{-i},f)\big{|}\leq c[f]_{ \mathcal{L}_{k}^{q,\lambda}}\sum_{m=0}^{i-1}2^{m\big{(}\frac{n+|L|q-\lambda}{ q}\big{)}}\rho^{\frac{\lambda-n-|L|q}{q}}.\]
Proof.: We have
\[\big{|}a_{l}(z_{0},\rho,f)-a_{l}(z_{0},\rho 2^{-i},f)\big{|}\leq\sum_{m=0}^{i-1} \big{|}a_{l}(z_{0},\rho 2^{-m},f)-a_{l}(z_{0},\rho 2^{-m-1},f)\big{|}.\]
Using the relation (B.7) and applying Lemma B.1 to \(P_{k}(z,z_{0},\rho 2^{-m},f)-P_{k}(z,z_{0},\rho 2^{-m-1},f)\) we get
\[\begin{split}&\big{|}a_{l}(z_{0},\rho,f)-a_{l}(z_{0},\rho 2^{-i},f) \big{|}\\ &\leq c\rho^{-\frac{n}{q}-|L|}\sum_{m=0}^{i-1}2^{(m+1)\big{(} \frac{n}{q}+|L|\big{)}}\Bigg{[}\int_{\Omega(z_{0},\rho 2^{-m-1})}|P_{k}(z,z_{0},\rho 2^{-m},f)-P_{k}(z,z_{0}, \rho 2^{-m-1},f)|^{q}\,\mathrm{d}z\Bigg{]}^{\frac{1}{q}}.\end{split}\]
We conclude using Lemma B.2.
Now we can prove the following useful lemma.
**Lemma B.5**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) such that \(n+\tilde{k}q<\lambda\leq n+(\tilde{k}+1)q\) where \(0\leq\tilde{k}\leq k\). Then there exist functions \(\{g_{j}(z_{0})\}\) for \(j\in\mathbb{N}^{1+2d}\) with \(|J|\leq\tilde{k}\) such that for all \(0<\rho\leq\text{diam }\bar{\Omega},z_{0}\in\bar{\Omega}\) there holds_
(B.11) \[\big{|}a_{j}(z_{0},\rho,f)-g_{j}(z_{0})\big{|}\leq c(\lambda,q,k,n,B)\rho^{ \frac{\lambda-n-|J|q}{q}}[f]_{\mathcal{L}_{k}^{q,\lambda}}.\]
_As a consequence, there holds_
(B.12) \[\lim_{\rho\to 0}a_{j}(z_{0},\rho,f)=g_{j}(z_{0}),\]
_uniformly with respect to \(z_{0}\)._
Proof.: We show that the sequence \(\{a_{j}(z_{0},\rho 2^{-i},f)\}\) converges in the limit \(i\to\infty\). Let \(i_{1},i_{2}\) be two non-negative integers and assume without loss of generality that \(i_{2}>i_{1}\). With Lemma B.4 we obtain
\[\big{|}a_{j}(z_{0},\rho 2^{-i_{2}},f)-a_{j}(z_{0},\rho 2^{-i_{1}},f)\big{|} \leq c[f]_{\mathcal{L}_{k}^{q,\lambda}}\sum_{m=i_{1}}^{i_{2}-1}2^{m \big{(}\frac{n+|J|q-\lambda}{q}\big{)}}\rho^{\frac{\lambda-n-|J|q}{q}}.\]
Since \(|J|\leq\tilde{k}\) and \(\lambda>n+\tilde{k}q\) the series \(\sum_{m=0}^{\infty}2^{m\big{(}\frac{n+|J|q-\lambda}{q}\big{)}}\) converges. Thus \(\{a_{j}(z_{0},\rho 2^{-i},f)\}\) is a Cauchy sequence and hence converges as \(i\to\infty\).
We now show that the limit is uniform in \(\rho\). Let \(\rho_{1}\) and \(\rho_{2}\) be such that \(0<\rho_{1}\leq\rho_{2}\leq\text{diam }\Omega\). With Lemma B.1 we get
\[\big{|}a_{j}(z_{0},\rho_{1}2^{-i},f)-a_{j}(z_{0},\rho_{2}2^{-i}, f)\big{|}^{q} \leq c\frac{2^{i(n+|J|q)}}{\rho_{1}^{n+|J|q}}\int_{\Omega(z_{0}, \rho_{1}2^{-i})}\big{|}P_{k}(z,z_{0},\rho_{1}2^{-i},f)-P_{k}(z,z_{0},\rho_{2} 2^{-i},f)\big{|}^{q}\,\mathrm{d}z\] \[\leq c\frac{2^{i(n+|J|q)}}{\rho_{1}^{n+|J|q}}\Bigg{[}\int_{\Omega (z_{0},\rho_{1}2^{-i})}\big{|}P_{k}(z,z_{0},\rho_{1}2^{-i},f)-f(z)\big{|}^{q} \,\mathrm{d}z\] \[\qquad\qquad\qquad+\int_{\Omega(z_{0},\rho_{2}2^{-i})}\big{|}P_{ k}(z,z_{0},\rho_{2}2^{-i},f)-f(z)\big{|}^{q}\,\mathrm{d}z\Bigg{]}\] \[\leq c2^{q}\frac{\rho_{1}^{\lambda}+\rho_{2}^{\lambda}}{\rho_{1} ^{n+|J|q}}2^{-i(\lambda-n-|J|q)}[f]_{\mathcal{L}_{k}^{q,\lambda}}\to 0,\]
as \(i\to\infty\) since \(\lambda-n-|J|q>0\).
Thus for \(z_{0}\in\bar{\Omega},0<\rho\leq\text{diam }(\Omega)\) and \(|J|\leq\tilde{k}\) we can take
(B.13) \[g_{j}(z_{0})=\lim_{i\to\infty}a_{j}(z_{0},\rho 2^{-i},f).\]
The function \(g_{j}\) is thus well-defined in \(\bar{\Omega}\). Since the series \(\sum_{m=0}^{\infty}2^{m\big{(}\frac{n+|J|q-\lambda}{q}\big{)}}\) converges, we deduce from Lemma B.4
(B.14) \[\big{|}a_{j}(z_{0},\rho,f)-a_{j}(z_{0},\rho 2^{-i},f)\big{|}\leq c[f]_{ \mathcal{L}_{k}^{q,\lambda}}\rho^{\frac{\lambda-n-|J|q}{q}}.\]
Combining (B.13) and (B.14) yields the result.
### The function \(g_{j}(z_{0})\)
We have the following theorem.
**Theorem B.6**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) with \(n+kq<\lambda\). Then the functions \(g_{j}(z_{0})\) with \(|J|=k\) are Holder continuous in \(\bar{\Omega}\) and for any \(z_{1},z_{2}\in\bar{\Omega}\) there holds_
(B.15) \[|g_{j}(z_{1})-g_{j}(z_{2})|\leq c[f]_{\mathcal{L}_{k}^{q,\lambda}}d_{\ell}(z_{1},z_{2})^{\frac{\lambda-n-kq}{q}}.\]
Proof.: Take \(z_{1},z_{2}\in\bar{\Omega}\) such that \(\rho=d_{\ell}(z_{1},z_{2})\leq\frac{\text{diam }\Omega}{2}\). Then
\[|g_{j}(z_{1})-g_{j}(z_{2})|\leq|g_{j}(z_{1})-a_{j}(z_{1},2\rho)|+|g_{j}(z_{2})-a_ {j}(z_{2},2\rho)|+|a_{j}(z_{1},2\rho)-a_{j}(z_{2},2\rho)|.\]
On the one hand, by (B.11) we have
\[|g_{j}(z_{1})-a_{j}(z_{1},2\rho)|\leq c2^{\frac{\lambda-n-kq}{q}}\rho^{\frac{ \lambda-n-kq}{q}}[f]_{\mathcal{L}_{k}^{q,\lambda}},\]
and
\[|g_{j}(z_{2})-a_{j}(z_{2},2\rho)|\leq c2^{\frac{\lambda-n-kq}{q}}\rho^{\frac{ \lambda-n-kq}{q}}[f]_{\mathcal{L}_{k}^{q,\lambda}}.\]
On the other hand (B.8) implies
\[|a_{j}(z_{1},2\rho)-a_{j}(z_{2},2\rho)|\leq c2^{\frac{q+1+\lambda}{q}}\rho^{ \frac{\lambda-n-kq}{q}}[f]_{\mathcal{L}_{k}^{q,\lambda}}.\]
This yields the result in case that \(d_{\ell}(z_{1},z_{2})\leq\frac{\text{diam }\Omega}{2}\).
In case that \(d_{\ell}(z_{1},z_{2})>\frac{\text{diam }\Omega}{2}\) we can construct a polygonal line contained in \(\bar{\Omega}\) with endpoints \(z_{1}\) and \(z_{2}\) and with sides of length smaller than or equal to \(\frac{\text{diam }\Omega}{2}\), see Figure B.3. The length of the sides can be bounded by \(\text{diam }\Omega\) uniformly with respect to \(z_{1}\) and \(z_{2}\). Thus, to conclude, it suffices to apply (B.15) to the endpoints of the sides of such a polygonal line.
For the sequel, we denote by (0) the \(d\)-tuple \((0,\ldots,0)\) and by \(e_{i}\) the vector in \(\mathbb{R}^{d}\) with the \(i\)-th coordinate equal to \(1\) and else \(0\). We also note that any polynomial degree \(k\in\mathbb{N}+2s\mathbb{N}\) can be written as \(k=2s\cdot k_{0}+(1+2s)\cdot k_{1}+k_{2}\) with \(k_{0},k_{1},k_{2}\in\mathbb{N}\).
**Theorem B.7**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) with \(k_{0},k_{1},k_{2}\geq 1\) and \(n+kq<\lambda\). Then for any multi-index \(j\in\mathbb{N}^{1+2d}\) such that \(|J|\leq k\) the function \(g_{j}\) has a first partial derivative in \(\Omega\), and for any \(z\in\Omega\) and \(i=1,\ldots,d\) there holds_
(B.16) \[\begin{split}\mathcal{T}g_{j}(z)&=g_{(j_{0}+1,J_{1},J_{2})}(z),\qquad\qquad j_{0}\leq k_{0}-1,|J_{1}|\leq k_{1},|J_{2}|\leq k_{2}\\ \frac{\partial g_{j}(z)}{\partial x_{i}}&=g_{(j_{0},J_{1}+e_{i},J_{2})}(z),\qquad\qquad j_{0}\leq k_{0},|J_{1}|\leq k_{1}-1,|J_{2}| \leq k_{2}\\ \frac{\partial g_{j}(z)}{\partial v_{i}}&=g_{(0,J_{ 1},J_{2}+e_{i})}(z),\qquad\qquad j_{0}=0,|J_{1}|\leq k_{1},|J_{2}|\leq k_{2}-1. \end{split}\]
Proof.: For this proof we omit the dependency on \(f\) in the coefficients \(a_{j}(z_{0},\rho,f)\) and \(P_{k}(z,z_{0},\rho,f)\) and simply write \(a_{j}(z_{0},\rho)\) and \(P_{k}(z,z_{0},\rho)\), respectively.
_Step 1._ We will start proving the first line. We consider \(j=(j_{0},J_{1},J_{2})\) for \(j_{0}\leq k_{0}-1,|J_{1}|=k_{1},|J_{2}|=k_{2}\). Theorem B.6 proves that \(g_{(k_{0},J_{1},J_{2})}\) is Holder continuous in a classical sense for \(|J_{1}|=k_{1},|J_{2}|=k_{2}\) and in particular continuous. Thus we may assume that \(g_{(j_{0}+\delta,J_{1},J_{2})}\) is continuous in \(\bar{\Omega}\) for \(\delta=1,\ldots,k_{0}-j_{0}\). Let \(z_{0}\in\Omega\) and \(\rho\) be such that \(B_{|\rho|}(z_{0})\subset\Omega\). By (B.7) we have
(B.17) \[\begin{split}\frac{a_{j}\big{(}z_{0}+(\rho,(0),(0)),2|\rho| \big{)}-a_{j}(z_{0},2|\rho|)}{\rho}=\frac{D^{j}\big{[}P_{k}\big{(}z,z_{0}+( \rho,(0),(0)),2|\rho|\big{)}-P_{k}(z,z_{0},2|\rho|)\big{]}}{\rho}\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\sum_{ \delta=1}^{k_{0}-j_{0}}\frac{(-1)^{\delta}}{\delta!}\rho^{\delta-1}a_{j} \big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}.\end{split}\]
With Lemma B.1 and (B.9) we obtain
(B.18) \[\begin{split}\left|\frac{D^{j}\big{[}P_{k}\big{(}z,z_{0}+(\rho,(0 ),(0)),2|\rho|\big{)}-P_{k}(z,z_{0},2|\rho|)\big{]}}{\rho}\right|^{q}\\ \leq c|\rho|^{-n-|J|q}\int_{\Omega(z_{0},|\rho|)}\Big{|}P_{k}\big{(} z,z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-P_{k}(z,z_{0},2|\rho|)\Big{|}^{q}\, \mathrm{d}z\\ \leq c2^{q+\lambda+1}|\rho|^{\lambda-n-|J|q}[f]_{\mathcal{L}_{k}^ {q,\lambda}}.\end{split}\]
Moreover, for \(1\leq\delta\leq k_{0}-j_{0}\) there holds
(B.19) \[\begin{split}\Big{|}a_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-g_{(j_{0}+\delta,J_{1},J_{2})}(z_{0})\Big{|}\\ \leq\big{|}a_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-g_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0))\big{)}\big{|}\\ +\big{|}g_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0))\big{)}-g_{(j_{0}+\delta,J_{1},J_{2})}(z_{0})\big{|}.\end{split}\]
Using (B.11) we can estimate the first term on the right hand side of (B.19) by
(B.20) \[\begin{split}\Big{|}a_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-g_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0))\big{)}\Big{|}\\ \leq c2^{\frac{\lambda-n-(|J|+2s\delta)q}{q}}|\rho|^{\frac{\lambda-n-(|J|+2s\delta)q}{q}}[f]_{\mathcal{L}_{k}^{q,\lambda}}.\end{split}\]
From (B.19) and (B.20) and since by induction hypothesis \(g_{(j_{0}+\delta,J_{1},J_{2})}\) are continuous for \(\delta=1,\ldots,k_{0}-j_{0}\) we have
(B.21) \[\lim_{\rho\to 0}a_{(j_{0}+\delta,J_{1},J_{2})}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}=g_{(j_{0}+\delta,J_{1},J_{2})}(z_{0})\qquad\delta=1,\ldots,k_{0}-j_{0}.\]
Thus from (B.17), (B.18) and (B.21) we deduce that
\[\lim_{\rho\to 0}\frac{a_{j}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-a_{j}(z_{0},2| \rho|\big{)}}{\rho}=g_{(j_{0}+1,J_{1},J_{2})}(z_{0}),\]
uniformly in \(z_{0}\). Thus if we can show that
(B.22) \[\lim_{\rho\to 0}\frac{g_{j}\big{(}z_{0}+(\rho,(0),(0))\big{)}-g_{j}(z_{0})}{\rho}= \lim_{\rho\to 0}\frac{a_{j}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}-a_{j}(z_{0},2\rho)}{\rho},\]
then we can conclude the proof of the first line of (B.16). We first notice that by (B.11)
(B.23) \[\left|\frac{g_{j}\big{(}z_{0}+(\rho,(0),(0))\big{)}-a_{j}\big{(}z_{0}+(\rho,(0),(0)),2|\rho|\big{)}}{\rho}\right|\leq c2^{\frac{\lambda-n-|J|q}{q}}|\rho|^{ \frac{\lambda-n}{q}-|J|-1}[f]_{\mathcal{L}_{k}^{q,\lambda}},\]
and
(B.24) \[\left|\frac{g_{j}(z_{0})-a_{j}(z_{0},2|\rho|)}{\rho}\right|\leq c2^{\frac{ \lambda-n-|J|q}{q}}|\rho|^{\frac{\lambda-n}{q}-|J|-1}[f]_{\mathcal{L}_{k}^{q, \lambda}}.\]
Thus, with the triangle inequality, (B.23) and (B.24) imply (B.22), which in turn implies the first line of (B.16).
_Step 2._ To prove the second statement in (B.16) we proceed as in Step 1. Now we consider \(j_{0}=1,\ldots,k_{0},|J_{1}|\leq k_{1}-1\) and \(|J_{2}|=k_{2}\). We have shown that \(g_{j}\) is continuous for \(j_{0}=1,\ldots,k_{0},|J_{1}|=k_{1},|J_{2}|=k_{2}\). Assume then that \(g_{(j_{0},J_{1}+\delta e_{i},J_{2})}\) is continuous in \(\bar{\Omega}\) for \(\delta=1,\ldots,k_{1}-|J_{1}|\). We again have by (B.7)
(B.25) \[\begin{split}\frac{a_{j}\big{(}z_{0}+\rho(0,e_{i},(0)),2|\rho| \big{)}-a_{j}(z_{0},2|\rho|)}{\rho}&=\frac{D^{j}\big{[}P_{k} \big{(}z,z_{0}+\rho(0,e_{i},(0)),2|\rho|\big{)}-P_{k}(z,z_{0},2|\rho|)\big{]} }{\rho}\\ &\quad-\sum_{\delta=1}^{k_{1}-|J_{1}|}\frac{(-1)^{\delta}}{ \delta!}\rho^{\delta-1}a_{j}\big{(}z_{0}+\rho(0,e_{i},(0)),2|\rho|\big{)}. \end{split}\]
The proof is exactly the same if we replace \((\rho,(0),(0))\) with \(\rho(0,e_{i},(0))\), \(k_{0}-j_{0}\) with \(k_{1}-|J_{1}|\) and instead of \(2s\delta\) in the exponent of (B.20) we get \((1+2s)\delta\).
_Step 3._ To deduce the final statement in (B.16) the ideas are the same but the statement only holds for \(j_{0}=0\) since \(\mathcal{T}\) and \(D_{v}\) do not commute. Therefore it was important to prove the first statement first, since now we know that \(g_{j}\) is continuous for \(j_{0}=0,|J_{1}|\leq k_{1}\) and \(|J_{2}|=k_{2}\). We now assume that \(g_{(j_{0},J_{1},J_{2}+\delta e_{i})}\) is continuous in \(\bar{\Omega}\) for \(\delta=1,\ldots,k_{2}-|J_{2}|\). Replacing \((\rho,(0),(0))\) with \(\rho(0,(0),e_{i})\), \(k_{0}-j_{0}\) with \(k_{2}-|J_{2}|\) and \(2s\delta\) in the exponent of (B.20) with \(\delta\), but otherwise proceeding as above, we conclude.
Finally, combining the argument for the continuity of \(g_{j}\) in all three steps yields the improvement in ranges of \(|J_{1}|\) and \(|J_{2}|\) as stated in the theorem.
As a corollary of Theorem B.6 and B.7 we get
**Theorem B.8**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) with \(n+kq<\lambda\). Then the function \(g_{(0)}\in C_{\ell}^{\beta}(\bar{\Omega})\) where \(\beta=\frac{\lambda-n}{q}\) and there holds_
\[\mathcal{T}^{j_{0}}D_{x}^{J_{1}}D_{v}^{J_{2}}g_{(0)}(z)=g_{j}(z)\qquad\forall z \in\Omega,\ \forall|J|\leq k.\]
_Recall \(j=(j_{0},J_{1},J_{2})\in\mathbb{N}^{1+2d}\) and \(|J|=2s\cdot j_{0}+(1+2s)\cdot|J_{1}|+|J_{2}|\)._
_Remark B.9_.: For \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) with \(n+(k+1)q<\lambda\) we deduce from (B.15) that \(g_{j}\) with \(|J|=k\) are constant and thus by Theorem B.8, \(g_{(0)}\) is a polynomial of kinetic degree at most \(k\).
### Comparing the Holder norm and the Campanato norm
**Theorem B.10**.: _Let \(f\in\mathcal{L}_{k}^{q,\lambda}(\Omega)\) with \(n+kq<\lambda\leq n+(k+1)q\). Then \(f\in C_{\ell}^{\beta}(\Omega)\) where \(\beta=\frac{\lambda-n}{q}\) and there holds_
(B.26) \[[f]_{C_{\ell}^{\beta}}\leq c[f]_{\mathcal{L}_{k}^{q,\lambda}}.\]
_If \(\lambda>n+(k+1)q\) then \(f\) is a polynomial of kinetic degree at most \(k\)._
Proof.: Due to Theorem B.8 and Remark B.9 it suffices to show that \(f(z)=g_{(0)}(z)=\lim_{\rho\to 0}a_{(0)}(z,\rho)\) for almost every \(z\in\Omega\). Then (B.26) follows from (B.15) in Theorem B.6 and Taylor's formula.
Since \(f\in L^{q}(\Omega)\) there holds for almost every \(z_{0}\in\Omega\)
(B.27) \[\lim_{\rho\to 0}\frac{1}{|\Omega(z_{0},\rho)|}\int_{\Omega(z_{0},\rho)}|f(z)-f(z_{0 })|^{q}\,\mathrm{d}z=0.\]
Now let \(z_{0}\in\Omega\) be such that (B.27) holds. Then for almost every \(z\in\Omega\) we have
\[\big{|}a_{(0)}(z_{0},\rho)-f(z_{0})\big{|}^{q}\leq c\Big{(}|P_{k}(z,z_{0},\rho) -a_{(0)}(z_{0},\rho)\big{|}^{q}+\big{|}P_{k}(z,z_{0},\rho)-f(z)\big{|}^{q}+|f(z) -f(z_{0})|^{q}\Big{)}.\]
Integrating this inequality over \(\Omega(z_{0},\rho)\) yields
(B.28) \[\begin{split}\big{|}a_{(0)}(z_{0},\rho)-f(z_{0})\big{|}^{q}& \leq\frac{c}{A_{1}\rho^{n}}\int_{\Omega(z_{0},\rho)}\big{|}P_{k}(z,z_{0},\rho)- a_{(0)}(z_{0},\rho)\big{|}^{q}\,\mathrm{d}z\\ &\quad+\frac{c}{A_{1}\rho^{n}}\int_{\Omega(z_{0},\rho)}\big{|}P_{ k}(z,z_{0},\rho)-f(z)\big{|}^{q}\,\mathrm{d}z+\frac{c}{A_{1}\rho^{n}}\int_{ \Omega(z_{0},\rho)}|f(z)-f(z_{0})|^{q}\,\mathrm{d}z.\end{split}\]
By definition of \(\mathcal{L}_{k}^{q,\lambda}\) we have
\[\frac{c}{R^{-n}|Q_{R}(\bar{z}_{0})|\rho^{n}}\int_{\Omega(z_{0},\rho)}\big{|}P_ {k}(z,z_{0},\rho)-f(z)\big{|}^{q}\,\mathrm{d}z\leq c\frac{\rho^{\lambda-n}}{R^ {-n}|Q_{R}(\bar{z}_{0})|}[f]_{\mathcal{L}_{k}^{q,\lambda}}\xrightarrow[\rho \to 0]{}0.\]
Due to (B.27) the last integral in (B.28) vanishes as well in the limit \(\rho\to 0\). Finally there holds
\[\frac{c}{R^{-n}|Q_{R}(\bar{z}_{0})|\rho^{n}}\int_{\Omega(z_{0},\rho)}\big{|}P_ {k}(z,z_{0},\rho)-a_{(0)}(z_{0},\rho)\big{|}^{q}\,\mathrm{d}z\leq c(n,q,k)\sum _{\begin{subarray}{c}j\in\mathbb{N}^{1+2d}\\ |J|\leq k\end{subarray}}\big{|}a_{j}(z_{0},\rho)\big{|}^{q}\rho^{|J|q}.\]
Due to (B.12) this integral vanishes in the limit \(\rho\to 0\), so that (B.28) gives for almost every \(z_{0}\in\Omega\)
\[\lim_{\rho\to 0}a_{(0)}(z_{0},\rho)=f(z_{0}).\]
Equivalently, there holds \(f(z)=g_{(0)}(z)\) almost everywhere in \(\Omega\).
Proof of Theorem 2.7.: If \(f\in\mathcal{L}_{k}^{p,\lambda}(\Omega)\), then Theorem B.10 yields \(f\in C_{\ell}^{\beta}(\Omega)\) and the Holder semi-norm is bounded above by the Campanato semi-norm (B.26).
Conversely, let \(f\in C_{\ell}^{\beta}(\bar{\Omega})\) and \(P\in\mathcal{P}_{k}\) where \(k=\deg_{\mathrm{kin}}\mathrm{P}<\beta\). For \(z\in Q_{r}(z_{0})\cap\Omega\) we have
\[|f(z)-P(z)|\leq[f]_{C_{\ell}^{\beta}}r^{\beta}.\]
Thus for \(\beta=\frac{\lambda-n}{p}\) there holds
\[\frac{1}{r^{\lambda}}\int_{Q_{r}(z_{0})\cap\Omega}|f(z)-P(z)|^{p}\,\mathrm{d}z \leq C[f]_{C_{\ell}^{\beta}}^{p}r^{p\beta-\lambda+n}=C[f]_{C_{\ell}^{\beta}}^{ p}.\]
## Appendix C Interpolation Inequality for Holder spaces
For the sake of completeness, we prove Lemma 2.12 following the arguments of Imbert-Silvestre [19, Proposition 2.10].
Proof of Lemma 2.12.: It suffices to prove the statement for \(\beta_{3}\) sufficiently close to \(\beta_{1}\). Thus we assume that there exists only one element \(\bar{\beta}\in\mathbb{N}+2s\mathbb{N}\) such that \(\bar{\beta}\in[\beta_{1},\beta_{3})\). We know that if \(p_{z}^{i}\in\mathcal{P}_{\beta_{i}}\) is the polynomial expansion of \(f\) at \(z\) of order less than \(\beta_{i}\) for all \(i\in\{1,\ldots,3\}\), then for all \(z\circ\xi\in Q_{1}\)
(C.1) \[\big{|}f(z\circ\xi)-p_{z}^{i}(\xi)\big{|}\leq[f]_{C_{\ell}^{\beta_{i}}}\|\xi \|^{\beta_{i}},\quad i=1,2,3.\]
The polynomials \(p_{z}^{i}\) are of increasingly higher order. We assume that the difference of degree of homogeneity of \(p_{z}^{1}\) and \(p_{z}^{3}\) is at most one, so that \(p_{z}^{2}\) coincides with either \(p_{z}^{1}\) or \(p_{z}^{3}\), depending on whether \(\bar{\beta}\geq\beta_{2}\) or \(\bar{\beta}<\beta_{2}\). If there is no \(\bar{\beta}\) then all three polynomials coincide. Let us first assume therefore that there is exactly one \(\bar{\beta}\). We have by subtracting (C.1) for \(i=1,3\) from each other
(C.2) \[|p_{z}^{3}(\xi)-p_{z}^{1}(\xi)|\leq[f]_{C_{\ell}^{\beta_{1}}}\|\xi\|^{\beta_{1 }}+[f]_{C_{\ell}^{\beta_{3}}}\|\xi\|^{\beta_{3}}.\]
For any \(R\in(0,1]\) and \(z\in Q_{1}\) we pick \(\xi_{1}\in Q_{1}\) such that \(\|\xi_{1}\|\leq R\) and whenever \(d_{\ell}(\xi_{1},\xi)<cR\), then \(\|\xi\|\leq R\) and \(z\circ\xi\in Q_{1}\) with some universal constant \(c\). From (C.2) we then have
\[\sup_{\xi:d_{\ell}(\xi_{1},\xi)\leq cR}|p_{z}^{3}(\xi)-p_{z}^{1}(\xi)|\leq[f]_{C _{\ell}^{\beta_{1}}}R^{\beta_{1}}+[f]_{C_{\ell}^{\beta_{3}}}R^{\beta_{3}}.\]
Since \(p_{z}^{3}-p_{z}^{1}\) is homogeneous of degree \(\bar{\beta}\) we get by scaling
\[\sup_{\xi:d_{\ell}((\xi_{1})_{R^{-1}},\xi)\leq c}|p_{z}^{3}(\xi)-p_{z}^{1}(\xi) |\leq[f]_{C_{\ell}^{\beta_{1}}}R^{\beta_{1}-\bar{\beta}}+[f]_{C_{\ell}^{\beta _{3}}}R^{\beta_{3}-\bar{\beta}}.\]
Using the triangle inequality from [19, Prop. 2.2] we can assure that whenever \(|\xi|\leq 1\) then \(d_{\ell}\big{(}(\xi_{1})_{R^{-1}},\xi\big{)}\leq C\) for some universal constant \(C\). Since all norms on the space of polynomials are equivalent, we have
\[\|p_{z}^{3}-p_{z}^{1}\| =\sup_{\xi:\|\xi\|\leq 1}|p_{z}^{3}(\xi)-p_{z}^{1}(\xi)|\leq C \sup_{\xi:d_{\ell}\big{(}(\xi_{1})_{R^{-1}},\xi\big{)}\leq c}|p_{z}^{3}(\xi)- p_{z}^{1}(\xi)|\] \[\leq C[f]_{C_{\ell}^{\beta_{1}}}R^{\beta_{1}-\bar{\beta}}+C[f]_{C _{\ell}^{\beta_{3}}}R^{\beta_{3}-\bar{\beta}}.\]
For
\[R=\left(\frac{[f]_{C_{\ell}^{\beta_{1}}}}{[f]_{C_{\ell}^{\beta_{3}}}}\right)^ {\frac{1}{\beta_{3}-\beta_{1}}}\]
we obtain
\[\|p_{z}^{3}-p_{z}^{1}\|\leq C[f]_{C_{\ell}^{\beta_{1}}}^{\bar{\theta}}[f]_{C_{ \ell}^{\beta_{3}}}^{1-\bar{\theta}}+[f]_{C_{\ell}^{\beta_{1}}},\]
where \(\bar{\beta}=\bar{\theta}\beta_{1}+(1-\bar{\theta})\beta_{3}\).
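Indeed, with \(\bar{\beta}=\bar{\theta}\beta_{1}+(1-\bar{\theta})\beta_{3}\), a direct computation (recorded here for the reader's convenience) shows that this choice of \(R\) balances the two contributions, since
\[[f]_{C_{\ell}^{\beta_{1}}}R^{\beta_{1}-\bar{\beta}}=[f]_{C_{\ell}^{\beta_{1}}}\left(\frac{[f]_{C_{\ell}^{\beta_{1}}}}{[f]_{C_{\ell}^{\beta_{3}}}}\right)^{-(1-\bar{\theta})}=[f]_{C_{\ell}^{\beta_{1}}}^{\bar{\theta}}[f]_{C_{\ell}^{\beta_{3}}}^{1-\bar{\theta}}=[f]_{C_{\ell}^{\beta_{3}}}\left(\frac{[f]_{C_{\ell}^{\beta_{1}}}}{[f]_{C_{\ell}^{\beta_{3}}}}\right)^{\bar{\theta}}=[f]_{C_{\ell}^{\beta_{3}}}R^{\beta_{3}-\bar{\beta}};\]
the additional term \([f]_{C_{\ell}^{\beta_{1}}}\) may be understood as covering the case where the above choice of \(R\) exceeds \(1\), in which case one simply takes \(R=1\).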
Therefore we can estimate \(f-p_{z}^{2}\). Assume first \(\beta_{2}\leq\bar{\beta}\). Then \(p_{z}^{2}=p_{z}^{1}\) and
\[|f(z\circ\xi)-p_{z}^{2}(\xi)|\leq\begin{cases}[f]_{C_{\ell}^{\beta_{1}}}\|\xi \|^{\beta_{1}},\\ [f]_{C_{\ell}^{\beta_{3}}}\|\xi\|^{\beta_{3}}+\left([f]_{C_{\ell}^{\beta_{1}}}^ {\theta}[f]_{C_{\ell}^{\beta_{3}}}^{1-\theta}+[f]_{C_{\ell}^{\beta_{1}}} \right)\|\xi\|^{\bar{\beta}}.\end{cases}\]
Now if \(\|\xi\|\geq R\) then
\[[f]_{C_{\ell}^{\beta_{1}}}\|\xi\|^{\beta_{1}}\leq[f]_{C_{\ell}^{\beta_{1}}}^ {\theta}[f]_{C_{\ell}^{\beta_{3}}}^{1-\theta}\|\xi\|^{\beta_{2}}.\]
Else if \(\|\xi\|<R\)
\[[f]_{C_{\ell}^{\beta_{3}}}\|\xi\|^{\beta_{3}}+\left([f]_{C_{\ell}^{\beta_{1}}} ^{\bar{\theta}}[f]_{C_{\ell}^{\beta_{3}}}^{1-\bar{\theta}}+[f]_{C_{\ell}^{ \beta_{1}}}\right)\|\xi\|^{\bar{\beta}}\leq[f]_{C_{\ell}^{\beta_{1}}}^{\theta }[f]_{C_{\ell}^{\beta_{3}}}^{1-\theta}\|\xi\|^{\beta_{2}}+[f]_{C_{\ell}^{\beta_ {1}}}\|\xi\|^{\bar{\beta}}.\]
Thus we conclude \(|f(z\circ\xi)-p_{z}^{2}(\xi)|\leq[f]_{C_{\ell}^{\beta_{1}}}^{\theta}[f]_{C_{ \ell}^{\beta_{3}}}^{1-\theta}\|\xi\|^{\beta_{2}}+[f]_{C_{\ell}^{\beta_{1}}}\| \xi\|^{\beta_{2}}.\)
In case that \(\bar{\beta}<\beta_{2}\)
\[|f(z\circ\xi)-p_{z}^{2}(\xi)|\leq\begin{cases}[f]_{C_{\ell}^{\beta_{1}}}\|\xi\|^{\beta_{1}}+\left([f]_{C_{\ell}^{\beta_{1}}}^{\bar{\theta}}[f]_{C_{\ell}^{\beta_{3}}}^{1-\bar{\theta}}+[f]_{C_{\ell}^{\beta_{1}}}\right)\|\xi\|^{\bar{\beta}},\\ [f]_{C_{\ell}^{\beta_{3}}}\|\xi\|^{\beta_{3}}.\end{cases}\]
and we conclude as above.
In case that no \(\bar{\beta}\) exists, then all polynomials coincide and we get
\[|f(z\circ\xi)-p_{z}^{2}(\xi)|\leq[f]_{C_{\ell}^{\beta_{1}}}^{\theta}[f]_{C_{ \ell}^{\beta_{3}}}^{1-\theta}\|\xi\|^{\beta_{2}}.\]
## Appendix D Proof of Bouchut's Proposition
For the sake of self-containment, we recall the proof of Proposition 3.4 from [3, Proposition 1.1].
Proof of Proposition 3.4.: We denote by \(\hat{f}(\eta,k,v)\) the Fourier-transform of a solution \(f\) of (3.7) in time \(t\) and space \(x\). Then \(\hat{f}\) solves
\[i(\eta+v\cdot k)\hat{f}=\hat{S}.\]
We introduce a smoothing sequence \(\rho_{1}\in C_{c}^{\infty}(\mathbb{R}^{d})\) in velocity such that
(D.1) \[\rho_{\varepsilon}(v)=\frac{1}{\varepsilon^{d}}\rho_{1}\Big{(}\frac{v}{ \varepsilon}\Big{)},\qquad\int\rho_{1}\,\mathrm{d}v=1,\qquad\int v^{\alpha} \rho_{1}=0\text{ for }1\leq|\alpha|<|\beta|.\]
For fixed \((\eta,k)\) we decompose
(D.2) \[\hat{f}(\eta,k,v)=\Big{(}\rho_{\varepsilon}*_{v}\hat{f}\Big{)}\left(\eta,k,v \right)+\Big{(}\hat{f}-\Big{(}\rho_{\varepsilon}*_{v}\hat{f}\Big{)}\Big{)} \left(\eta,k,v\right),\]
where \(*_{v}\) denotes the convolution in velocity \(v\). Then by the properties of \(\rho\) (D.1) we can bound \(|1-\hat{\rho}_{\varepsilon}|\leq C_{d,\beta}|\varepsilon v|^{\beta}\) so that
(D.3) \[\left\|\left(\hat{f}-\Big{(}\rho_{\varepsilon}*_{v}\hat{f}\Big{)}\right)(\eta,k,\cdot)\right\|_{L^{2}(\mathbb{R}^{d})}\leq C_{d,\beta}\varepsilon^{\beta} \big{\|}|D_{v}|^{\beta}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}.\]
For the first term in (D.2) we introduce \(\lambda>0\) such that
\[\big{(}\lambda+i(\eta+v\cdot k)\big{)}\hat{f}(\eta,k,v)=\lambda\hat{f}(\eta,k,v)+\hat{S}(\eta,k,v).\]
Equivalently,
\[\hat{f}(\eta,k,v)=\frac{\lambda\hat{f}(\eta,k,v)+\hat{S}(\eta,k,v)}{\lambda+i (\eta+v\cdot k)},\]
which yields
\[\Big{(}\rho_{\varepsilon}*_{v}\hat{f}\Big{)}\left(\eta,k,v\right)=\int\frac{ \lambda\hat{f}(\eta,k,\xi)+\hat{S}(\eta,k,\xi)}{\lambda+i(\eta+\xi\cdot k)} \rho_{\varepsilon}(v-\xi)\,\mathrm{d}\xi.\]
Then we bound
\[\Big{|}\left(\rho_{\varepsilon}*_{v}\hat{f}\right)(\eta,k,v) \Big{|}\] \[\leq\Big{(}\big{\|}\hat{f}(\eta,k,\cdot)|\rho_{\varepsilon}(v- \cdot)|^{\frac{1}{2}}\big{\|}_{L^{2}(\mathbb{R}^{d})}+\lambda^{-1}\big{\|} \hat{S}(\eta,k,\cdot)|\rho_{\varepsilon}(v-\cdot)|^{\frac{1}{2}}\big{\|}_{L^{ 2}(\mathbb{R}^{d})}\Big{)}\Bigg{(}\int\frac{|\rho_{\varepsilon}(v-\xi)|}{|1+i( \eta+\xi\cdot k)\lambda^{-1}|^{2}}\,\mathrm{d}\xi\Bigg{)}^{\frac{1}{2}}.\]
The last integral is estimated using \(|\rho_{\varepsilon}(v)|\leq C_{d,\beta}\varepsilon^{-d}\chi_{|v|\leq\varepsilon}\), and decomposing \(\xi=\tilde{\xi}\frac{k}{|k|}+\xi^{\perp}\) with \(\xi^{\perp}\cdot k=0\), so that
\[\int\frac{|\rho_{\varepsilon}(v-\xi)|}{|1+i(\eta+\xi\cdot k)\lambda^{-1}|^{2}} \,\mathrm{d}\xi\leq C_{d,\beta}\frac{1}{\varepsilon}\int\frac{\chi_{|\frac{v \cdot k}{|k|}-\tilde{\xi}|<\varepsilon}}{|1+i(\eta+\tilde{\xi}\cdot k)\lambda^ {-1}|^{2}}\,\mathrm{d}\tilde{\xi}\leq C_{d,\beta}\frac{\lambda}{\varepsilon|k|}.\]
Thus
\[\Big{\|}\left(\rho_{\varepsilon}*_{v}\hat{f}\right)(\eta,k,\cdot)\Big{\|}_{L^{ 2}(\mathbb{R}^{d})}\leq C_{d,\beta}\Big{(}\frac{\lambda}{\varepsilon|k|}\Big{)} ^{\frac{1}{2}}\Big{(}\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d })}+\lambda^{-1}\big{\|}\hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})} \Big{)}.\]
Choosing
\[\lambda=\frac{\big{\|}\hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}}{ \big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}}\]
yields
(D.4) \[\Big{\|}\left(\rho_{\varepsilon}*_{v}\hat{f}\right)(\eta,k,\cdot)\Big{\|}_{L^{ 2}(\mathbb{R}^{d})}\leq\frac{C_{d,\beta}}{\sqrt{\varepsilon|k|}}\big{\|}\hat{f} (\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}^{\frac{1}{2}}\big{\|}\hat{S}( \eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}^{\frac{1}{2}}.\]
Combining (D.2) with (D.3) and (D.4) yields
\[\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\leq\frac{C_{d,\beta }}{\sqrt{\varepsilon|k|}}\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R }^{d})}^{\frac{1}{2}}\big{\|}\hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{ d})}^{\frac{1}{2}}+C_{d,\beta}\varepsilon^{\beta}\big{\|}|D_{v}|^{\beta}\hat{f}( \eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}.\]
We finally optimise \(\varepsilon\) so that
\[\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\leq\left(\frac{1 }{|k|}\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\big{\|} \hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\right)^{\frac{\beta}{1+ 2\beta}}\big{\|}|D_{v}|^{\beta}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R} ^{d})}^{\frac{1}{1+2\beta}}.\]
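For completeness, one admissible choice (obtained by balancing the two terms in the preceding estimate) is
\[\varepsilon=\left(\frac{\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}^{\frac{1}{2}}\big{\|}\hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}^{\frac{1}{2}}}{|k|^{\frac{1}{2}}\big{\|}|D_{v}|^{\beta}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}}\right)^{\frac{2}{1+2\beta}},\]
for which both terms of the previous estimate reduce, up to constants, to the right hand side above.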
Dividing by \(\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}^{\frac{\beta}{1+2\beta}}\) and raising the resulting inequality to the power \(\frac{1+2\beta}{1+\beta}\) yields
\[\big{\|}\hat{f}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\leq\left(\frac{1 }{|k|}\big{\|}\hat{S}(\eta,k,\cdot)\big{\|}_{L^{2}(\mathbb{R}^{d})}\right)^{ \frac{\beta}{1+\beta}}\big{\|}|D_{v}|^{\beta}\hat{f}(\eta,k,\cdot)\big{\|}_{L^ {2}(\mathbb{R}^{d})}^{\frac{1}{1+\beta}},\]
which concludes the proof of (3.8) after integrating over \((\eta,k)\).
**Acknowledgements.** We thank Clement Mouhot for his continuous support and intuitive discussions on the subject. This work was supported by the Cambridge International & Newnham College Scholarship from the Cambridge Trust.
|
2310.05961 | Atmospheric waves disturbances from the solar terminator according to
the VLF radio stations data | The perturbations from the solar terminator in the range of acoustic-gravity
waves (AGWs) periods from 5 minutes to 1 hour were analysed with the use of
measurements of VLF radio signals amplitudes on the European radio path
GQD--A118 (Great Britain--France). These observations provide information on
the propagation of waves at altitudes near the mesopause ($\sim$ 90 km), where
VLF radio waves are reflected. On the considered radio path a systematic
increase in fluctuations in the amplitudes of radio waves was observed within a
few hours after the passage of the evening terminator. For April, June, October
2020 and February 2021 events, the distribution of the number of wave
perturbations with large amplitudes over AGWs time periods has been studied.
Our results show that the evening terminator for different seasons is dominated
by waves in the range of periods of 15--20 minutes. The amplitudes of the AGWs
from the terminator at the heights of the mesosphere (fluctuations in the
concentration of neutral particles, velocity components and vertical
displacement of the volume element) are approximately determined by the
fluctuations of the amplitudes of the VLF radio signals. The amplitudes of the
AGWs on the terminator are 12--14\% in relative concentration fluctuations,
which correspond to the vertical displacement of the atmospheric gas volume of
1.1--1.3 km. Based on the analysis of the AGW energy balance equation, it was
concluded that the waves predominantly propagate in a quasi-horizontal
direction at the terminator. The possibility of studying the long-term changes
in the mesosphere parameters using fluctuations in the amplitudes of VLF radio
waves at the terminator is shown. | Cheremnykh O., Fedorenko A., Voitsekhovska A., Selivanov Yu., Ballai I., Verth G., Fedun V | 2023-09-12T17:41:32Z | http://arxiv.org/abs/2310.05961v1 | # Atmospheric waves disturbances from the solar terminator according to the VLF radio stations data
###### Abstract
The perturbations from the solar terminator in the range of acoustic-gravity waves (AGWs) periods from 5 minutes to 1 hour were analysed with the use of measurements of VLF radio signals amplitudes on the European radio path GQD-A118 (Great Britain-France). These observations provide information on the propagation of waves at altitudes near the mesopause (\(\sim\) 90 km), where VLF radio waves are reflected. On the considered radio path a systematic increase in fluctuations in the amplitudes of radio waves was observed within a few hours after the passage of the evening terminator. For April, June, October 2020 and February 2021 events, the distribution of the number of wave perturbations with large amplitudes over AGWs time periods has been studied. Our results show that the evening terminator for different seasons is dominated by waves in the range of periods of 15-20 minutes. The amplitudes of the AGWs from the terminator at the heights of the mesosphere (fluctuations in the concentration of neutral particles, velocity components and vertical displacement of the volume element) are approximately determined by the fluctuations of the amplitudes of the VLF radio signals. The amplitudes of the AGWs on the terminator are 12-14% in relative concentration fluctuations, which correspond to the vertical displacement of the atmospheric gas volume of 1.1-1.3 km. Based on the analysis of the AGW energy balance equation, it was concluded that the waves predominantly propagate in a quasi-horizontal direction at the terminator. The possibility of studying the long-term changes in the mesosphere parameters using fluctuations in the amplitudes of VLF radio waves at the terminator is shown.
Keywords: VLF AGW waves; solar terminator
## 1 Introduction
The solar terminator is a global source of various types of atmospheric disturbances (Forbes & Moudden, 2009). As indicated by theoretical studies (Beer, 1973; Somsikov, 1983, 1995), ground-based observations (Galushko et al., 1998; Afraimovich et al., 2009) and satellite observations in the Earth's atmosphere and ionosphere (Forbes et al., 2008; Lizunov et al., 2009; Liu et al., 2009; Bespalova et al., 2016), these disturbances include acoustic-gravity waves (AGWs) as well. The possibility of generating atmospheric waves by the solar terminator was first reported by Beer (1973).
The solar terminator can be described as the sharp boundary between the region of the atmosphere illuminated by the Sun and the Earth's shadow. The projection of the speed of the terminator onto the horizontal plane \(V_{ST}\) is about \(460\) m s\({}^{-1}\) near the equator. This speed weakly depends on the height in the atmosphere and decreases in the direction from the equator to the pole with increasing latitude. The optical terminator, as the visible boundary between light and shadow, is not a direct source of AGWs. The so-called "physical" terminator, or the region of sharp gradients of atmospheric parameters, which arises as a result of the absorption of solar energy and moves approximately at the speed of the Earth's rotation, is considered to be a source of wave disturbances (Somsikov, 1983). The low latitudes near the geographic equator are most favourable for observing wave disturbances on the terminator, where they have the largest amplitudes and persist for a long time (Somsikov, 1983). The numerical modeling of the generation of wave disturbances by a moving boundary between the solar-illuminated area and the Earth's shadow (terminator, solar eclipse) makes it possible to take into account the real features of the source (Karpov and Bessarab, 2008; Kurdyaeva et al., 2021).
Observations of disturbances associated with the solar terminator are obtained using ground-based observations of the ionosphere using various remote methods (ionospheric sounding, incoherent scattering, observation of the total electron content using GPS, etc., see e.g. Galushko et al. (1998); Afraimovich et al. (2009)). These observations allow us to follow changes in the ionised component but do not provide information about neutral atmospheric gas. On the other hand, disturbances on the terminator can also be observed using in situ satellite measurements. Low-orbiting satellites make 14-16 revolutions around the Earth per day, and, on each revolution, they cross the terminator _line_ twice in the morning and evening local time. However, such studies are rather limited due to the need to fulfill certain conditions regarding the height and configuration of the orbit, as well as scientific equipment (Forbes et al., 2008; Liu et al., 2009; Bespalova et al., 2016). AGWs on the terminator were previously studied based on measurements of the Atmospheric Explorer-E equatorial-orbit satellite in the interval of atmospheric heights of \(250-400\) km (Bespalova et al., 2016). In the present work, we investigated wave disturbances on the evening terminator using ground-based measurements of the amplitudes of radio waves of very low frequencies (VLF). Perturbations were studied in the range of periods corresponding to atmospheric AGWs from \(\approx 5\) min (Brunt-Vaisala period) up to \(\approx 30\) min. Propagation of VLF radio waves occurs in the Earth-ionosphere waveguide with a reflection height during the day at altitudes of \(\approx 75\) km (D-region of the ionosphere) and at night at altitudes of \(\approx 90\) km (E-region of the ionosphere), see e.g. Yampol'skij et al. (1984); Wait and Spies (1964). The global network of VLF receivers opened up wide opportunities for diagnosing the state of the lower ionosphere and mesosphere (Silber and Price, 2016). VLF measurement data of radio stations can be used to solve a number of scientific problems, in particular, the study of the influence of sources of space and ground origin on the state of the lower ionosphere (Silber and Price, 2016). The propagation of AGWs in the atmosphere is usually recorded in the form of periodic fluctuations in the amplitudes and phases of VLF radio waves. Such fluctuations with periods of tens of minutes can be clearly visible in nighttime measurements, as well as at the terminator (Nina and Cadez, 2013; Rozhnoi et al., 2014).
## 2 Observational data analysis
To study the wave disturbances on the terminator in our work, we used the data from a VLF radio transmitter at a mid-latitude station in Great Britain (GQD, \(f=22.1\) kHz) with a reception point in France (A118). The data sets (with the sampling rate \(0.1\) s\({}^{-1}\)) are available via [https://sidstation.loudet.org/data-en.xhtml](https://sidstation.loudet.org/data-en.xhtml). The length of the considered GQD-A118 radio path is \(1279\) km. The location of the terminator and wavefronts relative to the selected radio path in the horizontal plane is shown in Fig. 1. The geometric optics approximation is a suitable framework to study these waves at relatively short distances, i.e. less than \(1500\) km (see e.g. Yoshida et al., 2008; Fedorenko et al., 2021). Some properties of the AGWs can also be determined by measuring the amplitudes of radio signals. Due to the presence of sharp amplitude jumps associated with changes in the effective height of radio waves reflection, as well as conditions in the atmosphere after sunrise and sunset (Yoshida et al., 2008; Fedorenko et al., 2021), the data processing of VLF waves at the terminator is difficult. This is clearly seen in Fig. 2, where the time dependence of the radio signal's amplitude along the GQD-A118 path for October 25, 2020 is shown. At the moment of passing the solar terminator, a sharp decrease in signal amplitude is systematically observed in the morning and evening. Note that the nature of the change in the amplitude at the terminator differs on different tracks depending on the length of the track and the frequency of the radio signal, but these changes are always sharp (Yoshida et al., 2008; Fedorenko et al., 2021). Therefore, during the automatic processing of data series by using standard methods of spectral analysis, these sharp changes in amplitude
Figure 1: The sketch of solar terminator line and wavefronts relative to the considered radio path GQD-A118.
provide strong non-physical spectral harmonics. Here we were not interested in the moment of the passage of the solar terminator, but in the wave disturbances that accompany it and develop after the terminator passing. By assuming that the main transition processes from daytime to nighttime conditions have already occurred, the time interval was chosen to be far enough from the evening terminator passage. This is important for another reason as well: the theory of freely propagating AGWs (used for the numerical estimates later in the paper) is applicable for the modeling of processes which are far away from the source of disturbances. In this regard, in order to search for waves from the terminator, we considered evening sections of data after the passage of the terminator lasting several hours (Fig. 2, panel (b)). At the same time, the very moment of passing the terminator was excluded from the analysis by assuming that the wave activity develops after it.
For the analysis of wave disturbances, it is necessary to separate them from large-scale changes of another origin. To achieve this, the output signal was considered in the form \(A=\bar{A}+\Delta A\), where \(\bar{A}\) is the average undisturbed value of the amplitude, and \(\Delta A\) is the disturbance. To obtain a smoothed curve for \(\bar{A}\), we applied the moving average method with a rectangular one-hour averaging window. The sizes of the averaging window were selected to single out the disturbances with periods less than 1 h in the original series. These disturbances correspond to medium-scale AGWs in the atmosphere.
The curve obtained by this method is shown in Fig. 2, panel (c). For wave processes, it is appropriate to consider relative fluctuations \(\Delta A/\bar{A}\). This consideration excludes information on amplitude changes which are not related to the wave propagation, both of a physical nature and of technical origin, caused, among other things, by the peculiarities of the radio route and signal reception (Fedorenko et al., 2021). The obtained values of \(\Delta A/\bar{A}\) for the evening hours of October 25, 2020 are shown in Fig. 2 panel (d).
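A minimal sketch of this detrending step is given below (illustrative only: the function and variable names, the uniform 10 s sampling corresponding to the 0.1 s\({}^{-1}\) rate, and the synthetic test series are our own assumptions, not the original processing code).

```python
import numpy as np

def relative_fluctuations(amplitude, dt_seconds=10.0, window_hours=1.0):
    """Decompose A = A_bar + dA with a rectangular moving average and return
    the smoothed background A_bar and the relative fluctuations dA/A_bar.
    Samples within half a window of either end should be discarded."""
    window = int(round(window_hours * 3600.0 / dt_seconds))
    kernel = np.ones(window) / window                      # rectangular one-hour averaging window
    a_bar = np.convolve(amplitude, kernel, mode="same")    # smoothed background A_bar
    return a_bar, (amplitude - a_bar) / a_bar              # relative fluctuations dA/A_bar

# Synthetic example: a slow drift plus a 17-minute oscillation, sampled every 10 s for 6 hours.
t = np.arange(0, 6 * 3600, 10.0)
A = 50.0 + 5e-4 * t + 2.0 * np.sin(2 * np.pi * t / (17 * 60))
a_bar, rel = relative_fluctuations(A)
print(np.round(np.max(np.abs(rel[360:-360])), 3))          # ~0.04: the oscillation relative to A_bar
```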
With the help of the method described above, the measurements of the radio signal amplitudes along the GQD-A118 path over 4 months (April, June, October 2020, and February 2021) were analysed using wavelet analysis of the evening values. The complex Morlet wavelet was used. Figure 3 shows the amplitude of the wavelet spectra as a function of time (UT) for four separate days related to different seasons. The relative amplitude of fluctuations in different months is typically 3-10%. It should be noted that large values do not necessarily mean large AGW amplitudes, but may be a consequence of changes in atmospheric conditions and the height of radio waves reflection (Fedorenko et al., 2021). Usually, in the series of measurements of the amplitudes of radio waves, a superposition of oscillations of several time scales is observed, as evidenced by the results of spectral analysis. However, we have noticed that after passing the evening terminator, \(\Delta A/\bar{A}\) fluctuations with periods of 15-20 minutes prevail. At the same time, the maximum wave activity develops 1.5-2.5 hours after passing the terminator. In June, wave activity from the terminator is expected to develop later in local time than in other months. Note that along the considered GQD-A118 path, the local solar time is close to UT. Therefore, the dependences of radio wave amplitudes on UT shown in Figs. 2 and 3 roughly reflect the dependence on local time. Figure 4 shows histograms for the distribution of periods of observed fluctuations with amplitudes limited by the ratio \(\Delta A/\bar{A}>0.03\), plotted for four months from different seasons. For three of the four considered months (except February), the predominance of fluctuations in the range of 15-20 min is noticeable, which indicates the existence of a certain dominant wave mode on the terminator. The diagram in Fig. 5 shows the total distribution of the number of cases of wave fluctuations in three months (April, June and October). In this diagram, the regularities of the distribution by periods are more clearly manifested due to the larger number of events. It is most likely that the wave mode of 15-20 min corresponds to the condition of synchronism with the terminator (Lizunov et al., 2009). In February, the superposition of fluctuations of several scales is more often observed, without a clear predominance of this mode. This result may be explained in terms of less solar energy entering the atmosphere or an unfavourable location of the terminator line relative to the radio trace. The conditions for observing the main AGW mode at the terminator should depend on the season and the orientation of the path relative to the terminator line. To study these features, further statistical studies are needed for different seasons with the involvement of measurement data on other radio paths.
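The scalogram computation can be sketched as follows (an illustrative numpy implementation of a complex Morlet transform, not the exact code behind Fig. 3; the scale-period conversion is approximate, and the stand-in input series is synthetic).

```python
import numpy as np

def morlet_scalogram(signal, dt, periods, omega0=6.0):
    """Amplitude of a continuous wavelet transform with a complex Morlet wavelet,
    evaluated at the requested oscillation periods (simple sketch, not optimised)."""
    out = np.empty((len(periods), len(signal)))
    for i, period in enumerate(periods):
        scale = omega0 * period / (2.0 * np.pi)          # scale giving roughly this Fourier period
        t = np.arange(-4.0 * period, 4.0 * period + dt, dt)
        wavelet = (np.pi ** -0.25) * np.exp(1j * omega0 * t / scale) * np.exp(-0.5 * (t / scale) ** 2)
        wavelet *= dt / np.sqrt(scale)                   # keep amplitudes comparable across scales
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

# Stand-in for the relative fluctuations dA/A_bar of the previous sketch (10 s sampling, 6 hours).
rng = np.random.default_rng(0)
t_sec = np.arange(0, 6 * 3600, 10.0)
rel = 0.05 * np.sin(2 * np.pi * t_sec / (17 * 60)) + 0.01 * rng.standard_normal(t_sec.size)

periods = np.arange(5, 31) * 60.0                        # scan periods from 5 to 30 minutes
power = morlet_scalogram(rel, dt=10.0, periods=periods)
print(periods[power.mean(axis=1).argmax()] / 60.0)       # dominant period, expected near 17 minutes
```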
## 3 Interpretation of observations
From general considerations, it is clear that the wave fronts generated by the terminator should move approximately in the direction from west to east due to the rotation of the Earth. These fronts must also be parallel to the terminator line. The horizontal speed of the terminator at the equator at some height, \(z\), in the atmosphere, is \(V_{T}=2\pi\left(R_{E}+z\right)/T_{rot}\), where \(R_{E}\) is the Earth's radius and \(T_{rot}\) is the Earth's rotation period. This speed is about 460 m s\({}^{-1}\) near the Earth's surface, increasing to \(\sim\) 480 m s\({}^{-1}\) at an altitude of 300 km. Thus, near the equator, \(V_{T}\) exceeds the speed of sound (see Fig. 6, Bespalova et al. (2016)) near the Earth's surface (\(\sim\) 300 m s\({}^{-1}\)) but is less than the speed of sound in the upper atmosphere (\(\sim\) 700-900 m s\({}^{-1}\)) at average solar activity. At atmospheric altitudes where the terminator is supersonic (below 150-180 km), both infrasonic and internal gravity waves can be generated (Somsikov, 1983). The horizontal speed of the terminator movement decreases in the direction from the equator to the pole approximately according to the law \(V_{x}=V_{x}(0)\cos\phi\), where \(\phi\) is the geographic latitude, and \(V_{x}(0)\) is the speed of the terminator at the equator. For the GQD-A118 mid-latitude path considered in the present study, the location of the GQD transmitter is 54.73\({}^{\circ}\) N; 2.88\({}^{\circ}\) W (Skelton, Great Britain), while the A118 receiver is situated at 43.46\({}^{\circ}\) N; 1.33\({}^{\circ}\) E (Muret, France).
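These figures are easy to reproduce numerically (a small illustrative check; the sidereal rotation period and mean Earth radius used below are our own inputs, and 49\({}^{\circ}\) corresponds roughly to the midpoint latitude of the GQD-A118 path).

```python
import numpy as np

R_E = 6.371e6          # mean Earth radius, m (assumed value)
T_ROT = 86164.0        # sidereal rotation period, s (assumed value)

def terminator_speed(z_m, lat_deg=0.0):
    """Horizontal speed of the terminator at height z and geographic latitude."""
    return 2.0 * np.pi * (R_E + z_m) / T_ROT * np.cos(np.radians(lat_deg))

print(round(terminator_speed(0.0)))          # ~465 m/s near the surface (about 460 m/s in the text)
print(round(terminator_speed(300e3)))        # ~487 m/s at 300 km altitude (about 480 m/s)
print(round(terminator_speed(0.0, 49.0)))    # ~305 m/s at the ~49 deg midpoint latitude of the path
```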
In the approximation of geometrical optics, the main contribution to the fluctuations of the radio wave amplitudes is given by the first ionospheric harmonic, which is reflected from the ionosphere exactly in the middle between the transmitter and the receiver (Yoshida et al., 2008). Along the GQD-A118 path,
Figure 2: Temporal variation of the amplitude of the VLF radio signal, \(A\), on the GQD–A118 path on October 25, 2020 (a); the same variation of the amplitude \(A\) in the evening hours (b); the smoothed value of the amplitude, \(\bar{A}\), obtained by the moving average method (c); relative amplitude fluctuations, \(\Delta A/\bar{A}\), in the evening hours (d).
for this reflection point, the latitude is about 49\({}^{\circ}\), that is, the horizontal component of the velocity of the terminator in the middle of the path is \(V_{Tx}\approx 300\) m s\({}^{-1}\), which is close to the speed of sound. According to the conclusions of the work by Somsikov (1983), AGWs from the terminator should propagate approximately in the zonal direction with a phase speed that is equal to the speed of the terminator movement, \(V_{T}\), for each harmonic. This is the condition of wave synchronism with a moving source. The condition of wave synchronism with the terminator means that the period and wavelength are related as \(T=\lambda_{x}/V_{Tx}\). Therefore, on the considered radio path, the horizontal phase speed of the waves, \(U_{x}\), which are synchronised with the terminator, is \(U_{x}=\omega/k_{x}=V_{Tx}\approx 300\) m s\({}^{-1}\), where \(\omega\) is the frequency and \(k_{x}\) is the horizontal component of the wave vector. Therefore, for those AGWs prevailing in the evening hours with periods of 15-20 min, the horizontal wavelengths should be \(\lambda_{x}=U_{x}T\approx 270\)-360 km. Since our earlier studies of satellite measurements indicate the predominance, at the terminator, of AGWs synchronised with it (Bespalova et al., 2016), let us recall some properties of AGWs that follow from this synchronism. The dispersion relation of AGWs can be written as (Hines, 1960; Fedorenko, 2010):
\[k_{z}^{2}+\xi^{2}=k_{x}^{2}\left\{\frac{\omega_{b}^{2}}{\omega^{2}}-1\right\} \left(1-\frac{U_{x}^{2}}{c_{s}^{2}}\right), \tag{1}\]
where \(k_{z}\) is the vertical component of the wave vector, \(\omega_{b}\) is the Brunt-Vaisala frequency, \(c_{s}=\sqrt{\gamma gH}\) is the isothermal sound speed, \(H\) is the atmosphere scale height, \(g\) is the gravitational acceleration, \(\gamma\) is the ratio of specific heats, and \(\xi^{2}=g^{2}\left(1-\gamma/2\right)^{2}/c_{s}^{4}\) is a small quantity with the dimension of a squared wave number (\(\xi^{2}\sim\mathcal{O}(H^{-2})\)).
At the observation height of these waves, the speed of sound can be calculated as \(c_{s}=\sqrt{\gamma gH}\approx 309\) m s\({}^{-1}\), where we used \(\gamma=1.4\), \(H=7\) km. The Brunt-Vaisala period, \(T_{b}=2\pi/\omega_{b}\), at these heights is about 5 minutes, which is 3-4 times less than the prevailing periods of the waves observed at the terminator. Therefore, these waves are gravity modes, not acoustic ones. Since the speed of the terminator in mid-latitudes is close to the speed of sound, according to Eq. (1) the AGWs synchronised with the terminator can be either internal waves with a small value of \(k_{z}\) or evanescent waves with \(k_{z}^{2}<0\) (Cheremnykh et al., 2019). According to the theory, the horizontal phase velocities of internal gravity waves satisfy \(U_{x}<c_{s}\). Since \(\omega_{b}>\omega\), then
Fig. 4: Histograms showing the number of wave disturbances on the terminator with relative amplitudes more than 0.03 depending on the period along the GQD-A118 path in the evening hours. The four panels display the results for the events taking place during April, June, October 2020, and February 2021. The total number of the events analysed in the particular month is the following: 39 (April 2020), 50 (June 2020), 55 (October 2020) and 46 (February 2021).
at \(U_{x}<c_{s}\), according to Eq. (1), \(k_{z}^{2}>0\) and waves on the terminator propagate freely at some small angle with respect to the horizontal plane. If \(U_{x}>c_{s}\), then \(k_{z}^{2}<0\) and they propagate horizontally, that is, they are evanescent. However, it is rather difficult to establish exactly whether the observed AGWs are freely propagating or evanescent waves, because of the closeness of \(U_{x}\) to the speed of sound, which we cannot determine accurately without direct measurements of atmospheric parameters.
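This classification can be illustrated numerically (a sketch with assumed trial values of \(U_{x}\), not a computation from the paper): evaluating Eq. (1) with \(H=7\) km, \(\gamma=1.4\) and \(T_{b}\approx 5\) min shows the change of sign of \(k_{z}^{2}\) across the sound speed, and also that the small \(\xi^{2}\) term pushes the near-sonic case towards \(k_{z}^{2}<0\), which is precisely why the free/evanescent distinction is delicate for terminator-synchronised waves. The last column reproduces \(\lambda_{x}=U_{x}T\approx 270\)-360 km for \(U_{x}\) close to 300 m s\({}^{-1}\).

```python
import numpy as np

g, gamma, H = 9.8, 1.4, 7e3                  # assumed mesopause values
c_s = np.sqrt(gamma * g * H)                 # ~310 m/s, close to the ~309 m/s used in the text
omega_b = 2 * np.pi / (5 * 60)               # Brunt-Vaisala frequency for T_b ~ 5 min
xi2 = g**2 * (1 - gamma / 2)**2 / c_s**4     # the small xi^2 term of Eq. (1)

def kz2(T_sec, U_x):
    """k_z^2 from Eq. (1)."""
    omega = 2 * np.pi / T_sec
    k_x = omega / U_x
    return k_x**2 * (omega_b**2 / omega**2 - 1) * (1 - U_x**2 / c_s**2) - xi2

for T_min in (15, 20):
    for U_x in (0.8 * c_s, 0.97 * c_s, 1.1 * c_s):
        print(f"T = {T_min} min, U_x/c_s = {U_x/c_s:.2f}: "
              f"k_z^2 = {kz2(T_min * 60, U_x):+.1e} m^-2, "
              f"lambda_x = {U_x * T_min * 60 / 1e3:.0f} km")
```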
## 4 Determination of the characteristics of the AGWs on the terminator
The propagation of AGWs in the atmosphere is manifested in periodic changes in a number of atmospheric parameters: density, temperature, particle velocity, or pressure. In the case of direct measurements, the amplitudes of AGWs can be understood as amplitudes of fluctuations of any of these quantities, which are related to each other by polarisation ratios (Hines, 1960). In this study, we addressed the fluctuations in the amplitudes of VLF radio waves, \(\Delta A/\bar{A}\), which indirectly reflect the distribution of AGWs at the heights of the mesosphere where these radio waves are reflected. The periods of the observed \(\Delta A/\bar{A}\) fluctuations correspond to the periods of AGWs in the atmosphere, however, information on other spectral and amplitude characteristics of AGWs remains unknown. In the general case, the \(\Delta A/\bar{A}\) values are related to the AGWs amplitudes by a complex functional dependence, which is determined to a greater extent by the features of the radio path than by the physical properties of the waves themselves in the atmosphere. Therefore, we can estimate the amplitudes of the AGWs at the heights of radio wave reflection only approximately within the framework of certain assumptions.
To determine the amplitudes of AGWs on the terminator based on the observations of \(\Delta A/\bar{A}\) we used the previously developed method (Fedorenko et al., 2021). The authors showed that in the approximation of geometric optics
\[\frac{\Delta A}{\bar{A}}\approx K\Delta h, \tag{2}\]
where \(\Delta h\) is the wave-induced displacement of the effective reflection level of the radio signals, and \(K\) is a coefficient that depends on the length of the radio path, the frequency of the signal and the ratio between the amplitudes of the near-ground and ionospheric waves. It is clear that the coefficient \(K\) is different for each radio path. The study by Fedorenko et al. (2021) also determined that the expression relating the relative fluctuations of the concentration of neutral particles, \(\Delta N/N_{0}\) (in fact, the amplitude of the AGWs), to the wave displacement \(\Delta h\) can be written as
\[\frac{\Delta N}{N_{0}}=\frac{\Delta h}{H}\left(1+\frac{dH}{dz}\right). \tag{3}\]
This relationship confirms that, once the value of \(K\) for the selected radio path is determined, the amplitude of the AGWs, expressed as relative fluctuations of the neutral particle concentration \(\Delta N/N_{0}\), can be calculated from the measured fluctuations \(\Delta A/\bar{A}\). By combining Eqs. (2) and (3) we can determine the amplitude of the AGW as
\[\frac{\Delta N}{N_{0}}=\widetilde{K}\frac{\Delta A}{\bar{A}}, \tag{4}\]
where
\[\widetilde{K}=\frac{1}{HK}\left(1+\frac{dH}{dz}\right).\]
For approximate estimates of the amplitude of radio waves at the receiving point along a relatively short path (\(<\)1500 km), we will consider the interference of only two waves: a near-ground wave with an amplitude \(A_{g}\) and the 1st ionospheric wave with an amplitude \(A_{1}\), which is reflected from the ionosphere once before reaching the receiver. As shown by Fedorenko et al. (2021), in this approximation, the coefficient \(K\) can be expressed in terms of the value of the parameter \(\beta\) defined as
\[\beta=\left(A_{1}/A_{g}\right)+\left(A_{g}/A_{1}\right).\]
It is known that for a radio signal with a frequency of \(\sim\)20 kHz along 700-1000 km long paths, \(A_{1}\approx A_{g}\). Along longer paths, the amplitude of the ionospheric waves begins to exceed the amplitude of the near-ground waves due to their lower attenuation (Yoshida et al., 2008). The dependence of \(K\) on the effective height of radio wave reflection is plotted in Fig. 6 for three different \(\beta\) values corresponding to realistic values of the \(A_{1}/A_{g}\) ratio along paths \(<1500\) km. To clarify the value of \(\beta\), we will use the results by Yoshida et al. (2008), where the values of \(A_{1}\) and \(A_{g}\) were calculated for radio waves of different frequencies and different path lengths. Considering Fig. 3b in Yoshida et al. (2008), we obtain that \(\beta\approx 2.5\). Therefore, from the dependencies displayed in Fig. 6, in the case of the GQD-A118 path we have to use the variation shown by the hatched curve. It can be seen that for typical daytime reflection heights of \(\sim\)75 km the absolute value of \(K\)(75 km) \(\approx-0.03\) km\({}^{-1}\) is several times smaller than the value corresponding to nighttime reflection heights of \(\sim\) 90 km, for which \(K\)(90 km) \(\approx\) 0.1 km\({}^{-1}\). If we assume that during the day and at night the AGWs have
Figure 5: Histogram of the total three-month distribution of the periods of wave disturbances associated with the evening terminator.
the same amplitudes of \(\Delta N/N_{0}\), then, considering the behaviour of the function \(K(h)\), it is clear that the night values of \(\Delta A/\bar{A}\) will in this case be approximately 3 times larger. Accordingly, we expect that wave disturbances are more visible at night in the measurements of the radio signal amplitudes.
After sunset, there is a sharp transition from day (\(\sim 75\) km) to night heights (\(\sim 90\) km) of the reflection of radio waves. For the GQD-A118 path, the moment of passage of the terminator corresponds to the value of \(K(88\) km\()=0\). Note that for different traces the values of height, \(h\), at which \(K\) passes through zero are different (Fedorenko et al., 2021). To the left of this point of the curve are the morning and daytime \(K(h<88\) km) values, and to the right are the evening and night \(K(h>88\) km) values.
In the evening hours, fluctuations of the radio wave amplitudes \(\Delta A/\bar{A}\) are observed along the GQD-A118 path, which we associate with the propagation of AGWs from the terminator at the heights of the mesosphere. Let us assume that the amplitudes of the AGWs, \(\Delta N/N_{0}\), are approximately the same for several consecutive days, since the conditions of illumination by the Sun change by a small amount. Then the observed differences in the evening amplitudes of \(\Delta A/\bar{A}\) on a fixed path are caused by a change in the effective reflection height, \(h\), which is associated with a change in the background conditions in the mesosphere and ionosphere at an altitude of \(\sim 90\) km. The maximum amplitude of the fluctuations at the evening terminator over the four months of observation is \(\left(\Delta A/\bar{A}\right)_{max}=0.1\ldots 0.12\). These maximum values correspond to \(K_{max}\approx 0.12\) km\({}^{-1}\) and \(h\approx 91\) km, as shown in Fig. 6. Since in the evening and at night the reflection of radio waves occurs near mesopause heights, we put \(dH/dz=0\) and the maximum value of the AGW amplitude becomes
\[\frac{\Delta N}{N_{0}}\approx\frac{1}{HK_{max}}\left(\frac{\Delta A}{\bar{A}} \right)_{max}. \tag{5}\]
Therefore, for the characteristic values of \(H=7\) km, \(K_{max}\approx 0.12\) km\({}^{-1}\) and \(\left(\Delta A/\bar{A}\right)_{max}=0.1\ldots 0.12\), we obtain for the amplitude of the AGWs, in terms of relative concentration fluctuations, \(\Delta N/N_{0}=0.12\ldots 0.14\).
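This estimate is simple enough to check directly (a sketch; the numbers are exactly those quoted above):

```python
# Amplitude estimate of Eq. (5): Delta N/N_0 ~ (Delta A/A)/(H*K_max), with dH/dz = 0.
H = 7.0        # scale height [km]
K_max = 0.12   # [km^-1], value of K at h ~ 91 km (Fig. 6)
for dA_over_A in (0.10, 0.12):
    print(dA_over_A, "->", round(dA_over_A / (H * K_max), 3))
# 0.1 -> 0.119 and 0.12 -> 0.143, i.e. the 12-14 % quoted in the text
```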
To estimate other characteristics of the AGWs on the terminator, we will use the well-known relationship that follows from the system of hydrodynamic equations (Makhlouf et al., 1990):
\[\frac{\Delta\rho}{\rho_{0}}=\left(\frac{\gamma-1}{\gamma}\right)\frac{\Delta z }{H}+\frac{V_{x}U_{x}}{c_{s}^{2}}. \tag{6}\]
Here \(\rho_{0}\) is the undisturbed value of the density, \(\Delta z=V_{z}/i\omega\) is the vertical displacement of the volume element, \(V_{z}\) and \(V_{x}\) are the vertical and horizontal components of the particle velocity. The reflection of radio waves occurs below the turbopause (\(\sim 100km\)), where all atmospheric gases are well mixed. Therefore, \(\Delta\rho/\rho_{0}=\Delta N/N_{0}\) and Eq. (6) is also valid for concentration fluctuations. The first term in the right-hand side of Eq. (6) reflects the vertical displacements of the volume element, and the second denotes density fluctuations associated with pressure changes as we can consider that for monochromatic plane waves \(\Delta p=\rho_{0}V_{x}U_{x}\). Since the equilibrium density and pressure in the atmosphere are related by \(\rho_{0}=p_{0}\gamma/c_{s}^{2}\), the second term can be written as \(\Delta p/(\gamma p_{0})\). This term actually reflects the contribution of the acoustic volume compression to the resulting density fluctuations. For AGWs with periods \(T>T_{b}\) and horizontal phase velocities small compared to \(c_{s}\), the second term in Eq. (6) is usually neglected. However, for AGWs that are in synchronism with the movement of the terminator, the value of the horizontal phase speed (\(U_{x}\)) is close to \(c_{s}\), therefore, the acoustic part in Eq. (6) cannot be neglected.
Let us estimate the value of \(V_{z}\) using Eq. (6) by assuming that \(U_{x}\approx c_{s}\) and \(V_{x}\approx V_{z}(T/T_{b})\), which holds for AGW periods sufficiently large compared to \(T_{b}\). After simple transformations it follows from Eq. (6) that \(\left|\Delta N/N_{0}\right|\approx\alpha\left|V_{z}\right|\), where the coefficient \(\alpha\) depends on the wave period. Approximate values of the characteristics of the AGWs at the evening terminator for two typical periods are given in Table 1.
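Under the same approximations, the entries of Table 1 can be reproduced as follows (a sketch; \(H\), \(T_{b}\), \(c_{s}\) and \(\gamma\) take the values used in the text, and \(\Delta z=V_{z}/\omega\)):

```python
import numpy as np

gamma, H, c_s, T_b = 1.4, 7e3, 309.0, 5 * 60.0
for T in (15 * 60.0, 20 * 60.0):
    omega = 2 * np.pi / T
    alpha = (gamma - 1) / gamma / (omega * H) + (T / T_b) / c_s   # [s/m], cf. Table 1
    for dN_over_N in (0.12, 0.14):
        V_z = dN_over_N / alpha
        V_x = V_z * (T / T_b)
        dz = V_z / omega / 1e3                                    # [km]
        print(f"T = {T/60:.0f} min, alpha = {alpha:.1e} s/m: "
              f"V_z = {V_z:.1f} m/s, V_x = {V_x:.1f} m/s, dz = {dz:.1f} km")
```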
## 5 Analysis of the AGWs energetics on the terminator
On the GQD-A118 radio path under consideration, the horizontal speed of the terminator is \(V_{Tx}\approx 300\) m s\({}^{-1}\), which is close to the speed of sound. At the same time, for freely propagating AGWs, \(U_{x}<c_{s}\), while for waves synchronised with the terminator, \(V_{Tx}=U_{x}\approx c_{s}\). In general, it is therefore problematic to determine the type of the waves. It should be noted that simultaneous direct satellite measurements of several atmospheric parameters make it possible to determine the type of AGWs based on polarisation ratios between fluctuations of various quantities, e.g. density, temperature, and velocity (Klymenko et al., 2021). In the absence of such direct measurements to clarify the type of the AGWs observed at the evening terminator in the radio wave amplitude fluctuations, we will additionally analyse their energetics. It is known that for freely propagating waves the equality of the period-averaged kinetic, \(\bar{E}_{K}\), and potential, \(\bar{E}_{P}\), energies is fulfilled (Fedorenko, 2010). If it turns out that \(\bar{E}_{K}\neq\bar{E}_{P}\), then the observed waves are evanescent. For freely propagating AGWs we consider the standard dependence (Hines, 1960):
\[V_{x},V_{z}\sim\exp\left(z/2H\right)exp\left[i\left(\omega t-k_{x}x-k_{z}z \right)\right]. \tag{7}\]
Figure 6: The variation of the coefficient \(K\) (km\({}^{-1}\)) on the effective height \(h\) of radio waves reflection for three values of the parameter \(\beta\) along the GQD–A118 radio path: 2.2 (dashed), 2.5 (hatched), 3 (solid curve).
In addition, based on the standard hydrodynamic equations the energy balance equation reads (Fedorenko, 2010)
\[\bar{E}_{Kx}+\bar{E}_{Kz}=\bar{E}_{A}+\bar{E}_{G}, \tag{8}\]
where the terms denote the average values over the period of the individual components of the AGW's energy. Accordingly, \(\bar{E}_{Kx}=\rho_{0}V_{x}^{2}/4\) is the kinetic energy of horizontal movements, \(\bar{E}_{Kz}=\rho_{0}V_{z}^{2}/4\) is the kinetic energy of vertical movements (with \(\bar{E}_{K}=\bar{E}_{Kx}+\bar{E}_{Kz}\) the total kinetic energy), \(\bar{E}_{A}=\rho_{0}V_{x}^{2}\left(\omega/k_{x}c_{s}\right)^{2}/4=\rho_{0}V_{x}^{2}\left(U_{x}/c_{s}\right)^{2}/4\) is the potential acoustic energy, and \(\bar{E}_{G}=\rho_{0}V_{z}^{2}\left(\omega_{b}/\omega\right)^{2}/4\) is the potential thermobaric (or gravity) energy, with \(\bar{E}_{P}=\bar{E}_{A}+\bar{E}_{G}\) the total potential energy. For the particular case of horizontal evanescent wave disturbances (see e.g. Cheremnykh et al., 2019):
\[V_{x},V_{z}\sim\exp(az)\exp\left[i(\omega t-k_{x}x)\right]. \tag{9}\]
In this case, the energy balance equation has the form (Fedorenko et al., 2022):
\[\left(a-\frac{N^{2}}{g}\right)\left(\bar{E}_{Kx}-\bar{E}_{A}\right)=\left( \frac{g}{c_{s}^{2}}-a\right)\left(\bar{E}_{G}-\bar{E}_{Kz}\right), \tag{10}\]
where the value \(a\) determines the dependence of the disturbance amplitudes on height, and \(N\equiv\omega_{b}\) is the Brunt-Vaisala frequency.
It can be seen that at arbitrary values of \(a\), the kinetic and potential energies in the case of evanescent AGWs are not equal to each other. The condition of equality of these energies is fulfilled when \(a=1/2H\), that is, at the boundary between the region of free propagation and the evanescent region of the AGWs.
Since for waves on the terminator \(U_{x}\approx c_{s}\) and \(\omega_{b}/\omega\gg 1\), we have \(\bar{E}_{Kx}\approx\bar{E}_{A}\) and \(\bar{E}_{Kz}\ll\bar{E}_{G}\). It can be seen that \(\bar{E}_{K}\neq\bar{E}_{P}\), that is, the observed waves are evanescent. It follows from Eq. (10) that, with \(\bar{E}_{Kx}\approx\bar{E}_{A}\), the amplitude changes with height according to the law \(a\approx g/c_{s}^{2}\). The properties of the waves observed at the terminator, \(U_{x}\approx c_{s}\) and \(a\approx g/c_{s}^{2}\), correspond to Lamb pseudo-modes (Cheremnykh et al., 2019). Note that this type of wave is generated by the terminator at altitudes \(<100\) km in mid-latitudes, where \(V_{Tx}\approx c_{s}\). Depending on the geographical latitude and the height in the atmosphere, the solar terminator can generate different types of waves. Near the equator, at atmospheric heights up to \(\sim 100\) km, the horizontal speed of the terminator exceeds the speed of sound (\(V_{Tx}>c_{s}\)), so freely propagating AGWs with \(U_{x}<c_{s}\) cannot be in synchronism with this source. In the upper atmosphere, \(c_{s}\approx 900\) m s\({}^{-1}\) at average solar activity, that is, the terminator is subsonic at all latitudes and can therefore generate freely propagating AGWs.
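A quick numerical illustration of this energy argument (a sketch using the period-averaged expressions of Eq. (8), with the assumptions \(U_{x}=c_{s}\) and \(V_{x}\approx V_{z}(T/T_{b})\); \(\rho_{0}\) and \(V_{z}\) are arbitrary scale factors that cancel in the ratios):

```python
import numpy as np

c_s, T_b = 309.0, 5 * 60.0
rho0, V_z = 1.0, 1.0                       # arbitrary scale factors
for T in (15 * 60.0, 20 * 60.0):
    omega, omega_b = 2 * np.pi / T, 2 * np.pi / T_b
    U_x, V_x = c_s, V_z * (T / T_b)
    E_Kx, E_Kz = rho0 * V_x**2 / 4, rho0 * V_z**2 / 4
    E_A = rho0 * V_x**2 * (U_x / c_s)**2 / 4
    E_G = rho0 * V_z**2 * (omega_b / omega)**2 / 4
    print(f"T = {T/60:.0f} min: E_Kx/E_A = {E_Kx/E_A:.2f}, E_Kz/E_G = {E_Kz/E_G:.2f}, "
          f"E_K/E_P = {(E_Kx + E_Kz)/(E_A + E_G):.2f}")
```

The kinetic-to-potential ratio stays well below unity, consistent with the evanescent (Lamb pseudo-mode) interpretation.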
## 6 Study of long-term changes in mesosphere parameters
It was shown above that the value of \(\Delta A/\bar{A}\) is related to the amplitude of AGWs through the complex function \(\widetilde{K}\) (see Eq. 4). This function is determined both by the features of the radio path and by the physical properties of the AGWs, as well as by the state of the atmosphere and ionosphere at the heights of reflection of VLF radio waves. When determining the amplitudes of AGWs from measurements of the radio wave amplitudes, an important problem lies precisely in determining the value of \(\widetilde{K}\), which depends on a number of parameters. The solar terminator, as a regular source of AGWs, opens up additional opportunities for determining the properties of these waves, as well as for the analysis of long-term changes in atmospheric parameters near the mesopause - at altitudes that are difficult to access with other observation methods. Since the terminator is a regular source, the properties of the waves generated by it should be similar as long as the conditions of illumination by the Sun change by a small amount. Therefore, the AGWs on the terminator can be considered as reference waves, whose amplitudes \(\Delta N/N_{0}\) differ only slightly on a fixed radio path, as well as on different but geographically close paths.
Our studies revealed that the \(\Delta A/\bar{A}\) fluctuations at the terminator along the GQD-A118 radio path lay in the interval \(\Delta A/\bar{A}=0.03\ldots 0.12\). Let us suppose that \(\Delta N/N_{0}\) at the terminator is approximately the same for several days. Then the differences in the \(\Delta A/\bar{A}\) fluctuations of the radio wave amplitudes should be related to changes in the state of the atmosphere. Considering the AGWs at the terminator as reference waves allows us to identify long-term (compared to the periods of the AGWs) trends in the parameters of the atmosphere. These can be seasonal or other changes in the state of the mesosphere, as well as long-period wave fluctuations. On a fixed path, slow changes in the state of the atmosphere at the heights of radio wave reflection will appear in the measurement data in the form of slow variations of \(\Delta A/\bar{A}\).
Fig. 7 shows an example of such slow changes: the variation of \(\Delta A/\bar{A}\) at the evening terminator along the GQD-A118 path during October and April 2020. In the data covering October 2020, a quasi-wave structure with a period of \(\sim 10-12\) days is clearly visible, which probably represents a planetary wave. In the data for April 2020, there is also a certain trend of the \(\Delta A/\bar{A}\) amplitudes, but no pronounced periodicity is observed. That is, by measuring the amplitudes of the \(\Delta A/\bar{A}\) fluctuations at the terminator, it is possible to study not only AGWs directly caused by this source but also planetary waves manifested in slow trends of \(\Delta A/\bar{A}\). For such studies, it is necessary to stitch together series of \(\Delta A/\bar{A}\) fluctuations on a fixed radio path over several months or even years.
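A minimal sketch of this kind of trend analysis (with synthetic daily values, not the actual GQD-A118 data): a simple periodogram of the evening \(\Delta A/\bar{A}\) level over a month is already enough to pick out a \(\sim\)10-12 day modulation of the planetary-wave type.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(31)
# synthetic daily evening dA/A level with an 11-day modulation plus noise (assumed)
daily_dA = 0.06 + 0.02 * np.sin(2 * np.pi * days / 11) + 0.005 * rng.standard_normal(31)

detrended = daily_dA - daily_dA.mean()
spec = np.abs(np.fft.rfft(detrended))**2
freqs = np.fft.rfftfreq(len(days), d=1.0)          # cycles per day
dominant_period = 1.0 / freqs[1:][spec[1:].argmax()]
print(f"dominant modulation period ~ {dominant_period:.1f} days")
```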
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline AGW period & Coefficient \(\alpha\) & Fluctuations in particle & Vertical velocity & Horizontal velocity & Vertical volume \\ min & s/m & concentration \(\Delta N/N_{0}\) & of particles \(V_{z}\), m/s & of particles \(V_{x}\), m/s & displacement, \(\Delta z\), km \\ \hline
15 & \(1.6\cdot 10^{-2}\) & \(0.12\ldots 0.14\) & \(7.5\ldots 8.8\) & \(22.5\ldots 26.4\) & \(1.1\ldots 1.3\) \\ \hline
20 & \(2.1\cdot 10^{-2}\) & \(0.12\ldots 0.14\) & \(5.7\ldots 6.7\) & \(22.8\ldots 26.8\) & \(1.1\ldots 1.3\) \\ \hline \end{tabular}
\end{table}
Table 1: Characteristics of AGWs on the evening terminator
If we consider different, but geographically close, radio paths, the changes in the value of \(\Delta A/\bar{A}\) at the evening terminator on the same calendar date will differ in amplitude due to the different values of \(\widetilde{K}\) on these paths. An example of long-term changes along two closely located radio paths, DHO38-A118 (Germany-France) and GQD-A118 (Great Britain-France), is shown in Fig. 5 of the study by Fedorenko et al. (2021). This figure reveals a slow trend (most likely of seasonal origin) of the \(\Delta A/\bar{A}\) amplitude fluctuations in the evening hours (\(UT=20\ldots 24\) h) during 2013-2014. Note that on one of these paths (DHO38-A118) the amplitudes of \(\Delta A/\bar{A}\) are systematically larger, but the long-term changes are consistent on both paths. Therefore, when analysing synchronous measurements of the \(\Delta A/\bar{A}\) values on several radio paths simultaneously, it is possible to refine the value of the coefficient \(\widetilde{K}\) for the different paths.
## 7 Conclusions
The wave disturbances from the evening terminator were studied in the range of periods of medium-scale AGWs, from 5 minutes up to 1 hour. Data from measurements of the amplitudes of VLF radio waves on the mid-latitude path GQD-A118 (Great Britain-France) were used. A systematic increase in the amplitudes of wave fluctuations was recorded on the path in question for several hours after the passage of the evening terminator. This indicates an increase in wave activity at the heights of the mesosphere, where radio waves of the VLF range are reflected. Fluctuations in the radio signal amplitudes were observed for four months: April, June, October 2020 and February 2021. For the different seasons, the existence of predominant wave periods of \(\approx 15-20\) min at the terminator was found. The results obtained probably indicate that the wave harmonics predominantly realised at the solar terminator are those satisfying the condition of synchronism with this moving source. The amplitudes of the AGWs from the terminator at the heights of the mesosphere (relative fluctuations in the concentration of neutral particles, velocity fluctuations, and the vertical displacement of the volume) were calculated from the fluctuations of the radio signal amplitudes. The amplitudes of the acoustic-gravity waves at the terminator are 12-14% in relative concentration fluctuations, which corresponds to a vertical displacement of the atmospheric gas volume of 1.1-1.3 km. The energy balance of the AGWs observed at the terminator was analysed. Based on the energy analysis, it was concluded that in the mid-latitude mesosphere the solar terminator mainly generates Lamb pseudo-modes with \(U_{x}\approx c_{s}\) and \(a\approx g/c_{s}^{2}\). The possibility of studying long-term changes in the parameters of the mesosphere based on observations of trends in the fluctuations of the radio wave amplitudes at the terminator was also considered.
## Acknowledgments
The study was supported by the National Research Fund of Ukraine, project 2020.02/0015 Theoretical and Experimental Studies of Global Disturbances of Natural and Man-Made Origin in the Earth-Atmosphere-Ionosphere System. OC, VF, IB and GV are grateful to The Royal Society, International Exchanges Scheme, collaboration with Ukraine (IES\(\backslash\)R1\(\backslash\)211177).
|
2310.20511 | Generic derivations on algebraically bounded structures | Let K be an algebraically bounded structure and T be its theory. If T is
model complete, then the theory of K endowed with a derivation, denoted by
$T^{\delta}$, has a model completion. Additionally, we prove that if the theory
T is stable/NIP then the model completion of $T^{\delta}$ is also stable/NIP.
Similar results hold for the theory with several derivations, either commuting
or non-commuting. | Fornasiero Antongiulio, Terzo Giuseppina | 2023-10-31T14:55:00Z | http://arxiv.org/abs/2310.20511v4 | # Generic derivations on algebraically bounded structures
###### Abstract.
Let K be an algebraically bounded structure. If K is model complete, then the theory of K endowed with a derivation has a model completion. Similar results hold for several derivations, both commuting and non-commuting. Moreover we prove that many of the model-theoretic properties of the theory of K are inherited by the theory of K endowed with several derivations.
Key words and phrases: Derivation, algebraically bounded, model completion. 2020 Mathematics Subject Classification: Primary: 03C60; 12H05; 12L12. Secondary: 03C10
###### Contents
* 1 Introduction
* 1.1 A brief model theoretic history
* 1.2 Acknowledgments:
* 2 Algebraic boundedness and dimension
* 2.1 Examples
* 2.2 Assumptions
* 3 Generic derivation
* 3.1 Model completion
* 3.2 The axioms
* 3.3 Proof preliminaries
* 3.4 Proof of Theorem 3.5
* 3.5 Corollaries
* 4 Several non-commuting derivations
* 5 Several commuting derivations
* 5.1 Configurations
* 5.2 The axioms
* 5.3 Proof of Prop. 5.5
* 6 Stability and NIP
* 7 Algebraic closure and independence relations
* 8 Simplicity
* 9 Uniform finiteness
* 10 The field of constants
* 11 Open core
* 12 Differential dimension
* 12.1 The commutative case
* 12.2 The non-commutative case
* 13 Genericity
* 14 Pierce-Pillay axioms
* 15 Conjectures and open problems
* 15.1 Elimination of imaginaries
* 15.2 Definable types
* 15.3 Zariski closure
* 15.4 Kolchin polynomial
* 15.5 Monoid actions
## 1. Introduction
Let \(\mathbb{K}\) be a structure expanding a field of characteristic \(0\). Recall that \(\mathbb{K}\) is **algebraically bounded** if the model-theoretic algebraic closure and the field-theoretic algebraic closure coincide in every structure elementarily equivalent to \(\mathbb{K}\). Algebraically closed, real closed, p-adically closed, pseudo-finite fields, and algebraically closed valued fields are examples of algebraically bounded structures; for more details, examples, and main properties see [21] and §2.
Let \(L\) be the language of \(\mathbb{K}\) and \(T\) be its theory. In order to study derivations on \(\mathbb{K}\), we denote by \(\delta\) a new function symbol, and by \(T^{\delta}\) the \(L^{\delta}\)-theory expanding \(T\) by saying that \(\delta\) is a derivation. Let \(\mathbb{K}\) be algebraically bounded; we define an \(L^{\delta}\)-theory \(T^{\delta}_{g}\) extending \(T^{\delta}\), with three equivalent axiomatizations (see §§3, 14); one of them is given by \(T^{\delta}\), plus the following axiom scheme:
For every \(X\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\) which is \(L\)-definable with parameters, if the dimension of the projection of \(X\) onto the first \(n\) coordinates, which we denote by \(\Pi_{n}(X)\), is \(n\), then there exists \(\bar{a}\in\mathbb{K}^{n}\) such that \(\langle\bar{a},\delta\bar{a}\rangle\in X\).
**Theorem 1.1**.: _If \(T\) is model complete, then \(T^{\delta}_{g}\) is the model completion of \(T^{\delta}\)._
Moreover, in §13 (under some assumptions on \(\mathbb{K}\)) we show that the family of derivations on \(\mathbb{K}\) which are models of \(T^{\delta}_{g}\) is a dense \(\mathcal{G}_{\delta}\) inside the family of all derivations.
We endow \(\mathbb{K}\) with several derivations \(\delta_{1},\ldots,\delta_{m}\) and we consider both the case when they commute and when we don't impose any commutativity. We obtain two theories that we denote by:
\(T^{\bar{\delta}}\)**:** the expansion of \(T\) saying that the \(\delta_{i}\) are derivations which commute with each other;
\(T^{\bar{\delta},nc}\)**:** the expansion of \(T\) saying that the \(\delta_{i}\) are derivations without any further conditions.
Both theories have a model completion (if \(T\) is model complete) (see §§5, 4). For convenience, we use \(T^{\bar{\delta},?}_{g}\) to denote either of the model completions, both for commuting derivations and the non-commuting case. Many of the model-theoretic properties of \(T\) are inherited by \(T^{\bar{\delta},?}_{g}\):
**Theorem 1.2** (§6, §8).: \(T^{\bar{\delta},?}_{g}\) _is uniformly finite (see §9). Assume that \(T\) is stable/NIP/simple. Then \(T^{\bar{\delta},?}_{g}\) is stable/NIP/simple._
Moreover, if \(\mathbb{K}\) has a definable topology, then, under some reasonable assumptions, we have that \(T\) is the open core of \(T^{\bar{\delta},?}_{g}\) (see §11).
In §7 we characterize the algebraic closure inside models of \(T^{\bar{\delta},?}_{g}\), and show that \(T^{\bar{\delta},?}_{g}\) inherits independence relations from \(T\).
In §12 we show that models of \(T^{\bar{\delta}}_{g}\) have a dimension function, and generalize the result in [10] on "coincidence of dimensions".
In §10 we study the field of constants \(\mathfrak{C}_{\bar{\delta}}\) of a model \(\langle\mathbb{K},\bar{\delta}\rangle\models T^{\bar{\delta},?}_{g}\). We show that \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\) is a lovely pair of geometric structures (in the sense of [1]), and we study the definable subsets of \(\mathfrak{C}^{n}_{\bar{\delta}}\). We conclude the paper with several open questions and conjectures.
### A brief model theoretic history
From a model theoretic point of view, differential fields have been studied at least since Robinson [20] proved that the theory of fields of characteristic \(0\) with one derivation has a model completion, the theory \(\operatorname{DCF}_{0}\) of differentially closed fields of characteristic \(0\).
Blum gave a simpler set of axioms for \(\operatorname{DCF}_{0}\), saying that \(\mathbb{K}\) is a field of characteristic \(0\), and that, whenever \(p\) and \(q\) are differential polynomials in one variable, with \(q\) not constant and of order strictly less than the order of \(p\), there exists \(a\) in \(\mathbb{K}\) such that \(p(a)=0\) and \(q(a)\neq 0\) (see [14, 15] for more details). Pierce and Pillay [21] gave yet another axiomatization for \(\operatorname{DCF}_{0}\), which has been influential in the axiomatizations of other structures (see §14).
The theory \(\operatorname{DCF}_{0}\) (and its models) has been studied intensively, both for its own sake, for applications, and as an important example of many "abstract" model theoretic properties: it is \(\omega\)-stable of rank \(\omega\), it eliminates imaginaries, it is uniformly finite, etc. For some surveys see [14, 13, 15, 16].
Models of \(\operatorname{DCF}_{0}\), as fields, are algebraically closed fields of characteristic \(0\); their study has been extended in several direction. An important extension, which however goes beyond the scope of this article, is Wood's work [22] on fields of finite characteristic.
From now on all fields are of characteristic \(0\). Closer to the goal of this article is the passage from one derivation to several commuting
ones: McGrail [13] axiomatized \(\mathrm{DCF}_{0,\mathrm{m}}\) (the model completion of the theory of fields of characteristic \(0\) with \(m\) commuting derivations). While the axiomatization is complicated (see §5 for an easier axiomatization, and [14, 15] for alternative ones), from a model theoretic point of view \(\mathrm{DCF}_{0,\mathrm{m}}\) is quite similar to \(\mathrm{DCF}_{0}\): its models are algebraically closed (as fields), it is \(\omega\)-stable of rank \(\omega^{m}\), it eliminates imaginaries, it is uniformly finite, etc.
Moosa and Scanlon followed a different path in [16], where they studied a general framework of fields with non-commuting operators; for this introduction, the relevant application is that they proved that the theory of \(m\) non-commuting derivations has a model completion (see [16] and §4), which we denote by \(\mathrm{DCF}_{0,\mathrm{m,nc}}\). Here the model theory is more complicated: \(\mathrm{DCF}_{0,\mathrm{m,nc}}\) is stable, but not \(\omega\)-stable; however, it still eliminates imaginaries and it is uniformly finite.
Surprisingly, we can give three axiomatizations for \(\mathrm{DCF}_{0,\mathrm{m,nc}}\) which are much simpler than the known axiomatizations for \(\mathrm{DCF}_{0,\mathrm{m}}\) (including the one given in this article), see §§4, 14. We guess that the reason why this has not been observed before is that people were deceived by the rich algebraic structure of \(\mathrm{DCF}_{0,\mathrm{m}}\).
Indeed, from an algebraic point of view, \(\mathrm{DCF}_{0,\mathrm{m}}\) has been studied extensively (see [14] for a starting point) and is much simpler than \(\mathrm{DCF}_{0,\mathrm{m,nc}}\). The underlying combinatorial fact is that the free commutative monoid on \(m\) generators \(\Theta\), with the partial ordering given by \(\alpha\preceq\beta\alpha\) for every \(\alpha,\beta\in\Theta\), is a well-partial-order (by Dickson's Lemma); this fact is a fundamental ingredient in the Ritt-Raudenbush Theorem, asserting that there is no infinite ascending chain of radical differential ideals in the ring of differential polynomials with \(m\) commuting derivations with coefficients in some differential field; moreover, every radical differential ideal is a finite intersection of prime differential ideals. Since in models of \(\mathrm{DCF}_{0,\mathrm{m}}\) there is a natural bijection between complete types and prime differential ideals, it follows that \(\mathrm{DCF}_{0,\mathrm{m}}\) is \(\omega\)-stable, as we mentioned before.
Very different is the situation for the free monoid on \(m\) generators \(\Gamma\), with the same partial ordering. \(\Gamma\) is well-founded, but (when \(m\) is at least \(2\)) not a well-partial-order. Given an infinite anti-chain in \(\Gamma\), it is easy to build an infinite ascending chain of radical differential ideals (in the corresponding ring of non-commuting differential polynomials), and therefore Ritt-Raudenbush does not hold in this situation.
Some limited form of non-commutativity was considered already in [15, 16, 17], where the derivations live in a finite-dimensional Lie algebra.
People have extended \(\mathrm{DCF}_{0}\) in another direction by considering fields which are not algebraically closed: Singer, and later others [16, 1, 1, 10] studied real closed fields with one generic derivation, and [18] extended to \(m\) commuting derivations
(see also [11] for a different approach); [12, 13, 14, 15] studied more general topological fields with one generic derivation. In [16] the author studied fields with \(m\) independent orderings and one generic derivation and in [11] they studied o-minimal structures with several commuting generic "compatible" derivations. In her PhD thesis, Borrata [1] studied ordered valued fields and "tame" pairs of real closed fields endowed with one generic derivation.
The results in [1, 13, 14, 15, 16, 17, 18, 19] extend the one in [19] and are mostly subsumed in the results of this article (because the structures they study are mostly algebraically bounded).
On the other hand, Leon Sanchez and Tressl [19, 18, 18] study generic derivations on fields which are "large" in the sense of Pop.
It turns out that, while in practice many of the fields studied in model theory are both large and algebraically bounded (and therefore their generic derivations can be studied by using either our framework or the one of Tressl), there exist large fields which are not algebraically bounded (the field \(\mathbb{C}((X,Y))\) is large but not algebraically bounded, see [12, Example 8]), and there exist algebraically bounded fields which are not large (see [10]).
On the other hand, if \(\mathbb{K}\) is a pure field that is large and model complete (in the language of rings), then \(\mathbb{K}\) is algebraically bounded (see [10, Thm.5.4]; there is a slight misstatement in their theorem, in that \(\mathbb{K}\) must be in the language of rings with constants, and not only a "pure" field as defined in their paper; besides, their proof allows adding constants to the language in characteristic \(0\)). Moreover, in this paper we consider fields which are not pure fields, such as algebraically closed valued fields (see §2 for more examples).
It would be interesting to know if there is a common framework that would include generic derivations on both large and algebraically bounded fields.
Often the fields considered have a topology (e.g. they are ordered fields or valued fields): however, the theories described above do not impose any continuity on the derivation (and the corresponding "generic" derivations are not continuous at any point). In [17, 18] and [19] the authors consider the case of a valued field endowed with a "monotone" derivation (i.e. a derivation \(\delta\) such that \(v(\delta x)\geq v(x)\); in particular, \(\delta\) is continuous) and prove a corresponding Ax-Kochen-Ersov principle.
Some proofs after Section 7 are incomplete and we will add them in a future version.
### Acknowledgments:
The authors thank Noa Lavi, Giorgio Ottaviani, Francoise Point, Omar Sanchez and Marcus Tressl for the interesting discussions on the topic.
## 2. Algebraic boundedness and dimension
We fix an L-structure \(\mathbb{K}\) expanding a field of characteristic \(0\).
We recall the following definition in [20], as refined in [10]:
**Definition 2.1**.: Let \(F\) be a subring of \(\mathbb{K}\). We say that \(\mathbb{K}\) is algebraically bounded over F if, for any formula \(\phi(\bar{x},y)\), there exist finitely many polynomials \(p_{1},\dots,p_{m}\in F[\bar{x},y]\) such that for any \(\bar{a}\), if \(\phi(\bar{a},\mathbb{K})\) is finite, then \(\phi(\bar{a},\mathbb{K})\) is contained in the zero set of \(p_{i}(\bar{a},y)\) for some \(i\) such that \(p_{i}(\bar{a},y)\) doesn't vanish. \(\mathbb{K}\) is **algebraically bounded** if it is algebraically bounded over \(\mathbb{K}\).
Since we assumed that \(\mathbb{K}\) has characteristic \(0\), in the above definition we can replace "\(p_{i}(\bar{a},y)\) doesn't vanish" with the following:
"\(p_{i}(\bar{a},b)=0\) and \(\frac{\partial p_{i}}{\partial y}(\bar{a},b)\neq 0\)".
**Fact 2.2** ([10], see also [11]).: _T.f.a.e.:_
1. _The model theoretic algebraic closure coincides with the field theoretic algebraic closure over_ \(F\) _in every elementary extension of_ \(\mathbb{K}\) _(it suffices to check it in the monster model);_
2. \(\mathbb{K}\) _is algebraically bounded over_ \(F\)_;_
3. \(\mathbb{K}\) _is algebraically bounded over_ \(\operatorname{dcl}(\emptyset)\)_._
**Remark 2.3**.: Junker and Koenigsmann in [10] defined \(\mathbb{K}\) to be "very slim" if in the monster model the field-theoretic algebraic closure over the prime field coincides with the model-theoretic algebraic closure: thus, \(\mathbb{K}\) is very slim if and only if it is algebraically bounded over the prime field.
The associated dimension function \(\dim\) on definable sets is invariant under automorphisms of the ambient structure: equivalently, \(\dim\) is "code-definable" in the sense of [1].
We will also use the rank, denoted by \(\operatorname{rk}\), associated to the matroid \(\operatorname{acl}\colon\operatorname{rk}(V/B)\) is the cardinality of a basis of \(V\) over \(B\). Thus, if \(X\subseteq\mathbb{M}^{n}\) is definable with parameters \(\bar{b}\),
\[\dim(X)=\max\Bigl{(}\operatorname{rk}(\bar{a}/\bar{b}):\bar{a}\in X\Bigr{)}.\]
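As a toy illustration of this dimension (an example added for clarity, not taken from the original text): if \(X=\{\langle x,y\rangle\in\mathbb{M}^{2}:y=x^{2}\}\), then for every \(\langle a,a^{2}\rangle\in X\) we have \(\operatorname{rk}(a,a^{2}/\emptyset)=\operatorname{rk}(a/\emptyset)\leq 1\), and choosing \(a\notin\operatorname{acl}(\emptyset)\) shows that the maximum is attained; hence \(\dim(X)=1\), and in particular \(X\) is not "large" in \(\mathbb{M}^{2}\) in the sense of Definition 3.3 below.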
### Examples
Some well known examples of fields which are algebraically bounded structures as pure fields are: algebraically closed fields, \(p\)-adics and more generally Henselian fields (see [10, Thm 5.5]), real closed fields, pseudo-finite fields; curve-excluding fields in the sense of [11] are also algebraically bounded. Other examples of algebraically bounded structures which are not necessarily pure fields are:
* Algebraically closed valued fields;
* Henselian fields (of characteristic \(0\)) with arbitrary relations on the value group and the residue field (see [12]);
* All "open theories of topological fields", as defined in [13];
* The expansion of an algebraically bounded structure by a generic predicate (in the sense of [10]) is still algebraically bounded (see [10, Corollary 2.6]);
* The theory of fields with several independent orderings and valuations has a model companion, which is algebraically bounded (see [12], [13, Corollary 3.12]).
Johnson and Ye in a recent paper [11] produced examples of an infinite algebraically bounded field with a decidable first-order theory which is not large, and of a pure field that is algebraically bounded but not very slim.
### Assumptions
Our assumptions for the whole article are the following:
* \(\mathbb{K}\) is a structure expanding a field of characteristic \(0\).
* \(L\) is the language of \(\mathbb{K}\) and \(T\) is its \(L\)-theory.
* \(F\coloneqq\operatorname{dcl}(\emptyset)\subseteq\mathbb{K}\).
* \(\mathbb{K}\) is algebraically bounded (over \(F\)).
* \(\dim\) is the dimension function on \(\mathbb{K}\) (or on any model of \(T\)), \(\operatorname{acl}\) is the \(T\)-algebraic closure, and \(\operatorname{rk}\) the rank of the corresponding matroid.
## 3. Generic derivation
We fix a derivation \(\eta:F\to F\) (if \(F\) is contained in the algebraic closure of \(\mathbb{Q}\) in \(\mathbb{K}\), that we denote by \(\overline{\mathbb{Q}}\), then \(\eta\) must be equal to \(0\)). We denote by \(T^{\delta}\) the expansion of \(T\), saying that \(\delta\) is a derivation on \(\mathbb{K}\) extending \(\eta\).
In the most important case, \(F=\overline{\mathbb{Q}}\) and therefore \(\eta=0\), and \(T^{\delta}\) is the expansion of \(T\) saying that \(\delta\) is a derivation on \(\mathbb{K}\).
### Model completion
A. Robinson introduced the notion of model completion in relation with solvability of systems of equations. For convenience we recall the definition:
**Definition 3.1**.: Let \(U\) and \(U^{*}\) be theories in the same language \(L.\)\(U^{*}\) is a model completion of \(U\) if the following hold:
1. If \(A\models U^{*},\) then \(A\models U;\)
2. If \(A\models U,\) then there exists a \(B\supset A\) such that \(B\models U^{*};\)
3. If \(A\models U\), \(A\subset B\), \(A\subset C\), where \(B,C\models U^{*}\), then \(B\) is elementarily equivalent to \(C\) over \(A\).
We give the following general criteria for model completion. In our context we use (3).
**Proposition 3.2**.: _Let \(U\) and \(U^{*}\) be theories in the same language \(L\) such that \(U\subseteq U^{*}\). The following are equivalent:_
1. \(U^{*}\) _is the model completion of_ \(U\) _and_ \(U^{*}\) _eliminates quantifiers._
2. 1. _For every_ \(A\models U\)_, for every_ \(\sigma_{1},\ldots,\sigma_{n}\in U^{*}\)_, there exists_ \(B\models U\) _such that_ \(A\subseteq B\) _and_ \(B\models\sigma_{1}\wedge\cdots\wedge\sigma_{n}\)_;_ 2. _For every_ \(L\)_-structures_ \(A,B,C\) _such that_ \(B\models U\)_,_ \(C\models U^{*}\)_, and_ \(A\) _is a common substructure, for every quantifier-free_ \(L(A)\)_-formula_ \(\phi(\bar{x})\)_, for every_ \(\bar{b}\in B^{n}\) _such that_ \(B\models\phi(\bar{b})\)_, there exists_ \(\bar{c}\in C^{n}\) _such that_ \(C\models\phi(\bar{c})\)_._
3. 1. _For every_ \(A\models U\)_, for every_ \(\sigma_{1},\ldots,\sigma_{n}\in U^{*}\)_, there exists_ \(B\models U\) _such that_ \(A\subseteq B\) _and_ \(B\models\sigma_{1}\wedge\cdots\wedge\sigma_{n}\)_;_ 2. _For every_ \(L\)_-structures_ \(A,B,C\) _such that_ \(B\models U\)_,_ \(C\models U^{*}\)_, and_ \(A\) _is a common substructure, for every quantifier-free_ \(L(A)\)_-formula_ \(\phi(x)\)_, and for every_ \(b\in B\) _such that_ \(B\models\phi(b)\)_, there exists_ \(c\in C\) _such that_ \(C\models\phi(c)\)_._
4. _For all models_ \(A\) _of_ \(U_{\forall}\) _we have:_ 1. \(\operatorname{Diag}(A)\cup U^{*}\) _is consistent,_ 2. \(\operatorname{Diag}(A)\cup U^{*}\) _is complete,_ _where_ \(\operatorname{Diag}(A)\) _is the_ \(L\)_-diagram of_ \(A.\)__
5. _(Blum criterion)_ 1. _Any model of_ \(U_{\forall}\) _can be extended to some model of_ \(U^{*}\)_._ 2. _For any_ \(A,A(b)\models U_{\forall}\) _and for every_ \(|A|^{+}\)_-saturated_ \(C^{*}\models U^{*}\) _with_ \(A\subseteq C^{*}\)_, there exists an embedding of_ \(A(b)\) _into_ \(C^{*}\) _over_ \(A\)_._
6. \(U^{*}\) _is the model completion of_ \(U_{\forall}.\)__
Proof.: First of all we prove that (1) is equivalent to (6). If \(U^{*}\) is the model completion of \(U_{\forall}\), then trivially \(U^{*}\) is a model completion of \(U\) and, by [12, Thm 13.2], \(U^{*}\) eliminates quantifiers. For the converse, every model of \(U^{*}\) is trivially a model of \(U_{\forall}\). Moreover, if \(A\models U_{\forall}\), then \(A\) embeds in some \(C\models U\); since \(U^{*}\) is the model completion of \(U\), \(C\) embeds in some \(B\models U^{*}\), and so \(A\) embeds in \(B\). Condition (3) in Definition 3.1 is straightforward to verify. For the equivalence of (1) and (4) see [12].
Also, for the equivalence between (5) and (6) see [10]. It remains to prove the equivalence of (1), (2) and (3). (1)\(\Rightarrow\)(2)\(\Rightarrow\)(3) is easy. For (3)\(\Rightarrow\)(1): in order to obtain that \(U^{*}\) is the model completion of \(U\), we prove that \(\operatorname{Diag}(A)\cup U^{*}\) is consistent; it is enough to see that it is finitely consistent, and this follows from (3)(a). To prove that \(U^{*}\) eliminates quantifiers, it is equivalent to prove that \(\operatorname{Diag}(A)\cup U^{*}\) is complete, which follows easily from (3)(b).
### The axioms
We introduce the following notation:
Let \(\delta:\mathbb{K}\to\mathbb{K}\) be some function, \(n\in\mathbb{N}\), \(a\in\mathbb{K}\) and \(\bar{a}\) tuple of \(\mathbb{K}^{n}.\) We denote by
\[\operatorname{Jet}_{\delta}^{\infty}(a)\coloneqq\langle\delta^{i}a:i\in \mathbb{N}\rangle, \operatorname{Jet}_{\delta}^{n}(a)\coloneqq\langle\delta^{i}a:i\leq n \rangle, \operatorname{Jet}(a):=\operatorname{Jet}_{\delta}^{n}(a)\text{ for some }n,\] \[\operatorname{Jet}_{\delta}^{\infty}(\bar{a})\coloneqq\langle \delta^{i}\bar{a}:i\in\mathbb{N}\rangle, \operatorname{Jet}_{\delta}^{n}(\bar{a})\coloneqq\langle\delta^{i}\bar{a}: i\leq n\rangle, \operatorname{Jet}(\bar{a}):=\operatorname{Jet}_{\delta}^{n}(\bar{a})\text{ for some }n.\]
**Definition 3.3**.: Let \(X\subseteq\mathbb{K}^{n}\) be \(L\)-definable with parameters. We say that \(X\) is **large** if \(\dim(X)=n.\)
Two possible axiomatizations for the model completion \(T_{g}^{\delta}\) are given by \(T^{\delta}\) and either of the following axiom schemas:
1. (Deep): For every \(Z\subseteq\mathbb{K}^{n+1}\)\(L(\mathbb{K})\)-definable, if \(\Pi_{n}(Z)\) is large, then there exists \(c\in\mathbb{K}\) such that \(\operatorname{Jet}_{\delta}^{n}(c)\in Z\);
2. (Wide): For every \(W\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\)\(L(\mathbb{K})\)-definable, if \(\Pi_{n}(W)\) is large, then there exists \(\bar{c}\in\mathbb{K}^{n}\) such that \(\langle\bar{c},\delta\bar{c}\rangle\in W\).
**Definition 3.4**.: We denote by
\[T_{\operatorname{deep}}^{\delta}:=T^{\delta}\cup(\texttt{Deep}),\qquad T_{\operatorname{wide}}^{\delta}:=T^{\delta}\cup(\texttt{Wide})\]
We will show that both \(T_{\operatorname{deep}}^{\delta}\) and \(T_{\operatorname{wide}}^{\delta}\) give an axiomatization of the model completion of \(T^{\delta}\). Notice that the axiom scheme (Wide) deals with many variables at a time, but has only one iteration of the map \(\delta\), while (Deep) deals with only one variable at a time, but many iterations of \(\delta\).
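To see how the scheme (Deep) specialises to a familiar case (an illustrative remark added here, not part of the original text): let \(T\) be the theory of algebraically closed fields of characteristic \(0\), let \(p(x_{0},\ldots,x_{n})\) be an irreducible polynomial over \(\mathbb{K}\) of positive degree in \(x_{n}\), and let \(q(x_{0},\ldots,x_{n-1})\neq 0\). Taking
\[Z:=\{\bar{x}\in\mathbb{K}^{n+1}:p(\bar{x})=0\ \wedge\ \tfrac{\partial p}{\partial x_{n}}(\bar{x})\neq 0\ \wedge\ q(x_{0},\ldots,x_{n-1})\neq 0\},\]
the set \(Z\) is a nonempty open subset of the hypersurface \(p=0\), so it has dimension \(n\); its fibres under \(\Pi_{n}\) are finite, hence \(\Pi_{n}(Z)\) is large. Therefore (Deep) produces \(c\in\mathbb{K}\) with \(p(\operatorname{Jet}_{\delta}^{n}(c))=0\) and \(q(\operatorname{Jet}_{\delta}^{n-1}(c))\neq 0\), i.e. a solution of the differential equation \(p(y,\delta y,\ldots,\delta^{n}y)=0\) avoiding the lower-order condition \(q\neq 0\), in the spirit of Blum's axioms for \(\operatorname{DCF}_{0}\) recalled in the introduction.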
**Theorem 3.5**.: _Assume that the theory T is model complete. Then the model completion \(T_{g}^{\delta}\) of \(T^{\delta}\) exists, and the theories \(T_{\operatorname{deep}}^{\delta}\) and \(T_{\operatorname{wide}}^{\delta}\) are two possible axiomatizations of \(T_{g}^{\delta}.\)_
### Proof preliminaries
In order to prove the main result we first introduce the following notation: given a polynomial \(p(\bar{x},y)\) we write
\[p(\bar{a},b)=^{y}0\iff p(\bar{a},b)=0\wedge\frac{\partial p}{\partial y}(\bar{ a},b)\neq 0.\]
We need the following preliminary lemmas.
**Lemma 3.6**.: _Let \(\alpha(x,\bar{y})\) be an L-formula and \((B,\delta)\models T^{\delta}\). Then there exists a function \(\alpha^{\partial}\) definable in \(T\) such that \(\delta a=\alpha^{\partial}(a,\overline{b},\delta\overline{b})\) for every \(a,\overline{b}\in B\) with \(B\models\alpha(a,\overline{b})\) and \(|\alpha(B,\overline{b})|<\infty\) (i.e., such that only finitely many elements of \(B\) satisfy \(\alpha(x,\overline{b})\))._
Proof.: Let \(\alpha(x,\bar{y})\) be an L-formula. Since \(\mathbb{K}\) is algebraically bounded over \(F\) and of characteristic \(0\), there exist polynomials \(p_{1}(x,\bar{y}),\ldots,p_{k}(x,\bar{y})\in F[x,\bar{y}]\) associated to the formula \(\alpha(x,\bar{y})\) and formulas \(\beta_{i}(x,\bar{y}):=\) "\(p_{i}(x,\bar{y})=^{x}0\)" such that \(T\vdash(\alpha(x,\bar{y})\wedge|\alpha(\cdot,\bar{y})|<\infty)\to\bigvee_{i=1}^{k}\beta_{i}(x,\bar{y})\). Now we can associate to each polynomial \(p_{i}\) the partial function
\[f_{i}(x,\bar{y},\delta\bar{y}):=-\frac{\frac{\partial p_{i}}{\partial\bar{y}}\cdot\delta\bar{y}+p_{i}^{\eta}}{\frac{\partial p_{i}}{\partial x}},\]
where \(p_{i}^{\eta}\) is the polynomial obtained from \(p_{i}\) by applying \(\eta\) to each coefficient.
So now we have a total T-definable function \(f(x,\bar{y},\delta\bar{y})\) whose graph is defined in the following way:
\[z=f(x,\bar{y},\delta y)\,\Leftrightarrow\,\Big{(}\beta_{1}(x, \bar{y})\wedge z=f_{1}(x,\bar{y},\delta y)\Big{)}\vee\Big{(}\neg\beta_{1}(x, \bar{y})\wedge\beta_{2}(x,\bar{y})\wedge z=f_{2}(x,\bar{y},\delta y)\Big{)}\lor\] \[\vee\ \ldots\vee\Big{(}\neg\beta_{1}(x,\bar{y})\wedge\ldots \wedge\neg\beta_{k-1}(x,\bar{y})\wedge\beta_{k}(x,\bar{y})\wedge z=f_{k}(x, \bar{y},\delta y)\Big{)}\vee\] \[\vee\ \Big{(}\neg\beta_{1}(x,\bar{y})\wedge\ldots\wedge\neg\beta_{k -1}(x,\bar{y})\wedge\neg\beta_{k}(x,\bar{y})\wedge z=0\Big{)}.\qed\]
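As a concrete illustration of the construction above (an example added for clarity, not from the original text): take \(\alpha(x,y)\) given by the single polynomial \(p(x,y)=x^{2}-y\) with constant coefficients, so that \(p^{\eta}=0\). If \(a^{2}=b\) and \(a\neq 0\), then applying \(\delta\) to \(p(a,b)=0\) gives \(2a\,\delta a-\delta b=0\), hence \(\delta a=\delta b/(2a)\); in the notation of the proof, \(f(x,y,\delta y)=-\frac{(-1)\cdot\delta y+0}{2x}=\delta y/(2x)\) on the set where \(\frac{\partial p}{\partial x}=2x\neq 0\).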
**Corollary 3.7**.: _For any T-definable function \(f(\bar{x})\) there exists a T-definable function \(f^{\partial}\) such that \(\delta(f(\bar{x}))=f^{\partial}(\bar{x},\operatorname{Jet}(\delta\bar{x}))\)._
**Lemma 3.8**.: _Let \(t(\bar{x})\) be a \(L^{\delta}\) term. Then there is a T-definable function \(f(\bar{x},\bar{y})\) such that \(t(\bar{x})=f(\bar{x},\operatorname{Jet}(\bar{x}))\)._
Proof.: We prove by induction on the complexity of the term \(t(\bar{x}).\) If \(t(\bar{x})\) is a variable it is trivial. Suppose that \(t(\bar{x})=h(s(\bar{x})).\) By induction there exists a \(T\)-definable function \(g\) such that \(s(\bar{x})=g(\bar{x},\operatorname{Jet}(\bar{x}))\). If the function \(h\) is in \(L\) we can conclude. Otherwise \(h=\delta\) and we obtain \(t(\bar{x})=\delta(g(\bar{x},\operatorname{Jet}(\bar{x})))\). By Corollary 3.7 we conclude the proof.
**Lemma 3.9**.: _Let \(\phi(\bar{x})\) be a quantifier free \(L(\delta)\)-formula. Then there exists an L-formula \(\psi\) such that \(T^{\delta}\vdash\phi(\bar{x})\leftrightarrow\psi(\bar{x},\operatorname{Jet}( \bar{x})).\)_
Proof.: Follows from Lemma 3.8.
### Proof of Theorem 3.5
We can finally prove that both \(T^{\delta}_{\operatorname{deep}}\) and \(T^{\delta}_{\operatorname{wide}}\) axiomatize \(T^{\delta}_{g}\). The proof is in three steps: first we show that \(T^{\delta}_{\operatorname{wide}}\vdash T^{\delta}_{\operatorname{deep}}\); then we prove that condition (3) of Proposition 3.2 holds for \(U=T^{\delta}\): more precisely, (a) holds for \(U^{*}\) equal to \(T^{\delta}_{\operatorname{wide}}\) (i.e., every model of \(T^{\delta}\) can be embedded in a model of \(T^{\delta}_{\operatorname{wide}}\)), and (b) holds for \(U^{*}\) equal to \(T^{\delta}_{\operatorname{deep}}\) (i.e., if \(B\models T^{\delta}\) and \(C\models T^{\delta}_{\operatorname{deep}}\) have a common substructure \(A\), then every quantifier-free \(L^{\delta}(A)\)-formula in one free variable having a solution in \(B\) also has a solution in \(C\)).
**Lemma 3.10**.: \(T^{\delta}_{\operatorname{wide}}\vdash T^{\delta}_{\operatorname{deep}}\)_._
Proof.: Let \(Z\subseteq\mathbb{K}^{n+1}\) be \(L(\mathbb{K})\)-definable such that \(\Pi_{n}(Z)\) is large. Define
\[W:=\{\langle\bar{x},\bar{y}\rangle\in\mathbb{K}^{n}\times\mathbb{K}^{n}:\langle \bar{x},y_{n}\rangle\in Z\wedge\bigwedge_{i=1}^{n-1}y_{i}=x_{i+1}\}.\]
Clearly, \(\Pi_{n}(W)=\Pi_{n}(Z)\), and therefore \(\Pi_{n}(W)\) is large. By (Wide), there exists \(\bar{c}\in\mathbb{K}^{n}\) such that \(\langle\bar{c},\delta\bar{c}\rangle\in W\). Then, \(\operatorname{Jet}_{\delta}^{n}(c_{1})\in Z\).
**Lemma 3.11**.: _Let \((A,\delta)\models T^{\delta}\). Let \(Z\subseteq A^{n}\times A^{n}\) be \(L\)-definable with parameters in \(A\), such that \(\Pi_{n}(Z)\) is large. Then, there exists \(\langle B,\varepsilon\rangle\supseteq\langle A,\delta\rangle\) and \(\bar{b}\in B^{n}\) such that \(B\succeq A\), \(\langle B,\varepsilon\rangle\models T^{\delta}\), and \(\langle\bar{b},\varepsilon\bar{b}\rangle\in Z_{B}\) (which is the interpretation of \(Z\) in B)._
Proof.: Let \(B\succ A\) (as \(L\)-structures) be such that \(B\) is \(|A|^{+}\)-saturated. By definition of dimension, there exists \(\bar{b}\in\Pi_{n}(Z_{B})\) which is algebraically independent over \(A\). Let \(\bar{d}\in B^{n}\) be such that \(\langle\bar{b},\bar{d}\rangle\in Z_{B}\). Since \(\bar{b}\) is algebraically independent over \(A\), we can choose a derivation \(\varepsilon\) on \(B\) which extends \(\delta\) and satisfies \(\varepsilon\bar{b}=\bar{d}\).
**Lemma 3.12**.: _Let \(\langle B,\delta\rangle\models T^{\delta}\), \(\langle C,\delta\rangle\models T^{\delta}_{\operatorname{deep}}\), and let \(\langle A,\delta\rangle\) be a common \(L^{\delta}\)-substructure of both models, such that \(B\) and \(C\) have the same \(L(A)\)-theory. Let \(b\in B\) be such that \(\langle B,\delta\rangle\models\theta(b)\), where \(\theta(x)\) is a quantifier free \(L^{\delta}\)-formula with parameters in \(A\). Then, there exists \(c\in C\) such that \(\langle C,\delta\rangle\models\theta(c)\)._
Proof.: By Lemma 3.9 there exist \(n\in\mathbb{N}\) and an \(L(A)\)-formula \(\psi\) such that \(\theta(x)=\psi(\operatorname{Jet}_{\delta}^{n}(x))\).
Let \(Y^{B}:=\psi(B)=\{\bar{d}\in B^{n+1}:B\models\psi(\bar{d})\}\), and \(Y^{C}:=\psi(C)\).
Let \(d\) be the smallest integer such that \(\delta^{d}(b)\) is algebraically dependent over \(\operatorname{Jet}^{d-1}(b)\cup A\) (or \(d=+\infty\) if \(\operatorname{Jet}_{\delta}^{\infty}(b)\) is algebraically independent over \(A\)). We distinguish two cases:
1) \(d\geq n\): in this case, \(\Pi_{n}(Y^{C})\) is large, because \(\operatorname{Jet}^{n-1}(b)\in\Pi_{n}(Y^{B})\) is algebraically independent over \(A\) and \(B\) and \(C\) have the same \(L(A)\)-theory; therefore, by (Deep), there exists \(c\in C\) such that \(C\models\psi(\operatorname{Jet}_{\delta}^{n}(c))\), i.e., \(\langle C,\delta\rangle\models\theta(c)\).
2) \(d<n\): this means that \(\delta^{d}b\in\operatorname{acl}(A\cup\operatorname{Jet}^{d-1}(b))\), so there exists a polynomial \(p(\bar{y},x)\in A[\bar{y},x]\) such that \(p(\operatorname{Jet}^{d-1}(b),\delta^{d}b)=^{x}0\). By Lemma 3.6 there exist \(L(A)\)-definable functions \(f_{d+1},f_{d+2},\ldots,f_{n}\) such that \(\delta^{i}b=f_{i}(\operatorname{Jet}^{d}(b))\) for \(i=d+1,d+2,\ldots,n\). Let
\[Z^{B}:=\{\bar{y}\in B^{d+1}:p(\bar{y})=^{y_{d+1}}0\wedge\psi(\bar{y},f_{d+1}(\bar{y}),\ldots,f_{n}(\bar{y}))\}.\]
Notice that \(\Pi_{d}(Z^{C})\) is large, because \(\operatorname{Jet}^{d-1}(b)\in\Pi_{d}(Z^{B})\) is algebraically independent over \(A\); therefore, by axiom (Deep), there exists \(c\in C\) such that \(\operatorname{Jet}^{d}(c)\in Z^{C}\), and so \(\operatorname{Jet}^{n}(c)\in Y^{C}\).
### Corollaries
**Corollary 3.13**.: _Assume that \(T\) eliminates quantifiers. Then, \(T^{\delta}_{\operatorname{deep}}\) and \(T^{\delta}_{\operatorname{wide}}\) are axiomatizations for the model completion \(T^{\delta}_{g}\) of \(T^{\delta}\)._
_Moreover, \(T_{g}^{\delta}\) admits elimination of quantifiers, and for every \(L^{\delta}\)-formula \(\alpha(\bar{x})\) there exists a quantifier-free \(L\)-formula \(\beta(\bar{y})\) such that_
\[T_{g}^{\delta}\models\forall\bar{x}\,\big{(}\alpha(\bar{x})\leftrightarrow\beta (\operatorname{Jet}(\bar{x}))\big{)}.\]
_Finally, \(T_{g}^{\delta}\) is complete._
**Corollary 3.14**.: _Assume that \(T\) is model complete. Then, \(T_{\operatorname{deep}}^{\delta}\) and \(T_{\operatorname{wide}}^{\delta}\) are axiomatizations for the model completion \(T_{g}^{\delta}\) of \(T^{\delta}\)._
The next corollary is without any further assumptions on \(T\).
**Corollary 3.15**.: \(T_{\operatorname{deep}}^{\delta}\) _and \(T_{\operatorname{wide}}^{\delta}\) are equivalent consistent theories (which we denote by \(T_{g}^{\delta}\))._
_Moreover, for every \(L^{\delta}\)-formula \(\alpha(\bar{x})\) there exists an \(L\)-formula \(\beta(\bar{y})\) such that_
\[T_{g}^{\delta}\models\forall\bar{x}\,\big{(}\alpha(\bar{x})\leftrightarrow \beta(\operatorname{Jet}\bar{x})\big{)}.\]
_Finally, \(T_{g}^{\delta}\) is complete._
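As a minimal illustration of the jet translation (our example, assuming that \(L\) expands the language of rings), the quantifier-free \(L^{\delta}\)-formula \(x\cdot\delta(x)=1\) is already of the promised form: it is equivalent to \(\beta(\operatorname{Jet}^{1}(x))\), where
\[\beta(y_{0},y_{1})\coloneqq y_{0}\cdot y_{1}=1\qquad\text{and}\qquad\operatorname{Jet}^{1}(x)=\langle x,\delta x\rangle.\]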
## 4. Several non-commuting derivations
We analyze first the case of several non-commuting derivations \(\delta_{1},\dots,\delta_{k}\), because it is simpler; later, in Section 5, we examine the harder case of commuting derivations.
Let \(\bar{\delta}:=\langle\delta_{1},\dots,\delta_{k}\rangle\). Let \(\eta_{1},\dots,\eta_{k}\) be derivations on \(F\). We denote by \(T^{\bar{\delta},nc}\) the \(L^{\bar{\delta}}\)-expansion of \(T\) saying that each \(\delta_{i}\) is a derivation and that \(\delta_{i}\) extends \(\eta_{i}\) for \(i\leq k\).
**Theorem 4.1**.: _Assume that \(T\) is model complete. Then, \(T^{\bar{\delta},nc}\) has a model completion \(T_{g}^{\bar{\delta},nc}\)._
To give the axioms for \(T_{g}^{\bar{\delta},nc}\) we need some more definitions and notations. We fix \(\langle\mathbb{K},\bar{\delta}\rangle\models T^{\bar{\delta},nc}\).
Let \(\Gamma\) be the free non-commutative monoid generated by \(\bar{\delta}\), with the canonical partial order \(\preceq\) given by \(\beta\preceq\alpha\beta\), for all \(\alpha,\beta\in\Gamma.\) We fix the total order on \(\Gamma\), given by
\[\theta\leq\theta^{\prime}\Leftrightarrow|\theta|<|\theta^{\prime}|\ \lor\ \Big{(}|\theta|=|\theta^{\prime}|\ \wedge\ \theta<_{lex}\theta^{\prime}\Big{)},\]
where \(<_{lex}\) is the lexicographic order, and \(|\theta|\) is the length of \(\theta\) as a word in the alphabet \(\bar{\delta}\).
**Remark 4.2**.: \(\preceq\) is a well-founded partial order on \(\Gamma\), but it is not a well-partial-order (i.e., there exist infinite anti-chains).
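For instance (an illustration we add, reading \(<_{lex}\) with \(\delta_{1}\) before \(\delta_{2}\)), for \(k=2\) the total order \(\leq\) begins
\[\emptyset<\delta_{1}<\delta_{2}<\delta_{1}\delta_{1}<\delta_{1}\delta_{2}<\delta_{2}\delta_{1}<\delta_{2}\delta_{2}<\cdots,\]
while the words \(\delta_{1}\delta_{2}^{\,n}\delta_{1}\) (\(n\in\mathbb{N}\)) form an infinite \(\preceq\)-antichain: none of them is a right factor of another, so none of them is below another in the order generated by \(\beta\preceq\alpha\beta\).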
**Remark 4.3**.: (1) As an ordered set, \(\langle\Gamma,\leq\rangle\) is isomorphic to \(\langle\mathbb{N},\leq\rangle\);
(2) \(\emptyset\) (i.e., the empty word, corresponding to the identity function on \(\mathbb{K}\)) is the minimum of \(\Gamma\);
(3) If \(\alpha\preceq\beta\), then \(\alpha\leq\beta\);
(4) If \(\alpha\leq\beta\), then \(\gamma\alpha\leq\gamma\beta\) and \(\alpha\gamma\leq\beta\gamma\).
For every variable \(x\) and every \(\gamma\in\Gamma\) we introduce the variable \(x_{\gamma}\). Given \(V\subseteq\Gamma\), we denote \(x_{V}\coloneqq\langle x_{\gamma}:\gamma\in V\rangle\) and \(a^{V}\coloneqq\langle\gamma a:\gamma\in V\rangle\). We remark that \(a^{V}\) is an analogue of the notion of Jet for one derivation: identifying \(\delta^{i}\) with \(i\), we have \(\mathit{Jet}^{n}(a)=a^{\{0,1,\ldots,n\}}\). Moreover, we denote by \(\Pi_{A}\) the projection from \(\mathbb{K}^{B}\) to \(\mathbb{K}^{A}\) (for \(A\subseteq B\subseteq\Gamma\)), mapping \(\langle a_{\mu}:\mu\in B\rangle\) to \(\langle a_{\mu}:\mu\in A\rangle\).
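For example (a notation check we add), for \(k=2\) and the \(\preceq\)-initial set \(V=\{\emptyset,\delta_{1},\delta_{2},\delta_{1}\delta_{1}\}\) we have \(a^{V}=\langle a,\delta_{1}a,\delta_{2}a,\delta_{1}\delta_{1}a\rangle\), and \(\Pi_{\{\emptyset,\delta_{1}\}}\) maps this tuple to \(\langle a,\delta_{1}a\rangle\).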
We give now two alternative axiomatizations for \(T^{\bar{\delta},nc}_{g}\).
1. (nc-Deep) Let \(\mathcal{V}\subset\Gamma\) be finite and \(\preceq\)-initial. Let \(\mathcal{P}\subseteq\mathcal{V}\) be the set of \(\preceq\)-maximal elements of \(\mathcal{V}\), and \(\mathcal{F}\coloneqq\mathcal{V}\setminus\mathcal{P}\). Let \(Z\subseteq\mathbb{K}^{\mathcal{V}}\) be \(L(\mathbb{K})\)-definable. If \(\Pi_{\mathcal{F}}(Z)\) is large, then there exists \(c\in\mathbb{K}\) such that \(c^{\mathcal{V}}\in Z\).
2. (nc-Wide) Let \(W\subseteq\mathbb{K}^{n}\times\mathbb{K}^{k\times n}\) be \(L(\mathbb{K})\)-definable and such that \(\Pi_{n}(W)\) is large. Then, there exists \(\bar{c}\in\mathbb{K}^{n}\) such that \(\langle\bar{c},\delta_{1}\bar{c},\ldots,\delta_{k}\bar{c}\rangle\in W\).
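To see the genericity expressed by these schemes in a toy case (our example, assuming that \(L\) expands the language of rings), take \(n=1\) and
\[W\coloneqq\{\langle x,y_{1},\ldots,y_{k}\rangle\in\mathbb{K}\times\mathbb{K}^{k}:y_{1}=x^{2}+1\}.\]
Then \(\Pi_{1}(W)=\mathbb{K}\), which is large, so (nc-Wide) produces an element \(c\in\mathbb{K}\) with \(\delta_{1}c=c^{2}+1\).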
**Definition 4.4**.: We denote by
\[T^{\bar{\delta},nc}_{deep}:=T^{\bar{\delta},nc}\cup(\texttt{nc-Deep}),\qquad T^{\bar{\delta},nc}_{wide}:=T^{\bar{\delta},nc}\cup(\texttt{nc-Wide}).\]
**Theorem 4.5**.: (1)_\(T^{\bar{\delta},nc}_{deep}\) and \(T^{\bar{\delta},nc}_{wide}\) are consistent and equivalent to each other._
2. _If_ \(T\) _is model-complete, then the model completion_ \(T^{\bar{\delta},nc}_{g}\) _of_ \(T^{\bar{\delta},nc}\) _exists, and the theories_ \(T^{\bar{\delta},nc}_{deep}\) _and_ \(T^{\bar{\delta},nc}_{wide}\) _are two possible axiomatizations of_ \(T^{\bar{\delta},nc}_{g}\)_._
3. _If_ \(T\) _eliminates quantifiers, then_ \(T^{\bar{\delta},nc}_{g}\) _eliminates quantifiers._
4. _For every_ \(L^{\bar{\delta}}\)_-formula_ \(\alpha(\bar{x})\) _there exists an_ \(L\)_-formula_ \(\beta(\bar{x})\) _such that_ \[T^{\bar{\delta},nc}_{g}\models\forall\bar{x}\,\left(\alpha(\bar{x})\leftrightarrow \beta(\bar{x}^{\Gamma})\right)\]
For the proof, we proceed as in §3.4, i.e. in three steps:
**Lemma 4.6**.: \(T^{\bar{\delta},nc}_{wide}\vdash T^{\bar{\delta},nc}_{deep}\)_._
Proof.: Let \(Z,\mathcal{F},\mathcal{P},\mathcal{V}\) be as in (nc-Deep).
_Claim 1_.: W.l.o.g., we may assume that \(\mathcal{P}\) is equal to the set of \(\preceq\)-minimal elements of \(\Gamma\setminus\mathcal{F}\).
In fact, let \(\mathcal{P}^{\prime}\) be the set of \(\preceq\)-minimal elements of \(\Gamma\setminus\mathcal{F}\); notice that \(\mathcal{P}\subseteq\mathcal{P}^{\prime}\). We can replace \(\mathcal{V}\) with \(\mathcal{V}^{\prime}\coloneqq\mathcal{V}\cup\mathcal{P}^{\prime}\), and \(Z\) with \(Z^{\prime}\coloneqq\Pi^{-1}(Z)\), where \(\Pi\) is the function
\[\Pi:\mathbb{K}^{\mathcal{V}^{\prime}} \longrightarrow\mathbb{K}^{\mathcal{V}}\] \[\bar{x} \longmapsto\langle x_{\mu}:\mu\in\mathcal{V}\rangle.\]
Then, \(\Pi_{\mathcal{F}}(Z^{\prime})=\Pi_{\mathcal{F}}(Z)\), and if \(a^{\mathcal{V}^{\prime}}\in Z^{\prime}\), then \(a^{\mathcal{V}}\in Z\).
We introduce variables \(x_{0},x_{1},\ldots,x_{k}\) and corresponding variables \(x_{i,\gamma}\), which for readability we denote by \(x(i,\gamma)\), for \(0\leq i\leq k\) and \(\gamma\in\Gamma\).
For brevity, we denote
\[\bar{x}\coloneqq\langle x(i,\gamma):0\leq i\leq k,\gamma\in\mathcal{V}\rangle\qquad \text{and}\qquad\bar{x}_{i}\coloneqq\langle x(i,\gamma):\gamma\in\mathcal{V} \rangle,\quad i=0,\ldots,k.\]
We also denote
\[\Pi_{0}:(\mathbb{K}^{\mathcal{V}})^{k+1} \longrightarrow\mathbb{K}^{\mathcal{V}}\] \[\bar{x} \longmapsto\bar{x}_{0}\]
For each \(\pi\in\mathcal{P}\), we choose \(\mu_{\pi}\in\mathcal{F}\) and \(i_{\pi}\in\{1,\ldots,k\}\) such that \(\delta_{i_{\pi}}\mu_{\pi}=\pi\). Moreover, given \(\bar{a}\in(\mathbb{K}^{\mathcal{F}})^{k+1}\), we define \(\bar{a}^{\prime}\in\mathbb{K}^{\mathcal{V}}\) as the tuple with coordinates
\[a^{\prime}_{\gamma}\coloneqq\begin{cases}a(0,\gamma)&\text{ if }\gamma\in \mathcal{F}\\ a(i_{\gamma},\mu_{\gamma})&\text{ if }\gamma\in\mathcal{P}.\end{cases}\]
We define
\[W\coloneqq\{\bar{a}\in(\mathbb{K}^{\mathcal{F}})^{k+1}:\bar{a}^{\prime}\in Z\wedge a(i,\gamma)=a(0,\delta_{i}\gamma)\text{ for all }i=1,\ldots,k\text{ and }\gamma\in\mathcal{F}\text{ with }\delta_{i}\gamma\in\mathcal{F}\}.\]
Notice that \(\Pi_{0}(W)\) is equal to \(\Pi_{\mathcal{F}}(Z)\), and therefore it is large. Thus, by (nc-Wide), there exists \(\bar{a}\in\mathbb{K}^{\mathcal{F}}\) such that \(\langle\bar{a},\delta_{1}(\bar{a}),\ldots,\delta_{k}(\bar{a})\rangle\in W\). Finally, taking \(a\coloneqq a(0,\emptyset)\), we get \(a^{\mathcal{V}}\in Z\).
**Lemma 4.7**.: _Let \((A,\bar{\delta})\models T^{\bar{\delta},nc}\). Let \(Z\subseteq A^{n}\times(A^{n})^{k}\) be \(L\)-definable with parameters in \(A\), such that \(\Pi_{n}(Z)\) is large. Then, there exists \(\langle B,\bar{\varepsilon}\rangle\supseteq\langle A,\bar{\delta}\rangle\) and \(\bar{b}\in B^{n}\) such that \(B\succeq A\), \(\langle B,\bar{\varepsilon}\rangle\models T^{\bar{\delta},nc}\), and \(\langle\bar{b},\bar{\varepsilon}\bar{b}\rangle\in Z_{B}\)._
Proof.: Same proof as for Lemma 3.11.
**Lemma 4.8**.: _Let \(\langle B,\bar{\delta}\rangle\models T^{\bar{\delta},nc}\), \(\langle C,\bar{\delta}\rangle\models T^{\bar{\delta},nc}_{deep}\), and let \(\langle A,\bar{\delta}\rangle\) be an \(L^{\bar{\delta}}\)-substructure of both models, such that \(B\) and \(C\) have the same \(L(A)\)-theory. Let \(b\in B\) such that \(\langle B,\bar{\delta}\rangle\models\theta(b)\), where \(\theta(x)\) is a quantifier-free \(L^{\bar{\delta}}\)-formula with parameters in \(A\). Then, there exists \(c\in C\) such that \(\langle C,\bar{\delta}\rangle\models\theta(c)\)._
Proof.: By Lemma 3.9 there exist a finite \(\preceq\)-initial subset \(U\) of \(\Gamma\) and an \(L(A)\)-formula \(\psi(\bar{y})\) such that \(T^{\bar{\delta},nc}\models\theta(x)\leftrightarrow\psi(x^{U})\). Let \(Y^{B}\coloneqq\psi(B)\) and \(Y^{C}\coloneqq\psi(C)\). Let
\[\mathcal{F}\coloneqq\{\gamma\in U:\gamma b\notin\operatorname{acl}(A,b^{U< \gamma})\},\text{ where we denote by }b^{U<\gamma}\coloneqq\langle\mu b:\mu<\gamma\wedge\mu\in U\rangle\]
Define \(\mathcal{B}\coloneqq\Gamma\setminus\mathcal{F}\) and \(\mathcal{P}\) be the set of \(\preceq\)-minimal elements of \(\mathcal{B}\) (notice that \(\mathcal{P}\) might be infinite). As usual, define \(\mathcal{V}\coloneqq\mathcal{F}\cup\ \mathcal{P}\).
For every \(\gamma\in\Gamma\) there exists \(q_{\gamma}\in A(x_{\mathcal{V}\leq\gamma})\) such that \(\gamma b=q_{\gamma}(b^{\mathcal{V}\leq\gamma})\). Let \(\beta\) be the following \(L(A)\)-formula:
\[\beta(x_{\mathcal{V}})\coloneqq\psi(q_{\gamma}(x_{\mathcal{V}}):\gamma\in U).\]
Notice that \(\langle B,\bar{\delta}\rangle\models\beta(b^{\mathcal{V}})\). Let \(\mathcal{V}_{0}\subseteq\mathcal{V}\) be the set of indices of the variables occurring in \(\beta\): w.l.o.g., we may assume that \(\mathcal{V}_{0}\) is a \(\preceq\)-initial subset of \(\Gamma\). Let \(\mathcal{P}_{0}\) be the set of \(\preceq\)-maximal elements of \(\mathcal{V}_{0}\), and \(\mathcal{F}_{0}\coloneqq\mathcal{V}_{0}\setminus\mathcal{P}_{0}\). Define
\[Z\coloneqq\{\bar{d}\in B^{\mathcal{V}_{0}}:B\models\beta(\bar{d})\}.\]
Notice that \(\Pi_{\mathcal{F}_{0}}(Z)\) contains \(b^{\mathcal{F}_{0}}\), and therefore it is large. Thus, by (nc-Deep) applied in \(C\) (using that \(B\) and \(C\) have the same \(L(A)\)-theory), there exists \(c\in C\) such that \(c^{\mathcal{V}_{0}}\) satisfies \(\beta\), and therefore \(c^{U}\) satisfies \(\psi\), i.e., \(\langle C,\bar{\delta}\rangle\models\theta(c)\).
## 5. Several commuting derivations
We now deal with the case when there are several **commuting** derivations \(\delta_{1},\ldots,\delta_{k}\). The techniques used here for the treatment of several commuting derivations are a variant of [10]. In particular, we avoid as much as possible the algebraic approach of [11] based on autoreduced sets.
Let \(\bar{\delta}:=\langle\delta_{1},\ldots,\delta_{k}\rangle\). Let \(\eta_{1},\ldots,\eta_{k}\) be commuting derivations on \(F\). Let \(T^{\bar{\delta}}\) be the \(L^{\bar{\delta}}\)-expansion of \(T\) saying that each \(\delta_{i}\) is a derivation, that \(\delta_{i}\) extends \(\eta_{i}\) for \(i\leq k\), and that \(\delta_{i}\circ\delta_{j}=\delta_{j}\circ\delta_{i}\), for \(i,j\leq k\).
**Theorem 5.1**.: _Assume that \(T\) is model complete. Then, \(T^{\bar{\delta}}\) has a model completion \(T^{\bar{\delta}}_{g}\)._
### Configurations
To give the axioms for \(T^{\bar{\delta}}_{g}\) we need some more definitions and notations. We fix \(\langle\mathbb{K},\bar{\delta}\rangle\models T^{\bar{\delta}}\).
Let \(\Theta\) be the free commutative monoid generated by \(\bar{\delta}\), with the canonical partial order \(\preceq\) (notice that \(\Theta\) is isomorphic to \(\mathbb{N}^{k}\)). We fix the total order on \(\Theta\), given by
\[\theta\leq\theta^{\prime}\text{ iff }|\theta|<|\theta^{\prime}|\ \vee\ \Big{(}|\theta|=|\theta^{\prime}|\ \wedge\ \theta<_{lex}\theta^{\prime}\Big{)},\]
where \(<_{lex}\) is the lexicographic order, and \(|\langle\delta_{1}^{n_{1}}\cdots\delta_{k}^{n_{k}}\rangle|\coloneqq n_{1}+ \cdots+n_{k}\).
Given \(a\in\mathbb{K}\) and \(\theta\in\Theta\), we denote by \(a^{<\theta}\coloneqq\langle\mu a:\mu<\theta\rangle\), and similarly \(a^{\leq\theta}\coloneqq\langle\mu a:\mu\leq\theta\rangle\), and \(a^{\Theta}\coloneqq\langle\mu a:\mu\in\Theta\rangle\). Moreover, for each \(\theta\in\Theta\) we have a variable \(x_{\theta}\), and we denote \(x_{<\theta}\coloneqq\langle x_{\mu}:\mu<\theta\rangle\). Moreover, given a set \(A\subseteq\Theta\), we denote \(x_{A}\coloneqq\langle x_{\theta}:\theta\in A\rangle\), and \(x_{A\leq\mu}\coloneqq\langle x_{\nu}:\nu\in A\ \wedge\ \nu\leq\mu\rangle\). Given a rational function \(q\in\mathbb{K}(x_{\Theta})\), we denote
\[\frac{\partial q}{\partial\mu}\coloneqq\frac{\partial q}{\partial x_{\mu}} \qquad\text{and}\qquad q(\bar{a})=^{\mu}0\text{ iff }q(\bar{a})=0\ \wedge\ \frac{\partial q}{\partial\mu}(\bar{a})\neq 0.\]
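For instance (our illustration of this notation), if \(q=x_{\delta_{1}}^{2}-x_{\emptyset}\), then \(q(\bar{a})=^{\delta_{1}}0\) means that \(a_{\delta_{1}}^{2}=a_{\emptyset}\) and \(\frac{\partial q}{\partial\delta_{1}}(\bar{a})=2a_{\delta_{1}}\neq 0\): that is, \(\bar{a}\) is a simple zero of \(q\) with respect to the variable \(x_{\delta_{1}}\).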
Let \(B\subset\mathbb{K}\) such that \(\bar{\delta}B\subseteq B\). A configuration \(\mathfrak{S}\) with parameters in \(B\) is given by the following data.
1) A \(\preceq\)-anti-chain \(\mathcal{P}\subset\Theta\). Notice that, by Dickson's Lemma, \(\mathcal{P}\) must be finite.
We distinguish two sets:
* \(\mathcal{B}\coloneqq\{\mu\in\Theta:\exists\pi\in\mathcal{P}\,\mu\succeq\pi\}\), the set of leaders;
* \(\mathcal{F}\coloneqq\Theta\setminus\mathcal{B}=\{\mu\in\Theta:\forall\pi\in\mathcal{P}\,\mu\nsucceq\pi\}\), the set of free elements. Moreover, we define
* \(\mathcal{V}\coloneqq\mathcal{F}\cup\mathcal{P}\); \(\mathcal{S}\coloneqq\mathcal{B}\setminus\mathcal{P}\).
2) For every \(\pi\in\mathcal{P}\) we are given a nonzero polynomial \(p_{\pi}\in B[x_{\mathcal{V}\leq\pi}]\).
3) Finally, we are given an \(L(B)\)-formula \(\alpha(x_{\mathcal{V}})\).
An element \(a\in\mathbb{K}\)**realizes**\(\mathfrak{S}\) iff:
* For every \(\pi\in\mathcal{P}\), \(p_{\pi}(a^{\mathcal{V}\leq\pi})=^{\pi}0\);
* \(\mathbb{K}\models\alpha(a^{\mathcal{V}})\).
Not all configurations can be realized. We will give a sufficient first-order condition for \(\mathfrak{S}\) to be realized (either in \(\mathbb{K}\) or in some extension), and then the axioms of \(T_{g}^{\bar{\delta}}\) will say that all configurations satisfying that sufficient condition are realized.
We need some further construction. For every \(\mu\in\Theta\), let
\[Pr(\mu)\coloneqq\{\nu\in\Theta:\exists i\leq k\,\mu=\delta_{i}\nu\}\]
(the set of \(\preceq\)-predecessors of \(\mu\)). For every derivation \(\delta\) on \(\mathbb{K}\) there exists a unique derivation on \(\mathbb{K}(\bar{x})\), \(q(\bar{x})\mapsto q(\bar{x})^{\delta}\), such that \(a^{\delta}=\delta a\) for all \(a\in\mathbb{K}\) and \(x_{i}^{\delta}=0\) for every variable \(x_{i}\) in the tuple \(\bar{x}\) (that is, \(\delta\) is applied to the coefficients only).
**Definition 5.2**.: For every \(\mu\in\Theta\), we give a finite family of rational functions \(F_{\mu}\subset B(\bar{x}_{\mathcal{V}\leq\mu})\), and a distinguished function \(f_{\mu}\in F_{\mu}\).
1) If \(\mu\in\mathcal{V}\), we define \(f_{\mu}\coloneqq x_{\mu}\) and \(F_{\mu}\coloneqq\{x_{\mu}\}\).
2) If \(\mu\in\mathcal{S}\), we define \(F_{\mu}\) and \(f_{\mu}\) inductively. Let \(\nu\in Pr(\mu)\cap\mathcal{B}\); we have \(\mu=\delta\nu\) for a unique \(\delta\in\bar{\delta}\). We define
\[f_{\nu,\mu}\coloneqq f_{\nu}^{\delta}+\sum_{\rho\in\mathcal{V}\leq\nu}\frac{ \partial f_{\nu}}{\partial\rho}f_{\delta\rho}. \tag{1}\]
Let \(\pi\in Pr(\mu)\cap\mathcal{P}\); we have \(\mu=\delta\pi\) for a unique \(\delta\in\bar{\delta}\). We define
\[f_{\pi,\mu}\coloneqq-\frac{p_{\pi}^{\delta}+\sum_{\rho<\pi}\frac{\partial p_ {\pi}}{\partial\rho}f_{\delta\rho}}{\frac{\partial p_{\pi}}{\partial\pi}}.\]
Notice that, for every \(\nu\in Pr(\mu)\cap\mathcal{B}\), the function \(f_{\nu}\) and the functions \(f_{\delta\rho}\) which appear in the definition of \(f_{\nu,\mu}\) are already defined by the inductive hypothesis: therefore, \(f_{\nu,\mu}\) is well-defined.
We define
\[F_{\mu}\coloneqq\{f_{\nu,\mu}:\nu\in Pr(\mu)\cap\mathcal{B}\},\]
and we choose \(f_{\mu}\in F_{\mu}\) arbitrarily.
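As a worked instance of this definition (our example, with \(k=2\) and constant coefficients), take \(\mathcal{P}=\{\delta_{1}\}\) and \(p_{\delta_{1}}\coloneqq x_{\delta_{1}}-x_{\emptyset}^{2}\), encoding the constraint \(\delta_{1}a=a^{2}\); then \(\mathcal{F}=\{\delta_{2}^{\,j}:j\in\mathbb{N}\}\) and \(\mathcal{V}=\mathcal{F}\cup\{\delta_{1}\}\). For \(\mu=\delta_{2}\delta_{1}\), \(\pi=\delta_{1}\) and \(\delta=\delta_{2}\) we get
\[f_{\pi,\mu}=-\frac{p_{\pi}^{\delta_{2}}+\frac{\partial p_{\pi}}{\partial\emptyset}\,f_{\delta_{2}}}{\frac{\partial p_{\pi}}{\partial\delta_{1}}}=-\frac{0+(-2x_{\emptyset})\,x_{\delta_{2}}}{1}=2\,x_{\emptyset}\,x_{\delta_{2}},\]
which matches the identity \(\delta_{2}\delta_{1}a=2a\,\delta_{2}a\) obtained by applying \(\delta_{2}\) to \(\delta_{1}a=a^{2}\).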
Once we have all the \(f_{\mu}\) defined, for every \(h\in\mathbb{K}(x_{\mathcal{V}})\) and \(\delta\in\bar{\delta}\), we can define
\[R^{\delta}h\coloneqq h^{\delta}+\sum_{\rho\in\mathcal{V}}\frac{\partial h}{ \partial\rho}f_{\delta\rho}.\]
Then, (1) becomes
\[f_{\nu,\mu}=R^{\delta}f_{\nu}.\]
Notice that \(R^{\delta}\) is the unique derivation on \(\mathbb{K}(x_{\mathcal{V}})\) such that:
\[\forall c\in\mathbb{K}\ R^{\delta}c=\delta c;\qquad\forall\mu\in\mathcal{V}\ R^ {\delta}x_{\mu}=x_{\delta\mu}.\]
We have
\[F_{\mu}=\{R^{\delta}f_{\nu}:\delta\in\bar{\delta},\nu\in\Theta,\delta\nu=\mu\}.\]
The main difficulty is that in general the derivations \(R^{\delta_{i}}\) might not commute.
**Definition 5.3**.: Let \(\bar{a}\in\mathbb{K}^{\Theta}\). We say that \(\bar{a}\) is a **global formal solution** of \(\mathfrak{S}\) iff:
1. For all \(\mu\in\Theta\) and for all \(f\in F_{\mu}\), \(f(\bar{a})=a_{\mu}\);
2. For all \(\pi\in\mathcal{P}\), \(p_{\pi}(\bar{a})=^{\pi}0\);
3. \(\mathbb{K}\models\alpha(\bar{a})\).
Notice that if \(a\in\mathbb{K}\) realizes \(\mathfrak{S}\), then \(a^{\Theta}\) is a global formal solution of \(\mathfrak{S}\).
Let \(\theta^{\prime}\in\Theta\) be the least upper bound (w.r.t. \(\preceq\)) of \(\mathcal{P}\) and of all monomials \(\mu\in\Theta\) such that \(x_{\mu}\) appears in the formula \(\alpha\). Let \(\delta_{0}\coloneqq\delta_{1}\delta_{2}\cdots\delta_{k}\in\Theta\), and \(\theta^{\prime\prime}\coloneqq\delta_{0}\theta^{\prime}\), \(\theta\coloneqq\delta_{0}\theta^{\prime\prime}\).
**Definition 5.4**.: A **local formal solution** of \(\mathfrak{S}\) is a tuple \(\bar{a}\in\mathbb{K}^{\Theta\leq\theta}\) satisfying 2) and 3), and 1) only for \(\mu\leq\theta\).
The following is the main result, which allows us to express satisfiability in a first-order way.
**Proposition 5.5**.: _Let \(\bar{a}\in\mathbb{K}^{\Theta\leq\theta}\) be a local formal solution. For every \(\mu\in\mathcal{F}\) with \(\mu>\theta\), define \(a_{\mu}\) arbitrarily. Then, there is a unique way to define \(a_{\nu}\) for every \(\nu\in\mathcal{B}\) where \(\nu>\theta\) in such a way that \(\bar{a}\coloneqq(a_{\nu}:\nu\in\Theta)\) is a global formal solution._
We will give the proof later.
Let \(D_{\mathfrak{S}}\subseteq\mathbb{K}^{\Theta\leq\theta}\) be the set of local formal solutions of \(\mathfrak{S}\). Notice that \(D_{\mathfrak{S}}\) is \(L(B)\)-definable.
We say that \(\mathfrak{S}\) is **obviously formally satisfiable** if \(D_{\mathfrak{S}}\) is large.
### The axioms
**Definition 5.6**.: The axioms of \(T^{\bar{\delta}}_{g}\) are the axioms of \(T^{\bar{\delta}}\) plus the following axiom scheme:
1. (k-Deep) Every obviously formally satisfiable configuration is realized in \(\mathbb{K}\).
Notice that the above is the analogue of the axiom scheme (Deep): we don't have an analogue for the axiom scheme (Wide).
**Theorem 5.7**.: (1)_\(T^{\bar{\delta}}_{g}\) is a consistent and complete extension of \(T^{\bar{\delta}}\)._
2. _If_ \(T\) _is model-complete, then_ \(T^{\bar{\delta}}_{g}\) _is an axiomatization for the model completion of_ \(T^{\bar{\delta}}\)_._
3. _If_ \(T\) _eliminates quantifiers, then_ \(T^{\bar{\delta}}_{g}\) _eliminates quantifiers._
4. _For every_ \(L^{\bar{\delta}}\)_-formula_ \(\alpha(\bar{x})\) _there exists an_ \(L\)_-formula_ \(\beta(\bar{x})\) _such that_ \[T^{\bar{\delta}}_{g}\models\forall\bar{x}\big{(}\alpha(\bar{x})\leftrightarrow \beta(\bar{x}^{\Theta})\big{)}\]
_For every \((\mathbb{K},\bar{\delta})\models T^{\bar{\delta}}_{g}\), for every \(\bar{a}\) tuple in \(\mathbb{K}\) and \(B\subseteq\mathbb{K}\), the \(L^{\bar{\delta}}\)-type of \(\bar{a}\) over \(B\) is determined by the \(L\)-type of \(\bar{a}^{\Theta}\) over \(B^{\Theta}\)._
We assume that \(T\) eliminates quantifiers. We use the criterion in Proposition 3.2(3) to show that \(T^{\bar{\delta}}_{g}\) is the model completion of \(T^{\bar{\delta}}\) and it eliminates quantifiers. We will do it in two lemmas.
**Lemma 5.8**.: _Let \(\langle A,\bar{\delta}\rangle\models T^{\bar{\delta}}\). Let \(\mathfrak{S}\) be an obviously formally satisfiable configuration with parameters in \(A\). Then, there exists \(\langle B,\bar{\varepsilon}\rangle\supseteq\langle A,\bar{\delta}\rangle\) and \(b\in B\) such that \(B\succeq A\), \(\langle B,\bar{\varepsilon}\rangle\models T^{\bar{\delta}}\), and \(b\) realizes \(\mathfrak{S}\)._
Proof.: Let \(B\succ A\) be \(|A|^{+}\)-saturated. By assumption, \(D_{\mathfrak{S}}\) is large; therefore, there exists \(\bar{b}\in D_{\mathfrak{S}}\) such that \(\bar{b}\) is algebraically independent over \(A\).
Let \(\mathcal{F}\), \(\mathcal{B}\), and \(\theta\) be as in the definition of a configuration. By definition, \(\bar{b}\) can be extended to a local formal solution \(\bar{b}^{\prime}:=(b_{\mu}:\mu\leq\theta)\) of \(\mathfrak{S}\). For every \(\mu\in\mathcal{F}\) with \(\mu>\theta\), choose \(b_{\mu}\in B\) such that \((b_{\mu}:\mu\in\mathcal{F})\) is algebraically independent over \(A\). By Proposition 5.5, \(\bar{b}^{\prime}\) and the above choices of \(b_{\mu}\) for \(\mu\in\mathcal{F}\) can be extended to a global formal solution \(\bar{b}^{\prime\prime}:=(b_{\mu}:\mu\in\Theta)\).
Extend \((b_{\mu}:\mu\in\mathcal{F})\) to a transcendence basis \((b_{\mu}:\mu\in I)\) of \(B\) over \(A\). For each \(i\leq k\), define a derivation \(\varepsilon_{i}\) on \(B\) in the following way. On \(A\), \(\varepsilon_{i}\) is equal to \(\delta_{i}\). If \(\mu\in\mathcal{F}\), we define \(\varepsilon_{i}(b_{\mu}):=b_{\delta_{i}\mu}\). If \(\mu\in I\setminus\mathcal{F}\), define \(\varepsilon_{i}(b_{\mu}):=0\). The above conditions define a unique derivation \(\varepsilon_{i}\) on \(B\) extending \(\delta_{i}\).
_Claim 2_.: The derivations \(\varepsilon_{i}\) commute with each other.
It suffices to show that \(\varepsilon_{i}\) and \(\varepsilon_{j}\) commute on a generating set (for each \(i,j\leq k\)). By definition, they commute on \(A\). Thus, it suffices to show that they commute on \(b_{\mu}\) for each \(\mu\in I\). If \(\mu\in I\setminus\mathcal{F}\), then \(\varepsilon_{i}(b_{\mu})=\varepsilon_{j}(b_{\mu})=0\), and therefore they commute. If \(\mu\in\mathcal{F}\), then
\[\varepsilon_{j}(\varepsilon_{i}(b_{\mu}))=b_{\delta_{j}\delta_{i}\mu}=b_{ \delta_{i}\delta_{j}\mu}=\varepsilon_{i}(\varepsilon_{j}(b_{\mu})),\]
proving the claim. Thus, \(\langle B;\bar{\varepsilon}\rangle\models T^{\bar{\delta}}\).
Finally, \(b_{0}\) realizes \(\mathfrak{S}\), because
\[b_{0}^{\Theta\leq\theta}=\bar{b}^{\prime}\]
**Lemma 5.9**.: _Let \(\langle B,\bar{\delta}\rangle\models T^{\bar{\delta}}\), \(\langle C,\bar{\delta}\rangle\models T^{\bar{\delta}}_{g}\), and \(\langle A,\bar{\delta}\rangle\) be a common substructure, such that \(B\) and \(C\) have the same \(L(A)\)-theory. Let \(\gamma(x)\)
_be a quantifier-free \(L^{\bar{\delta}}\)-formula with parameters in \(A\). Let \(b\in B\) such that \(\langle B,\bar{\delta}\rangle\models\gamma(b)\). Then, there exists \(c\in C\) such that \(\langle C,\bar{\delta}\rangle\models\gamma(c)\)._
Proof.: We may assume that \(\gamma(x)\) is of the form \(\beta(x^{\Theta})\) for some \(L\)-formula \(\beta\). We define a configuration \(\mathfrak{S}\) in the following way.
\[\mathcal{F}:=\{\mu\in\Theta:\mu b\notin\operatorname{acl}(Ab^{\Theta<\mu})\}, \qquad\mathcal{B}:=\Theta\setminus\mathcal{F}.\]
Notice that \(\mathcal{F}\) is \(\preceq\)-initial subset of \(\Theta\); let \(\mathcal{P}\) be the set of \(\preceq\)-minimal elements of \(\mathcal{B}\). Let \(\mathcal{V}\) and \(\mathcal{S}\) as in the definition of a configuration.
For each \(\pi\in\mathcal{P}\), there exists some polynomial \(p_{\pi}(\bar{x})\in A[x_{\mathcal{V}\leq\pi}]\) such that \(p_{\pi}(b^{\mathcal{V}\leq\pi})=^{\pi}0\). For every \(\mu\in\Theta\), define \(f_{\mu}\) as in Definition 5.2. Finally, let \(\alpha(x_{\mathcal{V}})\) be the following \(L(A)\)-formula:
\[\alpha(x_{\mathcal{V}})\coloneqq\beta(f_{\Theta}(x_{\mathcal{V}})).\]
Thus, \(\mathfrak{S}\) is satisfied by \(b\). Since \(b_{\mathcal{F}}\) is algebraically independent over \(A\), \(D_{\mathfrak{S}}\) is large. Therefore, by \((k\text{-}\text{Deep})\), there exists \(c\in C\) realizing \(\mathfrak{S}\). In particular,
\[\langle C,\bar{\delta}\rangle\models\beta(f_{\Theta}(c^{\mathcal{V}}))\]
and for each \(\mu\in\Theta\), \(f_{\mu}(c^{\mathcal{V}})=\mu c\). Thus, \(\langle C,\bar{\delta}\rangle\models\beta(c^{\Theta})\), which is equivalent to \(\gamma(c)\).
### Proof of Prop. 5.5
Let \(h\in\mathbb{K}(x_{\mathcal{V}})\) and let \(\delta,\varepsilon\in\bar{\delta}\).
**Lemma 5.10**.: \[R^{\varepsilon}R^{\delta}h-R^{\delta}R^{\varepsilon}h=\sum_{\rho\in\mathcal{V} }\frac{\partial h}{\partial\rho}(R^{\varepsilon}f_{\delta\rho}-R^{\delta}f_{ \varepsilon\rho}).\]
Proof.: \[R^{\varepsilon}R^{\delta}h=(R^{\delta}h)^{\varepsilon}+\sum_{\lambda}\frac{\partial R^{\delta}h}{\partial\lambda}f_{\varepsilon\lambda}=(h^{\delta})^{\varepsilon}+\sum_{\rho}\Big{(}\frac{\partial h}{\partial\rho}f_{\delta\rho}\Big{)}^{\varepsilon}+\sum_{\lambda}\frac{\partial\big{(}h^{\delta}+\sum_{\rho}\frac{\partial h}{\partial\rho}f_{\delta\rho}\big{)}}{\partial\lambda}f_{\varepsilon\lambda}=h^{\delta\varepsilon}+\sum_{\rho}\frac{\partial h^{\varepsilon}}{\partial\rho}f_{\delta\rho}+\sum_{\rho}\frac{\partial h}{\partial\rho}f_{\delta\rho}^{\varepsilon}+\sum_{\lambda}\frac{\partial h^{\delta}}{\partial\lambda}f_{\varepsilon\lambda}+\sum_{\lambda,\rho}\frac{\partial^{2}h}{\partial\lambda\partial\rho}f_{\varepsilon\lambda}f_{\delta\rho}+\sum_{\lambda,\rho}\frac{\partial h}{\partial\rho}\frac{\partial f_{\delta\rho}}{\partial\lambda}f_{\varepsilon\lambda}.\]
Since \(h^{\varepsilon\delta}=h^{\delta\varepsilon}\) and \(\frac{\partial^{2}h}{\partial\lambda\partial\rho}=\frac{\partial^{2}h}{ \partial\rho\partial\lambda}\), we have
\[R^{\varepsilon}R^{\delta}h-R^{\delta}R^{\varepsilon}h=\sum_{\rho}\frac{ \partial h}{\partial\rho}f_{\delta\rho}^{\varepsilon}-\sum_{\rho}\frac{ \partial h}{\partial\rho}f_{\varepsilon\rho}^{\delta}+\sum_{\lambda,\rho}\frac{ \partial h}{\partial\rho}\frac{\partial f_{\delta\rho}}{\partial\lambda}f_{ \varepsilon\lambda}-\sum_{\lambda,\rho}\frac{\partial h}{\partial\rho}\frac{ \partial f_{\varepsilon\rho}}{\partial\lambda}f_{\delta\lambda}=\sum_{\rho} \frac{\partial h}{\partial\rho}g_{\rho},\]
where
\[g_{\rho}=f_{\delta\rho}^{\varepsilon}+\sum_{\lambda}\frac{\partial f_{\delta\rho}}{\partial\lambda}f_{\varepsilon\lambda}-f_{\varepsilon\rho}^{\delta}-\sum_{\lambda}\frac{\partial f_{\varepsilon\rho}}{\partial\lambda}f_{\delta\lambda}=R^{\varepsilon}f_{\delta\rho}-R^{\delta}f_{\varepsilon\rho}.\]
**Lemma 5.11**.: _Let \(\nu\in\mathcal{B}\) with \(\nu>\theta\), and let \(\mu_{1},\mu_{2}\in\mathcal{B}\cap\mathrm{Pr}(\nu)\). Then, there exists \(\mu_{0}\in\mathcal{S}\cap\mathrm{Pr}(\nu)\) such that \(\mu_{0}\wedge\mu_{i}\in\mathcal{S}\) for \(i=1,2\)._
Proof.: See the proof of Proposition 6.4 in [10].
Notice that, in general, in the above lemma we cannot claim that \(\mu_{1}\wedge\mu_{2}\in\mathcal{B}\).
We can now prove Proposition 5.5. We have to show that, for every \(\nu\in\Theta\) and \(f\in F_{\nu}\), \(f(\bar{a})=f_{\nu}(\bar{a})\).
Assume not: let \(\nu\) be minimal such that there exists \(f\in F_{\nu}\) with \(f(\bar{a})\neq f_{\nu}(\bar{a})\). If \(\nu\in\mathcal{V}\) or \(\nu\leq\theta\), then \(f(\bar{a})=f_{\nu}(\bar{a})\) holds by definition; therefore \(\nu\in\mathcal{S}\) and \(\nu>\theta\). We have that, for some \(\mu_{1},\mu_{2}\in Pr(\nu)\cap\mathcal{B}\) and \(\delta_{1},\delta_{2}\in\bar{\delta}\), \(\delta_{1}\mu_{1}=\delta_{2}\mu_{2}=\nu\), and \(f_{\nu}=R^{\delta_{1}}f_{\mu_{1}}\), \(f=R^{\delta_{2}}f_{\mu_{2}}\). Let \(\mu_{0}\) be as in Lemma 5.11. It suffices to show that \(R^{\delta_{1}}f_{\mu_{1}}(\bar{a})=R^{\delta_{0}}f_{\mu_{0}}(\bar{a})\) and similarly \(R^{\delta_{2}}f_{\mu_{2}}(\bar{a})=R^{\delta_{0}}f_{\mu_{0}}(\bar{a})\).
Thus, w.l.o.g. we may assume that \(\lambda\coloneqq\mu_{1}\wedge\mu_{2}\in\mathcal{S}\).
For the next computations in this proof, all the functions are evaluated in \(\bar{a}\). By inductive hypothesis
\[f_{\nu}=R^{\delta_{1}}f_{\mu_{1}}=R^{\delta_{1}}R^{\delta_{2}}f_{\lambda}, \qquad f=R^{\delta_{2}}f_{\mu_{2}}=R^{\delta_{2}}R^{\delta_{1}}f_{\lambda}.\]
By Lemma 5.10,
\[f_{\nu}-f=\sum_{\rho\in\mathcal{V}}\frac{\partial f_{\lambda}}{\partial\rho}( R^{\delta_{1}}f_{\delta_{2}\rho}-R^{\delta_{2}}f_{\delta_{1}\rho}).\]
However, again by inductive hypothesis, for every \(\rho\in\mathcal{V}\) and \(\rho<\lambda\), \(R^{\delta_{1}}f_{\delta_{2}\rho}=f_{\delta_{1}\delta_{2}\rho}=R^{\delta_{2}} f_{\delta_{1}\rho}\), hence \(f_{\nu}-f=0\).
**Remark 5.12**.: A global formal solution \(\bar{a}\) will satisfy \((R^{\delta_{i}}R^{\delta_{j}}h)(\bar{a})=(R^{\delta_{j}}R^{\delta_{i}}h)(\bar{ a})\) for every \(h\in\mathbb{K}(x_{\mathcal{V}})\).
**Remark 5.13**.: Assume that the set \(D_{\mathfrak{S}}\) of formal solutions is large. Then, for every \(\mu\in\Theta\), the functions in \(F_{\mu}\) will coincide on \(D_{\mathfrak{S}}\). However, since the functions in \(F_{\mu}\) are rational functions, if they coincide on a large set, they coincide everywhere. Thus, we could change the definition of \(\mathfrak{S}\) being obviously formally satisfiable to:
"For every \(\mu\in\Theta\), \(F_{\mu}\) is a singleton".
## 6. Stability and NIP
In this section we see that many of the model theoretic properties of T are inherited by \(T^{\bar{\delta},?}_{g}.\) We assume basic knowledge about stable and NIP theories: see [12, 13].
**Theorem 6.1**.: (1) _If \(T\) is stable, then \(T^{\bar{\delta},?}_{g}\) is stable._
(2) _If \(T\) is NIP, then \(T^{\bar{\delta},?}_{g}\) is NIP._
The above theorem follows immediately from the following one.
**Theorem 6.2**.: _Let \(U\) be an \(L\)-theory. Let \(\bar{\delta}\) be a set of new **unary** function symbols. Let \(U^{\prime}\) be an \(L^{\bar{\delta}}\)-theory expanding \(U\). Assume that, for every \(L^{\bar{\delta}}\)-formula \(\alpha(\bar{x})\) there exists an \(L\)-formula \(\beta(\bar{y})\) such that_
\[U^{\prime}\models\forall\bar{x}\ \alpha(\bar{x})\leftrightarrow\beta(\bar{x}^{ \Gamma}),\]
_where \(\bar{x}^{\Gamma}\) is the set of \(\bar{\delta}\)-terms in the variables \(\bar{x}\)._
_Then, for every \((M,\bar{\delta})\models U^{\prime}\), every tuple \(\bar{a}\) in \(M\) and every subset \(B\) of \(M\), the \(L^{\bar{\delta}}\)-type of \(\bar{a}\) over \(B\) is uniquely determined by the \(L\)-type of \(\bar{a}^{\Gamma}\) over \(B^{\Gamma}\)._
_Moreover,_
1. _If_ \(U\) _is stable, then_ \(U^{\prime}\) _is stable._
2. _If_ \(U\) _is NIP, then_ \(U^{\prime}\) _is NIP._
Proof.: The results follow easily by applying the following criteria.
1) [10, Thm II. 2.13] A theory \(U\) is stable iff, for every subset \(A\) of a model \(M\) of \(U\), and for every sequence \((\bar{a}_{n})_{n\in\mathbb{N}}\) of tuples in \(M\), if \((\bar{a}_{n})_{n\in\mathbb{N}}\) is an indiscernible sequence, then it is totally indiscernible.
2) [10, Proposition 2.8] A theory \(U\) is NIP iff, for every formula \(\phi(\bar{x};\bar{y})\) and for any indiscernible sequence \((\bar{a}_{i}:i\in I)\) and tuple \(\bar{b}\), there is some end segment \(I_{0}\subseteq I\) such that \(\phi(a_{i};b)\) is "constant" on \(I_{0}\): that is, either for every \(i\in I_{0}\)\(\phi(\bar{a}_{i};\bar{b})\) holds, or for every \(i\in I_{0}\)\(\neg\phi(\bar{a}_{i};\bar{b})\) holds.
## 7. Algebraic closure and independence relations
The results in this section are interesting on their own, and will be used in SS8 and SS12.
We fix a monster model \(\langle\mathbb{M};\bar{\delta}\rangle\) of \(T_{g}^{\bar{\delta},?}\).
Let \(\mathrel{⫝}\) be some ternary relation on subsets of \(\mathbb{M}\). We define the following ternary relation \(\mathrel{⫝^{\mathrm{s}}}\) on subsets of \(\mathbb{M}\):
\[A\mathrel{⫝^{\mathrm{s}}_{C}}B\quad:\Longleftrightarrow\quad A^{\Gamma}\mathrel{⫝_{C^{\Gamma}}}B^{\Gamma}.\]
**Theorem 7.3**.: _Let \(A\subseteq B\subset\mathbb{M}\) be small sets such that \(\bar{\delta}A\subseteq A\) and \(\bar{\delta}B\subseteq B\). Let \(p\in S_{L}^{n\cdot\Gamma}(A)\) be \(\bar{\delta}\)-compatible. Let \(q\in S_{L}^{n\cdot\Gamma}(B)\) be an extension of \(p\). If \(q\mathrel{⫝^{\operatorname{acl}}_{A}}B\), then \(q\) is also \(\bar{\delta}\)-compatible._
Proof.: First, we do the case when \(n=1\). Thus, let \(a\in\mathbb{M}\) be such that \(a^{\Gamma}\) realizes \(p\). We have to show that there exists \(c\in\mathbb{M}\) such that \(c^{\Gamma}\) realizes \(q\). Fix an \(L(B)\)-formula \(\psi(\bar{z})\) in \(q\), and let \(Z:=\psi(\mathbb{M})\). By saturation, it suffices to show that there exists \(c\in\mathbb{M}\) such that \(c^{\Gamma}\in Z\). Let
\[d\coloneqq\operatorname{rk}(a^{\Gamma}/A)\in\mathbb{N}\cup\{\infty\},\]
where \(\operatorname{rk}\) is the rank of the matroid \(\operatorname{acl}\). Since \(q\mathrel{⫝^{\operatorname{acl}}_{A}}B\), we have that \(d=\operatorname{rk}(\bar{e}/B)\) for any realization \(\bar{e}\) of \(q\).
If \(d=\infty\), then \(Z\) is large, and therefore \(c\) exists.
If \(d<\infty\), we define
\[\mathcal{F}\coloneqq\{\gamma\in\Gamma:\gamma a\notin\operatorname{acl}(a^{\Gamma<\gamma})\}.\]
Notice that \(\mathcal{F}\) is a \(\preceq\)-initial subset of \(\Gamma\); we define \(\mathcal{B}\coloneqq\Gamma\setminus\mathcal{F}\), and \(\mathcal{P}\) the set of \(\preceq\)-minimal elements of \(\mathcal{B}\).
Let \(W\subseteq\mathbb{M}^{\mathcal{F}}\) be \(L(B)\)-definable and such that the \(L(B)\)-formula defining \(W\) is in \(q\). Since \(a^{\Gamma}\) satisfies \(p\) and \(q\mathrel{⫝^{\operatorname{acl}}_{A}}B\), we have that \(W\) must be large.
We define, \(\mathcal{V}\coloneqq\mathcal{F}\cup\mathcal{P}\). Thus, for every \(\gamma\in\Gamma\) there exists \(f_{\gamma}\in\mathbb{Q}(x_{\mathcal{V}\leq\gamma})\) such that \(\gamma a=f_{\gamma}(a^{\mathcal{V}\leq\gamma})\). Let
\[\beta(x_{\mathcal{V}})\coloneqq\psi(f_{\gamma}(x_{\mathcal{V}}):\gamma\in \mathcal{V}).\]
Define \(X\coloneqq\beta(\mathbb{M})\). We have that \(\Pi_{\mathcal{F}}(X)\in q\) and therefore it is large. Thus, if \(T_{g}^{\bar{\delta},?}=T_{g}^{\bar{\delta},nc}\), there exists \(c\in\mathbb{M}\) such that \(c^{\Gamma}\) realizes \(\beta\).
If instead \(T_{g}^{\bar{\delta},?}=T_{g}^{\bar{\delta}}\), for every \(\pi\in\mathcal{P}\) we have a polynomial \(p_{\pi}(\bar{x})\in A[x_{\mathcal{V}\leq\pi}]\) such that \(p_{\pi}(a^{\mathcal{V}\leq\pi})=^{\pi}0\). Thus, we have a configuration \(\mathfrak{S}\) given by the data
\[\mathcal{P},\langle p_{\pi}:\pi\in\mathcal{P}\rangle,\beta.\]
Let \(\theta\) and \(F_{\gamma}\) be as in Definition 5.2. For every \(\gamma\in\Gamma\), the \(L(A)\)-formula
\[\sigma_{\gamma}(x_{\mathcal{V}\leq\gamma})\coloneqq\bigwedge_{g,h\in F_{\gamma }}g(x_{\mathcal{V}\leq\gamma})=h(x_{\mathcal{V}\leq\gamma})\]
is satisfied by \(a^{\Gamma}\), and therefore it is in \(p\), and hence also in \(q\). Notice that \(D_{\mathfrak{S}}\) is the set of \(\bar{b}\in\mathbb{M}^{\mathcal{V}\leq\theta}\) satisfying \(\beta\) and all \(\sigma_{\gamma}\) for every \(\gamma\leq\theta\): we have that \(D_{\mathfrak{S}}\) is \(L(B)\)-definable and in \(q\) (because it is an intersection of sets in \(q\)), and therefore it is large. Therefore, there exists \(c\in\mathbb{M}\) satisfying \(\mathfrak{S}\), and hence \(c^{\Gamma}\) satisfies \(\psi\).
We consider now the case when \(n>1\), and proceed by induction on \(n\); we assume that we have already proved the result for \(n-1\).
Let \(\bar{a}=\langle a_{1},\ldots,a_{n}\rangle\in\mathbb{M}^{n}\) be such that \(\bar{a}^{\Gamma}\) realizes \(p\). Let \(\bar{b}=\langle\bar{b}_{1},\ldots,\bar{b}_{n}\rangle\in(\mathbb{M}^{\Gamma})^{n}\) be a realization of \(q\). Let \(r\) be the restriction of \(q\)
to the first \((n-1)\cdot\Gamma\) variables: thus, \(r\) is the \(L\)-type of \(\langle\bar{b}_{1},\ldots,\bar{b}_{n-1}\rangle\). By the inductive hypothesis, there exists \(\bar{c}\in\mathbb{M}^{n-1}\) such that \(\bar{c}^{\Gamma}\) realizes \(r\). Let
\[A^{\prime}\coloneqq A\cup\bar{c}^{\Gamma}\qquad B^{\prime}\coloneqq B\cup\bar{c }^{\Gamma},\]
\[q^{\prime}(\bar{z}_{n})\coloneqq\{\alpha(\bar{c}^{\Gamma},\bar{z}_{n}):\alpha \in q\}\in S^{\Gamma}_{L}(B^{\prime}).\]
_Claim 3_.: \[q^{\prime}\mathrel{⫝^{\operatorname{acl}}_{A^{\prime}}}B^{\prime}.\]
In fact, let \(\bar{e}\) be a realization of \(q^{\prime}\). Then, \(\langle\bar{c}^{\Gamma},\bar{e}\rangle\) is a realization of \(q\). Therefore,
\[\bar{c}^{\Gamma}\bar{e}\mathrel{⫝^{\operatorname{acl}}_{A}}B,\]
and thus
\[\bar{e}\mathrel{⫝^{\operatorname{acl}}_{A\bar{c}^{\Gamma}}}B,\]
and the claim follows.
Let \(p^{\prime}\) be the restriction of \(q^{\prime}\) to \(A^{\prime}\).
_Claim 4_.: \(p^{\prime}\) is \(\bar{\delta}\)-compatible.
In fact, \(\bar{a}^{\Gamma}\) realizes \(p\). Let
\[\tilde{a}\coloneqq\langle a_{1},\ldots,a_{n-1}\rangle.\]
Notice that both \(\tilde{a}^{\Gamma}\) and \(\bar{c}^{\Gamma}\) realize the restriction of \(r\) to \(A\), and therefore they have the same \(L\)-type over \(A\). Since \(\bar{\delta}A\subseteq A\), we have that \(\tilde{a}\) and \(\bar{c}\) have also the same \(L^{\bar{\delta}}\)-type over \(A\). Thus, there exists an automorphism \(\sigma\) of the \(L^{\bar{\delta}}\)-structure \(\langle\mathbb{M},\bar{\delta}\rangle\) fixing \(A\) pointwise and such that \(\sigma(\tilde{a})=\tilde{c}\). Let \(b_{n}\coloneqq\sigma(a_{n})\). Thus,
\[\bar{a}=\langle\tilde{a},a_{n}\rangle\equiv_{A}^{L^{\bar{\delta}}}\langle\tilde {c},b_{n}\rangle,\]
hence \(\langle\tilde{c},b_{n}\rangle^{\Gamma}\) realizes \(p\), and therefore \(b_{n}^{\Gamma}\) realizes \(p^{\prime}\), proving the claim.
By Claims 3 and 4, and by the case \(n=1\), \(q^{\prime}\) is also \(\bar{\delta}\)-compatible. Thus, there exists \(c_{n}\in\mathbb{M}\) such that \(c_{n}^{\Gamma}\) realizes \(q^{\prime}\), and therefore \(\langle\tilde{c},c_{n}\rangle^{\Gamma}\) realizes \(q\).
Proof of Thm. 7.1.: It suffices to show that \(\mathrel{⫝^{\mathrm{s}}}\) satisfies existence: that is, the following claim.
_Claim 5_.: Let \(\bar{a}\in\mathbb{M}^{n}\), and \(A\subseteq B\) small subsets of \(\mathbb{M}\). Then, there exists \(\bar{a}^{\prime}\in\mathbb{M}^{n}\) such that
\[\bar{a}^{\prime}\equiv_{A}^{L^{\bar{\delta}}}\bar{a}\ \wedge\ \bar{a}^{\prime}\mathrel{⫝^{\mathrm{s}}_{A}}B.\]
W.l.o.g., we may assume that \(\bar{\delta}A\subseteq A\) and \(\bar{\delta}B\subseteq B\). Let \(p\) be the \(L\)-type of \(\bar{a}^{\Gamma}\) over \(A\). Let \(q\) be some non-forking extension of \(p\) to \(B\) (w.r.t. the independence relation \(\mathrel{⫝^{\operatorname{acl}}}\)). By assumption, \(p\) is \(\bar{\delta}\)-compatible. Thus, by Theorem 7.3, \(q\) is also \(\bar{\delta}\)-compatible: let \(\bar{a}^{\prime}\in\mathbb{M}^{n}\) be such that \(\bar{a}^{\prime\Gamma}\) realizes \(q\).
**Corollary 7.4**.: _1) The algebraic closure on \(\langle\mathbb{M};\bar{\delta}\rangle\) is given by_
\[\operatorname{acl}(A^{\Gamma})\]
_for every \(A\subseteq\mathbb{M}\)._
_2) If \(\langle\mathbb{M};\bar{\delta}\rangle\) has geometric elimination of imaginaries, then it is rosy._
Proof.: Let \(\operatorname{acl}^{\delta}\) be the algebraic closure according to \(T_{g}^{\bar{\delta},?}\).
1) If \(a\in\operatorname{acl}^{\delta}B\), then, since \(\mathop{\mathchoice{\kern 1.0pt\hbox{\vrule width 0.4pt height 6.0pt depth -0.
Proof.: For the proof we need to introduce suitable notions of "cells". First, we treat the case when we have only one derivation, and therefore \(T_{g}^{\bar{\delta},?}=T_{g}^{\delta}\).
In this case, we define a cell in the following way. Let \(2\leq i\leq n\in\mathbb{N}\), \(X\subseteq\mathbb{K}^{n}\), and \(\bar{b}\in\mathbb{K}^{i-1}\); we define the fiber of X
\[X(i,\bar{b})\coloneqq\{y\in\mathbb{K}:\langle\bar{b},y\rangle\in\Pi_{i}(X)\}\]
Let \(n\in\mathbb{N}\) and \(\bar{u}\in\{0,1\}^{n}\); a cell of type \(\bar{u}\) is an \(L(\mathbb{K})\)-definable set \(X\subseteq\mathbb{K}^{n}\) such that \(\dim(\Pi_{1}(X))=u_{1}\) and, for every \(2\leq i\leq n\) and \(\bar{b}\in\mathbb{K}^{i-1}\), the fiber \(X(i,\bar{b})\) is either empty or of dimension \(u_{i}\). (Notice that when \(\mathbb{K}\) is o-minimal every cell in the o-minimal meaning is also a cell in the above meaning.) The pivot of the cell \(X\) is \(\ell(X)\coloneqq\min\{i\leq n:u_{i}=0\}\).
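For example (our illustration, taking \(\mathbb{K}\) to be a model of the theory of real closed fields and \(\dim\) the \(\operatorname{acl}\)-dimension used above), the set \(X\coloneqq\{\langle x,y\rangle\in\mathbb{K}^{2}:0<x<1\wedge y=x^{3}\}\) is a cell of type \(\langle 1,0\rangle\): \(\Pi_{1}(X)\) is the interval \((0,1)\), of dimension \(1\), while each fiber \(X(2,b)\) is either empty or the singleton \(\{b^{3}\}\), of dimension \(0\); its pivot is \(\ell(X)=2\).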
It is clear that an \(L(\mathbb{K})\)-definable set \(X\) can be decomposed into finitely many cells "uniformly": that is, for every \(L\)-formula \(\alpha(\bar{x},\bar{y})\) there exist finitely many formulae \(\beta_{1}(\bar{x},\bar{y}),\ldots,\beta_{\ell}(\bar{x},\bar{y})\) such that, for every \(\bar{b}\in\mathbb{K}^{m}\), \(\left(\beta_{1}(\mathbb{K},\bar{b}),\ldots,\beta_{\ell}(\mathbb{K},\bar{b})\right)\) is a cell decomposition of \(\alpha(\mathbb{K},\bar{b})\).
We want to show that \(T_{g}^{\delta}\) is UF. It suffices to show that every set \(X\subseteq\mathbb{K}\) which is \(L^{\delta}(\mathbb{K})\)-definable is "uniformly finite": that is, either \(X\) is infinite, or there exists a uniform finite bound on the cardinality of \(X\) (uniform meaning that it depends only on the \(L^{\delta}\)-formula defining \(X\), not on the parameters). We may assume that
\[X=\{a\in\mathbb{K}:\alpha(\operatorname{Jet}_{\delta}^{n}(a),\operatorname{Jet}_{\delta}(\bar{b}))\},\]
where \(\alpha\) is an \(L\)-formula. Let \(Y\coloneqq\alpha(\mathbb{K},\operatorname{Jet}_{\delta}(\bar{b}))\subseteq \mathbb{K}^{n+1}\). If \(n=0\), then \(X=Y\); since \(T\) is UF, \(\Pi_{1}(Y)\) is uniformly finite, and hence \(X\) is also uniformly finite. W.l.o.g., we may assume that \(Y\) is a cell; let \(\bar{u}\in\{0,1\}^{n+1}\) be its type.
If \(u_{0}=0\), then \(\Pi_{1}(Y)\) is uniformly finite, and again we have a uniform bound on \(X\). If \(u_{i}=1\) for every \(i\leq n\), then \(X\) is infinite (by \(\mathtt{(Deep)}\)).
Otherwise, we proceed by induction on the pivot \(\ell\) of \(Y\). If \(\ell=n+1\), then, by \(\mathtt{(Deep)}\), \(X\) is infinite; thus, w.l.o.g. we may assume that \(2\leq\ell\leq n\). Thus, there exist \(L\)-definable functions \(f_{\ell+1},\ldots,f_{n+1}\) such that, for every \(i=\ell+1,\ldots,n+1\) and every \(a\in X\), \(\delta^{i}a=f_{i}(\operatorname{Jet}_{\delta}^{\ell}(a),\operatorname{Jet}_{\delta}(\bar{b}))\). Thus, we may replace \(Y\) with
\[Z\coloneqq\{\langle\bar{a},\bar{c}\rangle\in\mathbb{K}^{\ell}\times\mathbb{K}^{n+1-\ell}:\bar{a}\in\Pi_{\ell}(Y)\wedge c_{i}=f_{\ell+i}(\bar{a})\text{ for }i=1,\ldots,n+1-\ell\}.\]
We now decompose \(Z\) into cells (uniformly) \(Z_{1},\ldots,Z_{m}\). Let \(W_{i}\coloneqq\{x\in\mathbb{K}:\operatorname{Jet}_{\delta}^{n}x\in Z_{i}\}\), \(i=1,\ldots,m\). It suffices to show that each \(W_{i}\) is uniformly finite. Let \(i\leq m\) and let \(\ell_{i}\) be the pivot of \(Z_{i}\). If \(\ell_{i}<\ell\), then, by the inductive hypothesis, \(W_{i}\) is uniformly finite. If \(\ell_{i}=\ell\), then, by \(\mathtt{(Deep)}\), \(W_{i}\) is infinite. In either case, we have that \(W_{i}\) is uniformly finite, and we are done.
We treat now the case when we have \(k\) non-commuting derivations: that is, \(T_{g}^{\bar{\delta},?}=T_{g}^{\bar{\delta},nc}\). We need to modify the notion of cell accordingly. Let \(S\subset\Gamma\) be a nonempty finite \(\leq\)-initial subset. Let \(\bar{u}\in\{0,1\}^{S}\). A cell
of type \(\bar{u}\) is an \(L(\mathbb{K})\)-definable set \(X\subseteq\mathbb{K}^{S}\) such that \(\dim(\Pi_{0}(X))=u_{0}\) and, for every \(0\neq\gamma\in S\) and \(\bar{a}\in\mathbb{K}^{S<\gamma}\), the set \(\{c\in\mathbb{K}:\langle\bar{a},c\rangle\in\Pi_{S\leq\gamma}(X)\}\) is either empty or of dimension \(u_{\gamma}\). The set of pivots of a cell \(X\) is the set \(\mathcal{P}(X)\) of \(\preceq\)-minimal elements of \(\{\gamma\in S:u_{\gamma}=0\}\).
We want to show that every set \(X\subseteq\mathbb{K}\) which is \(L^{\bar{\delta}}\)-definable with parameters in \(\mathbb{K}\) is uniformly finite. As before, we may assume that \(X=\{x\in\mathbb{K}:x^{S}\in Y\}\) for some cell \(Y\subseteq\mathbb{K}^{S}\) of type \(\bar{u}\). If \(u_{\gamma}=1\) for every \(\gamma\in S\), then \(X\) is infinite. If \(u_{0}=0\), then \(\Pi_{0}(Y)\) is uniformly finite, and hence \(X\) is uniformly finite. Otherwise, the set \(\mathcal{P}:=\mathcal{P}(Y)\) is nonempty and does not contain \(0\): we prove that \(X\) is uniformly finite by _induction_ on \(\mathcal{P}\). In fact, on the family of finite anti-chains \(\mathfrak{A}\) of \(\Gamma\) we can put the partial ordering given by \(\mathcal{P}\preceq\mathcal{P}^{\prime}\) if \(\forall\pi\in\mathcal{P}\,\exists\pi^{\prime}\in\mathcal{P}^{\prime}\;\pi\preceq\pi^{\prime}\). Then, \(\left(\mathfrak{A},\preceq\right)\) is a well-founded partial order (with the empty set as minimum) and \(\mathcal{P}(Y)\in\mathfrak{A}\): thus, we can do induction (notice that the set of all anti-chains is not well-founded, because \(\langle\Gamma,\preceq\rangle\) is not a well-partial-order).
Let \(\mathcal{B}\coloneqq\{\gamma\in S:\exists\pi\in\mathcal{P}\,\pi\preceq\gamma\}\) and \(\mathcal{F}\coloneqq S\setminus\mathcal{B}\) and \(\mathcal{V}\coloneqq\mathcal{F}\cup\mathcal{P}\). For every \(\gamma\in\mathcal{B}\) there exists an \(L(\mathbb{K})\)-definable function \(f_{\gamma}\) such that, for every \(a\in X\), \(\gamma a=f_{\gamma}(a^{\mathcal{V}})\).
Thus, we may replace \(Y\) with
\[Z\coloneqq Y\cap\{\bar{a}\in\mathbb{K}^{S}:a_{\gamma}=f_{\gamma}(a_{\mathcal{V}})\text{ for every }\gamma\in\mathcal{B}\}.\]
We decompose \(Z\) into finitely many cells \(Z_{1},\ldots,Z_{m}\), and define \(W_{i}\coloneqq\{a\in\mathbb{K}:a^{S}\in Z_{i}\}\) for \(i=1,\ldots,m\). It suffices to show that each \(W_{i}\) is uniformly finite. If \(\mathcal{P}(Z_{i})=\mathcal{P}\), then \(W_{i}\) is infinite. Otherwise, \(\mathcal{P}(Z_{i})\prec\mathcal{P}\), and therefore, by the inductive hypothesis, \(W_{i}\) is uniformly finite, and we are done.
It remains to treat the case when we have \(k\) commuting derivations, that is \(T^{\bar{\delta},?}_{g}=T^{\bar{\delta}}_{g}\).
The definition of a cell \(X\), of its type, and of its set of pivots \(\mathcal{P}(X)\) is the same as in the non-commutative case \(T^{\bar{\delta},nc}_{g}\), except that we use as index set the free commutative monoid \(\Theta\) instead of the free monoid \(\Gamma\). As before, we are reduced to showing that, given a finite nonempty \(\leq\)-initial subset \(S\) of \(\Theta\) and a cell \(Y\subseteq\mathbb{K}^{S}\) which is \(L\)-definable with parameters \(\bar{b}^{\Theta}\), the set
\[X\coloneqq\{a\in\mathbb{K}:a^{S}\in Y\}\]
is uniformly finite. As before, if \(u_{\gamma}=1\) for every \(\gamma\in S\), then \(X\) is infinite; if \(u_{0}=0\), then \(X\) is uniformly finite. Otherwise, let \(\mathcal{P}:=\mathcal{P}(Y)\), and let \(\mathcal{V}\) and \(\mathcal{B}\) be defined as in the previous case; we have that \(\mathcal{P}\) is nonempty and does not contain \(0\). We proceed by induction on \(\mathcal{P}\). After further decomposing \(Y\), we may assume that, for every \(\pi\in\mathcal{P}\) there exists a polynomial \(p_{\pi}(\bar{x})\in\mathbb{K}[x_{\mathcal{V}\leq\pi}]\) such that, for every \(a\in X\), \(p_{\pi}(a^{\mathcal{V}\leq\pi})=^{\pi}0\). Moreover, for every \(\mu\in\mathcal{B}\) there exists an \(L\)-definable
function \(f_{\mu}\) such that, for every \(a\in X\), \(\mu a=f_{\mu}(a^{\mathcal{V}},\bar{b}^{\Theta})\). We can replace \(Y\) by
\[Z\coloneqq Y\cap\{\bar{a}\in\mathbb{K}^{S}:a_{\mu}=f_{\mu}(a_{\mathcal{V}},\bar{b}^{\Theta})\text{ for every }\mu\in\mathcal{B}\}.\]
We then decompose \(Z\) into finitely many cells \(Z=Z_{1}\sqcup\cdots\sqcup Z_{m}\), and, for \(i=1,\ldots,m\), we define \(W_{i}\coloneqq\{a\in K:a^{S}\in Z_{i}\}\). It suffices to show that each \(W_{i}\) is uniformly finite. If \(\mathcal{P}(Z_{i})\prec\mathcal{P}\), then, by inductive hypothesis, \(W_{i}\) is uniformly finite. If instead \(\mathcal{P}(Z_{i})=\mathcal{P}\), we associate to \(W_{i}\) the following configuration \(\mathfrak{S}\). As \(\mathcal{P}\) we take \(\mathcal{P}(Z_{i})\); for every \(\pi\in\mathcal{P}\) we take the polynomial \(p_{\pi}\), and as \(L(\mathbb{K})\)-definable subset of \(\mathbb{K}^{\mathcal{V}}\) we take \(\Pi_{\mathcal{V}}(Z_{i})\). If \(D_{\mathfrak{S}}\) is large, then \(W_{i}\) is infinite. If \(D_{\mathfrak{S}}\) is not large, we can replace \(Z_{i}\) with
\[Z_{i}^{\prime}\coloneqq Z_{i}\cap\{\bar{a}\in\mathbb{K}^{S}:a_{\mathcal{F}} \in D_{\mathfrak{S}}\}.\]
Then we further decompose \(Z_{i}^{\prime}\) into finitely many cells \(V_{1},\ldots,V_{\ell}\). We have that \(\mathcal{P}(V_{j})\prec\mathcal{P}\) for every \(j=1,\ldots,\ell\), and therefore, by inductive hypothesis, \(W_{i}\) is uniformly finite.
A few particular cases of the above theorem were already known; in particular, it was known for the following theories:
* DCF\({}_{0,\mathrm{m}}\) (see [11]);
* \(CODF_{m}\): real closed field with \(m\) commuting derivations (see [10]);
* algebraically closed valued fields, ordered valued fields, \(p\)-adics, with one generic derivation (see [11, Examples 2.2.1]);
Notice that the previously known proof for DCF\({}_{0,\mathrm{m}}\) is quite involved, while the one for \(CODF_{m}\) is quite straightforward (modulo some general nontrivial theorems on open core).
## 10. The field of constants
We give some interesting results on the field of constants. We first introduce some notation and definitions.
**Notation 10.1**.: From now on, we denote by \(T^{\bar{\delta},?}\) either \(T^{\bar{\delta}}\) or \(T^{\bar{\delta},nc}\), and correspondingly by \(T^{\bar{\delta},?}_{g}\) either \(T^{\bar{\delta}}_{g}\) or \(T^{\bar{\delta},nc}_{g}\).
\(\langle\mathbb{M},\bar{\delta}\rangle\) will be a monster model of \(T^{\bar{\delta},?}_{g}\).
\(\langle\mathbb{K},\bar{\delta}\rangle\) will be some model of \(T^{\bar{\delta},?}_{g}\)
**Theorem 10.2**.: _For every tuple \(\bar{a}\) in \(\mathbb{M}\) and every subset \(B\) of \(\mathbb{M}\), the \(L^{\bar{\delta}}\)-type of \(\bar{a}\) over \(B\) is uniquely determined by the \(L\)-type of \(\bar{a}^{\Gamma}\) over \(B^{\Gamma}\)._
**Definition 10.3**.: The **field of constants** is the set
\[\mathfrak{C}_{\bar{\delta}}\coloneqq\{a\in\mathbb{K}:\delta_{1}(a)=\cdots= \delta_{k}(a)=0\}\]
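That \(\mathfrak{C}_{\bar{\delta}}\) is indeed a subfield is the usual computation (we recall it for convenience): closure under sums and products is immediate from additivity and the Leibniz rule, and
\[\delta_{i}(1)=\delta_{i}(1\cdot 1)=2\,\delta_{i}(1)\ \Rightarrow\ \delta_{i}(1)=0,\qquad 0=\delta_{i}(a\cdot a^{-1})=a\,\delta_{i}(a^{-1})+a^{-1}\,\delta_{i}(a)\ \Rightarrow\ \delta_{i}(a^{-1})=-a^{-2}\,\delta_{i}(a),\]
so \(a\in\mathfrak{C}_{\bar{\delta}}\setminus\{0\}\) implies \(a^{-1}\in\mathfrak{C}_{\bar{\delta}}\).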
**Theorem 10.4**.: \(\mathfrak{C}_{\bar{\delta}}\) _is an elementary \(L\)-substructure of \(\mathbb{K}\)._
_Let \(\bar{b}\in\mathbb{K}^{\ell}\) and let \(X\subseteq\mathfrak{C}^{n}_{\bar{\delta}}\) be \(L^{\bar{\delta}}\)-definable with parameters \(\bar{b}\). Then, there exists \(Y\subseteq\mathbb{K}^{n}\) which is \(L\)-definable (in \(\mathbb{K}\)) with parameters \(\bar{b}^{\Gamma}\)_
_such that \(X=Y\cap\mathfrak{C}^{n}_{\bar{\delta}}\). If moreover \(\bar{b}\in\mathfrak{C}_{\bar{\delta}}\), then \(X\) is \(L\)-definable in \(\mathfrak{C}_{\bar{\delta}}\) with parameters \(\bar{b}\): equivalently, there exists \(Y\subseteq\mathbb{K}^{n}\) which is \(L\)-definable in \(\mathbb{K}\) with parameters \(\bar{b}\) such that \(X=Y\cap\mathfrak{C}^{n}_{\bar{\delta}}\)._
Observe that \(\mathfrak{C}_{\bar{\delta}}\) is algebraically closed in \(\mathbb{K}\) w.r.t. the \(L^{\bar{\delta}}\)-structure.
We can consider in more detail the reduct \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\) (that is, the expansion of \(\mathbb{K}\) with a unary predicate for \(\mathfrak{C}_{\bar{\delta}}\)). Observe that \(\mathfrak{C}_{\bar{\delta}}\) is **dense** in \(\mathbb{K}\) w.r.t. the matroid \(\operatorname{acl}\): that is, every \(Z\subseteq\mathbb{K}\) which is \(L\)-definable with parameters in \(\mathbb{K}\) and large intersects \(\mathfrak{C}_{\bar{\delta}}\); thus, \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\) is a lovely pair of geometric structures (in the sense of [1, 1]: see also [10]).
Thus, we can apply the known results (see [1, 1, 1]).
**Definition 10.5**.: A basic formula is a formula of the form
\[\exists\bar{y}\left(\bar{y}\in\mathfrak{C}_{\bar{\delta}}^{\ell}\wedge\psi(\bar{x},\bar{y})\right)\]
where \(\psi\) is an \(L\)-formula. A basic set is a set definable by a basic formula (with parameters from \(\mathbb{K}\)).
**Theorem 10.6**.: _Let \(Z\subseteq\mathbb{K}^{n}\) be definable in \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\) with parameters from \(\mathbb{K}\). Then, \(Z\) is a finite Boolean combination of basic sets, with the same parameters as \(Z\)._
**Remark 10.7**.: Let \(\langle A,B\rangle\) be a lovely pair of geometric structures, with \(A\models T\). Then, there exists \(B^{*}\succeq B\) and a derivation \(\delta^{*}\) on \(B^{*}\) such that \(\langle B^{*},\delta^{*}\rangle\models T^{\delta}_{g}\) and \(\langle B^{*},A^{*}\rangle\succeq\langle B,A\rangle\), where \(A^{*}\coloneqq\mathfrak{C}_{\delta^{*}}\)
Proof.: The theory \(T^{\text{lovely}}\) of lovely pairs of models of \(T\) is complete. Let \(\langle B^{*},A^{*}\rangle\succeq\langle B,A\rangle\) be a \(0\)-big (a.k.a. "splendid": see [1]) elementary extension. By bigness, there exists a derivation \(\delta^{*}\) on \(B^{*}\) satisfying the conclusion.
[10, §5] use a particular case of the above remark to (re-)prove some results about lovely pairs.
For more results (in particular on imaginaries in \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\)) see [1, 1, 1].
## 11. Open core
Assume that \(T\) has a definable topology. Assume that this topology satisfies the following conditions:
1. Assumption I of [1];
2. A definable subset of \(\mathbb{K}^{n}\) is large iff it has nonempty interior.
**Theorem 11.1**.: \(T\) _is the open core of \(T^{\bar{\delta},?}_{g}\). That is, for every \(\langle M,\bar{\delta}\rangle\models T^{\bar{\delta},?}_{g}\), for every \(V\subseteq M^{n}\) which is \(L^{\bar{\delta}}\)-definable with parameters \(\bar{b}\), if \(V\) is open, then \(V\) is \(L\)-definable with parameters \(\bar{b}^{\Gamma}\)._
Proof.: We use the criterion in [1][11]. Let \(\bar{b}\) be a finite tuple in \(\mathbb{M}\), and let \(V\subseteq M^{n}\) be \(L^{\bar{\delta}}\)-definable with parameters \(\bar{b}\). Replace \(\bar{b}\) with \(\bar{b}^{\Gamma}\). Let
\[D_{n}\coloneqq\{\bar{a}\in\mathbb{M}^{n}:\bar{\delta}\bar{a}=0\wedge\bar{a} \text{ is algebraically independent over }\bar{b}\}.\]
It is easy to see that:
1. \(D_{n}\) is dense in \(\mathbb{M}^{n}\).
2. For every \(p\in S^{n}_{L}(\bar{b})\), if \(p(\mathbb{M})\) (the set of \(\bar{a}\in\mathbb{M}^{n}\) realizing \(p\)) intersects \(D_{n}\), then \(p(\mathbb{M})\) is open. Therefore, if \(V\subseteq\mathbb{M}^{n}\) is open, and \(p(\mathbb{M})\) intersects both \(V\) and \(D_{n}\), then \(p(\mathbb{M})\cap V\cap D_{n}\) is nonempty.
3. For every \(\bar{a}\in D_{n}\), the \(L^{\bar{\delta}}\)-type of \(\bar{a}\) over \(\bar{b}\) is determined by its \(L\)-type, plus the fact that \(\bar{a}\in D_{n}\).
Note that condition (3) of [1][11] is also satisfied with the same \(D_{n}\).
**Proposition 11.2**.: _Assume moreover that the topology satisfies the following condition:_
1. _If_ \(X\) _is_ \(L(\mathbb{K})\)_-definable and nonempty, then_ \(\dim(\overline{X}\setminus X)<\dim(X)\)_, where_ \(\overline{X}\) _is the topological closure of_ \(X\)_._
_Then, \(T^{\bar{\delta},?}_{g}\) has elimination of imaginaries modulo \(T^{eq}\)._
Proof.: With trivial modifications, the proof suggested by M. Tressl in [10] works.
Particular cases of Theorem 11.1 were already known: see [12, §6].
## 12. Differential dimension
### The commutative case
Let \(\langle A,\bar{\delta}\rangle\) be a field (of characteristic 0) with \(k\) commuting derivations. The derivations induce a matroid on \(A\). Given \(a\in A\), \(Y\subseteq A\) and \(X\subseteq A\), we define \(a\in\bar{\delta}\)-\(\text{acl}_{Y}(X)\) if \(a^{\Theta}\) is not algebraically independent over \(X^{\Theta}Y^{\Theta}\).
As shown in [10], \(\bar{\delta}\)-\(\text{acl}_{Y}\) is a matroid on \(A\).(2)
Footnote 2: A more general result is true. Let \(\mathbb{A}:=\langle A,\text{cl}\rangle\) be a finitary matroid. Let \(\bar{\delta}\) be a tuple of commuting quasi-endomorphisms of \(\mathbb{A}\), in the sense of [10]. Given \(X,Y\subseteq A\), define \(\bar{\delta}\)-\(cl_{Y}(X)\) as the set of \(a\in A\) such that \(a^{\Theta}\) is not cl-independent over \(X^{\Theta}Y^{\Theta}\). Then, \(\bar{\delta}\)-cl is a finitary matroid on \(A\).
Fix a monster model \(\langle\mathbb{M},\bar{\delta}\rangle\) of \(T^{\bar{\delta}}_{g}\). We have the corresponding matroid \(\bar{\delta}\)-\(\text{acl}(X)\coloneqq\bar{\delta}\)-\(\text{acl}_{F}(X)\).
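To illustrate the definition, an element satisfying a nontrivial algebraic differential equation over \(F\) lies in \(\bar{\delta}\text{-acl}(\emptyset)\). For instance, for a single derivation \(\delta\), if \(\delta a=a^{2}\), then
\[\delta a=a^{2},\qquad\delta^{2}a=2a\,\delta a=2a^{3},\qquad\delta^{3}a=6a^{2}\,\delta a=6a^{4},\qquad\ldots\]
so every element of \(a^{\Theta}\) is algebraic over \(F(a)\); in particular, \(a^{\Theta}\) is not algebraically independent over \(F\), and hence \(a\in\bar{\delta}\text{-acl}(\emptyset)\), even when \(a\notin\operatorname{acl}(F)\).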
**Theorem 12.1**.: \(\bar{\delta}\)-\(\text{acl}\) _is an existential matroid (in the sense of [11])._
Proof.: We have to prove that \(\bar{\delta}\)-\(\text{acl}\) is definable and that it satisfies existence (see [11, §3]).
The fact that \(\bar{\delta}\)-acl is definable means that, for every \(A\subseteq\mathbb{M}\) and \(b\in\bar{\delta}\text{-acl}(A)\) there exists an \(L^{\bar{\delta}}\)-formula \(\phi(\bar{x},y)\) and \(\bar{a}\in A^{n}\) such that \(\langle\mathbb{M},\bar{\delta}\rangle\models\phi(\bar{a},b)\) and, for every \(\bar{a}^{\prime},b^{\prime}\) in \(\mathbb{M}\), if \(\langle\mathbb{M},\bar{\delta}\rangle\models\phi(\bar{a}^{\prime},b^{\prime})\), then \(b^{\prime}\in\bar{\delta}\text{-acl}(\bar{a}^{\prime})\). We can take as \(\phi\) any formula witnessing that \(b^{\Theta}\) is not algebraically independent over \(A\).
For existence, let \(A\subseteq B\subset\mathbb{M}\) be subsets of small cardinality. Let \(c\in\mathbb{M}\) such that \(c\notin\bar{\delta}\text{-acl}(A)\). We have to show that there exists \(d\in\mathbb{M}\) such that \(c\) and \(d\) have the same \(L^{\delta}\)-type over \(A\) and \(d\notin\bar{\delta}\text{-acl}(B)\).
Since \(\smile^{\mathrm{acl},\delta}\) satisfies existence, there exists \(d\in\mathbb{M}\) such that \(c\) and \(d\) have the same \(L^{\delta}\)-type over \(A\) and \(d\smile^{\mathrm{acl},\delta}_{A}B\). Then, \(d^{\Theta}\) is algebraically independent over \(B^{\Theta}\): therefore, \(d\notin\bar{\delta}\text{-acl}(B)\), proving that \(\bar{\delta}\)-acl is an existential matroid.
Thus, \(\bar{\delta}\)-acl induces a dimension function \(\bar{\delta}\text{- dim}\) on models of \(T_{g}^{\bar{\delta}}\) (see [10]; see also [1]).
**Remark 12.2**.:
1. \(\bar{\delta}\)-acl is not the \(T_{g}^{\bar{\delta}}\)-algebraic closure: the former contains the latter, but in general they differ. For instance, the whole field of constants \(\mathfrak{C}_{\bar{\delta}}\) is contained in \(\bar{\delta}\text{-acl}(\emptyset)\).
2. \(\smile^{\mathrm{acl},\delta}\) is not the independence relation induced by \(\bar{\delta}\)-acl, because \(\smile^{\mathrm{acl},\delta}\) is strict. For instance, if \(a\in\mathfrak{C}_{\bar{\delta}}\setminus\text{acl}(F)\), then \(a\) is not \(\smile^{\mathrm{acl},\delta}\)-independent from itself over \(F\), even though \(a\in\bar{\delta}\text{-acl}(F)\).
**Lemma 12.3** (See [1]).: _Let \(\langle\mathbb{K},\bar{\delta}\rangle\models T_{g}^{\bar{\delta}}\). Let \(Y\subseteq\mathbb{K}^{n}\) be \(L\)-definable (with parameters). Then, \(\dim(Y)=\bar{\delta}\text{-}\dim(Y)\)._
Proof.: By the properties of dimension functions (see [10]) it suffices to treat the case when \(n=1\) (the general case follows by induction on \(n\)). If \(\dim(Y)=0\), then \(Y\) is finite, and therefore \(\bar{\delta}\text{-}\dim(Y)=0\). If \(\dim(Y)=1\), then \((Y-Y)/(Y-Y)=\mathbb{K}\), and therefore \(\bar{\delta}\text{-}\dim(Y)=1\): otherwise, \(\mathbb{K}\), being the image of \(Y^{4}\) under a definable map, would have \(\bar{\delta}\)-dimension \(0\).
The same proof gives a more general result.
**Proposition 12.4** (Invariance of dimension for fields).: _Let \(L\) be a language expanding the language of rings, and \(L^{*}\) be an expansion of \(L\). Let \(A^{*}\) be an \(L^{*}\)-structure expanding a field, and \(A\) be its restriction to the language \(L\). Assume that \(\dim^{*}\) and \(\dim\) are dimension functions on \(A^{*}\) and \(A\), respectively. Then, for every \(X\subseteq A^{n}\) which is \(L\)-definable (with parameters), \(\dim^{*}(X)=\dim(X)\)._
Unlike in the case of lovely pairs, we cannot approximate \(L^{\delta}\)-definable sets with \(L\)-definable sets.
**Remark 12.5**.: Let \(\langle\mathbb{K},\delta\rangle\models T_{g}^{\delta}\). Let \(X\subseteq\mathbb{K}\) be \(L^{\delta}\)-definable (with parameters). If \(X\) is definable in the lovely pair \(\langle\mathbb{K},\mathfrak{C}_{\bar{\delta}}\rangle\) (see §10), then there exists \(Y\subseteq\mathbb{K}\) which is \(L\)-definable and such that \(\bar{\delta}\text{-}\dim(X\Delta Y)<1\) ([10, Proposition 8.36]). If not, such \(Y\) might not exist: for instance, let \(\mathbb{K}\) be a real closed field, and \(X\coloneqq\{x\in\mathbb{K}:\delta x>0\}\).
### The non-commutative case
The assumption that the derivations commute cannot be dropped.
**Lemma 12.6**.: _If the derivations do not commute, then \(\bar{\delta}\text{-}\mathrm{acl}_{Y}\) is not a matroid, because it is not transitive.1 In fact, let \(k=2\) and \(\langle\mathbb{K},\bar{\delta}\rangle\models T^{\bar{\delta},nc}_{g}\). Then, there exist \(a,b,c\in\mathbb{K}\) such that:_
Footnote 1: Naturally, we use the free monoid \(\Gamma\) instead of the free commutative monoid \(\Theta\) to define \(\bar{\delta}\)-acl in this situation.
1. \(a^{\Gamma}\) _is algebraically independent over_ \(F\)_;_
2. \(\delta_{2}b=0\) _and_ \(\delta_{1}b=\delta_{1}a\)_;_
3. \(c=a-b\)_._
_Notice that \(\delta_{1}c=0\). Then, \(a\notin\bar{\delta}\text{-}\mathrm{acl}(F)\), \(b,c\in\bar{\delta}\text{-}\mathrm{acl}(F)\), but \(a\in\bar{\delta}\text{-}\mathrm{acl}_{F}(b,c)\): thus, transitivity fails._
**Lemma 12.7**.: _For \(k\geq 2\), models of \(T^{\bar{\delta},nc}_{g}\) do not have a dimension function._
Proof.: For simplicity, we do the case when \(k=2\). Let \(\langle\mathbb{K},\bar{\delta}\rangle\models T^{\bar{\delta},nc}_{g}\). Assume, by contradiction, that \(\dim^{\prime}\) is a dimension function on \(\langle\mathbb{K},\bar{\delta}\rangle\). Let \(X\coloneqq\{b\in\mathbb{K}:\delta_{1}b=0\}\). Let \(Y\coloneqq\{c\in\mathbb{K}:\delta_{2}c=0\}\). Notice that \(X\) and \(Y\) are \(L^{\delta}\)-definable subfields of \(\mathbb{K}\) of infinite index inside \(\mathbb{K}\): thus, \(\dim^{\prime}(X)=\dim^{\prime}(Y)=0\).
_Claim 6_.: \(X+Y=\mathbb{K}\).
Let \(a\in\mathbb{K}\). Let \(b\in\mathbb{K}\) such that \(\delta_{1}b=0\) and \(\delta_{2}b=\delta_{2}a\), and let \(c=a-b\). Notice that \(b\in X\) and \(c\in Y\). Thus \(X+Y=\mathbb{K}\).
But \(\dim^{\prime}(X)=\dim^{\prime}(Y)=0\); since \(\mathbb{K}=X+Y\) is the image of \(X\times Y\) under addition, this implies \(\dim^{\prime}(\mathbb{K})\leq\dim^{\prime}(X)+\dim^{\prime}(Y)=0\), while the axioms of dimension require that \(\dim^{\prime}(\mathbb{K})=1\).
## 13. Genericity
We denote by \(\mathbb{K}^{\mathbb{K}}\) the set of all functions from \(\mathbb{K}\) to \(\mathbb{K}\), and by \(\mathrm{Der}_{\mathbb{K}}\subset\mathbb{K}^{\mathbb{K}}\) the set of derivations on \(\mathbb{K}\) extending \(\eta\). The main reference for this section is [11], from which our presentation is heavily inspired; for the background notions of descriptive set theory see [10].
For every \(\bar{a},\bar{b}\in\mathbb{K}^{n}\), we define
\[B_{\bar{a},\bar{b}}\coloneqq\{\delta\in\mathbb{K}^{\mathbb{K}}:\delta(\bar{a} )=\bar{b}\}.\]
For every \(L^{\delta}\)-sentence \(\phi\) with parameters in \(\mathbb{K}\), we define
\[U_{\phi}\coloneqq\{\delta\in\mathbb{K}^{\mathbb{K}}:\langle\mathbb{K},\delta \rangle\models\phi\}.\]
The set \(\mathbb{K}^{\mathbb{K}}\) has two "canonical" topologies:
* The pro-discrete topology, whose basis of open sets is given by \[\{B_{\bar{a},\bar{b}}:\bar{a},\bar{b}\in\mathbb{K}^{n},n\in\mathbb{N}\},\] and which we denote by \(\tau_{d}\).
* The "first-order" topology, whose basis of open sets is given by \[\{U_{\phi}:\phi\ L^{\delta}\text{-sentence with parameters in }\mathbb{K}\}\] and which we denote by \(\tau_{FO}\).
**Remark 13.1**.: Another basis for \(\tau_{d}\) is
\[\{U_{\phi}:\phi\text{ quantifier-free }L^{\delta}\text{-sentence with parameters in }\mathbb{K}\}\]
In fact,
\[B_{\bar{a},\bar{b}}=U_{(\delta a_{1}=b_{1}\wedge\cdots\wedge\delta a_{n}=b_{n} )}.\]
For the remainder of this section, when we don't specify the topology, we mean \(\tau_{d}\).
We say that an \(L^{\delta}\)-sentence \(\phi\) with parameters \(\bar{a}\) is "relatively quantifier free" if \(\phi=\alpha(\operatorname{Jet}_{\delta}(\bar{a}))\) for some \(L\)-formula without parameters \(\alpha\). Similarly, \(\phi\) is relatively existential if
\[\phi=\exists\bar{x}\alpha(\operatorname{Jet}_{\delta}(\bar{a}),\operatorname{ Jet}_{\delta}(\bar{x}))\]
for some \(L\)-formula without parameters \(\alpha\); similarly, we can define "relatively universal" and "relatively \(\forall\exists\)" \(L^{\delta}\)-sentences with parameters.
**Lemma 13.2**.: _Let \(\phi\) be an \(L^{\delta}\) sentence with parameters._
* _If_ \(\phi\) _is relatively quantifier free, then_ \(U_{\phi}\) _is clopen._
* _If_ \(\phi\) _is relatively existential, then_ \(U_{\phi}\) _is open._
* _If_ \(\phi\) _is relatively universal, then_ \(U_{\phi}\) _is closed._
* _If_ \(\phi\) _is relatively_ \(\forall\exists\)_, then_ \(U_{\phi}\) _is_ \(\mathcal{G}_{\delta}\)_._
Proof.: We only treat the case when \(\phi\) is relatively existential: the others are similar. Write \(\phi=\exists\bar{y}\ \alpha(\bar{a},\delta\bar{a},\ldots,\delta^{n}\bar{a},\bar{y},\delta\bar{y},\ldots,\delta^{m}\bar{y})\), for some \(L\)-formula \(\alpha\). Then,
\[U_{\phi}=\bigcup\Bigl(B_{\bar{a},\bar{a}_{1}}\cap B_{\bar{a}_{1},\bar{a}_{2}}\cap\cdots\cap B_{\bar{a}_{n-1},\bar{a}_{n}}\cap B_{\bar{b},\bar{b}_{1}}\cap B_{\bar{b}_{1},\bar{b}_{2}}\cap\cdots\cap B_{\bar{b}_{m-1},\bar{b}_{m}}:\bar{a}_{1},\ldots,\bar{a}_{n},\bar{b},\bar{b}_{1},\ldots,\bar{b}_{m}\in\mathbb{K}^{<\omega}\wedge\mathbb{K}\models\alpha(\bar{a},\bar{a}_{1},\ldots,\bar{a}_{n},\bar{b},\bar{b}_{1},\ldots,\bar{b}_{m})\Bigr).\]
For the remainder of this section, we assume that \(\mathbb{K}\) and \(L\) are **countable**.
Thus, \(\operatorname{Der}_{\mathbb{K}}\) is \(\tau_{FO}\)-closed and it is a \(\tau_{d}\)-\(\mathcal{G}_{\delta}\) inside \(\mathbb{K}^{\mathbb{K}}\); we use the same names for the induced topologies on \(\operatorname{Der}_{\mathbb{K}}\). Notice that \(\mathbb{K}^{\mathbb{K}}\) is a Polish space: therefore, \(\operatorname{Der}_{\mathbb{K}}\) is also a Polish space (see [10]). Thus, any two dense \(\mathcal{G}_{\delta}\) subsets of \(\operatorname{Der}_{\mathbb{K}}\) always intersect.
Given \(Z\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\), we define
\[I_{Z}\coloneqq\{\delta\in\operatorname{Der}_{\mathbb{K}}:\exists\bar{b}\in \mathbb{K}^{n}:\langle\bar{b},\delta\bar{b}\rangle\in Z\}\]
**Lemma 13.3**.: _For every \(Z\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\), \(I_{Z}\) is an open subset of \(\operatorname{Der}_{\mathbb{K}}\)._
Proof.: \[I_{Z}=\bigcup\Bigl{(}B_{\bar{a},\bar{b}}:\bar{a}\in\mathbb{K}^{n},\bar{b}\in \mathbb{K}^{n},\langle\bar{a},\bar{b}\rangle\in Z\Bigr{)}.\]
Let \(\mathbb{G}\) be the family of derivations \(\delta\in\operatorname{Der}_{\mathbb{K}}\) such that \(\langle\mathbb{K},\delta\rangle\models T_{g}^{\delta}\). Let \(\mathcal{L}\) be the family of the sets \(Z\subseteq\mathbb{K}^{n+n}\) definable with parameters, such that \(\Pi_{n}(Z)\) is large (for some \(n\in\mathbb{N}\)).
**Lemma 13.4**.: \(\mathbb{G}=\bigcap_{Z\in\mathcal{L}}I_{Z}\)_. Moreover, \(\mathbb{G}\) is a \(\mathcal{G}_{\delta}\)-subset of \(\operatorname{Der}_{\mathbb{K}}\)._
Proof.: By the axiomatization \(T_{\operatorname{wide}}^{\delta}\), \(\mathbb{G}=\bigcap_{Z\in\mathcal{L}}I_{Z}\). Each \(I_{Z}\) is open, and by our assumptions \(\mathcal{L}\) is countable; hence \(\mathbb{G}\) is a \(\mathcal{G}_{\delta}\).
**Lemma 13.5**.: _On \(\mathbb{G}\), \(\tau_{FO}\) and \(\tau_{d}\) coincide._
Proof.: By elimination of quantifiers, every \(L^{\delta}\)-sentence is equivalent, modulo \(T_{g}^{\delta}\), to a relatively quantifier-free sentence. The conclusion follows from Lemma 13.2.
**Lemma 13.6**.: _Assume that \(\operatorname{rk}(\mathbb{K}/F)\) is infinite. Then, for every finite tuple \(\bar{a}\) in \(\mathbb{K}\) and every large subset \(W\) of \(\mathbb{K}^{n}\) which is \(L\)-definable with parameters, there exists \(\bar{b}\in W\) which is algebraically independent over \(F\bar{a}\)._
Proof.: By induction on \(n\), it suffices to treat the case when \(n=1\). Let \(b\in\mathbb{K}\setminus\operatorname{acl}(F\bar{a})\). Since \(W\subseteq\mathbb{K}\) is large, there exist \(b_{1},b_{2},b_{3},b_{4}\in W\) such that \((b_{1}-b_{2})/(b_{3}-b_{4})=b\) and \(b_{3}\neq b_{4}\). Therefore, at least one of the \(b_{i}\) is not in \(\operatorname{acl}(F\bar{a})\).
**Theorem 13.7**.: _There exists \(\mathbb{K}\models T\) which is countable and of infinite rank over \(F\). For any such \(\mathbb{K}\), the set \(\mathbb{G}\) is a dense subset of \(\operatorname{Der}_{\mathbb{K}}\)._
Thus, in a precise topological sense, \(\mathbb{G}\) is a generic set (notice that \(\mathbb{G}\) is \(\tau_{FO}\)-closed in \(\operatorname{Der}_{\mathbb{K}}\)).
Proof.: We have seen that each \(I_{Z}\) is open. It suffices to prove the following claim.
_Claim 7_.: For every \(Z\in\mathcal{L}\), \(I_{Z}\) is dense.
Let \(Z\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\). Let \(B_{\bar{a},\bar{b}}\) be a nonempty basic open set. We have to verify that \(I_{Z}\cap B_{\bar{a},\bar{b}}\) is nonempty. Let \(\varepsilon\in B_{\bar{a},\bar{b}}\): that is, \(\varepsilon\in\operatorname{Der}_{\mathbb{K}}\) and \(\varepsilon\bar{a}=\bar{b}\). Let \(\varepsilon_{0}\) be the restriction of \(\varepsilon\) to \(\operatorname{acl}(F\bar{a})\). Let \(\bar{c}\in\Pi_{n}(Z)\) be algebraically independent over \(F\bar{a}\). We can extend \(\varepsilon_{0}\) arbitrarily to \(\bar{c}\); in particular, there exists \(\delta\in\operatorname{Der}_{\mathbb{K}}\) such that \(\delta\) extends \(\varepsilon_{0}\) and \(\langle\bar{c},\delta\bar{c}\rangle\in Z\). Thus, \(\delta\in I_{Z}\cap B_{\bar{a},\bar{b}}\).
The following theorem gives a "topological" criterion for when a differential system has a solution in models of \(T_{g}^{\delta}\).
**Theorem 13.8**.: _Let \(\langle\mathbb{K},\varepsilon\rangle\models T^{\delta}\). Assume that \(\mathbb{K}\) is countable and of infinite rank over \(F\). Let \(\bar{a}\in\mathbb{K}^{\ell}\). Let \(\operatorname{Der}_{\mathbb{K}}(\bar{a},\varepsilon)\) be the set of derivations \(\delta\) on \(\mathbb{K}\) extending \(\eta\) and such that \(\varepsilon\) and \(\delta\) coincide on \(\operatorname{Jet}_{\varepsilon}^{\infty}(\bar{a})\)._
_Let \(Z\subseteq\mathbb{K}^{n}\times\mathbb{K}^{n}\) be \(L\)-definable with parameters \(\bar{a}\). Let \(\delta\in\operatorname{Der}_{\mathbb{K}}(\bar{a},\varepsilon)\) be such that \(\langle\mathbb{K},\delta\rangle\models T^{\delta}_{g}\). T.f.a.e.:_
1. \(I_{Z}\) _is dense in_ \(\operatorname{Der}_{\mathbb{K}}(\bar{a},\varepsilon)\)_;_
2. \(I_{Z}\) _is nonempty;_
3. \(I_{Z}\cap\mathbb{G}\) _is nonempty;_
4. \(\delta\in I_{Z}\)_._
Proof.: First of all, (1) \(\Rightarrow\) (2) and (4) \(\Rightarrow\) (3) are obvious. Moreover, (2) is equivalent to (3), since \(I_{Z}\) is open and \(\mathbb{G}\) is dense.
We prove first the case when \(\bar{a}\) is empty (that is, \(Z\) is \(L\)-definable without parameters and \(\operatorname{Der}_{\mathbb{K}}(\bar{a},\varepsilon)=\operatorname{Der}_{ \mathbb{K}}\)).
In this case, we can add another equivalent formulation to (4):
(5) \(\mathbb{G}\subseteq I_{Z}\).
Since \(T^{\delta}_{g}\) is complete, and "\(\delta\in I_{Z}\)" can be expressed as a first-order sentence (without parameters), we have that (4) \(\Rightarrow\) (5); the converse is trivial, so (4) and (5) are equivalent. By the same completeness argument, (3) \(\Rightarrow\) (5), and (5) \(\Rightarrow\) (1) because \(\mathbb{G}\) is dense. Therefore, (1) is equivalent to (2), and all the conditions are equivalent.
Let us now consider the case when \(\bar{a}\) is non-empty. Let \(F^{\prime}\coloneqq F[\operatorname{Jet}_{\varepsilon}^{\infty}(\bar{a})]\) and let \(\eta^{\prime}\) be the restriction of \(\varepsilon\) to \(F^{\prime}\). We denote \(L^{\prime}\coloneqq L(F^{\prime})\) and \(T^{\prime}\coloneqq T\cup\operatorname{Diag}(F^{\prime})\).
We can consider the theory \({T^{\prime}}^{\delta}_{g}\) of generic derivations on \(\mathbb{K}\) extending \(\eta^{\prime}\): notice that \(\langle\mathbb{K},\delta\rangle\models{T^{\prime}}^{\delta}_{g}\). We can apply the previous proof to \(T^{\prime}\), since \(Z\) is now \(L^{\prime}\)-definable in \(\mathbb{K}\) without parameters (notice that we need to modify the definition of \(\mathbb{G}\), since we are restricting the space of derivations to those extending \(\eta^{\prime}\): however, we already proved the equivalence between (2) and (3)). This concludes the proof.
Barbina and Zambella [1] deal with a similar situation: however, we cannot use their result, since to apply it to our setting we would need \(\mathbb{K}\) to be countable and saturated. Maybe there could be a common refinement if one could weaken their assumption to \(\mathbb{K}\) being resplendent (since every countable consistent theory has a countable resplendent model: see [10]).
## 14. Pierce-Pillay axioms
We now give an extra axiomatization for \(T^{\delta}_{g}\), in the "geometric" style of Pierce and Pillay [11]. We won't use this axiomatization, but it may be of interest.
Let \(\langle\mathbb{K},\delta\rangle\models T^{\delta}\). Let \(V\subseteq\mathbb{K}^{n}\) be an algebraic variety defined over \(\mathbb{K}\). We define the torsor bundle \(\tau V\) of \(V\) w.r.t. \(\delta\) in the same way as
in [11] (see also [14], where it is called "prolongation"). The axiom scheme generalizing Pierce-Pillay to \(T^{\delta}_{g}\) is the following:
(PP) Let \(V\subseteq\mathbb{K}^{n}\) be an algebraic variety which is defined over \(\mathbb{K}\) and \(\mathbb{K}\)-irreducible. Let \(U\subseteq\tau V\) be an \(L(\mathbb{K})\)-definable set, such that the projection of \(U\) over \(V\) is large in \(V\) (i.e., of the same dimension as \(V\)). Then, there exists \(\bar{a}\in V\) such that \(\langle\bar{a},\delta\bar{a}\rangle\in U\).
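Concretely (following the cited definition), if \(f_{1},\ldots,f_{r}\) generate the ideal of \(V\), the torsor bundle can be described as
\[\tau V=\Bigl\{\langle\bar{x},\bar{u}\rangle\in\mathbb{K}^{2n}:\bar{x}\in V\ \wedge\ \bigwedge_{j=1}^{r}\Bigl(f_{j}^{\delta}(\bar{x})+\sum_{i=1}^{n}\frac{\partial f_{j}}{\partial x_{i}}(\bar{x})\,u_{i}=0\Bigr)\Bigr\},\]
where \(f_{j}^{\delta}\) is obtained by applying \(\delta\) to the coefficients of \(f_{j}\); in particular, \(\langle\bar{a},\delta\bar{a}\rangle\in\tau V\) for every \(\bar{a}\in V\).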
Notice that, by [10] (see also [11]), "\(V\) is irreducible over \(\mathbb{K}\)" is a definable property of the parameters of the formula defining \(V\), and therefore the above is a first-order axiom scheme.
**Theorem 14.1**.: \(T^{\delta}_{\mathrm{PP}}\coloneqq T^{\delta}\cup\text{(PP)}\) _is an axiomatization of \(T^{\delta}_{g}\)._
Proof.: Since we can take \(V=\mathbb{K}^{n}\), it is clear that (PP) implies (Wide).
We have to prove the converse. Since \(T^{\delta}_{g}\) is complete, it suffices to show that \(T^{\delta}_{\mathrm{PP}}\) is consistent. W.l.o.g., we may assume that \(T\) has elimination of quantifiers. To show that \(T^{\delta}_{\mathrm{PP}}\) is consistent, it suffices to prove the following
_Claim 8_.: Let \(\langle A,\delta\rangle\models T^{\delta}\). Let \(V\subseteq A^{n}\) be an \(A\)-definable and \(A\)-irreducible algebraic variety. Let \(U\subseteq\tau V\) be \(L(A)\)-definable and such that \(\Pi_{V}(U)\) is large inside \(V\), where \(\Pi_{V}:\tau V\to V\) is the canonical projection. Then, there exist \(\langle B,\varepsilon\rangle\models T^{\delta}\) extending \(\langle A,\delta\rangle\) and \(\bar{b}\in B^{n}\) such that \(\langle\bar{b},\varepsilon\bar{b}\rangle\in U\).
Let \(B\succ A\) be \(|A|^{+}\)-saturated. Let \(\bar{b}\in\Pi_{V}(U)\) be such that \(\bar{b}\) is generic in \(V\) over \(A\) (that is, \(\mathrm{rk}(\bar{b}/A)=\dim(V)\)). Let \(\bar{c}\in B^{n}\) be such that \(\langle\bar{b},\bar{c}\rangle\in U\). By known results (see [10, Theorem VIII.5.1], [11], [12, Section 60, Lemma 1.1]), there exists a derivation \(\varepsilon\) on \(B\) extending \(\delta\) and such that \(\varepsilon\bar{b}=\bar{c}\).
Giving the analogous axiomatization for \(T^{\bar{\delta},nc}_{g}\) is quite easy, and we leave it as an exercise for the reader.
On the other hand, we won't try to give a similar axiomatization for \(T^{\bar{\delta}}_{g}\), since already when \(T=ACF\) it is an arduous task: see [13, 14, 15].
## 15. Conjectures and open problems
We conclude the paper with a list of open problems, remarks, and some ideas.
### Elimination of imaginaries
**Conjecture 15.1**.: \(T^{\bar{\delta},?}_{g}\) _has elimination of imaginaries modulo \(T^{eq}\)._
A few particular cases are known, when \(T^{\bar{\delta},?}_{g}\) is one of the following:
1. \(\mathrm{DCF}_{0,\mathrm{m}}\): see [14];
2. \(RCF\) with \(m\) commuting generic derivations: see [13, 14] for a proof based on M. Tressl's idea; see also [1, 15] for different proofs;
3. \(\operatorname{DCF}_{0,\operatorname{m,nc}}\) (see [16]).
We have seen that the above conjecture holds for certain topological structures (see §11). Using the known techniques, it is quite plausible that the above conjecture could also be proved when \(T\) is simple (see [16]). For the general case, we think new ideas are needed (but see [1]).
### Definable types
Let \(\langle\mathbb{K},\bar{\delta}\rangle\models T_{g}^{\bar{\delta},?}\). Given a type \(p\in S_{L^{\delta}}^{n}(\mathbb{K})\), let \(\bar{a}\) be a realization of \(p\); we define \(\tilde{p}\in S_{L}^{n\times\Gamma}(\mathbb{K})\) as the \(L\)-type of \(\bar{a}^{\Gamma}\) over \(\mathbb{K}\).
**Open problem 15.2**.: _Is it true that \(p\) is definable iff \(\tilde{p}\) is definable? We conjecture that it is true when \(T_{g}^{\bar{\delta},?}=T_{g}^{\bar{\delta}}\)._
### Zariski closure
Given \(X\subseteq\mathbb{K}^{n}\), denote by \(X^{Zar}\) the Zariski closure of \(X\).
**Open problem 15.3** (See [13]).: _1) Let \(\left(X_{i}:i\in I\right)\) be an \(L\)-definable family of subsets of \(\mathbb{K}^{n}\). Is \(\left(X_{i}^{Zar}:i\in I\right)\) also \(L\)-definable?_
_2) Assume that 1) holds for \(\mathbb{K}\). Let \(\langle\mathbb{K},\bar{\delta}\rangle\models T_{g}^{\bar{\delta},?}\). Let \(\left(X_{i}:i\in I\right)\) be an \(L^{\delta}\)-definable family of subsets of \(\mathbb{K}^{n}\). Is \(\left(X_{i}^{Zar}:i\in I\right)\) also \(L^{\delta}\)-definable?_
### Kolchin polynomial
Let \(\langle\mathbb{M},\bar{\delta}\rangle\) be a monster model of \(T_{g}^{\bar{\delta}}\). Let \(\bar{a}\in\mathbb{M}^{n}\), \(B\subseteq\mathbb{M}\) such that \(\bar{\delta}B\subseteq B\). There exists a polynomial \(p_{\bar{a}|B}(t)\) such that, for \(n\) large enough, \(\operatorname{rk}(\bar{a}^{\Theta(n)}\mid B)=p_{\bar{a}|B}(n)\), where \(\Theta(n)=\{\mu\in\Theta:|\mu|\leq n\}\) (see [15]). The degree of the polynomial is at most \(k\); denote by \(\mu(\bar{a}\mid B)\) the leading monomial of \(p_{\bar{a}|B}\). Let \(X\subseteq\mathbb{K}^{n}\) be \(L^{\delta}\)-definable with parameters \(\bar{b}\): define
\[\mu(X) \coloneqq\sup\Bigl{(}\mu(\bar{a}\mid\bar{b}^{\Theta}):\bar{a} \in X\Bigr{)}\] \[\omega(X) \coloneqq\sup\Bigl{(}p_{\bar{a}|\bar{b}^{\Theta}}:\bar{a}\in X \Bigr{)}.\]
Notice that, by §7, \(\mu(X)\) and \(\omega(X)\) are well-defined (that is, they do not depend on the choice of the parameters \(\bar{b}\)).
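For example, if \(a\in\mathbb{M}\) is such that \(a^{\Theta}\) is algebraically independent over \(B\), then \(\operatorname{rk}(a^{\Theta(n)}\mid B)=|\Theta(n)|=\binom{n+k}{k}\) for every \(n\), so
\[p_{a|B}(t)=\binom{t+k}{k}=\frac{t^{k}}{k!}+(\text{terms of lower degree}),\]
a polynomial of degree \(k\) with leading term \(t^{k}/k!\).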
**Conjecture 15.4** (See [13, 14]).: \(\omega\) _and \(\mu\) are definable in families. That is, for every \(L^{\delta}\)-definable family \(\left(X_{i}:i\in I\right)\) there exists a partition of \(I\) into finitely many definable sets \(I=I_{1}\sqcup\cdots\sqcup I_{m}\) such that \(\mu(X_{i})\) and \(\omega(X_{i})\) are constant on each \(I_{j}\)._
To prove the above conjecture for \(\mu\) it should be enough to treat the case when \(X_{i}\subseteq\mathbb{M}\).
**Open problem 15.5**.: _What is the "geometric" meaning of \(\mu(X)\)? Notice that, up to a multiplicative constant, the \(k^{\text{th}}\) coefficient of \(\omega(X)\) is equal to \(\bar{\delta}\)-\(\dim(X)\)._
If Conjecture 15.4 is true, then the function \(X\mapsto\mu(X)\) behaves like a dimension on \(L^{\delta}\)-definable sets (with the difference that the values of \(\mu\) are not natural numbers, but monomials).
**Conjecture 15.6**.: _Assume that \(\mathbb{M}\) is endowed with a topology \(\tau\) satisfying some suitable conditions. Let \(\tau_{\bar{\delta}}\) be the topology on \(\mathbb{M}\) induced by the embedding \(\mathbb{M}\to\mathbb{M}^{\Theta}\), \(x\mapsto x^{\Theta}\) (where \(\mathbb{M}^{\Theta}\) is endowed with the product topology induced by \(\tau\)). Denote by \(\overline{X}^{\tau_{\bar{\delta}}}\) the \(\tau_{\bar{\delta}}\)-closure of \(X\). Then, for every \(X\subseteq\mathbb{M}^{n}\) which is \(L^{\delta}\)-definable and nonempty, \(\overline{X}^{\tau_{\bar{\delta}}}\) is also \(L^{\delta}\)-definable, and \(\mu(\overline{X}^{\tau_{\bar{\delta}}}\setminus X)<\mu(X)\)._
### Monoid actions
Let \(\Lambda\) be a monoid generated by a \(k\)-tuple \(\bar{\delta}\): we consider \(\Lambda\) as a quotient of the free monoid \(\Gamma\). We can consider actions of \(\Lambda\) on models of \(T\) such that each \(\delta_{i}\) is a derivation: we have a corresponding theory \(T^{\Lambda}\) whose language is \(L^{\delta}\) and with axioms given by \(T\), the conditions that each \(\delta_{i}\) is a derivation, and, for every \(\gamma,\gamma^{\prime}\in\Gamma\) which induce the same element of \(\Lambda\), the axiom \(\forall x\,\gamma x=\gamma^{\prime}x\).
**Open problem 15.7**.: _Under which conditions on \(\Lambda\) the theory \(T^{\Lambda}\) has a model completion?_
**Conjecture 15.8**.: _Let \(\Gamma_{\ell}\) be the free monoid in \(\ell\) generators, and \(\Theta_{k}\) be the free commutative monoid in \(k\) generators. Then, for \(\Lambda\) equal either to \(\Gamma_{\ell}\times\Theta_{k}\) or to \(\Gamma_{\ell}*\Theta_{k}\), \(T^{\Lambda}\) has a model completion (where \(\times\) is the cartesian product, and \(*\) is the free product). More generally, for \(\Lambda\) equal to a combination of free and cartesian products of finitely many copies of \(\mathbb{N}\), \(T^{\Lambda}\) has a model completion._
Maybe the following conditions on \(\Lambda\) suffice for \(T^{\Lambda}\) to have a model completion:
Let \(\preceq\) be the canonical quasi ordering on \(\Lambda\) given by \(\alpha\preceq\beta\alpha\) for every \(\alpha,\beta\in\Lambda\); we assume that:
* \(\preceq\) is a well-founded partial ordering;
* for every \(\lambda\in\Lambda\), the set \(\{\alpha\in\Lambda:\alpha\preceq\lambda\}\) is finite;
* for every \(\alpha,\beta\in\Lambda\), if they have an upper bound, then they have a least upper bound;
* let \(X\subset\Lambda\) be finite; assume that \(X\) is \(\preceq\)-initial in \(\Lambda\); then, \(\Lambda\setminus X\) has finitely many \(\preceq\)-minimal elements;
* if \(\alpha_{1}\delta_{1}=\alpha_{2}\delta_{2}\) for some \(\alpha_{i}\in\Lambda\) and \(\delta_{i}\in\bar{\delta}\), then \(\delta_{1}\) and \(\delta_{2}\) commute with each other; moreover, there exists \(\beta\in\Lambda\) such that \(\alpha_{1}=\delta_{2}\beta\) and \(\alpha_{2}=\delta_{1}\beta\).
|
2310.20277 | Towards a Structural Equation Model of Open Source Blockchain Software
Health | The widespread use of GitHub among software developers as a communal platform
for coordinating software development has led to an abundant supply of publicly
accessible data. Ever since the inception of Bitcoin, blockchain teams have
incorporated the concept of open source code as a fundamental principle, thus
making the majority of blockchain-based projects' code and version control data
available for analysis. We define health in open source software projects to be
a combination of the concepts of sustainability, robustness, and niche
occupation. Sustainability is further divided into interest and engagement.
This work uses exploratory factor analysis to identify latent constructs that
are representative of general public interest or popularity in software, and
software robustness within open source blockchain projects. We find that
interest is a combination of stars, forks, and text mentions in the GitHub
repository, while a second factor for robustness is composed of a criticality
score, time since last updated, numerical rank, and geographic distribution.
Cross validation of the dataset is carried out with good support for the model.
A structural model of software health is proposed such that general interest
positively influences developer engagement, which, in turn, positively predicts
software robustness. The implications of structural equation modelling in the
context of software engineering and next steps are discussed. | Jeff Nijsse, Alan Litchfield | 2023-10-31T08:47:41Z | http://arxiv.org/abs/2310.20277v1 | # Towards a Structural Equation Model of Open Source Blockchain Software Health
###### Abstract.
The widespread use of GitHub among software developers as a communal platform for coordinating software development has led to an abundant supply of publicly accessible data. Ever since the inception of Bitcoin, blockchain teams have incorporated the concept of open source code as a fundamental principle, thus making the majority of blockchain-based projects' code and version control data available for analysis.
We define health in open source software projects to be a combination of the concepts of sustainability, robustness, and niche occupation. Sustainability is further divided into interest and engagement. This work uses exploratory factor analysis to identify latent constructs that are representative of general public _Interest_ or popularity in software, and software _Robustness_ within open source blockchain projects. We find that _Interest_ is a combination of stars, forks, and text mentions in the GitHub repository, while a second factor for _Robustness_ is composed of a criticality score, time since last updated, numerical rank, and geographic distribution. Cross validation of the dataset is carried out with good support for the model.
A structural model of software health is proposed such that general interest positively influences developer engagement, which, in turn, positively predicts software robustness. The implications of structural equation modelling in the context of software engineering and next steps are discussed.
blockchain, software health, GitHub, structural equation modelling, exploratory factor analysis
## 1. Introduction
Software health is a multifaceted and elusive concept, drawing parallels with biological and environmental health as well as business health and their respective ecosystems. Studies in software health have explored various perspectives, from examining natural ecosystems (Hirsch et al., 2016; Wu et al., 2017) to contributor motivation (Bahdan et al., 2017; Krizhevsky et al., 2017), software communities (Krizhevsky et al., 2017; Krizhevsky et al., 2017) and their wider ecosystems (Krizhevsky et al., 2017; Krizhevsky et al., 2017) in order to derive representative models. Generally, these models refer to the overall well-being of a software system, which encompasses its performance, reliability, maintainability, and other related factors. Just like human health, software health is a critical aspect that affects the functionality and longevity of software systems.
Understanding and maintaining software health is important for the success of software systems and the satisfaction of software developers and end-users. By measuring and improving software health, organizations can ensure the long-term viability of their software systems and avoid costly downtime and maintenance issues. When software systems are healthy, developers can work more efficiently, productively, and with fewer errors. This, in turn, can lead to increased satisfaction and morale, which can have a positive impact on developer retention and recruitment. Similarly, end-users are more likely to be satisfied with a software product that is healthy and responsive to their needs, which can lead to increased user adoption and loyalty.
However, measuring the health of software can be challenging due to several factors. For instance, there is a lack of consensus on a definition of exactly what is meant by "health," and different stakeholders such as developers and end-users may have different perspectives. Additionally, version control data may be limited for proprietary software and corporations, combined with a dearth of user-friendly tools to assist with measurement. Finally, it can be difficult to identify
constructs such as _robustness_ and _engagement_ that are subjectively critical to health, but objectively hard to measure.
GitHub is the largest open-source software (OSS) community with over 254 million repositories by 73 million developers (Zhou et al., 2017; Zhang et al., 2018; Zhang et al., 2019). The platform has been pivotal in the rise of cryptocurrencies and blockchain projects, with 98.6% of the top 419 public repositories in this field hosted on GitHub (Zhou et al., 2017). The open-source ethos of decentralization has played a significant role in the rapid innovation and iteration seen in this field. Unfortunately, with technical innovation in blockchain can come social harm. 2022 was the worst year on record for the crypto industry with over $3 billion USD hacked, stolen, or lost from crypto-related projects and exchanges (Zhou et al., 2017). Therefore, understanding and maintaining software health is critical to prevent such losses and ensure the long-term viability of blockchain software systems.
By analysing the health of OSS in the scope of blockchain projects, this work contributes to the larger theme of software health and highlights the importance of measuring and maintaining software health in various domains, including emerging technologies like blockchain. Through this approach, researchers and stakeholders can develop more comprehensive models and strategies to ensure software health.
The objective of this work is to develop a model of software health in OSS blockchain projects using publicly available data from GitHub. Our hypothesis is that identifiable factors that contribute to the overall health of a software project can be measured and analysed. To achieve this, the statistical methods exploratory factor analysis (EFA) and structural equation modelling (SEM) are utilized to derive factors that inform the model of software health. With this model, researchers and practitioners can identify areas of strength and weakness within blockchain software to maintain high-quality, reliable, and efficient software systems.
### Research Questions
Given that metrics can be derived from publicly available version control software, the following research questions are investigated with respect to software health in the blockchain domain.
* What facets make up a high-level definition of software health, without using or defining specific metrics?
* Given that software robustness is a contributing component of health, what are the factors that contribute to software robustness?
* How does community interest, or general popularity, fit into the definition of health and what are the component metrics?
* What is the nature of the relationship between components in the definition of software health?
In order to address the research questions posed, this study utilizes a combination of methods including a literature review to inform a definition of health, and applying a framework for analysing open source data via exploratory factor analysis and structural equation modelling. Through these methods, the study aims to derive specific indicator metrics that contribute to software health. The literature review provides a foundation for the study, while the framework for health operationalization guides the collection and analysis of open source data. The use of EFA and SEM allows for the identification of underlying factors and the exploration of relationships between variables. Overall, this approach provides a rigorous and systematic method for investigating the research questions at hand.
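As a rough illustration of how such a pipeline can be assembled, the following minimal sketch shows an exploratory factor analysis followed by a structural model in Python. It is purely hypothetical rather than the implementation used in this study, and it assumes the third-party packages pandas, factor_analyzer, and semopy, as well as illustrative column names (stars, forks, commits, and so on).

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from semopy import Model

# Hypothetical table of per-repository metrics; the column names are placeholders.
df = pd.read_csv("blockchain_repo_metrics.csv")

# Exploratory factor analysis: look for two latent factors among candidate indicators.
efa_cols = ["stars", "forks", "mentions", "criticality", "days_since_update", "rank", "geo_distribution"]
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(df[efa_cols])
print(efa.loadings_)  # indicator-to-factor loadings

# Structural equation model: Interest -> Engagement -> Robustness (illustrative specification).
spec = """
Interest   =~ stars + forks + mentions
Engagement =~ commits + contributors + pull_requests
Robustness =~ criticality + days_since_update + rank + geo_distribution
Engagement ~ Interest
Robustness ~ Engagement
"""
sem = Model(spec)
sem.fit(df)
print(sem.inspect())  # parameter estimates and fit information
```

In this lavaan-style syntax, the `=~` lines specify the measurement model (observed indicators loading on latent factors), while the `~` lines specify the hypothesized structural paths.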
### Contributions to the Field
The results of this study have several contributions to the broader scientific fields of information systems and software engineering.
1. Enhancing the existing literature on software health by providing a quantifiable definition of software health based on factors identified through exploratory factor analysis and structural equation modelling. This is an important contribution as a clear and measurable definition of software health can help researchers and practitioners to evaluate and compare software systems, and make data-driven decisions related to software development and maintenance.
2. A novel application of latent factor analysis to identify the factors that contribute to both community interest and software robustness in open source blockchain software projects. This approach provides a more comprehensive understanding of how these factors influence software health.
3. The proposed structural equation model offers a new way of thinking about software health and its various components, which can help project managers and developers identify areas for improvement and optimize resource allocation. This model represents an advancement in the field of software health analysis and provides the groundwork for future research.
4. In addition, the study makes a valuable contribution to the field by providing a publicly available dataset, which can be used by researchers and practitioners to further investigate software health in open source blockchain software projects. This dataset includes the entire set of metrics related to developer engagement, software robustness, and community interest; factors that influence software health model found in the study.
The remainder of the article is structured as follows: In Section 2 the background literature and related work in the fields of software health are discussed. This provides the necessary context for understanding the research questions and the contributions of the study. The scope is narrowed to blockchain software in Section 3. Section 4 presents research methodology, which includes a detailed description of the framework used to analyse open source data, the statistical methods employed in the study, and the data collection details are provided, including the sources of data, the criteria used for selection, and the steps taken to clean and preprocess the data. Results are presented in Section 5, including the findings from the latent factor analysis and the structural equation model. Implications of the results are discussed in Section 6, including the practical applications of the findings for stakeholders, the limitations of the research, and provides suggestions for future work in this area. Finally, the conclusion (Section 7) summarizes the main results and contributions of the paper.
## 2. Software Health
Before turning to the background literature and the definition of software health, we briefly describe open source software and what it means in the present context.
### Open Source Software
The term 'open source' emerged from free and open source software (FOSS) in 1998 when Netscape chose to release its web browser Netscape Communicator as FOSS1[(23)]. Bruce Perens was at Netscape during the code release and worked extensively to write a new definition of open source2,
the essence of which is that OSS (i) is free to distribute without royalties, (ii) has published source code, (iii) and can be modified by anyone with few conditions (Kumar et al., 2017). The more verbose ten-point version can be found on the opensource.org website3.
Footnote 3: [https://opensource.org/osd.html](https://opensource.org/osd.html)
In the context of this study there must be source code and version tracking information available for analysis. See Section 4.3 for data collection details. It is noted that there are different types of open source licenses, and this is not of concern to the present work. We do not make any assumptions or exclusions based on a project's software license. For example, Bitcoin was published under the MIT/X11 license, while Ethereum's source code is licensed under the GNU Lesser GPL. Both licenses allow for copying (forking) and republishing of code.
### Ecosystem Health as a Metaphor for Software Health
Defining health in the context of software, and further blockchain software, will first benefit from a view of health as seen in the life sciences. The health of natural ecosystems and their components, such as soil, water, flora, and fauna, is a pressing concern for the entire biosphere. As we seek to understand what constitutes natural ecosystem health, we may find it useful to draw on metaphors, even those from human medicine (Hernandez et al., 2017; Hernandez et al., 2017). Metaphor has a legitimate place in science as it can stimulate associations between seemingly unrelated phenomena and highlight their structural identity (Kumar et al., 2017).
The ecological metaphor has been used extensively to relate natural ecosystems to both business ecosystems (Hernandez et al., 2017; Hernandez et al., 2017; Hernandez et al., 2017), and software ecosystems (Hernandez et al., 2017; Hernandez et al., 2017; Hernandez et al., 2017; Hernandez et al., 2017), and allows parallels to be drawn on the basis of health. Both natural and software ecosystems are composed of interrelated components, such as species in natural ecosystems and projects in software ecosystems that exist in a competitive environment. Both ecosystems rely on biodiversity in order to thrive, and an underlying principle in both is that of adaptation and evolution of the components within the system in order to ensure its survival and continued success. By learning from the relationships in natural ecosystems, we can identify factors that are crucial to the sustainability and overall health in software.
A conceptual map illustrating the different ways the idea of health is defined across natural, business, software, and open source ecosystems is presented in Table 1. This map shows the key terms used to describe health in each ecosystem and categorizes them into three groups: sustainability, robustness, and niche fit. The terms were identified through a comparative analysis of ecosystem-specific literature, and provide insights into the challenges present when defining health.
Three concepts have been synthesized from the literature and will serve as our definition: Health, in the open source software context, is composed of three broad components: sustainability of day-to-day operations, robustness to stress, and niche occupation within the software's ecosystem.
### Sustainability
Sustainability in a natural ecosystem is also referred to as stability (Hernandez et al., 2017), vigour (Kumar et al., 2017), and productivity (Kumar et al., 2017), and generally refers to the ecosystem's ability to carry out the basic functions necessary for metabolism and growth. Indicators that an ecosystem is functioning include: primary productivity - how much growth is occurring, the nutrient base available, the diversity of species present, the amount of instability, disease prevalence, diversity of size spectra, and levels of contaminants (Hernandez et al., 2017). The list provides a first draft of metrics biologists can track to asses sustainability in a natural ecosystem.
If a natural ecosystem is an interrelated collection of species, a business ecosystem is an interrelated collection of businesses across industries that are both in competition and cooperation with each other [37]. Further, the health of that business ecosystem is an aggregate of the stability needed to be profitable and the stability needed to grow. Sustainability here is the productivity that comes from general tasks employees undertake to maintain business operations [30]. Just as base metabolic resources are required to keep species in an ecosystem in competition, healthy financial resources can sustain a business ecosystem to provide the opportunity for innovation and growth [27].
Drawing from the definition by Jansen et al. [32], a software ecosystem (SECO) is a group of stakeholders "functioning as a unit and interacting with a shared market for software and services." Software ecosystems as a class of business ecosystems often operate through a common technological platform, such as Apple's iOS, and participate in the affiliated markets, such as Apple's App Store.
Sustainability within a SECO is doing enough of the minimum viable activity to run day-to-day operations. When done well this productivity allows a software business to compete and possibly thrive. When done poorly a lack of productivity will result in losing market share to a competitor. A productive SECO's outputs include software development activities such as writing code, reviewing, and feature implementation [34]. Ensuring the sustainability of the SECO is a complex process that requires significant community effort and resources from the ideation stage to the version release
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Ecosystem & Classifier 1 & Classifier 2 & Classifier 3 & Year & Source \\ \hline Natural & Productivity & Absence of disease & Diversity & 1988 & [52] \\ & Sustainability & Integrity; Stress capacity & & 1989 & [47] \\ & Vigour & Resilience & Organization & ’92,’98 & [10, 48] \\ \hline Business & Growth & Profitability & Stable & 1993 & [37] \\ & Productivity & Robustness & Niche creation & 2004 & [30] \\ & Financial health & Centrality; Visibility & Variety of partners & 2013 & [27] \\ \hline SECO\({}^{*}\) & Productivity & Robustness & Niche creation & 2010 & [59] \\ & Productive & Endure & Variable & 2013 & [34] \\ \hline OSS\({}^{\dagger}\) & Liveness of users/devs & & & 2007 & [60] \\ & Software development & Long-term & & 2010 & [7] \\ & Vigour & Resilience & AMI\({}^{\star}\) & 2012 & [45] \\ & Sustainability; & Resource health & Network health; & 2014 & [18] \\ & Maintenance capacity & & Process maturity & & \\ & Healthy community & Healthy commons & & 2015 & [39] \\ & Community & Code; Resources & & 2018 & [33] \\ & Sustainability & Survivability & & 2021 & [22] \\ \hline _Concept_ & _Sustainability_ & _Robustness_ & _Niche Fit_ & & \\ \hline \hline \end{tabular} \({}^{*}\) software ecosystem
\({}^{\dagger}\) open source software
\({}^{\star}\) average mutual information
\end{table}
Table 1: Conceptual map illustrating the different ways in which the concept of health is defined across natural, business, software, and open source ecosystems. This map shows the key terms used to describe health in each ecosystem and categorizes them into three groups: sustainability, robustness, and niche fit.
stage (Kumar et al., 2017). In modern software development, a product is no longer viewed as a static entity after its initial release; rather, there is a constant need for user feedback, bug fixing, and iteration, all of which are activities that contribute to sustainable open source development (Kumar et al., 2017; Kumar et al., 2017).
Not only does OSS require financial resources and software management as in business ecosystems, but there is the added component of having the project sustained by the community (Bahdan et al., 2017). Failure in any of these areas can leave a project abandoned and thus sustainability is a base component of OSS development. Thus, any efforts that ensure ongoing day-to-day software development and its related outputs can be viewed as sustainable (Kumar et al., 2017; Kumar et al., 2017).
The metrics pertaining to sustainability have been organized in Table 2. Language variability that is present in analysing the literature show the terms productivity and engagement indicating the same construct. Here we use _Engagement_ (as shown in Figure 1) to mean any activity that is undertaken to sustain the software project in day-to-day operations. The second sub-classification of sustainability is _General Interest_ or popularity. Interest has no parallel in the natural ecosystem literature, but emerges as important to health and success within the software domain in many studies (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017).
Tables 2, 3, and 4 show the metrics from the literature review, but not all of these are collected and used in the present study. Many of the metrics are lacking a suitable definition, have no hypothesized operationalization using version control information, or are not applicable to the present study of OSS health through publicly available data. Additionally, some metrics concentrate on motivations of individuals that are very difficult to determine without targeted survey data, or are components of a business process such as marketing and financial data that does not apply or is proprietary when limiting the scope to OSS. The selection of specific metrics is a primary goal of this study and discussed further in Section 4 - Methods.
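As a practical aside, many of the repository-level counts listed in Table 2 are exposed directly by the public GitHub REST API. The snippet below is a simplified, hypothetical illustration rather than the data-collection procedure of this study: the `/repos/{owner}/{repo}` endpoint and the response fields `stargazers_count`, `forks_count`, `open_issues_count`, and `pushed_at` are part of the public API, while the repository list and derived metric names are placeholders.

```python
import requests

def repo_metrics(owner: str, repo: str, token: str | None = None) -> dict:
    """Fetch a few raw, health-related counts for one repository from the GitHub REST API."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:  # an access token raises the API rate limit
        headers["Authorization"] = f"Bearer {token}"
    response = requests.get(f"https://api.github.com/repos/{owner}/{repo}", headers=headers, timeout=30)
    response.raise_for_status()
    data = response.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "last_pushed": data["pushed_at"],
    }

# Example usage with a small placeholder list of well-known blockchain repositories.
for owner, repo in [("bitcoin", "bitcoin"), ("ethereum", "go-ethereum")]:
    print(repo_metrics(owner, repo))
```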
### Robustness
Once sustainability has been established the ecosystem can remain productive over time only if it can sustain shocks that threaten its viability. Biodiversity is important to natural ecosystems and helps to enable recovery from shocks. This is through many species performing the same function such as photosynthesis or decomposition, and also by individual species having unique environmental response to threats when compared to their close relatives (Kumar et al., 2017). Costanza (Costanza, 2010) and Rapport et al. (Rapport et al., 2017) both identify resilience as the ability of an ecosystem to overcome disruption to its local environment.
Environmental factors affect all members of the ecosystem and are generally out of control of the community itself. External risks are more difficult to identify such as misaligned product-market
Figure 1. Summarizing the language used to define health as composed of sustainability, robustness, and niche fit. Further, sustainability is composed of engagement or productivity and interest or popularity.
fit, the competitor landscape, technological innovation, and the regulatory and legal landscape [7]. Many of these dramatic shocks are difficult to quantify and out of scope when considering project level software.
Table 3 lists the metrics from the literature contributing to robustness within a SECO. Robustness in software includes demographic factors: age and size of the population, or of the software organization. Both here are positive indicators - long time contributors are more likely to keep contributing, and long-standing projects have managed to survive likely absorbing prior shocks. Resilience can be in the form of geographic location to avoid some of the environmental risks affecting all participants; and market share information to gauge external validation for the project. Validation from within the community comes from others using, incorporating, and then depending on your software, which can be quantified with a criticality measure [4].
### Niche Fit
A third characteristic in the definition of health is that of occupying a relevant niche in the ecosystem. This is called organization in the natural ecosystem [48], meaning a species is present at all levels and functions of the aggregate system.
This is paralleled as niche creation in business ecosystems [30], and software ecosystems [31]. In areas with ongoing competition among similar products, projects must pivot and identify a unique niche to achieve success. At the software project level it has been suggested that product fit can be quantified by audience niche, programming language niche, and operating system niche [7]. To
\begin{table}
\begin{tabular}{l l l} _Engagement \& Productivity_ & & \\ \hline Metric & Description and alternative labels & Supporting literature \\ \hline Bug fix rate & How quickly bugs are noted and fixed; also issues or rate of & [7, 12, 22, 31, 40, 42, 45, 49, 51, 60] \\ Comments & Volume; including code review, pull request, \& issues-based & [22, 28, 31, 49, 58, 60] \\ Commits & Count of commits to codebase; also lines of code, method & [19, 22, 31, 40, 42, 55] \\ & count, token count, project size & \\ Contributors & Count of contributors, includes developers; also community & [7, 12, 22, 31, 51, 58, 62] \\ Pull requests & Count of pull requests initialized; also PRs merged, or closed & [7, 49, 58] \\ Hours worked* & Productivity measure of contributors’ hours spent & [22] \\ Developer churn* & Developers entering and leaving the project/SECO & [19, 55] \\ Releases* & Count of version releases published & [19, 45] \\ Financial resources* & Related to business operations & [22, 31] \\ \hline \multicolumn{3}{l}{_Interest \& Popularity_} \\ \hline Dependencies & The number of software dependencies a project has, for & [51] \\ & example Bitcoin relies on the GCC compiler collection & \\ Forks & Count of number of times the software has been forked & [31, 40, 42, 51] \\ Rank & Ranking of the project in the broader web, for example number of search engine hits, or Alexa page ranking; also & [12, 19, 22, 31, 51, 60] \\ & & \\ Stars & Count of total stars on GitHub; also tags or watchers & [40, 42, 51, 58] \\ \hline \end{tabular}
* Metric is considered out of scope and not collected in the present study.
\end{table}
Table 2. Metrics relating to sustainability in the OSS health literature can be split into engagement & productivity, and interest & popularity. Often metrics have different descriptions in the literature, for example the number of downloads can produce a popularity ranking, as can a web ranking.
have the best chance at occupying a niche, a project may also choose to support multiple natural languages, push applicability to a variety of markets, and be open to various contributor roles [(31)]. These niche metrics are shown in Table 4.
Although Table 4 lists items that are reasonably simple to tabulate, such as programming language and operating systems, it is not clear how this helps position a project within its local ecosystem niche. Additionally, it is difficult to contextualize a project in the wider landscape when focussing on individual projects, and thus niche fit is an aggregate measure that requires a broad view of the whole ecosystem. Therefore, niche-related metrics are not within the scope of the study.
### Summary of Software Health
To conclude on open source software health, a high-level definition is that health is composed of sustainability, robustness, and the niche occupation of the project as shown in Figure 1. This is
\begin{table}
\begin{tabular}{l l l} _Niche Fit_ & & \\ \hline Metric & Description and alternative labels & Literature \\ \hline SECO project variety & Variety in types of projects in the ecosystem demonstrating & [(31)] \\ & available niches & \\ Platform variety & Support for a variety of languages allows for new contributors & [(7; 31)] \\ & to participate, both natural and programming; also variety in & \\ & operating systems supported & \\ Market variety & Cross-over applicability of the project to different markets & [(31)] \\ Contributor types & Variation in available contributor roles & [(31)] \\ Average mutual information & A measure of task specialisation and the coordination of specialists & [(45)] \\ Niche size & More member organizations within a niche add legitimacy & [(7; 22)] \\ \hline \end{tabular}
\end{table}
Table 4. Metrics relating to the niche occupancy of a project within an ecosystem in the health literature. Although part of the health literature, all of these metrics are out of scope for collection in the present study.
\begin{table}
\begin{tabular}{l l l} _Robustness_ & & Literature \\ \hline Metric & Description and alternative labels & Literature \\ \hline Developer longevity & Time spent contributing to a single project; also project age & [(7; 22; 42)] \\ Geographic distribution & Global geographic distribution of the contributors & [(58)] \\ Market share & Ratio of a project’s share to the total local ecosystem & [(31)] \\ Project criticality & Risk associated with project centrality and dependency; also truck & [(22; 31; 58)] \\ & factor & \\ Business metrics* & Including management, process development, and systems development; also switching costs & [(22; 31)] \\ End user metrics* & Including count, longevity, loyalty, and satisfaction & [(22; 31; 62)] \\ Contributor metrics* & Including centrality, reputation, satisfaction, cross org participation; & [(31; 60)] \\ & also measures of centrality in wider SECO and partnerships & \\ Code quality* & As relates to code metrics such as cyclomatic complexity & [(22; 42)] \\ Knowledge creation* & Knowledge added to SECO and artefact creation & [(31)] \\ \hline \end{tabular}
* Metric is considered out of scope and not collected in the present study.
\end{table}
Table 3. Metrics relating to robustness in the OSS health literature. Many robustness measures are difficult to gauge or subjective in nature such as code quality and knowledge creation.
similar to the health of a natural ecosystem depending on its productivity and growth, resilience, and organisation, respectively.
Within the OSS class of ecosystems, sustainability involves the engagement of developers and other contributors to sustain the ongoing operations necessary to produce software, and also a public interest component where a popular project can attract and retain new talent. The goal of this work is to determine how the metrics available from OSS version control software relate to the definition of health in the scope of blockchain software.
## 3. Blockchain Health
Bitcoin has inspired a whole new industry in blockchain software development since its release in 2008 (Krishnan et al., 2017). This has been tracked extensively by the website CoinMarketCap 4 (CMC) beginning in 2012. Presently, CMC tracks over 20,000 tokens and projects, all of which have a blockchain pedigree.
Footnote 4: [https://coinmarketcap.com](https://coinmarketcap.com)
Blockchain software can be defined collectively by the myriad projects listed on CMC including: cryptocurrencies, platforms, protocols, Web3.0 applications, stablecoins, and support libraries of smart contracts. This collection of software is markedly different from the surrounding software industries in the landscape, for example, utilities, databases, web, and mobile.
Blockchain software presents a stronger emphasis on security and reliability than non-blockchain projects (Birn et al., 2017). Blockchain-based projects may be more likely to prioritize security and reliability in their development processes. This focus on quality may be driven by the high cost of defects in blockchain software due to the direct financial risk of failure, as well as the complex and unique tools used in blockchain development. Additionally, blockchain developers have to work in a more decentralised environment; they skew younger in age, are more educated, and are more male than in other software industries (Birn et al., 2017).
Bitcoin and newer blockchain projects are highly open-sourced, with the present study finding 69.8% of the top 600 blockchain projects having publicly visible code repositories on GitHub. This open-sourced, decentralised ethos affects contributors' preferences too: most projects are started entirely by volunteers motivated by tokenised incentives (as opposed to commercial ones (Birn et al., 2017)), and these developers will self-select which projects to contribute to based on similar values (Birn et al., 2017).
Health in blockchain projects and blockchain ecosystems has been studied in a limited manner. One study investigated the health of the Bitcoin ecosystem as defined through the categories: popularity, complexity, activity, and age (Krishnan et al., 2017). The authors include code-based metrics within the complexity category, which are left out of many other studies on health; however, their primary indicators are heuristic-based, chosen without rationale. There are no known studies on the health of other blockchain projects such as Ethereum or Solana, or on wider collections of projects and their ecosystems. This absence of research on blockchain software, both at the individual developer level and at the software project level, is the focus of the present study.
### Developer Engagement
Developer engagement is a component of software sustainability as shown in Figure 1 and can be described as the day-to-day operations to create and maintain software. In the business sense this is called productivity, and in natural ecosystems involves the necessary metabolic processes for growth.
Developer engagement as a stand-alone construct is known to be important (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) but does not itself have a clear index of the components that make up engagement. Software community engagement is a subjective term referring to how people interact with and contribute to a project, and
includes both coding and non-coding activities: managing the community, development, documentation, and participating in discussions (Krishnan et al., 2017).
Previous work by the authors focused on OSS blockchain engagement, finding that the latent factor of developer engagement can be determined with four indicator metrics: commits, comments, pull-requests, and authors (Krishnan et al., 2017). The present work extends the _Engagement_ dimension (see Figure 2) to include metrics for _Interest_ and _Robustness_ using similar methods (factor analysis) and new methods (confirmatory factor analysis). The research methodology is discussed presently.
## 4. Methods
The research methodology applies an Open Source Ecosystem Health Operationalization (OSEHO) framework by Jansen (Jansen, 2017) modified to assess individual projects within the blockchain ecosystem. This framework is integrated with the work of Goemminne and Mens (Goemminne and Mens, 2017) on Analysing Open Source Software Ecosystems and involves the steps summarized in Table 5.
First the goal of the study is selected to find the metrics that can form the constructs of _Robustness_ and _Interest_. Next, the scope is narrowed to blockchain projects. Step Three is to determine the possible metrics that relate to the concepts through literature review (Section 2). Notably, most of these metrics will be out of scope, and so in Step Four the practical selection process is made given
\begin{table}
\begin{tabular}{l l l} \hline \hline Step & Action & Description \\ \hline
1 & Set goals & Determine indicator metrics for _Interest_ and _Robustness_. \\
2 & Select ecosystem scope & Limited to blockchain open source software. \\
3 & Select metrics & As by literature review in Section 2: Software Health. \\
4 & Assess available data & Table 6 shows the collected metrics. \\
5 & Data extraction & See Section 4.3: Dataset. \\
6 & Data post-processing & See Section 4.3.1: Data Processing. \\
7 & Statistical analysis & See Section 5.1: Exploratory Factor Analysis, and Section 5.2: Structural Equation Modelling. \\
8 & Reporting & See Section 5: Results. \\ \hline \hline \end{tabular}
\end{table}
Table 5. Research methodology combining Frameworks by Jansen (Jansen, 2017) and Goemminne and Mens (Goemminne and Mens, 2017).
Figure 2. Exploratory factor analysis applied to metrics representing developer engagement. The number of pull requests as a monthly average over the previous three months has the strongest influence on engagement with a factor loading of 0.96.
resource constraints such as available data and time. These are detailed in Table 6. Steps Five and Six are to collect the data and prepare the dataset; details such as sourcing and cleaning are in Sections 4.3-4.3.1. The analysis is done via the statistical methods in Section 5.1 and Section 5.2. Lastly, reporting of results follows in Section 5: Results.
### Exploratory Factor Analysis
Exploratory factor analysis (EFA) is a multivariate statistical technique used for determining underlying constructs that are present in a dataset composed of a large number of variables. The constructs, or factors, can represent groupings within the data that are hypothesized, or known, but not directly observable (Bickel and Rubin, 1986). In the present context this is the idea of _Robustness_ and _Interest_ as applied to an open source software project, and the large number of variables are the possible set of metrics identified in Tables 2-4. _Latent variables_ and _factors_ are terms used interchangeably and represent inherent characteristics identified by the researcher that do not have a well known associated metric. EFA is common in fields like psychology and economics, where participants are given a survey and the results are analysed (Krause et al., 2017), but little work has been done applying the technique to software engineering. OSS is a human-led endeavour combining social coordination with technical innovation, and as a socio-technical field it is fit for application of these techniques to capture inherent structure such as that identified by _Engagement_ in Figure 2.
EFA assumes that the latent construct (e.g. _Robustness_) is responsible for the correlation of the indicator variables. In practice this allows the researcher to conclude with a statement of influence, e.g.: "developer engagement is positively related to pull requests." This method does not assume perfect measurement of observed variables and allows the factor to explain what the indicators have in common; what is not held in common is attributed to measurement error (Bickel and Rubin, 1986).
The EFA approach is used here because our goal is to identify latent constructs of _Interest_ and _Robustness_ for building a theory of OSS blockchain health (Krause et al., 2017). Additionally, EFA allows for correlation between latent variables, which is to be expected in the confirmatory factor analysis stage of structural equation modelling (next). We follow the six-stage framework of Hair (Hair, 1975) for EFA, and continue with the six stages for SEM.
### Structural Equation Modelling
Exploratory factor analysis can be used to find underlying structure in a collection of variables that possibly represent _Interest_ and _Robustness_. To assess the nature of the relationship between these two latent factors, structural equation modelling (SEM) can be applied. SEM is employed both for the development of the measurement model and the evaluation of its structural efficacy. The structure of the relationship emerges similarly to a regression and determines the strength of the relationships between constructs, although with SEM there is no predetermined directionality. Taking empirical data (from GitHub) and testing it against a theoretical model is the key benefit of this technique (Bickel and Rubin, 1986).
Although prevalent in sociology, economics, and psychology, very few multivariate statistical approaches such as EFA and SEM have been applied to OSS. Chengalur-Smith et al. (2017) used SEM to model longitudinal project sustainability based on an ecological model; Abdulhassan Alshomali (2018) modelled trends in GitHub programming languages via SEM; Raja and Tretter (2018) developed a regression model for software viability; and Schroer and Hertel (2018) used SEM via partial least squares path analytic models to investigate the structure of engagement tasks of Wikipedia volunteers. These studies indicate the applicability of SEM applied to OSS data collection, as well as the opportunity based on the gap in the literature.
### Dataset
Data for public blockchain projects are readily available for collection and analysis from GitHub through the web interface, programmatically through the application programming interface (API), and in raw archival form from the GitHub Archive.
All GitHub data from February 2011 has been archived in JSON Lines format (JSON object on every line) and is available for public download from the GHArchive5 project. Every JSON object contains the metadata and payload for one GitHub event. For example, when a repository is starred an event is emitted of type: stargazer. Similarly an event is created when a pull request (PR) is created, and the JSON contains all the details about who created it, when it was created, and the contents of the PR object.
Footnote 5: [https://www.gharchive.org/](https://www.gharchive.org/)
GitHub currently has 17 event types6, seven of which are used to collect relevant data: WatchEvent, ForkEvent, PushEvent, PullRequestEvent, IssueCommentEvent, CommitCommentEvent, and PullRequestReviewCommentEvent. All the events have metadata with author username, date, and time which can be used for further metrics.
Footnote 6: [https://docs.github.com/en/developers/webhooks-and-events/events/github-event-types](https://docs.github.com/en/developers/webhooks-and-events/events/github-event-types)
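As an illustration of the extraction step, the sketch below filters one GHArchive hour file down to the seven event types of interest. It is a minimal sketch, not the original pipeline: the file naming pattern and the JSON fields used (`type`, `repo.name`, `actor.login`, `created_at`) follow GHArchive's public format, while the downstream handling is left open.

```python
import gzip
import json
import urllib.request

# The seven event types used to derive engagement, interest, and robustness metrics.
RELEVANT = {
    "WatchEvent", "ForkEvent", "PushEvent", "PullRequestEvent",
    "IssueCommentEvent", "CommitCommentEvent", "PullRequestReviewCommentEvent",
}

def filter_hour(url):
    """Return (type, repo, actor, created_at) tuples for relevant events in one archive hour."""
    raw = urllib.request.urlopen(url).read()              # one gzipped JSON Lines file per hour
    records = []
    for line in gzip.decompress(raw).decode("utf-8").splitlines():
        event = json.loads(line)                          # one GitHub event per line
        if event.get("type") in RELEVANT:
            records.append((event["type"], event["repo"]["name"],
                            event["actor"]["login"], event["created_at"]))
    return records

# File names follow the pattern YYYY-MM-DD-H.json.gz
events = filter_hour("https://data.gharchive.org/2022-03-01-15.json.gz")
```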
GHArchive data was downloaded from 01-February-2011 up to 26-March-2022, consisting of 2.36 TiB of information in total. This forms the basis of the GHArchive-sourced metrics shown in Table 6. The compressed JSON is then inserted into a single-table ClickHouse database (Shen et al., 2019). ClickHouse7 is an open source column-oriented database management system designed for online analytical processing. This is ideal for large datasets that involve mostly read-only queries and batch updating. The database contains 5.6 billion records, occupies 430 GiB, and is accessed with structured query language (SQL) queries through a command line or a Python module. It runs on a dedicated Linux Ubuntu (version 20.04.4) machine with ClickHouse's command line client and server (versions 22.3.3.44) installed.
Footnote 7: [https://clickhouse.com/](https://clickhouse.com/)
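To illustrate how per-project metrics can be read back out of the event store, the sketch below counts pull-request and comment events and distinct authors per repository. It assumes the `clickhouse-driver` Python client and a hypothetical single table `github_events(event_type, repo_name, actor_login, created_at)`; the actual schema used in the study is not specified here.

```python
from clickhouse_driver import Client

client = Client("localhost")  # ClickHouse server on the analysis machine

# Hypothetical table layout: github_events(event_type, repo_name, actor_login, created_at)
QUERY = """
    SELECT
        repo_name,
        countIf(event_type = 'PullRequestEvent') AS pull_requests,
        countIf(event_type IN ('IssueCommentEvent',
                               'CommitCommentEvent',
                               'PullRequestReviewCommentEvent')) AS comments,
        uniqExact(actor_login) AS authors
    FROM github_events
    WHERE repo_name IN %(repos)s
      AND created_at >= %(since)s
    GROUP BY repo_name
"""

rows = client.execute(QUERY, {"repos": ("bitcoin/bitcoin", "ethereum/go-ethereum"),
                              "since": "2021-12-26"})
for repo, pull_requests, comments, authors in rows:
    print(repo, pull_requests, comments, authors)
```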
To identify relevant blockchain projects, the top 600 are gathered using the CoinMarketCap API by ranking of market capitalisation as of March-2022. Details retrieved include project _name_, _rank_, _website_, and _location of source code_ if available. The data is collated into a Pandas dataframe (version 1.4.2) for Python (version 3.8.10) via JupyterLab (version 3.3.3). The CMC data provides a rank based on the total market capitalisation of a given project, which can be a proxy for financial resources. The website information is then used with Amazon's Alexa API to get a global web ranking called Alexa Traffic Rank8.
Footnote 8: Run by Amazon’s subsidiary Alexa Internet, Inc., the service was shuttered on May 01, 2022.
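A rough sketch of the CoinMarketCap retrieval step is shown below, assuming a CMC Pro API key and the `/v1/cryptocurrency/listings/latest` endpoint; the response field names follow CMC's documented format but should be treated as assumptions here, and the project website and source-code location require a separate per-project call.

```python
import pandas as pd
import requests

API_KEY = "YOUR-CMC-API-KEY"  # hypothetical credential
URL = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest"

resp = requests.get(URL,
                    headers={"X-CMC_PRO_API_KEY": API_KEY},
                    params={"start": 1, "limit": 600, "convert": "USD"})
resp.raise_for_status()

# Keep the fields needed downstream; website and code location need a second, per-project call.
listings = resp.json()["data"]
cmc = pd.DataFrame([{"name": c["name"], "symbol": c["symbol"], "cmc_rank": c["cmc_rank"]}
                    for c in listings])
```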
Two further metrics are pulled from the Open Source Software Foundation (OSSF): project criticality score, and the number of mentions. The criticality score is a metric that identifies how critical a project is within the open source ecosystem (Brands, 2011). Scored in the range \([0,1]\), a project of score 0 relies on no external software, among other factors, while a project of score 1 is deemed critically important. For example, the highest criticality project overall is Linux while the highest criticality blockchain project is Bitcoin. Mentions is a metric to gauge which projects are popular among contributors and is based on a count of the number of times a project appears in the text of commit messages and their comments. A Python script is written to access GitHub data through the API via the Criticality Score9 command line tool (version 1.0.7).
Footnote 9: [https://github.com/ossf/criticality_score](https://github.com/ossf/criticality_score)
Geographic distribution was introduced as a measure of robustness from Table 3. This is derived from timezone data retrieved from git history using Perceval (Percival, 2014) (version 0.17.0) via a Python script. The timezone data represents the times of software commits made by the project's contributors and produces a mapping of activity based on coordinated universal time (UTC). To evaluate a
project's geographic distribution, we compare it to a median distribution of the top 100 blockchain projects over the previous six months. This median distribution is a representation of the typical geographic distribution of the top 100 blockchain projects in terms of software commit activity. The comparison is done by calculating the root mean squared error (RMSE), which is a measure of the difference between the project's geographic distribution and the median distribution. Projects with a low RMSE have a distribution that matches the community and are less prone to geographic shocks, which refer to unexpected events that could disrupt the project's contributors' ability to work together. A high RMSE likely indicates that a project operates in a single timezone and could exhibit single-point-of-failure risks, which refer to the risk that the project's development could be disrupted if a key contributor is unable to work.
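A small sketch of this comparison is shown below, assuming each project's commit activity has already been binned into a 24-bucket UTC profile and that the median profile of the top 100 projects is available as the reference distribution; the function names are illustrative.

```python
import numpy as np

def activity_profile(utc_offsets):
    """Normalised 24-bucket activity histogram from a list of commit UTC offsets (hours)."""
    counts = np.bincount(np.asarray(utc_offsets) % 24, minlength=24)
    return counts / counts.sum()

def geographic_rmse(project_offsets, top100_profiles):
    """RMSE between a project's activity profile and the median profile of the top 100 projects."""
    median_profile = np.median(np.stack(top100_profiles), axis=0)
    project_profile = activity_profile(project_offsets)
    return float(np.sqrt(np.mean((project_profile - median_profile) ** 2)))
```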
#### 4.3.1. Data Processing
In many cases the source code location is incorrectly reported, and so all project source code locations are manually verified. Where the location points to an organization on GitHub, the repository (repo) with the _reference_, or _core_, or _node_ implementation is chosen. If this is not the case (perhaps it is not a blockchain), then the _contract_ repository is chosen. To disambiguate between competing repos, the one with the most stars is chosen. Often the core repo is also the one with the most stars. When there are two implementations in different languages (e.g., Go and Rust), the one with the most stars takes preference. Forked libraries are not considered; if the original library is in the top 600, it is counted.
As an example of verification, the Synthetix ecosystem has six pinned repositories and is listed by CoinMarketCap as having code at [https://github.com/Synthetixio](https://github.com/Synthetixio); however, this is the organization landing page and contains links to all repos. The main platform is hosted at [https://github.com/Synthetixio/synthetix](https://github.com/Synthetixio/synthetix), which is manually verified.
The top 600 blockchain projects are the starting point and the dataset is cleaned with the following exclusions:
1. The version control data cannot be accessed for analysis. Eight have a repo that's missing (404 error) indicating it has been deleted or moved; 78 are listed but private; 83 do not have a repo listed (and are likely private).
\begin{table}
\begin{tabular}{l l l} \hline \hline Metric & Operationalization & Source \\ \hline Stars & Total count of stars since the project’s inception & GHArchive\({}^{\star}\) \\ Forks & Total count of forks since the project’s inception & GHArchive \\ Alexa rank & Amazon’s Alexa global web rank based on the project’s website & Alexa \\ CMC rank & CoinMarketCap’s rank based on the project’s market capitalisation & CMC\({}^{\star}\) \\ Mentions & Total count of project mentions in the commit history & OSSF\({}^{\dagger}\) \\ Geographic distribution & Activity based on timezone distribution & custom \\ Criticality score & Score based on the project’s influence and importance & OSSF \\ Longevity & Average number of days the developers have been involved & GHArchive \\ Last updated & Number of months since the project has been updated & GHArchive \\ Median response time & Median number of days for issues to be closed & GHArchive \\ Average response time & Average number of days for issues to be closed & GHArchive \\ \hline \hline \end{tabular} \({}^{\star}\) GHArchive archives all GitHub data.
\({}^{\star}\) CMC is CoinMarketCap.
\({}^{\dagger}\) OSSF is the Open Source Software Foundation.
\end{table}
Table 6. Metrics used for the exploratory factor analysis (Section 4.1), with a brief description of the operationalization and the data source.
2. Four are hosted on GitLab10, and two on Bitbucket11. These GitLab and Bitbucket ones are excluded because they are a small percentage of the whole (1.4%) and would require separate infrastructure to access the code bases. Footnote 10: [https://gitlab.com/](https://gitlab.com/)
3. Six are duplicates, where the project points to the same code base as a related project, e.g. KavaSwap (SWP) and Kava (KVA); only the highest-ranked project is included.
Of the 419 publicly available repos on GitHub, 26 have no contribution history, indicating the repo was created or the code was copied there and never updated. These are excluded as they can be considered dead by ecological standards or stagnant by software measures.
The dataset under investigation contains missing data for 15 projects, spanning three categories: last updated, mentions, and criticality score. Given that these missing values represent 3.9% of the entire sample (15/384), implementing a data imputation strategy can be justified. Two additional projects have a missing Alexa Rank (0.5%). The chosen method for this task is mean substitution, a common and widely recognized technique for dealing with missing data (Kav
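A minimal illustration of the mean-substitution step on the affected columns is shown below; the column names mirror Table 6 and are hypothetical identifiers for this sketch.

```python
import numpy as np
import pandas as pd

def impute_with_mean(df: pd.DataFrame, columns) -> pd.DataFrame:
    """Replace missing values in the given columns with each column's mean (mean substitution)."""
    out = df.copy()
    for col in columns:
        out[col] = out[col].fillna(out[col].mean())
    return out

# Toy example with the affected columns (hypothetical names mirroring Table 6).
projects = pd.DataFrame({
    "last_updated":      [0, 2, np.nan, 5],
    "mentions":          [120, np.nan, 40, 10],
    "criticality_score": [0.81, 0.55, np.nan, 0.32],
    "alexa_rank":        [1.2e5, 3.4e6, 8.8e5, np.nan],
})
projects = impute_with_mean(projects, projects.columns)
```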
## 5. Results

The second and third research questions, **RQ2**: Given that software _Robustness_ is a contributing component of health, what are the factors that contribute to software robustness? and **RQ3**: How does general popularity, or _Interest_, fit in? are answered by the exploratory factor analysis in Section 5.1.

**RQ4**, on the nature of the relationship between the components contributing to a definition of software health, is addressed in Section 5.2. Finally, the model validation is in Section 5.2.1.
### EFA for Interest & Robustness
Starting with the data in Table 7, a scree plot and parallel analysis were examined to determine the number of proposed factors. Both the standard scree plot and the parallel analysis indicate a preference for two factors, with eigenvalues \(\lambda_{1}=3.11\) for the first factor and \(\lambda_{2}=1.84\) for the second. A parallel analysis provides more robust reasoning, as it calculates the eigenvalues of the observed data and compares them to the eigenvalues of randomly generated data; significant deviation means there is grounding for grouping by factors. The simulated groups from the parallel analysis have \(\lambda_{1}=0.52\) and \(\lambda_{2}=0.23\).
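For reference, the parallel-analysis criterion can be sketched directly: the eigenvalues of the observed correlation matrix are compared against the average eigenvalues of random data of the same shape, and factors are retained while the observed eigenvalue exceeds the simulated one. The snippet below is a minimal illustration of that idea, not the exact procedure used in the study.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Compare observed correlation-matrix eigenvalues with those of random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    simulated = np.zeros(p)
    for _ in range(n_sims):
        random_data = rng.standard_normal((n, p))
        simulated += np.sort(np.linalg.eigvalsh(np.corrcoef(random_data, rowvar=False)))[::-1]
    simulated /= n_sims
    n_factors = int(np.sum(observed > simulated))  # retain factors while observed > simulated
    return observed, simulated, n_factors
```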
Factor analysis was carried out with the Psych package (version 2.2.3) in **R** (version 4.0.2). The computational method used to estimate the factors is maximum-likelihood (ML), known to perform well when the factor-variable relationships are strong. The principal-axis method was also used for comparison purposes, as it is ideal for non-normality and small sample sizes (Kolmogorov, 1995), with no significant difference. Factor rotation is done with the GPArotation package (version 2022.4-1). The Varimax factor rotation method maximizes the variances of the loadings within the factors. This can help with structure for two or more factors.
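The analysis itself was run with the R psych package; a rough Python analogue of the same workflow (maximum-likelihood extraction with varimax rotation) is sketched below using the `factor_analyzer` package, with hypothetical column names.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(metrics: pd.DataFrame, n_factors: int = 2):
    """EFA with ML extraction and varimax rotation; metrics has one row per project."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="ml")
    fa.fit(metrics)
    loadings = pd.DataFrame(fa.loadings_, index=metrics.columns,
                            columns=[f"Factor {i + 1}" for i in range(n_factors)])
    communalities = pd.Series(fa.get_communalities(), index=metrics.columns, name="h2")
    ss_loadings, proportion, cumulative = fa.get_factor_variance()
    return loadings, communalities, ss_loadings, cumulative
```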
A first iteration of EFA is carried out and both _average_ and _median response time_ do not load onto Factor 1, having a mild negative influence. They load strongly on an independent factor consisting of just themselves, which does not meet the criteria for inclusion as they measure roughly the same thing. They are then excluded as part of the EFA iteration process. This is discussed further in Section 6. The Bayesian information criterion (BIC) is a comparator between models, and the BIC improves significantly from 276.252 to \(-36.737\) with their exclusion from the analysis. Table 8 shows the EFA results.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Indicator & Factor 1 & Factor 2 & \(h^{2}\) \\ \hline Forks & **0.988** & 0.137 & 0.995 \\ Stars & **0.970** & 0.166 & 0.968 \\ Mentions & **0.885** & 0.076 & 0.790 \\ Criticality score & 0.135 & **0.988** & 0.995 \\ Last updated & \(-0.015\) & **0.705** & 0.498 \\ CMC rank & 0.169 & **0.373** & 0.167 \\ Geographic distribution & 0.082 & **0.369** & 0.143 \\ Longevity & 0.075 & 0.237 & 0.062 \\ Alexa rank & 0.163 & 0.104 & 0.037 \\ \hline _SS loadings_ & _2.787_ & _1.868_ & \\ _Cumulative variance_ & _0.310_ & _0.517_ & \\ _Proportion explained_ & _0.599_ & _0.401_ & \\ \hline \hline \end{tabular}
\end{table}
Table 8. Exploratory factor analysis loadings for two latent factors 1 and 2, and common variance, \(h^{2}\). Strong indicator relationships are in bold. The variables _longevity_ and _Alexa rank_ do not exhibit enough influence to be included in either latent construct.
With EFA, all measured variables are related to every factor by a factor loading estimate, where \(-1\) is a strong negative relationship, 0 is neutral, and \(+1\) is a strong positive relationship. The significant loadings for each factor are shown in boldface along with their loading on the secondary factor. There is strong support for a first factor of _Forks_, _Stars_, and _Mentions_, with a second factor of _Criticality score_, _Last updated_, _CMC rank_, and _Geographic distribution_. The chosen cutoff point for loading estimates is 0.3, and thus _Longevity_ is out of range and does not have enough influence to describe either factor. The same applies to _Alexa rank_; both are excluded from the measurement model (Section 5.2).
The communalities (\(h^{2}\) in Table 8), or common variance, refer to the proportion of variance in an observed variable that is explained by the retained latent factors. This represents the proportion of the variance in the observed variable that can be attributed to the factors, after accounting for measurement error. For instance, Stars has an \(h^{2}\) of 0.968, indicating that 96.8% of the variance in the stars data can be explained by the latent factors, almost entirely by Factor 1.
_SS loadings_ represents the amount of variance in the observed variables that is accounted for by each factor. Here, the SS loadings for Factor 1 and Factor 2 are 2.787 and 1.868, respectively, indicating that Factor 1 accounts for 2.787 units of variance in the observed variables, and Factor 2 accounts for 1.868 units. The _Cumulative variance_ refers to the total amount of variance in the observed variables that can be explained by the factors up to that point. For the two factors in the data, over half of the variance (0.517) is accounted for in the model, with 59.9% of the explained variance coming from Factor 1. The remaining 40.1% is from Factor 2.
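As a concrete check on how these summary rows follow from Table 8, the snippet below recomputes the communalities, SS loadings, and cumulative variance directly from the reported loadings.

```python
import numpy as np

# Loadings from Table 8 (Factor 1, Factor 2), rows in the same order as the table.
loadings = np.array([
    [ 0.988, 0.137],   # Forks
    [ 0.970, 0.166],   # Stars
    [ 0.885, 0.076],   # Mentions
    [ 0.135, 0.988],   # Criticality score
    [-0.015, 0.705],   # Last updated
    [ 0.169, 0.373],   # CMC rank
    [ 0.082, 0.369],   # Geographic distribution
    [ 0.075, 0.237],   # Longevity
    [ 0.163, 0.104],   # Alexa rank
])

communalities = (loadings ** 2).sum(axis=1)              # h2 per indicator, e.g. Stars ~ 0.968
ss_loadings   = (loadings ** 2).sum(axis=0)              # ~ [2.787, 1.868]
cumulative    = ss_loadings.sum() / loadings.shape[0]    # ~ 0.517 of total variance
proportion    = ss_loadings / ss_loadings.sum()          # ~ [0.599, 0.401] explained split
```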
Figure 3 shows the EFA diagram with strong loadings in black and secondary loadings in grey. This model exhibits fit statistics in the range of standard thresholds: the Tucker Lewis Index of factoring reliability (TLI) = 0.953, and the root mean square error of approximation (RMSEA) index = 0.089. See Table 9 for these in context of the model validation. More on fit statistics is discussed in Section 6.
Figure 3. Exploratory factor analysis applied to metrics representing _Robustness_ and _Interest_. The primary loadings are the solid arrow in the first tier; the secondary loadings the dotted arrows in the second tier. Note: _median_ and _average response time_ have already been excluded from the model. _Longevity_ and _Web Rank_ are excluded at this stage.
#### 5.1.1. EFA Validation
Model validation is by two mechanisms. First, cross-validation of the EFA is performed by randomly separating the dataset into a training and a testing segment and comparing model structure. Second, confirmatory factor analysis is applied to the measurement model (Section 4.2).
Cross validation is necessary to avoid the situation where the model ends up being overfit to the data, affecting generalizability. Constraints in the data collection process on the number of available projects prevent collecting an entirely new dataset, and so the original is split into two groups. Random allocation is performed using the Caret package (version 6.0.86) with a 51% split to produce two groups: one to build the model and one to test the model. This split allows for half the data to be at the minimum sample size threshold of 200.
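The random allocation was performed with caret in R; an equivalent minimal split in Python is sketched below, with the seed as an arbitrary placeholder.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def split_for_cross_validation(projects: pd.DataFrame, train_fraction: float = 0.51, seed: int = 42):
    """Randomly allocate projects into a model-building group and a testing group."""
    return train_test_split(projects, train_size=train_fraction, random_state=seed)
```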
Exploratory factor analysis is used to define the latent constructs of _Interest_ and _Robustness_. Table 9 shows the validation results. The insignificant loadings are not shown for readability and to highlight that the same factor structure is present across the models. The training and testing models are both as well fit as the hypothesized Model based on \(\chi^{2}\), TLI, and RMSEA, with slight deviations being acceptably close considering the sample size limitation.
At this stage we can officially rename the latent constructs to be representative of the underlying indicator variables, thus Factor 1 becomes _Interest_, and Factor 2 becomes _Robustness_.
### CFA Model
We now have latent variables representing _Engagement_, _Interest_, and _Robustness_. To investigate the nature of the relationship between the components (**RQ4**), a measurement model is first hypothesized on the basis of the previous EFA and validated using confirmatory factor analysis (CFA).
The EFA model from Figure 3 is combined with the Engagement model (Figure 2) to produce the measurement model seen in Figure 4. The CFA and SEM analysis was carried out with the Lavaan package (version 0.6.13) in **R** (version 4.0.2). During the EFA _response time_ was eliminated, and at this stage two more indicators are removed: _longevity_ and _web rank_ because they fell below our
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Model} & \multicolumn{2}{c}{Training} & \multicolumn{2}{c}{Testing} \\ \cline{2-5} Factor & 1 & 2 & Tr1 & Tr2 & Te1 & Te2 \\ \hline Forks & 0.988 & & 0.983 & 0.964 & \\ Stars & 0.970 & & 0.976 & 0.962 & \\ Mentions & 0.885 & & 0.924 & 0.910 & \\ Criticality & & 0.988 & & 0.994 & 0.980 \\ Last updated & & 0.705 & & 0.707 & 0.711 \\ CMC rank & & 0.373 & & 0.425 & 0.292 \\ Geographic distribution & & 0.369 & & 0.398 & 0.342 \\ Longevity & & 0.237 & & 0.288 & 0.197 \\ Alexa rank & 0.163 & & 0.07 & 0.419 & \\ \hline \(n\) & 388 & & 213 & 171 \\ \(\chi^{2}\) & 2373.7 & & 1680.1 & & 938.8 \\ TLI & 0.953 & & 0.972 & 0.969 \\ RMSEA & 0.089 & & 0.077 & 0.067 \\ \hline \hline \end{tabular}
\end{table}
Table 9. Cross-validation for the Exploratory Factor Analysis model showing equivalent factor structure for the testing model as compared to the baseline EFA. Model grouping 1 is _Interest_ and grouping 2 is _Robustness_. Insignificant loadings (\(<0.3\)) are not shown except where appropriate for structure comparison.
0.3 threshold. There are no cross-loadings in the CFA since it limits the loading to the theoretical construct only, setting any remaining loadings to zero.
A correlation path between Forks and Stars has been freed to improve the model by eliminating a potential Heywood case. A Heywood case is a factor that has a negative error variance estimate and can result in an improper computational solution (Krishnan, 2017). Less than zero error is illogical, as it means that more than 100% of the variance in the data is due to the factor structure. The alternate remedy here is to remove Forks altogether as a metric; however, we chose to keep the variable because, first, it allows for three indicators on _Interest_ (dropping to two indicators is considered under-fit), and second, it is an important concept in OSS.
#### 5.2.1. Measurement Model Validation
The second method to validate the model is by confirmatory factor analysis on the measurement model (Figure 4). Here the data shows good support for the hypothesized model, indicating the latent constructs are represented by their indicator variables. However, the goodness-of-fit statistics indicate there is room for model improvement. The Comparative Fit Index (CFI) is 0.84 and the Tucker Lewis Index (TLI) is 0.78, both of which are under a 0.90 heuristic threshold, and the RMSEA is 0.214 (\(>\) 0.07) and SRMR is 0.106 (\(>\) 0.08), both of which are over heuristic thresholds, meaning the model fit is acceptable but not strong. These fit statistics must be used cautiously, as there is evidence that with sample sizes approaching 400 the maximum likelihood estimator becomes sensitive to changes in the data, resulting in poor fit (Krishnan, 2017). This is explored further in Section 6.2 (Limitations).
Internal consistency reliability is measured to ensure that the items in our scale are measuring the same construct consistently, which increases our confidence in the validity of the scale. Estimates of internal consistency reliability for each scale based on Cronbach's alpha (Krishnan, 2017) and McDonald's omega (Krishnan, 2017) coefficients show high levels of internal consistency reliability for all scales, with alpha coefficients ranging from 0.69 to 0.97, indicating good to excellent reliability. The omega coefficients are also high, ranging from 0.73 to 0.95, indicating good to excellent general factor saturation. These estimates suggest that the scales are reliable measures of the constructs they are intended to measure; in other words, the data from the selected metrics are accurately captured in the latent variables.
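For reference, Cronbach's alpha for a scale can be computed directly from the indicator scores; the sketch below assumes an array with one column per indicator of a given construct.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_observations, n_items) array of indicator scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```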
Figure 4. Measurement model baseline showing correlations between latent constructs and forks & stars. Loadings are standardized across all eleven indicator variables.
### Proposed Structural Model
The proposed structural model is derived from the definition of software health (Section 2.2) as composed of our three latent constructs found in the EFA. Figure 5(a) shows a structural model where general _Interest_ leads to software _Robustness_, alongside an independent path of _Engagement_\(\rightarrow\)_Robustness_. These paths are the hypothesized relationships. \(H_{1}\) says that an increase in Interest will positively inform Robustness. \(H_{2}\) is that there is a positive influence of Engagement on Robustness.
SEM results show a high correlation between _Interest_ and _Engagement_, suggesting there might be an underlying structural relationship between them. As interest represents lighter-touch activities such as starring a repository, it is reasonable to suggest that this activity predicts engagement, or more developer-centric activities such as committing to a code repository. The structural relationship is seen in the revised model in Figure 5(b).
The SEM results in Figure 5(b) show there is almost no effect of _Interest_ on _Robustness_ (loading of \(-0.06,p>0.05\)) and thus this relationship, \(H_{1}\), is not supported. The second hypothesis, \(H_{2}\), has strong support that _Engagement_ predicts _Robustness_ (loading of \(0.54,p<0.001\)). The third result here is that general Interest is a leading indicator of developer Engagement (loading of \(0.59,p<0.001\)). The model was revised to remove the path relationship \(H_{1}\) and results were similar, showing _Interest_\(\rightarrow\)_Engagement_ strength \(0.58\), and _Engagement_\(\rightarrow\)_Robustness_ strength \(0.50\).
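The models were fitted with lavaan in R; for illustration only, the revised measurement and structural model can be written in equivalent lavaan-style syntax and fitted with the `semopy` Python package, as sketched below. The package choice and the indicator column names are assumptions for this sketch, not the original implementation.

```python
import pandas as pd
import semopy

# Measurement part, structural paths (Interest -> Engagement -> Robustness),
# and the freed Forks-Stars residual correlation; column names are hypothetical.
MODEL_DESC = """
Interest   =~ forks + stars + mentions
Engagement =~ commits + comments + pull_requests + authors
Robustness =~ criticality_score + last_updated + cmc_rank + geographic_distribution

Engagement ~ Interest
Robustness ~ Engagement

forks ~~ stars
"""

def fit_structural_model(data: pd.DataFrame):
    """Fit the revised model and return parameter estimates and fit statistics."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    return model.inspect(), semopy.calc_stats(model)
```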
#### 5.3.1. Structural Model Validation
The structural model shown in Figure 4(b) specifies the path relationship between the constructs. At this point the nomological validity is evaluated to see if the test measures what it should be measuring. We have already used the literature to inform the factors, and so this test is a conceptual check on the outcome relationships. After removing the path between _Interest_ and _Robustness_, there are two remaining paths:
1. _Interest_\(\rightarrow\)_Engagement_ The interest construct is made up of Forks, Stars, and Mentions which are closer to social metrics than traditional software development. For example, someone that is interested in Bitcoin is likely to first star the repository as a bookmarking method, then fork it if they are further curious. These activities happen before any discussion
Figure 5. Structural model showing insignificant effect of _Interest_ on _Robustness_. _Interest_ is a contributing factor to _Engagement_, which, in turn is a contributing factor to _Robustness_. Indicators and loadings are not shown and are consistent with Figure 4. *** is statistically significant at \(p<0.001\).
about bugs or changes, and before any new code is written and a pull request submitted. So, in the procedural sense of contributing to OSS, interest leads, or predicts, engagement.
2. _Engagement \(\rightarrow\) Robustness_ Going back to our ecology metaphor, an organism must be able to sustain its base metabolic needs for survival before it can grow and thrive in its environment. Only once the basic needs are met can it become strong enough to survive shocks and adapt to changes in the environment. In software, this base survival is the day-to-day operations and involves communicating with community members, writing code, submitting code reviews, and attending to comments. If these needs are met through the contributors' motivation and satisfied working conditions, etc., then the project can be in a position to strengthen against unknown future disruptions.
## 6. Discussion
_Niche Fit Metrics._ From the baseline definition of software health that has a parallel in ecosystem health (Section 2), the local context of a species or project in the ecosystem is deemed important. This niche occupancy has sound logic: if a software project fills a specific market gap and has no competition it is positioned to thrive, and more likely to be healthy. The metrics in Table 4 present the researcher with the complex issue of how exactly to identify the niche and quantify it. The perspective taken here is that of the individual software project which limits the context required to determine if it fits a niche (is unique) or not (has strong competitive alternatives, etc.). In other words, we are not analysing the local environment to see if what the project delivers fills a niche, rather the empirical approach is content agnostic, and seeks to determine health without the subjective approach of determining market fit, or other such niche indicators. As such, there is limited research to operationalize niche metrics in OSS.
Chengalur-Smith et al. (2017) defined the construct of niche through audience niche, programming language niche, and operating system niche. What audience niche means is unclear; language and OS niche, however, capture whether the project has support for less popular languages and platforms. The study found that none of these metrics had a significant effect on attraction or sustainability, with niche size path estimates of less than 0.04. While this suggests that more research is needed to determine the relationship between niche occupation and software health, it also highlights the complexity of measuring and interpreting these metrics in the context of collaborative software engineering.
This presents a limitation to the current study: as we have no indicators for niche, our overall model for health might be under-identified, possibly contributing to the weak model fit indices in Section 5.2.1.
_Where does general interest or popularity fit into software health?_ Jansen (2017) categorizes interest as part of robustness, whereas we have found interest to be a part of sustainability and found it has no direct influence on robustness. The _Interest \(\rightarrow\) Robustness_ loading is insignificant at \(-0.06\) (Section 5.3).
This raises questions about the impact of popularity metrics on software health. While it seems interest can contribute to the robustness of a project by increasing its popularity and attracting more developers, it is more likely that increased popularity increases engagement which then affects robustness. One of the benefits of structural equation modelling is being able to disambiguate this relationship.
Forks and stars are strong interest metrics, with both being used by Osman and Baysal (2017) to define _popularity_, and by Abdulhassan (2017) to define _repo interest_. Additionally, Negoita et al. (2018) have stars as a sole definition of _sustainability_. As Jansen points out, once a competitor emerges, users may shift their attention to a more promising alternative, potentially causing long-term
damage to the original project. In this sense, the concept of a popular project may align more closely with sustainability, rather than robustness.
_What happened to bug-fix time?_ Based on the literature, bug fix rate is one of the most cited measures in software health (Table 2; as a part of sustainability). In the present study there is no evidence within this blockchain dataset that time to fix bugs, as an independent measure, is crucial. The original exploratory factor analysis data includes two metrics to gauge their influence: the median time to fix bugs based on the time delta between issues being opened and closed, as well as the average value. Both median and average were investigated, as it was thought the median value would better account for the long tail of non-critical issues that wait until someone has the time to explore them. That model did not incorporate median and average time into a structural relationship; rather, those two indicators stood alone in an independent factor. As they approximately measure the same concept (improving software by responding to issues), the indicators should not both be included as an under-identified factor. Thus all direct measurement of issue close rate is absent. One explanation is that attending to issues is one of the prime activities that contributors work on, especially new members looking for a place to start contributing (Krishnan et al., 2017). To get started in a new community they can easily browse the list of open issues to see what needs to be done, thus engaging with the project by fixing a bug. It is not that this activity is unimportant; rather, it is already captured within engagement through commits, comments, and pull requests. From the researcher's point of view this yields a more parsimonious model by reducing indicator redundancy.
_Is healthy software robust software?_ Goggins et al. (2019) agree with Chengalur-Smith et al. (2017) in their definitions of health as a combination of _sustainability_ and _survivability_ as shown in Table 1.
Our concept mapping chose to use the term robustness rather than survivability, and if we use these as synonyms for a moment we can see the path structure. A project can be sustainable but not survive. However, a project cannot survive without being sustainable. Surviving projects must therefore also be sustainable. This is supported by our path relationship _Engagement_ (part of sustainability) \(\rightarrow\)_Robustness_ i.e. survivability (Figure 5b).
Health as a latent construct is not part of the model, as there are no direct indicator metrics that assess health; rather, as we have shown, there are the latent variables of interest, engagement, and robustness, which taken together represent a picture of health. It is reasonable to assume that robust software is also healthy software, but there is more to it; robustness itself is composed of indicator metrics and an endogenous construct. A more accurate picture of health is in Figure 6.
### Implications
It is our goal here to solidify some of the research into OSS health in a manner that can provide a clear definition of, and metrics to assess, software health. As Goggins et al. (2019) say, "There is a considerable amount of research constructing and presenting indicators of open source project
Figure 6. Healthy software is a combination of latent factors, including _Interest_, _Engagement_, and _Robustness_.
activity, but a lack of consensus about how indicators derived from trace-data might be used to represent a coherent view of open source project health and sustainability."
The study results allow stakeholders such as future and current OSS contributors, researchers, and project managers to identify areas for improvement in their software projects. By understanding the factors that contribute to software health, project managers can make informed decisions about where to allocate resources to improve software based on operationalized metrics attributed to interest, engagement, and robustness. The study provides a definition of software health and an introductory structural equation model in the field of blockchain software health. This model can be used as a starting point for future research in this area and can help to guide the development of more comprehensive models of software health. A clear next step will be extension beyond blockchain OSS (Section 6.3).
With regards to research into blockchain software, there are a few points to present. The nature of open source contributions allows developers to self-select projects that have an ideological fit (Srivastava et al., 2017). Perhaps this has implications for the wider software industry, as it suggests that developers are more likely to contribute to projects that align with their personal beliefs and values. In the case of blockchain-based projects, for example, research has shown that developers are more likely to cite motives for contributing based on a "bitcoin ideology" than developers in non-blockchain domains (Bahdan et al., 2017; Bansal et al., 2017). This suggests that the blockchain industry may attract developers with a particular set of values and beliefs, which could guide newcomers looking to contribute to blockchain open source software.
In addition to ideological factors, blockchain-based projects often rely on token incentives to motivate and reward developers. The suggestion here is that incentive-based participation may be more effective than purely voluntary contributions. While most OSS projects are built on voluntary contributions, grants, and scholarships, blockchain-based projects have the additional incentive of compensation directly or indirectly through the token economy. This creates a link between the quality of a developer's contribution and a potential financial reward, which may encourage more developers to contribute and improve the overall quality of the project.
### Limitations
Beginning with data collection, a few limitations are notable. First, the data is only sourced from GitHub. This is due to the prominence of blockchain projects being hosted here, but must be considered as there are alternatives. Secondly, repository owners could have moved and renamed repositories in the time between manual verification of the code location and the database query time. Although we have no known instances of this, there is the possibility, for example, that individual projects recorded no or little activity because they were moved from the tagged location.
The study sample size (\(n=384\)) was acceptable, but for cross validation of the exploratory factor analysis (Section 5.1.1) the testing/training split produced groups under the 200 recommended by Hair et al. (Hair et al., 2017). Increasing the sample size may be beneficial, but it is important to note that simply adding more blockchain projects does not yield an increase in active projects, as old repositories will remain stagnant and their data available long after the project has been deemed dead. Another factor to consider when increasing the sample size is the detrimental effect on the fit indices, as the maximum likelihood estimator is sensitive to variations in the dataset, resulting in poor fit.
For statistical validation, the fit indices of the confirmatory factor analysis discussed in Section 5.2.1 are, as a rule of thumb, used to decide whether to proceed to the structural equation model that estimates the path relationship coefficients. There is substantial literature in the social sciences regarding fit statistics (Bahdan et al., 2017; Bansal et al., 2017; Bansal et al., 2017), and it is best to keep in mind that individual fit statistics are not hard rules, nor do they provide enough evidence to be used in isolation; therefore it is recommended to use a suite of tests for comparison. Also of note, considering the context of software engineering, the
recommended thresholds must not be considered law since they were developed based on normally distributed data (Kumar et al., 2017) that was usually collected from surveys designed by researchers. Acceptance or rejection of models should not be based on fit statistics, rather on the ability of the model to provide structure to the data.
Lastly, in the health literature there is varied consensus regarding whether or not the category of niche occupation is a component of health. There is clear agreement of its importance in natural ecosystems, and this is mirrored in business ecosystems. When transitioning into software, not as many studies include niche fit, and of the open source ones in Table 1, only two of seven include a related concept. So without much prior work or benchmarking in this area it is difficult to conclude if the absence of niche occupation yields an accurate representation. This highlights one area for future work in software health. Other future directions are mentioned presently.
### Future Work
A good structural model (and measurement model) allows for generalizability, which can be tested across different OSS industries, such as mobile, web, tools, finance, and others. By testing the applicability of a structural model across multiple industry domains, researchers can assess the robustness and generalizability of the model, as well as identify any industry-specific factors that may impact software health. This can help to ensure that the model is widely applicable and can provide useful insights for practitioners across a range of collaborative software engineering contexts.
Beyond validating a sound structural model across industries, the direct application of software projects assessed against the model as a predictor of health is a long term goal. By using SEM to model the relationships between these endogenous concepts of _Interest_, _Engagement_, and _Robustness_, it is possible to make predictions about health based on the model. This can be particularly useful in the software development process, as it allows developers to identify which factors are most important for achieving desirable outcomes and to adjust their processes accordingly, perhaps even identifying successful projects.
## 7. Conclusion
In this paper we aimed to investigate the definition of software health in the context of open source software (OSS) and define operational measures that can be used to determine health. Our investigation identifies health as a three-pronged concept comprising sustainability, robustness, and niche occupation. Sustainability is further made up of general interest and engagement. We applied exploratory factor analysis to a dataset to find latent constructs for _Interest_ and _Robustness_, which extends the previous work of the authors on _Engagement_. The latent factor of _Robustness_ is composed of software criticality score, time since it was updated, market capitalisation ranking, and geographic distribution, while _Interest_ is made up of forks, stars, and project mentions. A measurement model was created and validated using confirmatory factor analysis. We proposed a structural model that suggests _Interest_ informs _Engagement_ (positively), which in turn informs _Robustness_ (positively) and estimated the path coefficients. While there is good support for the EFA, further work is required to improve the model fit of the proposed structural model. Overall, this research provides insights into the intricacies of OSS health and lays the foundation for future research in this area.
###### Acknowledgements.
We would like to thank software developers for continuing to create open source software. |
2305.19640 | Fine-grained analysis of non-parametric estimation for pairwise learning | In this paper, we are concerned with the generalization performance of
non-parametric estimation for pairwise learning. Most of the existing work
requires the hypothesis space to be convex or a VC-class, and the loss to be
convex. However, these restrictive assumptions limit the applicability of the
results in studying many popular methods, especially kernel methods and neural
networks. We significantly relax these restrictive assumptions and establish a
sharp oracle inequality of the empirical minimizer with a general hypothesis
space for the Lipschitz continuous pairwise losses. Our results can be used to
handle a wide range of pairwise learning problems including ranking, AUC
maximization, pairwise regression, and metric and similarity learning. As an
application, we apply our general results to study pairwise least squares
regression and derive an excess generalization bound that matches the minimax
lower bound for pointwise least squares regression up to a logarithmic term. The
key novelty here is to construct a structured deep ReLU neural network as an
approximation of the true predictor and design the targeted hypothesis space
consisting of the structured networks with controllable complexity. This
successful application demonstrates that the obtained general results indeed
help us to explore the generalization performance on a variety of problems that
cannot be handled by existing approaches. | Junyu Zhou, Shuo Huang, Han Feng, Puyu Wang, Ding-Xuan Zhou | 2023-05-31T08:13:14Z | http://arxiv.org/abs/2305.19640v2 | # Optimal Estimates for Pairwise Learning with Deep ReLU Networks
###### Abstract
Pairwise learning refers to learning tasks where a loss takes a pair of samples into consideration. In this paper, we study pairwise learning with deep ReLU networks and estimate the excess generalization error. For a general loss satisfying some mild conditions, a sharp bound for the estimation error of order \(O((V\log(n)/n)^{1/(2-\beta)})\) is established. In particular, with the pairwise least squares loss, we derive a nearly optimal bound of the excess generalization error, which achieves the minimax lower bound up to a logarithmic term when the true predictor satisfies some smoothness regularities.
## 1 Introduction
In many classical learning problems like classification and regression, the aim is to learn an estimator or a predictor \(f:\mathcal{X}\rightarrow\mathcal{Y}\) based on an observed sample \(\{(X_{i},Y_{i})\}_{1}^{n}\) from the input-output space \(\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is a compact metric space and \(\mathcal{Y}\subset\mathbb{R}\). Learning algorithms are often implemented by Empirical Risk Minimization (ERM), i.e. minimizing the empirical risk \(\frac{1}{n}\sum_{i=1}^{n}l(Y_{i},f(X_{i}))\) over a hypothesis space. Here \(l:\mathcal{X}\times\mathbb{R}\rightarrow\mathbb{R}\) is the loss function to estimate the performance of a predictor \(f\) which is taken over the hypothesis space \(\mathcal{H}\). Since the loss \(l(f(x),y)\) takes only one sample point \((x,y)\) into consideration, the framework can be categorized as pointwise learning and has been well studied in recent decades [10, 12].
In this paper, we consider another class of learning problems where the loss \(l(f(x,x^{\prime}),y,y^{\prime})\) takes a pair of samples \((x,y),(x^{\prime},y^{\prime})\) into consideration, which is referred to as pairwise learning. Generally speaking, pairwise learning tasks can be formulated in two types according to whether the predictor \(f\) depends on the order of the sample pair \((x,x^{\prime})\) or not. For metric and similarity learning [7, 5, 8, 21], our aim is to learn a predictor \(f\) to measure the metric or similarity between the sample pair \((x,x^{\prime})\), which is the same as that between the reverse-order sample pair \((x^{\prime},x)\); it is then natural to choose \(f\) independent of the order of \((x,x^{\prime})\). For ranking [9, 18, 1] and pairwise least squares [25], on the other hand, the predictor \(f\) depends on the order of the sample pair \((x,x^{\prime})\), since it actually reflects an order or a rank of the input samples. We describe them in detail as follows.
* For the ranking problem, the aim is to learn a predictor \(f(\cdot,\cdot)\) which is capable of predicting an order of objects based on their observed features. Given objects \(x,x^{\prime}\), if \(f(x,x^{\prime})\geq 0\), then we predict that \(x\) has a higher rank than \(x^{\prime}\) and vice versa. Similar to pointwise classification problems, the performance of the predictor \(f\) is often measured by the \(0-1\) misclassification loss \(l(f(x,x^{\prime}),y,y^{\prime})=1_{\{(y-y^{\prime})f(x,x^{\prime})<0\}}\), where \(1_{\{\cdot\}}\) is the indicator function taking the value \(1\) if the argument holds and \(0\) otherwise. If the ordering of \(y\) and \(y^{\prime}\) coincides with the ordering predicted by \(f\), i.e. \((y-y^{\prime})\) and \(f(x,x^{\prime})\) have the same sign, then \((y-y^{\prime})f(x,x^{\prime})>0\), which leads to zero loss, and if \(f\) misranks the sample pair \((x,x^{\prime})\), then the loss equals \(1\). Due to the nonconvexity and discontinuity of the indicator function, the optimization problems involved in ERM algorithms are often intractable. To overcome this difficulty, the indicator function \(1_{\{t<0\}}\) is often replaced by a convex surrogate loss \(h:\mathbb{R}\rightarrow\mathbb{R}^{+}\); a widely-used choice is the hinge loss \(l(f(x,x^{\prime}),y,y^{\prime})=\max\left\{0,1-sgn(y-y^{\prime})f(x,x^{\prime })\right\}\), where \(sgn(t)=1\) for \(t>0\), \(sgn(t)=-1\) for \(t<0\) and \(sgn(t)=0\) for \(t=0\).
* For the pairwise least squares regression problem, the loss is chosen to be \(l(f(x,x^{\prime}),y,y^{\prime})=(f(x,x^{\prime})-(y-y^{\prime}))^{2}\), which is an analog of the pointwise least squares loss \((f(x)-y)^{2}\) and measures how well the predicted value \(f(x,x^{\prime})\) approximates the output difference \(y-y^{\prime}\). Furthermore, due to the regularities of the least squares loss, it inherits some nice properties from pointwise least squares schemes. For instance, the pairwise regression function is \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\), where \(\tilde{f}_{\rho}(x)=\mathbb{E}[Y|X=x]\) is the regression function of the pointwise scheme, and the excess generalization error equals the squared \(L^{2}\) norm of the difference between the predictor and the regression function, i.e. \(\|\hat{f}-f_{\rho}\|_{L^{2}_{X\times X}}^{2}\), as seen in [25].
In this paper, we are mainly interested in generalization analysis of the pairwise learning problems that depend on the order of the sample pair. To characterize this order dependency, we define a general loss that satisfies a symmetric property (Assumption 1). For generalization analysis, it is well-known that the excess generalization error can be decomposed into the estimation error and the approximation error [10, 12]. Due to the double-index form of the pairwise loss, we employ a Hoeffding decomposition in the error decomposition, which breaks the estimation error into an i.i.d. term and a degenerate U-statistic term. Then, the techniques in empirical process and U-process can be applied. We first get a sharp upper bound of the estimation error with a general loss satisfying some mild assumptions and then derive a generalization error bound with the specific least squares loss. Our main contributions and novelty can be stated as follows.
* We derive, under some mild conditions (Assumptions \(1-3\) and a variance-expectation bound), a sharp bound of the estimation error with a general loss in terms of the pseudo-dimension of the hypothesis space \(\mathcal{H}\) (Theorem 1): the excess generalization error can be bounded by a term of order \(O((V\log(n)/n)^{1/(2-\beta)})\) plus an approximation term, where \(V\) is the pseudo-dimension of \(\mathcal{H}\) and \(\beta\in(0,1]\) is the parameter of the variance-expectation bound (Definition 3). When the hypothesis space contains the true predictor, the approximation error vanishes, and the excess generalization bound is of order \(O((V\log(n)/n)^{1/(2-\beta)})\), which is significantly better than the classical rate \(O\left(\sqrt{\frac{V}{n}}\right)\).
* We show that the true predictor must be anti-symmetric, i.e. \(f_{\rho}(x,x^{\prime})=-f_{\rho}(x^{\prime},x)\), provided the loss is symmetric (Proposition 1). According to this special anti-symmetric structure, the novel hypothesis space \(\mathcal{H}\) is designed to be generated by two parallel sub-networks sharing the same weights with a truncation operation at the last layer; that is to say, any function from \(\mathcal{H}\) is of the form \(\pi(\bar{f}(x,x^{\prime})-\bar{f}(x^{\prime},x))\), where \(\bar{f}\) is the sub-network and \(\pi\) is the truncation operator defined in Section 2. Therefore, the predictors in \(\mathcal{H}\) are all endowed with the anti-symmetric structure \(f(x,x^{\prime})=-f(x^{\prime},x)\). Furthermore, the networks are designed without any boundedness constraints on their parameters except for the last several layers (since we represent the minus and truncation operators by additional layers, the predictors can be viewed as deep networks), and such networks are expected to have better approximation abilities than networks with bounded parameters, as suggested by much of the network approximation literature.
* We consider the approximation error and then derive a nearly optimal upper bound for the excess generalization error with the least squares loss. When the true predictor belongs to the Holder space \(\mathcal{C}^{r-1,1}\), the upper bound of the approximation error can be derived directly (Theorem 2), and by trading off the estimation and approximation errors, we get a nearly optimal bound of order \(O(\log^{\tau}(n)\,n^{-\frac{2r}{2r+d}})\) (Theorem 3), which achieves the minimax lower bound up to the logarithmic term \(\log^{\tau}(n)\).
* For the technical part of our paper, especially the estimation error: by using a Hoeffding decomposition (subsection 3.1), the i.i.d. term is estimated by a local empirical process and its bound is reduced to estimating the fixed point of some sub-root functions; for the degenerate U-statistic term, a concentration inequality for the supremum of a U-process is employed, and its bound is reduced to estimates of the Rademacher complexity and chaos.
The rest of the paper is organized as follows. In the next section, we introduce the setup of the problem and design the novel hypothesis space \(\mathcal{H}\). Section 3 presents the main results of this paper and is divided into three parts. Subsection 3.1 states the error decomposition by applying a Hoeffding decomposition. We derive the main results of the estimation error with the general loss in subsection 3.2 and the approximation error and hence the
generalization error with least squares loss in subsection 3.3. Comparison and conclusion are given in Section 4. In Section 5, we give the sketch of proof of Theorem 1. In Appendix, we give the proofs of some technical details.
## 2 Problem Setup and Notations
Let \(\rho\) be an unknown probability distribution on \(\mathcal{Z}:=\mathcal{X}\times\mathcal{Y}\), where the input space \(\mathcal{X}\subset\mathbb{R}^{d}\) is compact and the output space \(\mathcal{Y}\subset\mathbb{R}\). Denote by \(\rho_{X}\) the marginal distribution of \(\rho\) on \(\mathcal{X}\) and \(\rho(\cdot|x)\) the conditional distribution of \(Y\) given \(X=x\). For the sample \((X,Y)\), we set \(Z=(X,Y)\). Throughout the paper, we let \(\mathcal{X}=[0,1]^{d}\) for the sake of simplicity.
For a pair of samples \((x,y),(x^{\prime},y^{\prime})\) and a predictor \(f:\mathcal{X}^{2}\to\mathbb{R}\) taken from the hypothesis space \(\mathcal{H}\), we define the loss \(l(f(x,x^{\prime}),y,y^{\prime})\in\mathbb{R}^{+}\).
**Assumption 1**.: _For any \(f\in\mathcal{H}\), \(l(f(X,X^{\prime}),Y,Y^{\prime})\in L^{2}(\mathcal{Z}\times\mathcal{Z})\) and the loss is symmetric i.e. for any possible predicted value \(t\in\mathbb{R}\) and \(y,y^{\prime}\in\mathcal{Y}\), we have_
\[l(t,y,y^{\prime})=l(-t,y^{\prime},y).\]
**Remark**.: _For many pairwise learning tasks that depend on the order of the sample pair, the predicted value \(t=f(x,x^{\prime})\) is often suggested to take the opposite sign if we reverse the input order of the sample pair. Thus, it is reasonable to make an assumption that the loss is symmetric with respect to the predicted value \(t\) and the label \((y,y^{\prime})\), i.e. the loss takes the same values on the symmetric sample pair \((t,y,y^{\prime}),(-t,y^{\prime},y)\). One can easily verify that the hinge loss \(l(t,y,y^{\prime})=\max\{0,1-sgn(y-y^{\prime})t\}\) and the least squares loss \(l(t,y,y^{\prime})=(t-(y-y^{\prime}))^{2}\) both satisfy this property._
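As a quick numerical illustration of this symmetry (our own sketch, not part of the formal development; the helper names are ours), the following checks \(l(t,y,y^{\prime})=l(-t,y^{\prime},y)\) for the hinge and least squares losses at a random point.

```python
import numpy as np

def hinge_loss(t, y, yp):
    # l(t, y, y') = max{0, 1 - sgn(y - y') * t}, with sgn(0) = 0
    return np.maximum(0.0, 1.0 - np.sign(y - yp) * t)

def ls_loss(t, y, yp):
    # l(t, y, y') = (t - (y - y'))^2
    return (t - (y - yp)) ** 2

rng = np.random.default_rng(0)
t, y, yp = rng.normal(size=3)
# symmetry of Assumption 1: l(t, y, y') == l(-t, y', y)
assert np.isclose(hinge_loss(t, y, yp), hinge_loss(-t, yp, y))
assert np.isclose(ls_loss(t, y, yp), ls_loss(-t, yp, y))
```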
The generalization error associated with a predictor \(f\) and the loss \(l\) is defined as
\[\mathcal{E}(f):=\int_{\mathcal{Z}}\int_{\mathcal{Z}}l(f(x,x^{\prime}),y,y^{ \prime})\ d\rho(z)d\rho(z^{\prime}). \tag{2.1}\]
Denote by \(f_{\rho}\) the true predictor which minimizes the above generalization error over the space of all measurable functions, i.e. \(f_{\rho}=\arg\min\mathcal{E}(f)\).
**Assumption 2**.: _There exists a constant \(\eta>0\) such that for any \(f\in\mathcal{H}\),_
\[\|f\|_{\infty}\leq\eta\text{ and }\|f_{\rho}\|_{\infty}\leq\eta.\]
**Remark**.: _For the least squares loss function, similar to the standard pointwise least squares regression problem, one can easily verify that \(f_{\rho}(x,x^{\prime}):=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\)[25], where \(\tilde{f}_{\rho}(x):=\mathbb{E}[Y|X=x]\) is the regression function of pointwise least squares problem. We know \(f_{\rho}\) is immediately uniformly bounded if the distribution of \(Y\) is bounded almost surely._
_For the hinge loss function, [14] showed that \(f_{\rho}(x,x^{\prime})=sgn(T(x,x^{\prime})-T(x^{\prime},x))\), where \(T(x,x^{\prime})=Prob\{Y>Y^{\prime}|x,x^{\prime}\}\). Then we can take \(\eta=1\) due to the fact \(sgn(t)\in\{-1,0,1\}\). In this case, we do not make any constraints on the distribution, because in the hinge loss, only the sign of the difference \((y-y^{\prime})\), instead of the value, plays a role. Therefore the loss is bounded for any fixed \((x,x^{\prime})\) and hence the true predictor is consequently uniformly bounded._
**Assumption 3**.: _There exist constants \(L,\eta>0\) such that for any \(t_{1},t_{2}\in[-\eta,\eta]\) and \(y,y^{\prime}\in\mathcal{Y}\), we have the Lipschitz property_
\[|l(t_{1},y,y^{\prime})-l(t_{2},y,y^{\prime})|\leq L\left|t_{1}-t_{2}\right|.\]
**Remark**.: _Similar to the remark of Assumption 2, one can prove that the hinge loss satisfies this condition with \(L,\eta\) both equal to \(1\). For the least squares loss, we will prove in subsection 3.3 (Lemma 3) that the above three assumptions are all satisfied if the distribution of \(Y\) is bounded almost surely._
Since \(f_{\rho}\) minimizes (2.1) and the distribution \(\rho\) is unknown, the true predictor \(f_{\rho}\) cannot be computed directly, and hence our goal is to find a predictor \(f\) that learns \(f_{\rho}\) from the available data \(\{(X_{i},Y_{i})\}_{1}^{n}\) over the hypothesis space \(\mathcal{H}\). We assume that the data \(\{(X_{i},Y_{i})\}_{i=1}^{n}\) are drawn independently from \(\rho\). Similarly, we can define the empirical error associated with \(f\) as
\[\mathcal{E}_{z}(f):=\frac{1}{n(n-1)}\sum_{i\neq j}^{n}l(f(X_{i},X_{j}),Y_{i},Y_ {j}). \tag{2.2}\]
The predictor minimizing the empirical error (2.2) over the hypothesis space \(\mathcal{H}\) will be used to learn \(f_{\rho}\). Throughout the paper, for simplicity we assume that the minimizers of the errors \(\mathcal{E}(f)\) and \(\mathcal{E}_{z}(f)\) over \(\mathcal{H}\) always exist, denoted by \(f_{\mathcal{H}}\) and \(\hat{f}_{z}\) respectively, i.e. \(f_{\mathcal{H}}=\arg\min_{f\in\mathcal{H}}\mathcal{E}(f)\), \(\hat{f}_{z}=\arg\min_{f\in\mathcal{H}}\mathcal{E}_{z}(f)\).
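For concreteness, here is a minimal sketch (ours) of how the empirical error (2.2) is evaluated for a given predictor over all ordered sample pairs; the toy data and the simple anti-symmetric predictor below are placeholders, not the ERM solution \(\hat{f}_{z}\).

```python
import numpy as np

def empirical_risk(f, loss, X, Y):
    # pairwise empirical error (2.2): average of loss(f(X_i, X_j), Y_i, Y_j) over i != j
    n = len(X)
    total = sum(loss(f(X[i], X[j]), Y[i], Y[j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * (n - 1))

rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 3))
Y = X.sum(axis=1) + 0.1 * rng.normal(size=20)
w = np.ones(3)
f = lambda x, xp: w @ x - w @ xp                   # an arbitrary anti-symmetric predictor
ls_loss = lambda t, y, yp: (t - (y - yp)) ** 2     # pairwise least squares loss
print(empirical_risk(f, ls_loss, X, Y))
```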
In the situations of ranking and least squares regression, the true predicted value \(t=f_{\rho}(x,x^{\prime})\) measures the extent to which \(x\) is preferred to \(x^{\prime}\) under the best model \(f_{\rho}\). Intuitively, the predicted value is expected to change sign if we compare the instance \(x\) with \(x^{\prime}\) in the reverse order, i.e. the true predictor \(f_{\rho}\) is anti-symmetric: \(-t=f_{\rho}(x^{\prime},x)\). From the remarks above, the true predictors \(f_{\rho}(x,x^{\prime})=sgn(T(x,x^{\prime})-T(x^{\prime},x))\) and \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\) with respect to the hinge loss and the least squares loss both satisfy this property. It is then natural to ask under what conditions \(f_{\rho}\) is anti-symmetric. The following simple proposition shows that the symmetry of the loss function with respect to the predicted value and its corresponding labels is sufficient.
**Proposition 1**.: _If the loss satisfies Assumption 1, then the true predictor \(f_{\rho}\) is anti-symmetric, i.e. for any \(x,x^{\prime}\in\mathcal{X}\)_
\[f_{\rho}(x,x^{\prime})=-f_{\rho}(x^{\prime},x),\]
_and consequently we have_
\[f_{\rho}(x,x)=0.\]
Proof.: By Fubini's Theorem, it is easy to show
\[f_{\rho}(x,x^{\prime}) =\arg\min_{t\in\mathbb{R}}\int_{\mathcal{Y}\times\mathcal{Y}}l(t,y,y^{\prime})\,d\rho(y|x)\,d\rho(y^{\prime}|x^{\prime})\] \[=\arg\min_{t\in\mathbb{R}}\int_{\mathcal{Y}\times\mathcal{Y}}l(-t,y^{\prime},y)\,d\rho(y^{\prime}|x^{\prime})\,d\rho(y|x)\] \[=-\arg\min_{-t\in\mathbb{R}}\int_{\mathcal{Y}\times\mathcal{Y}}l( -t,y^{\prime},y)\,d\rho(y^{\prime}|x^{\prime})\,d\rho(y|x)\] \[=-f_{\rho}(x^{\prime},x).\]
where the first-to-second step uses the symmetric property of the loss.
Conversely, if a predictor \(f\) is anti-symmetric, then for any sample pair \((x,y),(x^{\prime},y^{\prime})\in\mathcal{X}\times\mathcal{Y}\), it is trivial that \(l(f(x,x^{\prime}),y,y^{\prime})=l(f(x^{\prime},x),y^{\prime},y)\).
In the following, we design the hypothesis space \(\mathcal{H}\). The above proposition indicates that the predictors in \(\mathcal{H}\) should be endowed with the same anti-symmetric structure for better performance. In order to realize this structure, we take the difference of a function \(\tilde{f}\) evaluated on the sample pair \((x,x^{\prime})\) and on its reverse-order pair \((x^{\prime},x)\), i.e. \(f(x,x^{\prime}):=\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x)\). The anti-symmetry of \(f\) is then immediate. Now, the hypothesis space \(\mathcal{H}\) is constructed as follows.
Firstly, we define the sub-network for approximating \(\tilde{f}(x,x^{\prime})\). Let \(\Sigma\) be the space of ReLU neural networks with \(J\) layers with \(T_{j}\in\mathbb{R}^{w_{j}\times w_{j-1}}\), \(b_{j}\in\mathbb{R}^{w_{j}}\) being the connection matrix and the bias in \(j\)-th layer respectively, where \(w_{j}\in\mathbb{N}\) for \(j=1,...,J\) represents the width of \(j\)-th layer and \(w_{0}=2d\) is the dimension of the input space. The output of \(j\)-th layer is given by
\[h_{j}(x,x^{\prime})=\sigma(T_{j}\cdot h_{j-1}(x,x^{\prime})+b_{j}),\]
where \(\sigma(t)=\max\{t,0\}\) is the ReLU activation function acting componentwise on vectors and \(h_{0}(x,x^{\prime})=(x,x^{\prime})\in\mathbb{R}^{2d}\). We call \(\Sigma\) the primitive network space and is defined as
\[\Sigma=\left\{c\cdot h_{J}(x,x^{\prime}):c\in\mathbb{R}^{w_{J}}\right\}. \tag{2.3}\]
According to the specific anti-symmetric structure of \(f_{\rho}\), we define the following space of functions induced by symmetric parallel networks
\[\Lambda:=\left\{\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x):\tilde{f}\in \Sigma\right\}.\]
The network function space \(\Lambda\) is unbounded since we do not impose any boundedness constraints on the weights. The capacity of \(\Lambda\) might therefore be very large (e.g. the covering number of \(\Lambda\) with respect to the uniform norm \(\|\cdot\|_{\infty}\) and any radius \(\epsilon>0\) is infinite, since an unbounded set cannot be totally bounded), which might result in extremely loose error estimates. Recall that Assumption 2 implies \(\|f_{\rho}\|_{\infty}\leq\eta\) for the true predictor, so it is reasonable to perform a truncation on functions in \(\Lambda\). Denote by \(\pi_{\eta}:\mathbb{R}\rightarrow[-\eta,\eta]\) the truncation operator
\[\pi_{\eta}(t):=\begin{cases}\eta,&t>\eta,\\ t,&t\in[-\eta,\eta],\\ -\eta,&t<-\eta.\end{cases}\]
For any \(f\in\Lambda\), \(|\pi_{\eta}(f(x,x^{\prime}))-f_{\rho}(x,x^{\prime})|\leq|f(x,x^{\prime})-f_{ \rho}(x,x^{\prime})|\). We define now the hypothesis space \(\mathcal{H}\) generated by truncated anti-symmetric neural networks as
\[\mathcal{H}=\left\{\pi_{\eta}(f):f\in\Lambda\right\}. \tag{2.4}\]
Since the truncation operator is piecewise linear, one can see the predictors \(\pi_{\eta}(\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x))\) in \(\mathcal{H}\) can be exactly represented by deep ReLU networks with two parallel sub-networks sharing the same weights.
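The following is a minimal numpy sketch (our own illustration, with arbitrary widths and weights) of a predictor in \(\mathcal{H}\): one primitive sub-network \(\tilde{f}\) from \(\Sigma\) is evaluated on \((x,x^{\prime})\) and on the swapped input \((x^{\prime},x)\), the outputs are subtracted, and the result is truncated by \(\pi_{\eta}\); the two parallel branches share the same weights by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
d, eta = 3, 1.0
widths = [2 * d, 16, 16]   # input dimension 2d, two hidden layers

# parameters of the primitive sub-network f_tilde in Sigma (shared by both branches)
Ts = [rng.normal(size=(w1, w0)) for w0, w1 in zip(widths[:-1], widths[1:])]
bs = [rng.normal(size=w1) for w1 in widths[1:]]
c = rng.normal(size=widths[-1])

def f_tilde(x, xp):
    h = np.concatenate([x, xp])                  # h_0(x, x') = (x, x')
    for T, b in zip(Ts, bs):
        h = np.maximum(T @ h + b, 0.0)           # ReLU layers
    return c @ h

def predictor(x, xp):
    # anti-symmetric combination followed by truncation pi_eta
    return np.clip(f_tilde(x, xp) - f_tilde(xp, x), -eta, eta)

x, xp = rng.uniform(size=d), rng.uniform(size=d)
assert np.isclose(predictor(x, xp), -predictor(xp, x))   # anti-symmetry
assert predictor(x, x) == 0.0
```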
Now, we introduce some notions to characterize the capacity of the network function space. For \(f\in\mathcal{F}\) with \(L\) layers and parameters \(c,T_{j},b_{j}\), denote by \(J(f),W(f),U(f)\) the number of layers (depth), nonzero weights and computation units (nodes) of \(f\) respectively, i.e. \(J(f)=L\), \(W(f)=\|c\|_{0}+\sum_{j=1}^{L}\|T_{j}\|_{0}+\|b_{j}\|_{0}\) and \(U(f)=\sum_{j=1}^{L}w_{j}\), where \(\|\cdot\|_{0}\) represents the number of nonzero entry of a vector or a matrix.
We say a network function space \(\mathcal{F}\) has number of layers \(J\), nonzero weights \(W\) and computation units \(U\) if and only if \(\mathcal{F}\) consists of all network functions \(f\) such that \(J(f)\leq J\), \(W(f)\leq W\) and \(U(f)\leq U\).
**Notation for constants:** _For any parameters \(\beta_{1},...,\beta_{n}\), denote by \(C_{\beta_{1},...,\beta_{n}}\) the constant only depending on these parameters, and denote by \(C\) an absolute constant. Throughout this paper, the constants may differ from line to line and are assumed to be greater than \(1\)._
## 3 Main Results and Fast Learning Rates
In the classical statistical learning theory, to bound the excess generalization error, we often have to analyze the random term \(\left|\frac{1}{n}\sum_{i=1}^{n}f(Z_{i})-\mathbb{E}[f]\right|\) over some hypothesis space, where the independence of \(\{f(Z_{i})-\mathbb{E}[f(Z)]\}\) plays a vital role in the classical empirical process. However, for pairwise learning, the terms \(\{l(f(X_{i},X_{j}),Y_{i},Y_{j})\}\) in the double-index summation (2.2) are indeed dependent. Thus, standard tools for pointwise learning like classification and regression cannot be applied directly. In order to overcome this dependency difficulty, we employ the Hoeffding decomposition for U-statistics, which was first proposed by Hoeffding [13]. Hoeffding decomposition breaks the centered U-statistic \(U_{n}-\mathbb{E}U_{n}\) into the summation of an i.i.d. term and a degenerate U-statistic term. Let \(U_{n}=\frac{1}{n(n-1)}\sum_{i\neq j}q(Z_{i},Z_{j})\), where \(q(\cdot,\cdot):\mathcal{Z}\times\mathcal{Z}\rightarrow\mathbb{R}\) is a symmetric kernel and \(Z_{1},...,Z_{n}\) are drawn independently. Then we have the following decomposition
\[U_{n}-\mathbb{E}U_{n}=2T_{n}+W_{n},\]
where
\[T_{n}=\frac{1}{n}\sum_{i=1}^{n}h(Z_{i}),\]
\[h(Z_{i})=\mathbb{E}[q(Z_{i},Z)|Z_{i}]-\mathbb{E}U_{n},\]
\[W_{n}=\frac{1}{n(n-1)}\sum_{i\neq j}\hat{h}(Z_{i},Z_{j}),\]
\[\hat{h}(Z_{i},Z_{j})=q(Z_{i},Z_{j})-h(Z_{i})-h(Z_{j})-\mathbb{E}U_{n}.\]
Notice that \(W_{n}\) is a degenerate U-statistic, i.e.
\[\mathbb{E}[\hat{h}(Z_{i},Z_{j})|Z_{i}]=0.\]
Therefore, this decomposition allows us to bound the double-index summation \(|\mathcal{E}(f)-\mathcal{E}_{z}(f)|\) by two other terms, where the first, i.i.d. term \(T_{n}\) can be bounded by standard techniques and the second, degenerate term \(W_{n}\) can be bounded by the decoupling methods for U-statistics [11, 9].
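To make the decomposition concrete, the following sketch (our own illustration) verifies \(U_{n}-\mathbb{E}U_{n}=2T_{n}+W_{n}\) for a symmetric kernel \(q\) and a distribution supported on a small finite set, so that \(\mathbb{E}U_{n}\) and the conditional expectations defining \(h\) can be computed by direct summation.

```python
import numpy as np

rng = np.random.default_rng(3)
support = np.array([-1.0, 0.0, 2.0])             # finite support, so expectations are exact sums
probs = np.array([0.2, 0.5, 0.3])
q = lambda z, zp: (z - zp) ** 2                  # a symmetric kernel q(z, z') = q(z', z)

EU = sum(p1 * p2 * q(z1, z2) for z1, p1 in zip(support, probs)
                             for z2, p2 in zip(support, probs))
h = lambda z: sum(p * q(z, zp) for zp, p in zip(support, probs)) - EU
h_hat = lambda zi, zj: q(zi, zj) - h(zi) - h(zj) - EU

Z = rng.choice(support, size=15, p=probs)
n = len(Z)
U_n = sum(q(Z[i], Z[j]) for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
T_n = np.mean([h(z) for z in Z])
W_n = sum(h_hat(Z[i], Z[j]) for i in range(n) for j in range(n) if i != j) / (n * (n - 1))

assert np.isclose(U_n - EU, 2 * T_n + W_n)       # Hoeffding decomposition
```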
### Generalization analysis and error decomposition
For \(f\in\mathcal{H}\), we define the shifted loss as \(q_{f}(z,z^{\prime}):=l(f(x,x^{\prime}),y,y^{\prime})-l(f_{\rho}(x,x^{\prime}), y,y^{\prime})\). Since the loss \(l\) is symmetric and the predictor \(f\in\mathcal{H}\) is anti-symmetric, \(q_{f}\) is a symmetric kernel. By applying the Hoeffding decomposition to \(\mathcal{E}_{z}(f)-\mathcal{E}(f)\), we denote by \(U_{n}^{f},h_{f},T_{n}^{f},\hat{h}_{f},W_{n}^{f}\) the corresponding decomposition terms associated with \(q_{f}\). That is, \(U_{n}^{f}=\frac{1}{n(n-1)}\sum_{i\neq j}q_{f}(Z_{i},Z_{j}),\ h_{f}(Z_{i})= \mathbb{E}[q_{f}(Z_{i},Z)|Z_{i}]-\mathbb{E}U_{n}^{f}\) and so on. We can then decompose the excess generalization error as
\[\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho}) =\mathcal{E}(\hat{f}_{z})-\mathcal{E}_{z}(\hat{f}_{z})+\mathcal{E}_{z}(\hat{f}_{z})-\mathcal{E}_{z}(f_{\mathcal{H}})+\mathcal{E}_{z}(f_{\mathcal{H}})-\mathcal{E}(f_{\mathcal{H}})+\mathcal{E}(f_{\mathcal{H}})-\mathcal{E}(f_{\rho})\] \[\leq\{\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})+\mathcal{E}_{z}(f_{\rho})-\mathcal{E}_{z}(\hat{f}_{z})\}+\{\mathcal{E}_{z}(f_{\mathcal{H}})-\mathcal{E}_{z}(f_{\rho})+\mathcal{E}(f_{\rho})-\mathcal{E}(f_{\mathcal{H}})\}+\mathcal{E}(f_{\mathcal{H}})-\mathcal{E}(f_{\rho})\] \[=\{\mathbb{E}U_{n}^{\hat{f}_{z}}-U_{n}^{\hat{f}_{z}}\}+\{U_{n}^{f_{\mathcal{H}}}-\mathbb{E}U_{n}^{f_{\mathcal{H}}}\}+\mathcal{E}(f_{\mathcal{H}})-\mathcal{E}(f_{\rho})\] \[=-2T_{n}^{\hat{f}_{z}}-W_{n}^{\hat{f}_{z}}+2T_{n}^{f_{\mathcal{H}}}+W_{n}^{f_{\mathcal{H}}}+\mathcal{E}(f_{\mathcal{H}})-\mathcal{E}(f_{\rho})\] \[=\{-2T_{n}^{\hat{f}_{z}}+2T_{n}^{f_{\mathcal{H}}}\}+\{-W_{n}^{\hat{f}_{z}}+W_{n}^{f_{\mathcal{H}}}\}+\mathcal{E}(f_{\mathcal{H}})-\mathcal{E}(f_{\rho})\] \[=:S_{1}(\mathcal{H})+S_{2}(\mathcal{H})+\mathcal{D}(\mathcal{H}),\]
where \(S_{1}(\mathcal{H})+S_{2}(\mathcal{H})\) is the estimation error and \(\mathcal{D}(\mathcal{H})\) is the approximation error and the first-to-second step uses the property of ERM algorithm that \(\mathcal{E}_{z}(\hat{f}_{z})-\mathcal{E}_{z}(f_{\mathcal{H}})\leq 0\).
All the above error terms involve the hypothesis space \(\mathcal{H}\). One can see larger capacity of \(\mathcal{H}\) results in smaller approximation error but larger estimation error because \(\hat{f}_{z}\) is taken from a richer class, which is more sensitive to the noise and hence leads to larger variance. This is the well-known bias-variance dilemma [10, chap. 1]. To get an optimal learning rate, we have to balance the bias and the variance. In this paper, the capacity of \(\mathcal{H}\) is estimated by the pseudo-dimension and the empirical covering number.
### Sharp bounds of estimation error
Before giving our main results, we define some terminologies to characterize the capacity of the hypothesis space [12, 3].
**Definition 1**.: _Let \(\mathcal{F}\) be a class of functions \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and \(\mathcal{F}^{+}:=\{(x,t):f(x)>t,f\in\mathcal{F}\}\) be its subgraph set, then the pseudo-dimension \(Pdim(\mathcal{F})\) of \(\mathcal{F}\) is defined as_
\[Pdim(\mathcal{F}):=VC(\mathcal{F}^{+}),\]
_where \(VC(\mathcal{F}^{+})\) is the VC-dimension of \(\mathcal{F}^{+}\). Furthermore, If \(Pdim(\mathcal{F})<\infty\), then we call \(\mathcal{F}\) a VC-class._
**Definition 2**.: _Let \((T,d)\) be a metric space. Consider a subset \(K\subset T\) and let \(\epsilon>0\). A subset \(\mathcal{N}\subset K\) is called an \(\epsilon\)-net of \(K\) if every point in \(K\) is within a distance \(\epsilon\) of some point of \(\mathcal{N}\), i.e._
\[\forall x\in K,\ \exists x_{0}\in\mathcal{N}:d(x,x_{0})\leq\epsilon.\]
_The smallest possible cardinality of an \(\epsilon\)-net of \(K\) is called the covering number of \(K\) and is denoted by \(\mathcal{N}(K,d,\epsilon)\)._
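As a hypothetical illustration of Definition 2 (ours; the greedy construction below is one standard way to produce an \(\epsilon\)-net, and hence an upper bound on the covering number, for a finite point set under the Euclidean metric).

```python
import numpy as np

def greedy_eps_net(points, eps):
    # greedily keep a point only if it is farther than eps from every point kept so far;
    # the kept points then form an eps-net of the input set
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) > eps for q in net):
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(5)
K = rng.uniform(size=(500, 2))
net = greedy_eps_net(K, eps=0.1)
# every point of K is within eps of some net point
assert all(min(np.linalg.norm(p - q) for q in net) <= 0.1 for p in K)
print("covering number upper bound:", len(net))
```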
The following lemma reveals a relation between the covering number and pseudo-dimension [22, Theorem 2.6.7] and will be used frequently later.
**Lemma 1**.: _For a VC-class \(\mathcal{F}\) of functions with uniform bound \(F\) and \(r\geq 1\), one has for any probability measure \(Q\),_
\[\mathcal{N}(\mathcal{F},\epsilon F,L_{Q}^{r})\leq C\,Pdim(\mathcal{F})(16e)^{ Pdim(\mathcal{F})}\left(\frac{1}{\epsilon}\right)^{r(Pdim(\mathcal{F})-1)}\]
_for an absolute constant \(C>0\) and \(0<\epsilon<1\), where \(L_{Q}^{r}\) denotes the \(L^{r}\) norm with respect to the measure \(Q\)._
As [9] proved, without any information of the second-order moment, the learning rate of the estimation error is \(O\left(\sqrt{\frac{V}{n}}\right)\). However, this bound is often quite loose, since one only uses the first-order property (uniform boundedness) of the hypothesis function, while the variance plays no role in the analysis. Nevertheless, we will show that the rate can be improved significantly if the following property about the variance (second-order moment) is satisfied.
**Definition 3** (Variance-expectation bound).: _We say that \(\mathcal{F}\) has a variance-expectation bound with parameter pair \((\beta,M)\), if for any \(f\in\mathcal{F}\),_
\[\mathbb{E}[f^{2}]\leq M(\mathbb{E}[f])^{\beta}.\]
Since \(\mathbb{E}[f^{2}]\geq 0\), this property requires \(\mathbb{E}[f]\geq 0\), and it is obvious that the shifted class \(\{l(f(x,x^{\prime}),y,y^{\prime})-l(f_{\rho}(x,x^{\prime}),y,y^{\prime}):f\in \mathcal{H}\}\) satisfies this requirement. By convention, we often assume \(\beta\in(0,1]\), and one can see this property always holds with \(\beta=0\) if \(\mathcal{F}\) is uniformly bounded. However, this trivial bound provides no further information, since the variance is simply bounded by the uniform bound and only the first-order condition is actually used. There are numerous loss functions for which this condition can be verified: for the misranking loss, [9] showed that under some low-noise conditions the existence of a variance-expectation bound is guaranteed, and [4] showed that it also holds if the modulus of convexity of the loss satisfies \(\delta(t)\geq ct^{r}\).
The following Theorem is our main result on fast learning rates of convergence of the estimation error in terms of the pseudo-dimension. We prove this theorem in section 5.
**Theorem 1**.: _Under Assumptions \(1-3\), let \(V=Pdim(\mathcal{H})\) be the pseudo-dimension of the hypothesis space \(\mathcal{H}\) defined by (2.4) and suppose that the shifted class \(\{l(f(x,x^{\prime}),y,y^{\prime})-l(f_{\rho}(x,x^{\prime}),y,y^{\prime}):f\in \mathcal{H}\}\) has a variance-expectation bound with parameter pair \((\beta,M)\). Then for any \(\delta\in(0,1/2)\), with probability at least \(1-\delta\), we have_
\[\mathcal{E}(\hat{f_{z}})-\mathcal{E}(f_{\rho})\leq C_{\eta,L,M,\beta}\left( \frac{V\log(n)\log^{2}(4/\delta)}{n}\right)^{\frac{1}{2-\beta}}+2(1+\beta) \mathcal{D}(\mathcal{H}),\]
_where \(\mathcal{D}(\mathcal{H})=\inf_{f\in\mathcal{H}}\int_{Z\times Z}l(f(x,x^{ \prime}),y,y^{\prime})-l(f_{\rho}(x,x^{\prime}),y,y^{\prime})d\rho(z)d\rho(z^{ \prime})\) is the approximation error._
**Remark**.: _The anti-symmetric structure of the hypothesis space \(\mathcal{H}\) and the true predictor and the symmetry of the loss (Assumption 1) enable us to construct a symmetric kernel of some U-statistics, where this symmetry plays a vital role in the proof._
_In fact, under Assumptions 2 and 3 alone, this theorem can still be applied to any other hypothesis space \(\tilde{\mathcal{H}}\) and loss \(l\) provided that for any \(f\in\tilde{\mathcal{H}}\cup\{f_{\rho}\}\), \(x,x^{\prime}\in\mathcal{X}\) and \(y,y^{\prime}\in\mathcal{Y}\) we have \(l(f(x,x^{\prime}),y,y^{\prime})=l(f(x^{\prime},x),y^{\prime},y)\). For example, consider metric and similarity learning with the loss \(\left(1+\tau(y,y^{\prime})(f(x,x^{\prime})-b)\right)_{+}\), where \(\tau(y,y^{\prime})=1\) if \(y=y^{\prime}\), \(\tau(y,y^{\prime})=-1\) otherwise, and \(b>0\) is the bias term. In contrast to the anti-symmetric structure, we can construct \(\tilde{\mathcal{H}}\) from symmetric predictors, and the property is then immediately verified._
**Remark**.: _If \(\mathcal{H}\) is a VC-class which contains the true predictor \(f_{\rho}\), then the approximation term \(\mathcal{D}(\mathcal{H})\) vanishes and the learning rate of the excess generalization bounds is of order \(O((\log(n)n^{-1})^{1/(2-\beta)})\)._
### Generalization bounds for pairwise learning with deep ReLU networks
In this subsection, we will derive an explicit generalization bound for the pairwise least squares loss, i.e. \(l(f(x,x^{\prime}),y,y^{\prime})=(f(x,x^{\prime})-(y-y^{\prime}))^{2}\).
[3] derived a nearly-tight pseudo-dimension bound for networks with piecewise-polynomial activation functions. Since the ReLU activation function is piecewise-polynomial with degree \(1\) and one breakpoint, by using this fact we can derive a pseudo-dimension bound for \(\mathcal{H}\) directly. The following Lemma shows that the pseudo-dimension bound of the hypothesis space \(\mathcal{H}\) and the primitive network space \(\Sigma\) (2.3) are of the same order.
**Lemma 2**.: _If the primitive network space \(\Sigma\) (2.3) is with the number of layers \(J\), nonzero weights \(W\) and computation units \(U\), then_
\[Pdim(\mathcal{H})\leq CJW\log U.\]
Proof.: For \(\pi_{\eta}(f)\in\mathcal{H}\), where \(f(x,x^{\prime})=\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x)\) with \(\tilde{f}\in\Sigma\). Let \(T_{1}\) be the connection matrix of the first layer of \(\tilde{f}\) and \(A_{0}:=\begin{bmatrix}0&I_{d}\\ I_{d}&0\end{bmatrix}\in\mathbb{R}^{2d\times 2d}\) be the rotation matrix, i.e. \(A_{0}(x,x^{\prime})^{T}=(x^{\prime},x)^{T}\). By substituting \(T_{1}\) with \(T_{1}A_{0}\) we can get a new network \(f^{*}(x,x^{\prime})\) with the same architecture and \(f^{*}(x,x^{\prime})=\tilde{f}(x^{\prime},x)\). Furthermore, we can show that the difference operator \(\mathbb{R}^{2}\ni(x,y)\mapsto x-y\) and the truncation operator \(\pi_{\eta}\) can be expressed by ReLU networks with fixed parameters. Define
\[h_{1}(x,y)=\sigma(x-y)-\sigma(y-x),\] \[h_{2}(x)=\sigma(x+\eta)-\sigma(x-\eta)-\eta.\]
Then it is easy to know, \(h_{1}(x,y)=x-y\) and \(h_{2}=\pi_{\eta}\). Hence, we have
\[\pi_{\eta}(f)(x,x^{\prime})=h_{2}\circ h_{1}\left(\tilde{f}(x,x^{\prime}),f^{* }(x,x^{\prime})\right).\]
Since the additional layers \(h_{2},h_{1}\) have fixed parameters and \(\tilde{f}\) and \(f^{*}\) are of the same architecture, we know \(\pi_{\eta}(f)\) has the same order of number of layers, nonzero weights and computation units as that of \(\tilde{f}\). By Theorem 7 in [3], we know \(Pdim(\Sigma)\leq CJW\log U\) and then we get the desired result.
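A tiny numerical check (ours) of the two fixed-weight ReLU gadgets used in this proof: \(h_{1}(x,y)=\sigma(x-y)-\sigma(y-x)=x-y\) and \(h_{2}(x)=\sigma(x+\eta)-\sigma(x-\eta)-\eta=\pi_{\eta}(x)\).

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)
eta = 1.5

h1 = lambda x, y: relu(x - y) - relu(y - x)            # difference operator
h2 = lambda x: relu(x + eta) - relu(x - eta) - eta     # truncation pi_eta

xs = np.linspace(-4, 4, 101)
assert np.allclose(h1(xs, 2.0), xs - 2.0)              # h1(x, y) = x - y
assert np.allclose(h2(xs), np.clip(xs, -eta, eta))     # h2 = pi_eta
```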
In general, the pseudo-dimension of a truncated class \(\mathcal{F}_{\eta}:=\{\pi_{\eta}(f):f\in\mathcal{F}\}\) is no greater than that of the original class \(\mathcal{F}\). (For more details, see [12, page 173].)
Now, we turn to estimating the approximation error. For the least squares loss, we have the nice property \(\mathcal{E}(f)-\mathcal{E}(f_{\rho})=\|f-f_{\rho}\|_{L^{2}(\mathcal{X}\times\mathcal{X})}^{2}\), then
\[\mathcal{D}(\mathcal{H}) =\inf_{\pi_{\eta}(f)\in\mathcal{H}}\int_{\mathcal{X}\times\mathcal{X}}\left|\pi_{\eta}(f)(x,x^{\prime})-f_{\rho}(x,x^{\prime})\right|^{2}d\rho_{X}(x)d\rho_{X}(x^{\prime})\] \[\leq\inf_{\tilde{f}\in\Sigma}\int_{\mathcal{X}\times\mathcal{X}}\left|\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x)-\left(\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\right)\right|^{2}d\rho_{X}(x)d\rho_{X}(x^{\prime})\] \[\leq 2\inf_{\tilde{f}\in\Sigma}\left\{\int_{\mathcal{X}\times\mathcal{X}}\left|\tilde{f}(x,x^{\prime})-\frac{1}{2}\left(\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\right)\right|^{2}+\left|\tilde{f}(x^{\prime},x)-\frac{1}{2}\left(\tilde{f}_{\rho}(x^{\prime})-\tilde{f}_{\rho}(x)\right)\right|^{2}d\rho_{X}(x)d\rho_{X}(x^{\prime})\right\}\] \[=4\inf_{\tilde{f}\in\Sigma}\int_{\mathcal{X}\times\mathcal{X}}\left|\tilde{f}(x,x^{\prime})-\frac{1}{2}\left(\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\right)\right|^{2}d\rho_{X}(x)d\rho_{X}(x^{\prime}), \tag{$*$}\]
where the first-to-second step uses the inequality \(|\pi_{\eta}(f)(x,x^{\prime})-f_{\rho}(x,x^{\prime})|\leq|f(x,x^{\prime})-f_{ \rho}(x,x^{\prime})|\) and the fact \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\), and the second-to-third step uses the trivial bound \((a+b)^{2}\leq 2(a^{2}+b^{2})\) inside the integral. In order to get a good approximation, it is natural to endow the regression function \(\tilde{f}_{\rho}\) with some regularities. Thus, we assume that \(\tilde{f}_{\rho}\in\mathcal{C}^{r-1,1}([0,1]^{d})\) with \(\|\tilde{f}_{\rho}\|_{\mathcal{C}^{r-1,1}([0,1]^{d})}\leq B\), where the Holder space \(\mathcal{C}^{r-1,1}([0,1]^{d})\) consists of functions with continuous partial derivatives up to order \(r-1\) such that all their partial derivatives of order \(r-1\) are Lipschitz continuous. The Holder norm is defined to be
\[\|f\|_{\mathcal{C}^{r-1,1}([0,1]^{d})}:=\max_{\alpha\in\mathbb{N}^{d},|\alpha |_{1}\leq r}\|D^{\alpha}f\|_{L^{\infty}([0,1]^{d})}.\]
Noting that the \(L^{2}\) norm is dominated by the \(L^{\infty}\) norm, the prominent work [23] derived a tight error estimate for approximating Holder functions by deep ReLU networks in the \(L^{\infty}\) norm. It was proved that, in order to achieve approximation accuracy \(\epsilon\), the number of nonzero weights and computation units is required to be of order \(O(\epsilon^{-\frac{D}{r}}\log(1/\epsilon))\), where \(D\) is the input dimension. In pairwise learning, the dimension of the input \((x,x^{\prime})\in\mathbb{R}^{2d}\) is double that of pointwise learning \(x\in\mathbb{R}^{d}\), so the input dimension \(D\) in the above rate appears to be \(2d\) at first glance. However, by the specific form of the true predictor \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\), we can show that \(2d\) can be reduced to \(d\) by constructing parallel networks that approximate \(\tilde{f}_{\rho}(x)\) and \(\tilde{f}_{\rho}(x^{\prime})\) respectively, which means the intrinsic dimension of the input is \(d\). The convergence order of the approximation error can then be improved significantly.
**Theorem 2**.: _Suppose that the regression function \(\tilde{f}_{\rho}\in\mathcal{C}^{r-1,1}([0,1]^{d})\) with \(\|\tilde{f}_{\rho}\|_{\mathcal{C}^{r-1,1}([0,1]^{d})}\leq B\) and \(r\in\mathbb{N}^{+}\), then for any \(\epsilon>0\), there exists a neural network \(f(x,x^{\prime})\) with the number of layers at most \(C_{d,r,B}\log(1/\epsilon)\) and at most \(C_{d,r,B}\epsilon^{-\frac{d}{r}}\log(1/\epsilon)\) nonzero weights and computation units, such that_
\[\|f-f_{\rho}\|_{L^{\infty}(\mathcal{X}\times\mathcal{X})}=\sup_{x,x^{\prime}\in \mathcal{X}}|f(x,x^{\prime})-f_{\rho}(x,x^{\prime})|\leq\epsilon. \tag{3.1}\]
_Furthermore, if \(\Sigma\) is the primitive network space with the number of layers \(C_{d,r,B}\log(1/\epsilon)\), nonzero weights and computation units \(C_{d,r,B}\epsilon^{-\frac{d}{r}}\log(1/\epsilon)\), then_
\[\mathcal{D}(\mathcal{H})\leq\epsilon^{2}.\]
Proof.: By Theorem 1 in [23], we know there exists a neural network \(g\) with the number of layers at most \(C_{d,r,B}\log(1/\epsilon)\) and at most \(C_{d,r,B}\epsilon^{-\frac{d}{r}}\log(1/\epsilon)\) nonzero weights and computation units such that
\[\left\|g-\frac{\tilde{f}_{\rho}}{2}\right\|_{L^{\infty}(\mathcal{X})}\leq\frac{ \epsilon}{4}.\]
Let \(h(x,y)=\sigma(x-y)-\sigma(y-x)=x-y\) be the difference operator and \(\tilde{f}(x,x^{\prime})\) be a network defined as \(\tilde{f}(x,x^{\prime})=h(g(x),g(x^{\prime}))\), then we can see
\[\tilde{f}(x,x^{\prime}) =g(x)-g(x^{\prime}),\] \[f(x,x^{\prime}) =\tilde{f}(x,x^{\prime})-\tilde{f}(x^{\prime},x)=2(g(x)-g(x^{ \prime})).\]
Therefore, we have
\[\left\|f-f_{\rho}\right\|_{L^{\infty}(X\times X)}\leq 2\sup_{x,x^{\prime}\in \mathcal{X}}\left|\tilde{f}(x,x^{\prime})-\frac{1}{2}(\tilde{f}_{\rho}(x)- \tilde{f}_{\rho}(x^{\prime}))\right|\leq 4\left\|g-\frac{\tilde{f}_{\rho}}{2} \right\|_{L^{\infty}(\mathcal{X})}\leq\epsilon.\]
Since \(f(x,x^{\prime})\) is constructed by connecting the outputs of the two parallel networks \(g(x),g(x^{\prime})\), \(f\) has the same order of number of layers, nonzero weights and computation units as \(g\), which proves the first part. When \(\Sigma\) has the same capacity, the second part follows by noting that the \(L^{2}\) norm is controlled by the \(L^{\infty}\) norm.
Suppose \(\Sigma\) has \(J=\left\lceil\log W\right\rceil\) layers, where \(W\) is its number of nonzero weights. Then, applying Theorem 2, we have \(\mathcal{D}(\mathcal{H})\leq C_{d,r,B}\left(\frac{\log W}{W}\right)^{\frac{2r}{d}}\) and the pseudo-dimension \(V=Pdim(\mathcal{H})\leq CW\log^{2}W\). By plugging these into the approximation error and the estimation error respectively (Theorem 1), the excess generalization error is expressed in terms of \(W\); hence, by choosing a proper asymptotic rate for \(W\), we obtain an optimal convergence rate.
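As a rough illustration of this trade-off (our own simplification, dropping all constants and logarithmic factors from the two bounds), one can tabulate the estimation term, of order \(W/n\), and the approximation term, of order \(W^{-2r/d}\), over a grid of \(W\) and compare the grid minimizer with the choice \(W=n^{d/(2r+d)}\) used in Theorem 3.

```python
import numpy as np

n, d, r = 10_000, 4, 2
Ws = np.arange(2, 2000)
est = Ws / n                        # estimation term ~ W/n (constants and logs dropped)
app = Ws ** (-2.0 * r / d)          # approximation term ~ W^(-2r/d)
W_grid = Ws[np.argmin(est + app)]
W_theory = n ** (d / (2 * r + d))   # the choice W = n^(d/(2r+d)) from Theorem 3
print(W_grid, round(W_theory))      # both equal 100 for these values
```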
Now that the loss \(l\) and the hypothesis space \(\mathcal{H}\) are given specifically, we naturally want to check whether Assumptions 1\(-\)3 are satisfied. We will show that the following simple assumption on the distribution \(\rho\) is sufficient.
**Assumption 4**.: _There exists a constant \(B>0\) such that_
\[Prob\{|Y|\leq B\}=1.\]
**Lemma 3**.: _If \(l\) is the least squares loss and the hypothesis space \(\mathcal{H}\) is constructed as in (2.4). Then Assumption 4 implies Assumptions 1\(-\)3 and the shifted class \(\{l(f(x,x^{\prime}),y,y^{\prime})-l(f_{\rho}(x,x^{\prime}),y,y^{\prime}):f\in \mathcal{H}\}\) has a variance-expectation bound with parameter pair \((\beta,M)\). Furthermore, their corresponding constants \(\eta,L,M\) only depend on \(B\) and \(\beta=1\), therefore, \(C_{\eta,L,M,\beta}=C_{B}\)._
Proof.: Since \(P(|Y|\leq B)=1\), we know \(|\tilde{f}_{\rho}(x)|\leq\int_{\mathcal{Y}}|y|d\rho(y|x)\leq B\) almost surely and hence \(|f_{\rho}(x,x^{\prime})|=|\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})| \leq 2B=:\eta\). Recall that \(f\in\mathcal{H}\) is obtained by truncation of \(\pi_{\eta}\), then \(\|f\|_{\infty}\leq\eta\), and hence Assumption 2 is satisfied. Also, notice that \((f(X,X^{\prime})-Y+Y^{\prime})^{2}\leq 16B^{2}\) almost surely and the symmetry is obvious, then Assumption 1 is satisfied.
For \(t_{1},t_{2}\in[-\eta,\eta]\),
\[|l(t_{1},y,y^{\prime})-l(t_{2},y,y^{\prime})|= \left|\left(t_{1}-(y-y^{\prime})\right)^{2}-(t_{2}-(y-y^{\prime} ))^{2}\right|\] \[\leq|t_{1}-t_{2}|\cdot|t_{1}+t_{2}-2y+2y^{\prime}|\] \[\leq 8B\left|t_{1}-t_{2}\right|.\]
Hence, by setting \(L:=8B\), Assumption 3 is then satisfied. Let now \(q_{f}=(f(x,x^{\prime})-y+y^{\prime})^{2}-(f_{\rho}(x,x^{\prime})-y+y^{\prime} )^{2}\). Then
\[\mathbb{E}[q_{f}^{2}] =\mathbb{E}[\{\left(f(X,X^{\prime})-Y+Y^{\prime})^{2}-(f_{\rho}(X,X^{\prime})-Y+Y^{\prime})^{2}\right\}^{2}]\] \[=\mathbb{E}[(f(X,X^{\prime})-f_{\rho}(X,X^{\prime}))^{2}(f(X,X^{ \prime})+f_{\rho}(X,X^{\prime})-2Y+2Y^{\prime}))^{2}]\] \[\leq 64B^{2}\mathbb{E}[(f(X,X^{\prime})-f_{\rho}(X,X^{\prime}))^{2}]\] \[=64B^{2}\mathbb{E}[q_{f}].\]
Therefore, \(\{(f(x,x^{\prime})-y+y^{\prime})^{2}-(f_{\rho}(x,x^{\prime})-y+y^{\prime})^{2 }:f\in\mathcal{H}\}\) has a variance-expectation bound with parameter pair \((1,64B^{2})\), where \(\beta=1\) and \(M=64B^{2}\). Thus we have \(C_{B,L,M,\beta}=C_{B}\).
**Theorem 3**.: _Under Assumption 4, let \(l\) be the least squares loss and the hypothesis space \(\mathcal{H}\) be constructed as in (2.4). If \(\Sigma\) is the primitive network space (2.3) with the number of weights and computation units \(W\) and layers
\(\log(W)\) and the true predictor \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\) satisfies the smoothness property that \(\tilde{f}_{\rho}\in\mathcal{C}^{r-1,1}([0,1]^{d})\) and \(\|\tilde{f}_{\rho}\|_{\mathcal{C}^{r-1,1}}\leq B\). Then for any \(\delta\in(0,1/2)\), with probability at least \(1-\delta\), we have_
\[\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})\leq C_{d,r,B}\left\{\frac{W\log ^{2}(W)\log(n)\log^{2}(4/\delta)}{n}+\left(\frac{\log(W)}{W}\right)^{\frac{2r}{ d}}\right\}.\]
_By setting \(W=n^{\frac{d}{2r+d}}\), with probability at least \(1-\delta\), we have the learning rate_
\[\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})\leq C_{d,r,B}\log^{2}(4/\delta )\log^{\tau}(n)\,n^{-\frac{2r}{2r+d}}.\]
_where \(\tau=\max\{3,\frac{2r}{d}\}\)._
Proof.: By Lemma 2, \(V\leq CW\log^{2}(W)\). Now, by applying Lemma 3 and hence Theorem 1 and Theorem 2, we will get the first result. Setting \(W=n^{\frac{d}{2r+d}}\) and the second result is proved immediately.
## 4 Related Results and Comparison
We review related works according to the different algorithms employed in pairwise learning tasks.
The ERM algorithm is frequently employed in generalization analysis, for instance in ranking problems. [9] proved that the bound on the estimation error is of order \(O\left(\sqrt{\frac{V}{n}}\right)\), where \(V\) is the VC dimension of the hypothesis space consisting of binary-valued ranking rules \(r:\mathcal{X}\times\mathcal{X}\rightarrow\{-1,1\}\). They also showed that the rate can be improved to \(O((V\log(n)/n)^{1/(2-\beta)})\) if the shifted hypothesis class satisfies some variance conditions. In [18], for a hypothesis space consisting of convex ranking rules whose modulus of convexity satisfies \(\delta(t)\geq Ct^{p}\) on a closed interval, the author showed that the estimation error bound is of order \(O\left(\frac{1}{n}\right)\). For regularized metric and similarity learning, [7] derived a concentration bound on how the empirical risk of the empirical minimizer deviates from its expectation. The above works all applied tools from U-processes; the basic idea is to break the double-index dependent terms into pointwise i.i.d. terms and a degenerate U-statistic.
For other algorithms employed in pairwise learning tasks, such as online learning, instead of considering the empirical minimizer as the predictor, one takes the predictor \(f_{T}\) produced by some iterative online algorithm, where \(T\) is the number of iterations. Representative works like [25, 16] derived excess generalization bounds in a reproducing kernel Hilbert space under some assumptions on the approximation error.
We now compare our main results with the existing literature on pairwise learning with neural networks. This paper presents a sharp bound of the estimation error (Theorem 1) for general symmetric loss functions, which is much better than the classical rate \(O\left(\sqrt{\frac{V}{n}}\right)\). Our idea is to apply the Hoeffding decomposition to the excess risk in order to overcome the dependency difficulty in the double-index summation. The variance-expectation bound plays a key role in obtaining fast learning rates in our analysis via Lemma 4. For the approximation error, we consider the least squares loss for specificity. Because of the specific form of the true predictor \(f_{\rho}(x,x^{\prime})=\tilde{f}_{\rho}(x)-\tilde{f}_{\rho}(x^{\prime})\), we construct parallel networks which are able to approximate Holder smooth functions well. Thus, by choosing a proper order for the number of layers, weights and computation units to balance the estimation and approximation errors, we obtain a nearly optimal learning rate.
## 5 Proof of Main Results
In this section, we prove Theorem 1, which gives the upper bound of the estimation error \(S_{1}(\mathcal{H})+S_{2}(\mathcal{H})\). To this end, we estimate the i.i.d. term \(S_{1}(\mathcal{H})\) and the degenerate U-statistic term \(S_{2}(\mathcal{H})\) in the following two subsections.
### Bounding the i.i.d. term \(S_{1}(\mathcal{H})\)
From the error decomposition in subsection 3.1, the first term \(S_{1}(\mathcal{H})=-2T_{n}^{\hat{f}_{z}}+2T_{n}^{f_{\mathcal{H}}}\) contains two parts: the first part involves the empirical minimizer \(\hat{f}_{z}\) and will be estimated by local complexities and the fixed point [6], while \(f_{\mathcal{H}}\) in the second part is independent of the sample and thus can be bounded directly by the Bernstein concentration inequality. Let \(\mathcal{G}:=\{\mathbb{E}[l(f(x,X),y,Y)-l(f_{\rho}(x,X),y,Y)|x,y]:f\in\mathcal{H}\}\) be the class of conditional expectations of the shifted losses, and define now the star-shaped class around \(0\).
**Definition 4**.: \(\mathcal{F}\) _is called a star-shaped class around \(0\) if for any \(f\in\mathcal{F}\) and \(\alpha\in[0,1]\), \(\alpha f\in\mathcal{F}\)._
Intuitively, a star-shaped class around \(0\) contains all the line segments between \(0\) and any point in \(\mathcal{F}\). Denote by \(\mathcal{G}^{*}=\{\alpha g:\alpha\in[0,1],g\in\mathcal{G}\}\) the star-hull of \(\mathcal{G}\) around \(0\); one can see that \(\mathcal{G}\subset\mathcal{G}^{*}\) and that \(\mathcal{G}^{*}\) is a star-shaped class around \(0\). By standard techniques in learning theory, it is easy to show that \(S_{1}(\mathcal{H})\leq 2\sup_{g\in\mathcal{G}^{*}}|\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})-\mathbb{E}[g]|\), i.e. the estimation error \(S_{1}(\mathcal{H})\) can be bounded by the supremum of the empirical process over \(\mathcal{G}^{*}\) (actually, over \(\mathcal{G}\)). This supremum is closely related to the capacity (size) of \(\mathcal{G}^{*}\). However, this approach takes the whole class into account and only gives global estimates; it does not reflect how a learning algorithm explores the hypothesis space and interacts with examples. Moreover, such an estimate controls the deviation of generalization errors from empirical errors simultaneously over the whole class, which might be much larger than the deviation \(\mathcal{E}(\hat{f}_{z})-\mathcal{E}_{z}(\hat{f}_{z})\) obtained by the ERM algorithm. Therefore, bounds based on this approach are often quite loose. Furthermore, if all the estimators in the whole class \(\mathcal{G}^{*}\) satisfy some mild conditions on the variance, then we can consider the empirical process over a small subset of the whole class. Hence we take into consideration the subset \(\mathcal{G}^{*}_{r}\) consisting of functions with a bounded second moment, where \(\mathcal{G}^{*}_{r}:=\{g\in\mathcal{G}^{*}:\mathbb{E}[g^{2}]\leq r\}\). Define now a function of the expectation of a local complexity indexed by \(r\) as
\[\phi(r)=\mathbb{E}\left[\sup_{g\in\mathcal{G}^{*}_{r}}\left|\frac{1}{n}\sum_{ i=1}^{n}g(Z_{i})-\mathbb{E}[g]\right|\right]. \tag{5.1}\]
We now introduce a class of functions [2] that is closely related to this local complexities provided the index class is star-shaped.
**Definition 5**.: _A function \(\psi:[0,\infty)\to[0,\infty)\) is sub-root if it is non-negative, nondecreasing and if \(r\mapsto\psi(r)/\sqrt{r}\) is nonincreasing for \(r>0\)._
Intuitively, sub-root functions increase slowly, and it was shown by [2] that they are continuous and have a unique positive fixed point \(r^{*}\), i.e. \(\psi(r^{*})=r^{*}\). It is trivial that \(\phi(r)\) in (5.1) is nondecreasing on \((0,\infty)\). Since \(\mathcal{G}^{*}\) is star-shaped, similarly to the proof of Lemma 3.4 in [2], we know that \(\phi\) is also a sub-root function and hence has a unique fixed point \(r^{*}\). The following lemma, which can be found in [6, Theorem 5.4], is the main tool for estimating \(S_{1}(\mathcal{H})\): based on the local complexity and its corresponding fixed point, it provides a sharp bound for Bernstein classes.
**Lemma 4**.: _Let \(\mathcal{F}\) be a star-shaped class around \(0\) and has a variance-expectation bound with parameter pair \((\beta,M)\). Assume \(\mathcal{F}\) is uniformly bounded by \(b>0\). Let \(r^{*}\) be the unique fixed point of the sub-root function_
\[\phi(r):=\mathbb{E}\left[\sup_{f\in\mathcal{F}:\mathbb{E}[f^{2}]\leq r}\left| \frac{1}{n}\sum_{i=1}^{n}f(Z_{i})-\mathbb{E}[f]\right|\right].\]
_Then, for any \(\delta\in(0,1)\) and \(K>1\), with probability \(1-\delta\) we have_
\[\forall f\in\mathcal{F},\ \mathbb{E}[f]\leq\frac{K}{K-1}\frac{1}{n}\sum_{i=1}^{n}f (Z_{i})+C_{K,M,\beta}\left((r^{*})^{\frac{1}{2-\beta}}+\left(\frac{b\log(1/ \delta)}{n}\right)^{\frac{1}{2-\beta}}\right).\]
In order to get an explicit convergence rate for the estimation error bound, we have to study the fixed point \(r^{*}\).
**Theorem 4**.: _If \(r^{*}\) is the fixed point of the local complexity (5.1), then_
\[r^{*}\leq C_{L,\eta}\frac{V\log(n)}{n},\]
_where \(V=Pdim(\mathcal{H})\)._
With this explicit upper bound of the fixed point, the following proposition shows that by using local complexity estimates (Lemma 4), an upper bound for \(S_{1}(\mathcal{H})\) has the order \(O\left(\left(\frac{V\log(n)}{n}\right)^{\frac{1}{2-\beta}}\right)\), which is much better than the bound \(O\left(\sqrt{\frac{V\log(n)}{n}}\right)\) via global complexity. The proof is given in the appendix.
**Proposition 2**.: _Under the conditions in Theorem 1, with probability at least \(1-\delta/2\), we have_
\[S_{1}(\mathcal{H})\leq C_{\eta,L,M,\beta}\left(\left(\frac{V\log(n)}{n} \right)^{\frac{1}{2-\beta}}+\left(\frac{\log^{2}(4/\delta)}{n}\right)^{\frac{1 }{2-\beta}}\right)+\frac{\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})}{2}+ \beta\mathcal{D}(\mathcal{H}).\]
The proof of the above proposition is also given in the appendix.
### Estimating the degenerate term \(S_{2}(\mathcal{H})\)
For the degenerate term, we will apply the consequence of the moment inequality [9] of U-statistics and the decoupling methods of U-processes [24].
**Lemma 5**.: _Define the supremum of a degenerate U-statistic over the hypothesis space \(\mathcal{H}\) as_
\[Z=\sup_{f\in\mathcal{H}}\left|\sum_{i\neq j}\hat{h}_{f}(Z_{i},Z_{j})\right|.\]
_Then there exists an absolute constant \(C>0\) such that for all \(n\) and \(t>0\),_
\[P\{Z>C\mathbb{E}[Z_{\epsilon}]+t\}\leq\exp\left(-\frac{1}{C}\min\left(\left( \frac{t}{\mathbb{E}[U_{\epsilon}]}\right)^{2},\frac{t}{\mathbb{E}[M]+Fn}, \left(\frac{t}{F\sqrt{n}}\right)^{2/3},\sqrt{\frac{t}{F}}\right)\right),\]
_where \(\epsilon_{1},...,\epsilon_{n}\) are i.i.d. Rademacher variables and_
\[Z_{\epsilon} =\sup_{f\in\mathcal{H}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_ {j}\hat{h}_{f}(Z_{i},Z_{j})\right|.\] \[U_{\epsilon} =\sup_{f\in\mathcal{H}}\sup_{\alpha:\|\alpha\|_{2}\leq 1}\sum_{i,j }\epsilon_{i}\alpha_{j}\hat{h}_{f}(Z_{i},Z_{j}).\] \[M =\sup_{f\in\mathcal{H},k=1,...,n}\left|\sum_{i=1}^{n}\epsilon_{i} \hat{h}_{f}(Z_{i},Z_{k})\right|.\] \[F =\sup_{f\in\mathcal{H}}\|\hat{h}_{f}\|_{\infty}.\]
It is easy to show
\[|S_{2}(\mathcal{H})|=|-W_{n}^{\hat{f}_{z}}+W_{n}^{f_{\mathcal{H}}}|\leq\frac{ 2}{n(n-1)}Z.\]
By setting
\[\exp\left(-\frac{1}{C}\min\left(\left(\frac{t}{\mathbb{E}[U_{\epsilon}]}\right)^{ 2},\frac{t}{\mathbb{E}[M]+Fn},\left(\frac{t}{F\sqrt{n}}\right)^{2/3},\sqrt{ \frac{t}{F}}\right)\right)=\delta/2,\]
with probability at least \(1-\delta/2\), we have
\[S_{2}(\mathcal{H})\leq\frac{C\log^{2}(2/\delta)}{n^{2}}\left(\mathbb{E}[Z_{ \epsilon}]+\mathbb{E}[U_{\epsilon}]+\mathbb{E}[M]+Fn\right).\]
Our aim now is to estimate the expectations of \(Z_{\epsilon}\), \(U_{\epsilon}\) and \(M\).
First we consider \(\mathbb{E}[Z_{\epsilon}]\), where \(Z_{\epsilon}\) is a Rademacher chaos of order \(2\) indexed by \(\mathcal{H}\). In general, the expectation of the supremum of a stochastic process \(\mathbb{E}\sup_{t\in T}X_{t}\) over a metric space \(T\) is estimated by chaining methods, and the process is often assumed to be sub-Gaussian with respect to the metric on \(T\)[20]. However, the Rademacher chaos \(\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\hat{h}_{f}(Z_{i},Z_{j})\) is not sub-Gaussian with parameters in terms of the empirical norm of \(\hat{h}_{f}\), so we cannot use the classical chaining methods directly. The following two lemmas extend the standard chaining method, with the sub-Gaussian condition replaced by a Rademacher chaos condition, and by using them we obtain a very similar result. The first one, established in [24], is a maximal inequality for Rademacher chaos, and the second one enables us to bound \(\mathbb{E}[Z_{\epsilon}]\) by an entropy integral.
Let \(\mathcal{W}:=\{\hat{h}_{f}:f\in\mathcal{H}\}\), and define by \(\xi(B):=\frac{1}{n(n-1)}\sum_{i\neq j}1_{\{(Z_{i},Z_{j})\in B\}}\) the coupled empirical measure on \(\mathcal{Z}\times\mathcal{Z}\), for any Borel measurable set \(B\subset\mathcal{Z}\times\mathcal{Z}\). The proofs of the following lemmas are all given in the appendix.
**Lemma 6**.: _Let \(\{\hat{h}_{1},...,\hat{h}_{N}\}\) be a sequence of functions from \(\mathcal{Z}\times\mathcal{Z}\rightarrow\mathbb{R}\), then we have_
\[\mathbb{E}_{\epsilon}\max_{k\in\{1,...,N\}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\hat{h}_{k}(Z_{i},Z_{j})\right|\leq 2e\sqrt{n(n-1)}\log(2N)\max_{k\in\{1,...,N\}}\|\hat{h}_{k}\|_{L^{2}_{\xi}}.\]
Let \(\mathcal{Q}:=\{\hat{h}(z,z^{\prime}):\mathcal{Z}\times\mathcal{Z}\rightarrow \mathbb{R}\}\) be a uniformly bounded function class. Then we have the following lemma.
**Lemma 7**.: _Let \(D:=\sup_{\hat{h}_{1},\hat{h}_{2}\in\mathcal{Q}}\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}\) be the diameter of the class \(\mathcal{Q}\), then_

\[\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\hat{h}(Z_{i},Z_{j})\right|\leq n\left(72e\int_{0}^{\frac{D}{2}}\log\mathcal{N}(\mathcal{Q},\|\cdot\|_{L^{2}_{\xi}},t)\,dt+\inf_{\hat{h}\in\mathcal{Q}}\|\hat{h}\|_{L^{2}_{\xi}}\right).\]
By applying Lemma 7 and noting that \(\sup_{\hat{h}_{1},\hat{h}_{2}\in\mathcal{W}}\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}\leq 2\sup_{\hat{h}\in\mathcal{W}}\|\hat{h}\|_{\infty}=2F\) (recall the definition of \(F\) in Lemma 5), we immediately get
\[\mathbb{E}_{\epsilon}[Z_{\epsilon}] \leq n\left(72e\int_{0}^{F}\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t)\,dt+F\right)\] \[\leq CnF\int_{0}^{F}\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t)\,dt.\]
Now for the estimates of \(\mathbb{E}[U_{\epsilon}]\), we first define the matrix with zero diagonal entries indexed by \(\hat{h}\), \(H_{\hat{h}}:=(\hat{h}(Z_{i},Z_{j}))_{i\neq j}^{n}\in\mathbb{R}^{n\times n}\), where \(\hat{h}\in\mathcal{W}\). By the definition of the operator norm \(\|\cdot\|_{2}\), \(\mathbb{E}[U_{\epsilon}]=\mathbb{E}\sup_{\hat{h}\in\mathcal{W},\|\alpha\|_{2}\leq 1}\epsilon^{T}H_{\hat{h}}\alpha=\mathbb{E}\sup_{\hat{h}\in\mathcal{W}}\|\epsilon^{T}H_{\hat{h}}\|_{2}\). Then, \(\mathbb{E}[U_{\epsilon}^{2}]=\mathbb{E}\sup_{\hat{h}\in\mathcal{W}}\epsilon^{T}H_{\hat{h}}^{2}\epsilon\). Let \(\|\cdot\|_{F}\) be the Frobenius norm, i.e. \(\|H\|_{F}=\sqrt{\sum_{i,j=1}^{n}h_{i,j}^{2}}\) for \(H=(h_{i,j})_{i,j=1}^{n}\in\mathbb{R}^{n\times n}\), and let \(A:=2\sup_{\hat{h}\in\mathcal{W}}\|H_{\hat{h}}\|_{F}\). Then for all \(\hat{h}_{1},\hat{h}_{2}\in\mathcal{W}\),
\[\frac{\left\|\left(H_{\hat{h}_{1}}^{2}-H_{\hat{h}_{2}}^{2}\right)/A\right\|_{F}}{n} \leq\frac{\left\|H_{\hat{h}_{1}}(H_{\hat{h}_{1}}-H_{\hat{h}_{2}})/A\right\|_{F}+\left\|(H_{\hat{h}_{1}}-H_{\hat{h}_{2}})H_{\hat{h}_{2}}/A\right\|_{F}}{n}\] \[\leq\frac{\left\|H_{\hat{h}_{1}}\right\|_{F}+\left\|H_{\hat{h}_{2}}\right\|_{F}}{A}\cdot\frac{\left\|H_{\hat{h}_{1}}-H_{\hat{h}_{2}}\right\|_{F}}{n}\] \[\leq\frac{\left\|H_{\hat{h}_{1}}-H_{\hat{h}_{2}}\right\|_{F}}{n}\leq\left\|\hat{h}_{1}-\hat{h}_{2}\right\|_{L^{2}_{\xi}}.\]
Then we have
\[\|H_{\hat{h}_{1}}^{2}-H_{\hat{h}_{2}}^{2}\|_{F}\leq nA\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}. \tag{5.2}\]
Thus, \(U_{\epsilon}^{2}\) can be decomposed into a Rademacher chaos indexed by \(\mathcal{W}\) plus a constant term (the sum of the diagonal entries of \(H_{\hat{h}}^{2}\)). The estimate of \(\mathbb{E}[U_{\epsilon}]\) then follows from the above lemma.
**Lemma 8**.: _We have_
\[\mathbb{E}_{\epsilon}[U_{\epsilon}]\leq CnF\left(1+\sqrt{\int_{0}^{\infty}\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t)\,dt}\right).\]
Now, for the last term \(\mathbb{E}[M]\), we define an empirical measure on \(\mathcal{Z}\times\mathcal{Z}\) by \(\xi_{k}(B)=\frac{1}{n-1}\sum_{i\neq k}^{n}1_{\{(Z_{i},Z_{k})\in B\}}\) for \(k=1,...,n\), where \(B\subset\mathcal{Z}\times\mathcal{Z}\) is Borel measurable.
**Lemma 9**.: _We have_
\[\mathbb{E}_{\epsilon}[M]\leq C\sqrt{n}\int_{0}^{\infty}\sqrt{\log\left(n\max_{k=1,...,n}\{\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)\}\right)}\,dt.\]
With all the above lemmas, the only thing left in order to get an explicit convergence rate for \(S_{2}(\mathcal{H})\) is to estimate the covering numbers \(\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t)\) and \(\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)\).
Before estimating the covering number, we first notice that for \(\hat{h}\in\mathcal{W}\), \(\hat{h}_{f}(z,z^{\prime})=q_{f}(z,z^{\prime})-\mathbb{E}[q_{f}(z,Z^{\prime})|z]-\mathbb{E}[q_{f}(Z,z^{\prime})|z^{\prime}]+\mathbb{E}[q_{f}]\). This shows that the functions in \(\mathcal{W}\) are linear combinations of \(q_{f}\), its conditional expectations with respect to each variable, and its expectation. The function class \(\mathcal{W}\) is more complicated than the shifted class \(\mathcal{G}\) and the hypothesis space \(\mathcal{H}\) considered previously. Indeed, the empirical norm of functions in \(\mathcal{W}\) is a mixture of several norms, hence we cannot bound the covering number of \(\mathcal{W}\) by that of \(\mathcal{H}\) directly. To overcome this difficulty, we state a very simple but useful lemma.
**Lemma 10**.: _Let \(f\) be a bounded measurable function on \(\mathcal{X}\times\mathcal{X}\), and \(\mu_{1},...,\mu_{n}\) be a sequence of Borel probability measures, then we have_
\[\sum_{i=1}^{n}\|f\|_{L_{\mu_{i}}^{2}}\leq n\sqrt{n}\,\|f\|_{L_{\hat{t}}^{2}}\,,\]
_where \(\nu=\frac{1}{n}\sum_{i=1}^{n}\mu_{i}\) is another Borel probability measure on \(\mathcal{X}\times\mathcal{X}\)._
Proof.: For each \(\mu_{i}\), \(\|f\|_{L_{\mu_{i}}^{2}}=\left(\int f^{2}d\mu_{i}\right)^{1/2}\leq\left(\int f^{2}d(\mu_{1}+...+\mu_{n})\right)^{1/2}=\sqrt{n}\left(\int f^{2}d\nu\right)^{1/2}\); summing the \(n\) terms immediately gives the desired result.
The following proposition provides an explicit upper bound of \(S_{2}(\mathcal{H})\) and the proof is given in the appendix.
**Proposition 3**.: _With probability at least \(1-\delta/2\), we have_
\[S_{2}(\mathcal{H})\leq C_{L,\eta}\frac{V\log^{2}(2/\delta)}{n}.\]
One can observe that the order of the degenerate term \(S_{2}(\mathcal{H})\) is \(O(\frac{V}{n})\), which is negligible compared with the order \(O\left(\left(\frac{V\log(n)}{n}\right)^{\frac{1}{2-\beta}}\right)\) obtained in the estimate of \(S_{1}(\mathcal{H})\). Therefore, the i.i.d. term \(S_{1}(\mathcal{H})\) plays the main role in the estimation error.
**Proof of Theorem 1**. Applying Proposition 2 and Proposition 3 to the error decomposition
\[\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})\leq S_{1}(\mathcal{H})+S_{2}( \mathcal{H})+D(\mathcal{H}),\]
we have with probability at least \(1-\delta\),
\[\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho}) \leq C_{\eta,L,M,\beta}\left(\left(\frac{V\log(n)}{n}\right)^{ \frac{1}{2-\beta}}+\left(\frac{\log^{2}(4/\delta)}{n}\right)^{\frac{1}{2-\beta}}\right)\] \[\quad+C_{\eta,L}\frac{V\log^{2}(2/\delta)}{n}+2(1+\beta)\mathcal{ D}(\mathcal{H}).\]
This gives the desired result. \(\Box\)
**Acknowledgements** Part of the paper was written when the last author worked at City University of Hong Kong, supported partially by InnoHK initiative, The Government of the HKSAR, and Laboratory for AI-Powered Financial Technologies, NSFC/RGC Joint Research Scheme [RGC Project No. N-CityU102/20 and NSFC Project No. 12061160462], and Research Grant Council of Hong Kong [Project # CityU 11308121].
## Appendix A.
In this appendix we give detailed proofs of some results stated in Section 5:
**Proof of Theorem 4**. **Step 1:** Bounding \(\phi(r)\) in terms of the entropy integral.
Define the empirical measure \(\rho_{n}(B):=\frac{1}{n}\sum_{i=1}^{n}1_{\{Z_{i}\in B\}}\) for any Borel measurable set \(B\subset\mathcal{Z}\), and for \(g_{1},g_{2}\in\mathcal{G}_{r}^{*}\), \(\|g_{1}-g_{2}\|_{L_{\rho_{n}}^{2}}^{2}=\frac{1}{n}\sum_{i=1}^{n}|g_{1}(Z_{i})- g_{2}(Z_{i})|^{2}\). Let \(\{\epsilon_{i}\}_{i=1}^{n}\) be the i.i.d. Rademacher variables, i.e. \(Prob\{\epsilon_{i}=1\}=Prob\{\epsilon_{i}=-1\}=1/2\), and \(\mathbb{E}_{\epsilon}[\cdot]\) means we take expectation only on variables \(\epsilon=(\epsilon_{1},...,\epsilon_{n})\). Then \(\forall r\geq r^{*}\), by the standard symmetrization of empirical processes and the chaining lemma [20]
\[\phi(r) \leq 2\mathbb{E}\mathbb{E}_{\epsilon}\sup_{g\in\mathcal{G}_{r}^{*}} \left|\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}g(Z_{i})\right|\] \[\leq \frac{C}{\sqrt{n}}\mathbb{E}\int_{0}^{\sqrt{S}}\sqrt{\log \mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L_{\rho_{n}}^{2}},t)}dt,\]
where \(S:=\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})^{2}\) and \(\mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L_{\rho_{n}}^{2}},t)\) is the covering number of \(\mathcal{G}_{r}^{*}\) with radius \(t\) and norm \(\|\cdot\|_{L_{\rho_{n}}^{2}}\).
**Step 2:** Estimating the covering number.
Now, we are in a position to estimate \(\mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L_{\rho_{n}}^{2}},t)\). Note first \(\mathcal{G}_{r}^{*}\subset\mathcal{G}^{*}\). Let \(\mathcal{M}\) be a \(t\)-net of \(\mathcal{G}\) and \(N:=\lceil\frac{A}{t}\rceil\), where \(A:=\sup_{g\in\mathcal{G}}\|g\|_{L_{\rho_{n}}^{2}}\); then one can easily verify that \(\{\frac{i}{N}g_{t}:i=1,...,N,g_{t}\in\mathcal{M}\}\) is a \(2t\)-net of \(\mathcal{G}^{*}\). Indeed, for any \(\alpha g\in\mathcal{G}^{*}\) with \(\alpha\in[i/N,(i+1)/N]\) for some \(i\in\{0,...,N-1\}\), there exists \(g_{t}\in\mathcal{M}\) such that \(\|g-g_{t}\|_{L_{\rho_{n}}^{2}}\leq t\), and then \(\|\alpha g-(i/N)g_{t}\|_{L_{\rho_{n}}^{2}}\leq|\alpha-i/N|\,\|g\|_{L_{\rho_{n}}^{2}}+(i/N)\|g-g_{t}\|_{L_{\rho_{n}}^{2}}\leq t+it/N\leq 2t\). Hence by Exercise 4.2.10 in [19] and the above arguments we have
\[\mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L_{\rho_{n}}^{2}},t)\leq\mathcal{N}( \mathcal{G}^{*},\|\cdot\|_{L_{\rho_{n}}^{2}},t/2)\leq\mathcal{N}(\mathcal{G}, \|\cdot\|_{L_{\rho_{n}}^{2}},t/4)\left\lceil\frac{A}{t}\right\rceil.\]
Define the empirical marginal measure \(\rho_{X}^{n}(E):=\frac{1}{n}\sum_{i=1}^{n}1_{\{X_{i}\in E\}}\) for any Borel measurable set \(E\subset\mathcal{X}\). For any \(g_{1},g_{2}\in\mathcal{G}\), there exists \(f_{1},f_{2}\in\mathcal{H}\) such that \(g_{i}(z)=\mathbb{E}[l(f_{i}(x,X),y,Y)-l(f_{\rho}(x,X),y,Y)|x,y]\) for \(i=1,2\). Then
\[\|g_{1}-g_{2}\|_{L^{2}_{\rho_{n}}}^{2} =\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[l(f_{1}(X_{i},X),Y_{i},Y)-l( f_{2}(X_{i},X),Y_{i},Y)|X_{i},Y_{i}]^{2}\] \[\leq\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[\left(l(f_{1}(X_{i},X),Y_ {i},Y)-l(f_{2}(X_{i},X),Y_{i},Y)\right)^{2}|X_{i},Y_{i}]\] \[=\int_{Z}\left[\int_{Z}\left(l(f_{1}(x,x^{\prime}),y,y^{\prime})- l(f_{2}(x,x^{\prime}),y,y^{\prime})\right)^{2}d\rho(x^{\prime},y^{\prime}) \right]d\rho_{n}(x,y)\] \[\leq L^{2}\int_{\mathcal{X}}\int_{\mathcal{X}}(f_{1}(x,x^{\prime })-f_{2}(x,x^{\prime}))^{2}d\rho_{X}(x^{\prime})d\rho_{X}^{n}(x)\] \[=L^{2}\|f_{1}-f_{2}\|_{L^{2}_{\rho_{X}\times\rho_{X}^{n}}}^{2},\]
where the first-to-second step uses Jensen's inequality for conditional expectation and the third-to-fourth step uses the Lipschitz property of the loss (Assumption 3). Hence
\[\mathcal{N}(\mathcal{G},\|\cdot\|_{L^{2}_{\rho_{m}}},t)\leq\mathcal{N}( \mathcal{H},\|\cdot\|_{L^{2}_{\rho_{X}\times\rho_{X}^{n}}},t/L).\]
By Assumptions 2 and 3, \(A=\sup_{g\in\mathcal{G}}\|g\|_{L^{2}_{\rho_{m}}}\leq\sup_{g\in\mathcal{G}}\|g \|_{\infty}\leq L\sup_{f\in\mathcal{H}}\|f-f_{\rho}\|_{\infty}\leq 2L\eta\). Then, in summary,
\[\mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L^{2}_{\rho_{m}}},t)\leq\mathcal{N }(\mathcal{H},\|\cdot\|_{L^{2}_{\rho_{X}\times\rho_{X}^{n}}},t/4L)\left[\frac{ 2L\eta}{t}\right].\]
**Step 3**: Estimating the entropy integral.
Let \(V\) be the pseudo-dimension of \(\mathcal{H}\). For any \(f\in\mathcal{H}\), Assumption 2 gives \(\|f\|_{\infty}\leq\eta\); applying Lemma 1 with \(r=2\) then yields
\[\log\mathcal{N}(\mathcal{G}_{r}^{*},\|\cdot\|_{L^{2}_{\rho_{n}}},t)\leq\log \left\{CV(16e)^{V}\left(\frac{4L\eta}{t}\right)^{2(V-1)}\left\lceil\frac{2L \eta}{t}\right\rceil\right\}\leq C_{L,\eta}V\log(\frac{4L\eta}{t}).\]
Plugging this into the entropy integral, we have
\[\phi(r)\leq\frac{C_{L,\eta}}{\sqrt{n}}\mathbb{E}\int_{0}^{\sqrt{S}}\sqrt{V \log\left(\frac{4L\eta}{t}\right)}dt.\]
Since \(\sqrt{S}=\sqrt{\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})^{2}}\leq\sup_{g\in\mathcal{G}}\|g\|_{\infty}\leq 2L\eta\), we have \(\log\left(\frac{4L\eta}{t}\right)>0\) for \(t\in(0,\sqrt{S}]\), which implies that the above integral is well-defined. Using Lemma 3.8 in [17], noting that \(\sqrt{x\log(c/x)}\) is concave, and applying Jensen's inequality, we obtain
\[\phi(r)\leq C_{L,\eta}\sqrt{\frac{V}{n}}\sqrt{\mathbb{E}[S]\log\left(\frac{C_{L,\eta}}{\mathbb{E}[S]}\right)}.\] (A.1)
Now we can provide an upper bound for \(\mathbb{E}[S]\). Let \(\{Z_{i}^{\prime}\}\) be an independent copy of \(\{Z_{i}\}\), then
\[\mathbb{E}[S] =\mathbb{E}\left[\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1}{n}\sum_{i=1 }^{n}g(Z_{i})^{2}-\mathbb{E}[g^{2}]+\mathbb{E}[g^{2}]\right]\] \[\leq 2\mathbb{E}\mathbb{E}_{\epsilon}\left[\sup_{g\in\mathcal{G}_ {r}^{*}}\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}g(Z_{i})^{2}\right]+\sup_{g\in \mathcal{G}_{r}^{*}}\mathbb{E}[g^{2}]\] \[\leq 8L\eta\mathbb{E}\left[\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1} {n}\sum_{i=1}^{n}\epsilon_{i}g(Z_{i})\right]+r\] \[=8L\eta\mathbb{E}\left[\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1}{n} \sum_{i=1}^{n}\epsilon_{i}g(Z_{i})-\epsilon_{i}\mathbb{E}[g]+\epsilon_{i} \mathbb{E}[g]\right]+r\] \[\leq 8L\eta\mathbb{E}\left[\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1} {n}\sum_{i=1}^{n}\epsilon_{i}\left(g(Z_{i})-g(Z_{i}^{\prime})\right)\right]+8 L\eta\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\epsilon_{i}\sup_{g\in\mathcal{G}_{r}^{*}} \mathbb{E}[g]\right]+r\] \[=8L\eta\mathbb{E}\left[\sup_{g\in\mathcal{G}_{r}^{*}}\frac{1}{n} \sum_{i=1}^{n}g(Z_{i})-g(Z_{i}^{\prime})\right]+r\leq 16L\eta\phi(r)+r,\]
where the first-to-second step again uses symmetrization, the second-to-third step uses the well-known Ledoux-Talagrand contraction principle [15] for \(y(x)=x^{2}\) and the definition of \(\mathcal{G}_{r}^{*}\), the fourth-to-fifth step uses symmetrization and the fifth-to-sixth step follows from the fact that \(\epsilon_{i}(g(Z_{i})-g(Z_{i}^{\prime}))\) and \(g(Z_{i})-g(Z_{i}^{\prime})\) are identically distributed.
For the lower bound, we observe that when \(r\) is small enough, there exists some \(g_{0}\in\mathcal{G}_{r}^{*}\) such that \(E[g_{0}^{2}]=r\). In fact, if we take a nonzero \(g\in\mathcal{G}\) with \(E[g^{2}]>r\), by the star-shaped structure, we can choose \(g_{0}=\sqrt{\frac{r}{E[g^{2}]}}g\in\mathcal{G}_{r}^{*}\). Hence
\[E[S]\geq E\left[\frac{1}{n}\sum_{i=1}^{n}g_{0}(Z_{i})^{2}\right]=E[g_{0}^{2}] =r,\]
by applying the upper and lower bounds to (A.1) we have
\[\phi(r)\leq C_{L,\eta}\sqrt{\frac{V}{n}}\sqrt{(C_{L,\eta}\phi(r)+r)\log(\frac{ C_{L,\eta}}{r})}.\]
Recall \(r^{*}\) is a fixed point of \(\phi(r)\), i.e. \(\phi(r^{*})=r^{*}\). Letting \(r\to r^{*}\) and by the continuity of \(\phi(r)\), we have
\[r^{*} \leq C_{L,\eta}\sqrt{\frac{V}{n}r^{*}\log\left(\frac{C_{L,\eta}}{ r^{*}}\right)}\] \[\implies r^{*} \leq C_{L,\eta}\frac{V}{n}\log(\frac{C_{L,\eta}}{r^{*}})\] \[\implies r^{*} \leq C_{L,\eta}\frac{V\log(n)}{n}.\]
The proof of Theorem 4 is complete.
**Proof of Proposition 2** Let \(\hat{g}(z)=\mathbb{E}[l(\hat{f}_{z}(x,X),y,Y)-l(f_{\rho}(x,X),y,Y)|x,y],g_{ \mathcal{H}}(z)=\mathbb{E}[l(f_{\mathcal{H}}(x,X),y,Y)-l(f_{\rho}(x,X),y,Y)|x,y]\). We know \(\hat{g},g_{\mathcal{H}}\in\mathcal{G}^{*}\) and \(\mathbb{E}[\hat{g}]=\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})\), \(\mathbb{E}[g_{\mathcal{H}}]=\mathcal{D}(\mathcal{H})\). The error \(S_{1}(\mathcal{H})\) can be expressed as
\[S_{1}(\mathcal{H})=2\left(\mathbb{E}[\hat{g}]-\frac{1}{n}\sum_{i=1}^{n}\hat{g} (Z_{i})+\frac{1}{n}\sum_{i=1}^{n}g_{\mathcal{H}}(Z_{i})-\mathbb{E}[g_{ \mathcal{H}}]\right).\]
Since \(\mathcal{G}^{*}\) is a star-shaped class around \(0\) which is uniformly bounded by \(2L\eta\) and has a variance-expectation bound with parameter pair \((\beta,M)\), by Lemma 4 and Theorem 4 with setting \(K=5\), we have, with probability at least
\(1-\delta/4\),
\[\mathbb{E}[\hat{g}]-\frac{1}{n}\sum_{i=1}^{n}\hat{g}(Z_{i})\leq\frac{\mathcal{E}( \hat{f}_{z})-\mathcal{E}(f_{\rho})}{4}+C_{\eta,L,M,\beta}\left(\left(\frac{V\log (n)}{n}\right)^{\frac{1}{2-\beta}}+\left(\frac{\log(4/\delta)}{n}\right)^{\frac{ 1}{2-\beta}}\right).\]
Since \(g_{\mathcal{H}}\) is data-independent, \(g_{\mathcal{H}}(Z_{1}),...,g_{\mathcal{H}}(Z_{n})\) are i.i.d. variables. Also, noting that \(|g_{\mathcal{H}}-\mathbb{E}[g_{\mathcal{H}}]|\leq 4L\eta\), the one-sided Bernstein concentration inequality gives, for any \(\epsilon>0\),
\[Prob\left\{\frac{1}{n}\sum_{i=1}^{n}g_{\mathcal{H}}(Z_{i})-\mathbb{E}[g_{ \mathcal{H}}]>\epsilon\right\}\leq\exp\left\{-\frac{n\epsilon^{2}}{2(\mathbb{ E}[g_{\mathcal{H}}^{2}]+\frac{4}{3}L\eta\epsilon)}\right\},\]
By setting
\[-\frac{n\epsilon^{2}}{2(\mathbb{E}[g_{\mathcal{H}}^{2}]+\frac{4}{3}L\eta \epsilon)}=\log\left(\frac{\delta}{4}\right),\]
and by the variance-expectation bound of the shifted hypothesis class and Jensen's inequality for conditional expectation, we have \(\mathbb{E}[g_{\mathcal{H}}^{2}]\leq M\left(\mathbb{E}[g_{\mathcal{H}}]\right) ^{\beta}=M\mathcal{D}(\mathcal{H})^{\beta}\). Hence with probability at least \(1-\delta/4\)
\[\frac{1}{n}\sum_{i=1}^{n}g_{\mathcal{H}}(Z_{i})-\mathbb{E}[g_{ \mathcal{H}}] \leq\frac{\frac{4}{3}L\eta\log(4/\delta)+\sqrt{\left(\frac{4}{3}L \eta\log(4/\delta)\right)^{2}+2n\mathbb{E}[g_{\mathcal{H}}^{2}]\log(4/\delta) }}{n}\] \[\leq\frac{8L\eta\log(4/\delta)}{3n}+\sqrt{\frac{2M\log(4/\delta)} {n}\mathcal{D}(\mathcal{H})^{\beta}}.\]
Applying Young's inequality,
\[ab\leq\frac{1}{p}a^{p}+\frac{1}{q}b^{q},\ \ \forall a,b>0,\,\text{where}\ \frac{1}{p}+\frac{1}{q}=1\ \text{for}\ q,p>1\]
with \(a=\sqrt{\frac{2M\log(4/\delta)}{n}},b=\sqrt{\mathcal{D}(\mathcal{H})^{\beta}}\) and \(p=\frac{2}{2-\beta}\), \(q=\frac{2}{\beta}\), we have
\[\sqrt{\frac{2M\log(4/\delta)}{n}\mathcal{D}(\mathcal{H})^{\beta}}\leq\frac{2- \beta}{2}\left(\frac{2M\log(4/\delta)}{n}\right)^{\frac{1}{2-\beta}}+\frac{ \beta}{2}\mathcal{D}(\mathcal{H}).\]
Then with probability at least \(1-\delta/4\)
\[\frac{1}{n}\sum_{i=1}^{n}g_{\mathcal{H}}(Z_{i})-\mathbb{E}[g_{\mathcal{H}}] \leq\frac{8L\eta\log(4/\delta)}{3n}+\frac{2-\beta}{2}\left(\frac{2M\log(4/ \delta)}{n}\right)^{\frac{1}{2-\beta}}+\frac{\beta}{2}\mathcal{D}(\mathcal{H}).\]
Noting that \(\frac{1}{2-\beta}\in(\frac{1}{2},1]\) and \(\log(4/\delta)>1\), we have \(\frac{8L\eta\log(4/\delta)}{3n}\leq C_{L,\eta}\left(\frac{\log^{2}(4/\delta)}{ n}\right)^{\frac{1}{2-\beta}}\). Thus, combining the above two parts, with probability \(1-\delta/2\),
\[S_{1}(\mathcal{H})\leq C_{\eta,L,M,\beta}\left(\left(\frac{V\log(n)}{n} \right)^{\frac{1}{2-\beta}}+\left(\frac{\log^{2}(4/\delta)}{n}\right)^{\frac{1 }{2-\beta}}\right)+\frac{\mathcal{E}(\hat{f}_{z})-\mathcal{E}(f_{\rho})}{2}+ \beta\mathcal{D}(\mathcal{H}).\]
This completes the proof of Proposition 2. \(\Box\)
**Proof of Lemma 6** For any \(\lambda>0\), by denoting the absolute value of the summation \(H_{k}:=\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\hat{h}_{k}(Z_{i},Z_{j})\right|\)
we have
\[\mathbb{E}_{\epsilon}\max_{k\in\{1,\ldots,N\}}\left|\sum_{i\neq j} \epsilon_{i}\epsilon_{j}\hat{h}_{k}(Z_{i},Z_{j})\right| =\frac{1}{\lambda}\mathbb{E}_{\epsilon}\log\exp\left\{\lambda\max_ {k\in\{1,\ldots,N\}}H_{k}\right\}\] \[\leq\frac{1}{\lambda}\log\mathbb{E}_{\epsilon}\exp\left\{\lambda \max_{k\in\{1,\ldots,N\}}H_{k}\right\}\] \[\leq\frac{1}{\lambda}\log\mathbb{E}_{\epsilon}\sum_{k=1}^{N}\exp \left\{\lambda H_{k}\right\}\] \[=\frac{1}{\lambda}\log\sum_{k=1}^{N}\sum_{s=0}^{\infty}\frac{ \lambda^{s}\mathbb{E}_{\epsilon}H_{k}^{s}}{s!}\] \[\leq\frac{1}{\lambda}\log\sum_{k=1}^{N}\sum_{s=0}^{\infty}\frac{ \lambda^{s}s^{s}\left(\mathbb{E}_{\epsilon}H_{k}^{2}\right)^{s/2}}{s!}\] \[\leq\frac{1}{\lambda}\log\sum_{k=1}^{N}\sum_{s=0}^{\infty}\left( e\lambda\left(\mathbb{E}_{\epsilon}H_{k}^{2}\right)^{1/2}\right)^{s},\]
where the first-to-second step uses Jensen's inequality, the fourth-to-fifth step uses Theorem 3.2.2 in [11] for Rademacher chaos of order 2, i.e. \(\mathbb{E}H_{k}^{s}\leq s^{s}(\mathbb{E}_{\epsilon}H_{k}^{2})^{s/2}\) and the fifth-to-last step uses Stirling's formula with \(s^{s}/s!\leq e^{s}\). Now, we let \(\lambda=(2e\max_{k\in\{1,\ldots,N\}}(\mathbb{E}_{\epsilon}H_{k}^{2})^{1/2})^{-1}\), then \(e\lambda\left(\mathbb{E}_{\epsilon}H_{k}^{2}\right)^{1/2}\leq 1/2\) and \(\sum_{s=0}^{\infty}\left(e\lambda\left(\mathbb{E}_{\epsilon}H_{k}^{2}\right)^{ 1/2}\right)^{s}\leq 2\). Also, note
\[\mathbb{E}_{\epsilon}H_{k}^{2} =\sum_{i\neq j,i^{\prime}\neq j^{\prime}}\mathbb{E}_{\epsilon}\epsilon_{i}\epsilon_{j}\epsilon_{i^{\prime}}\epsilon_{j^{\prime}}\hat{h}_{k}(Z_{i},Z_{j})\hat{h}_{k}(Z_{i^{\prime}},Z_{j^{\prime}})\] \[=\sum_{i\neq j}\hat{h}_{k}(Z_{i},Z_{j})^{2}=n(n-1)\|\hat{h}_{k}\|_{L_{\xi}^{2}}^{2},\]
then we know
\[\mathbb{E}_{\epsilon}\max_{k\in\{1,\ldots,N\}}\left|\sum_{i\neq j }\epsilon_{i}\epsilon_{j}\hat{h}_{k}(Z_{i},Z_{j})\right| \leq 2e\log(2N)\max_{k\in\{1,\ldots,N\}}(\mathbb{E}H_{k}^{2})^{1/2}\] \[=2e\sqrt{n(n-1)}\log(2N)\max_{k\in\{1,\ldots,N\}}\|\hat{h}_{k}\| _{L_{\xi}^{2}}.\]
This proves Lemma 6. \(\Box\)
**Proof of Lemma 7** Let \(\mathcal{N}_{k}\) be a \(\frac{D}{2^{k}}\)-net of \(\mathcal{Q}\) with respect to the norm \(\|\cdot\|_{L_{\xi}^{2}}\), for \(k\in\mathbb{N}\). For each \(k\), we define a map \(\pi_{k}\) from \(\mathcal{Q}\) to \(\mathcal{N}_{k}\) such that \(\|\hat{h}-\pi_{k}(\hat{h})\|_{L_{\xi}^{2}}\leq\frac{D}{2^{k}}\) for \(\hat{h}\in\mathcal{Q}\). Since \(\sup_{\hat{h}\in\mathcal{Q}}\|\hat{h}-\pi_{k}(\hat{h})\|_{L_{\xi}^{2}}\to 0\) as \(k\to\infty\), \(\sup_{\hat{h}\in\mathcal{Q}}|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}(\hat{h}- \pi_{k}(\hat{h}))(Z_{i},Z_{j})|\to 0\) as \(k\to\infty\). Also, notice that \(\sup_{\hat{h}\in\mathcal{Q}}|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}(\hat{h}- \pi_{k}(\hat{h}))(Z_{i},Z_{j})|\) is bounded, which follows from the uniform boundedness of \(\mathcal{Q}\). Hence by dominated convergence theorem, \(\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}|\sum_{i\neq j}\epsilon_{i} \epsilon_{j}(\hat{h}-\pi_{k}(\hat{h}))(Z_{i},Z_{j})|\to 0\) as \(k\to\infty\).
Take an arbitrary \(h_{0}\in\mathcal{Q}\), let \(\mathcal{N}_{0}=\{h_{0}\}\) and note that \(\pi_{0}(\hat{h})=h_{0}\) for any \(\hat{h}\in\mathcal{Q}\). In the following arguments, we
denote \(\epsilon_{i}\epsilon_{j}\hat{h}(Z_{i},Z_{j})\) by \(\epsilon_{i}\epsilon_{j}\hat{h}\) for simplicity. Then
\[\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\hat{h}\right| \leq\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\left(\hat{h}-h_{0}\right)\right|+\mathbb{E}_{\epsilon}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}h_{0}\right|\] \[\leq\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\left(\hat{h}-h_{0}\right)\right|+\left(\mathbb{E}_{\epsilon}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}h_{0}\right|^{2}\right)^{1/2}\] \[\leq\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\left(\hat{h}-\pi_{k}(\hat{h})\right)\right|+\sum_{m=1}^{k}\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{Q}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\left(\pi_{m}(\hat{h})-\pi_{m-1}(\hat{h})\right)\right|+\sqrt{n(n-1)}\|h_{0}\|_{L^{2}_{\xi}}\] \[\longrightarrow\sum_{k=1}^{\infty}\mathbb{E}_{\epsilon}\max_{(\hat{h}_{1},\hat{h}_{2})\in\mathcal{N}_{k}\times\mathcal{N}_{k-1}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}\left(\hat{h}_{1}-\hat{h}_{2}\right)\right|+\sqrt{n(n-1)}\|h_{0}\|_{L^{2}_{\xi}}\quad(k\to\infty).\]
Note that for any \(\hat{h}\in\mathcal{Q}\), \(\|\pi_{k}(\hat{h})-\pi_{k-1}(\hat{h})\|_{L^{2}_{\xi}}\leq\|\pi_{k}(\hat{h})- \hat{h}\|_{L^{2}_{\xi}}+\|\hat{h}-\pi_{k-1}(\hat{h})\|_{L^{2}_{\xi}}\leq\frac{3} {2^{k-1}}D\). Also, the cardinality satisfies the relation \(|\mathcal{N}_{k}\times\mathcal{N}_{k-1}|=|\mathcal{N}_{k}||\mathcal{N}_{k-1}| \leq\mathcal{N}(\mathcal{Q},\|\cdot\|_{L^{2}_{\xi}},\frac{D}{2^{k}})^{2}\). By Lemma 6, the above inequality can be bounded further by
\[n\left(2e\sum_{k=1}^{\infty}\log(2\mathcal{N}(\mathcal{Q},\|\cdot \|_{L^{2}_{\xi}},D/2^{k})^{2})\frac{3D}{2^{k-1}}+\|h_{0}\|_{L^{2}_{\xi}}\right)\] \[\leq n\left(72e\sum_{k=1}^{\infty}\int_{\frac{D}{2^{k+1}}}^{\frac {D}{2^{k}}}\log\mathcal{N}(\mathcal{Q},\|\cdot\|_{L^{2}_{\xi}},t)\,dt+\|h_{0}\| _{L^{2}_{\xi}}\right)\] \[=n\left(72e\int_{0}^{\frac{D}{2}}\log\mathcal{N}(\mathcal{Q},\| \cdot\|_{L^{2}_{\xi}},t)\,dt+\|h_{0}\|_{L^{2}_{\xi}}\right).\]
Since \(h_{0}\) is arbitrary in \(\mathcal{Q}\), we get the desired result.
**Proof of Lemma 8** For \(\hat{h}\in\mathcal{W}\), define a random function \(m:\mathcal{Z}\times\mathcal{Z}\rightarrow\mathbb{R}\) such that \(m(z,z^{\prime})=(H_{\hat{h}}^{2})_{i,j}\) if \((z,z^{\prime})=(Z_{i},Z_{j})\) for \(i\neq j\), and \(m(z,z^{\prime})=0\) otherwise. Denote by \(\mathcal{M}=\{m(z,z^{\prime})\}\) the class consisting of the functions defined above. Indeed, when we condition on the sample, the function class associated with the process is much smaller, since we only take into account its projection on the sample, which is in fact isomorphic to some finite subset of a Euclidean space. For Rademacher chaos, the corresponding function class can be regarded as a set of symmetric matrices with zero diagonal. Then we have
\[\mathbb{E}_{\epsilon}[U^{2}_{\epsilon}]=\mathbb{E}_{\epsilon}\sup_{\hat{h}\in\mathcal{W}}\epsilon^{T}H_{\hat{h}}^{2}\epsilon\leq\mathbb{E}_{\epsilon}\sup_{m\in\mathcal{M}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}m(Z_{i},Z_{j})\right|+n(n-1)\sup_{\hat{h}\in\mathcal{W}}\|\hat{h}\|_{L^{2}_{\xi}}^{2}.\]
By using Lemma 7 directly,
\[\mathbb{E}_{\epsilon}\sup_{m\in\mathcal{M}}\left|\sum_{i\neq j}\epsilon_{i}\epsilon_{j}m(Z_{i},Z_{j})\right|\leq n\left(72e\int_{0}^{D/2}\log\mathcal{N}(\mathcal{M},\|\cdot\|_{L^{2}_{\xi}},t)\,dt+\inf_{m\in\mathcal{M}}\|m\|_{L^{2}_{\xi}}\right).\] (A.2)
For \(m_{1},m_{2}\in\mathcal{M}\) and the corresponding \(\hat{h}_{1},\hat{h}_{2}\in\mathcal{W}\), by inequality (5.2), we have
\[\|m_{1}-m_{2}\|_{L^{2}_{\xi}}\leq\frac{\|H_{\hat{h}_{1}}^{2}-H_{\hat{h}_{2}}^{2}\|_{F}}{\sqrt{n(n-1)}}\leq\frac{nA}{\sqrt{n(n-1)}}\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}\leq\sqrt{2}A\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}.\]
Therefore, the covering number satisfies
\[\mathcal{N}(\mathcal{M},\|\cdot\|_{L^{2}_{\xi}},t)\leq\mathcal{N}(\mathcal{W},\| \cdot\|_{L^{2}_{\xi}},t/\sqrt{2}A).\]
The right-hand side of (A.2) can be bounded by
\[n\left(CA\int_{0}^{\infty}\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t )\,dt+\inf_{m\in\mathcal{M}}\|m\|_{L^{2}_{\xi}}\right).\]
We also have \(A=2\sqrt{n(n-1)}\sup_{\hat{h}\in\mathcal{W}}\|\hat{h}\|_{L^{2}_{\xi}}\leq 2nF\), \(\|m\|_{L^{2}_{\xi}}\leq\frac{\|H_{\hat{h}}^{2}\|_{F}}{\sqrt{n(n-1)}}\leq\frac{\|H_{\hat{h}}\|_{F}^{2}}{\sqrt{n(n-1)}}=\sqrt{n(n-1)}\|\hat{h}\|_{L^{2}_{\xi}}^{2}\leq nF^{2}\) and \(n(n-1)\sup_{\hat{h}\in\mathcal{W}}\|\hat{h}\|_{L^{2}_{\xi}}^{2}\leq n^{2}F^{2}\). Therefore,
\[\mathbb{E}_{\epsilon}[U_{\epsilon}]\leq\left(\mathbb{E}[U_{\epsilon}^{2}] \right)^{1/2}\leq CnF\left(1+\sqrt{\int_{0}^{\infty}\log\mathcal{N}(\mathcal{W },\|\cdot\|_{L^{2}_{\xi}},t)\,dt}\right).\]
This proves Lemma 8. \(\Box\)
**Proof of Lemma 9** Define \(\Psi=\{\hat{h}(\cdot,Z_{k}):\hat{h}\in\mathcal{W},k=1,...,n\}\), and let \(\mathcal{N}_{0}\) be a \(t\)-net of \(\Psi\) with norm \(\|\cdot\|_{L^{2}_{\sigma_{n}}}\) and \(\mathcal{N}_{k}\) be a \(t\)-net of \(\mathcal{W}\) with norm \(\|\cdot\|_{L^{2}_{\xi_{k}}}\). It is not hard to see \(\{\hat{h}_{k}(\cdot,Z_{k}):\hat{h}_{k}\in\mathcal{N}_{k},k=1,...,n\}\) is a \(t\)-net of \(\Psi\). Then we have \(|\mathcal{N}_{0}|\leq\prod_{k=1}^{n}|\mathcal{N}_{k}|\), that is,
\[\mathcal{N}(\Psi,\|\cdot\|_{L^{2}_{\sigma_{n}}},t)\leq\prod_{k=1}^{n}\mathcal{ N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t).\]
By the chaining lemma [20],
\[\mathbb{E}_{\epsilon}\left[\frac{M}{n}\right] =\mathbb{E}_{\epsilon}\sup_{f\in\Psi}\left|\frac{1}{n}\sum_{i=1}^ {n}\epsilon_{i}f(Z_{i})\right|\] \[\leq\frac{C}{\sqrt{n}}\int_{0}^{\infty}\sqrt{\log\mathcal{N}( \Psi,\|\cdot\|_{L^{2}_{\sigma_{n}}},t)}\,dt\] \[\leq\frac{C}{\sqrt{n}}\int_{0}^{\infty}\sqrt{\log\prod_{k=1}^{n} \mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)}\,dt\] \[\leq\frac{C}{\sqrt{n}}\int_{0}^{\infty}\sqrt{\log\left(n\max_{k=1,...,n}\{\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)\}\right)}\,dt.\]
Then the proof of Lemma 9 is complete. \(\Box\)
**Proof of Proposition 3** Let \(\xi_{X}\) be the marginal distribution of \(\xi\) on \(\mathcal{X}\times\mathcal{X}\) and denote by \(l_{f}\) the loss induced by \(f\in\mathcal{H}\) for simplicity. Similar to the arguments in the proof of Theorem 4, by applying the triangle inequality of the norm and Jensen's inequality for \(\hat{h}_{1},\hat{h}_{2}\in\mathcal{W}\) with the corresponding \(f_{1},f_{2}\in\mathcal{H}\), we can show
\[\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}} =\|l_{f_{1}}-l_{f_{2}}-\mathbb{E}[l_{f_{1}}-l_{f_{2}}|Z]-\mathbb{E}[l_{f_{1}}-l_{f_{2}}|Z^{\prime}]+\mathbb{E}[l_{f_{1}}-l_{f_{2}}]\|_{L^{2}_{\xi}}\] \[\leq\|l_{f_{1}}-l_{f_{2}}\|_{L^{2}_{\xi}}+2\|l_{f_{1}}-l_{f_{2}}\|_{L^{2}_{\rho\times\rho_{n}}}+\|l_{f_{1}}-l_{f_{2}}\|_{L^{2}_{\rho\times\rho}}\] \[\leq L\left\{\|f_{1}-f_{2}\|_{L^{2}_{\xi_{X}}}+2\|f_{1}-f_{2}\|_{L^{2}_{\rho_{X}\times\rho_{X}^{n}}}+\|f_{1}-f_{2}\|_{L^{2}_{\rho_{X}\times\rho_{X}}}\right\}.\]
Then, by Lemma 10, there exists a probability measure \(\nu\) on \(\mathcal{X}\times\mathcal{X}\) such that \(\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi}}\leq C_{L}\|f_{1}-f_{2}\|_{L^{2}_{\nu}}\).
Similarly, for each \(\xi_{k}\), there exists a probability measure \(\nu_{k}\), such that
\[\|\hat{h}_{1}-\hat{h}_{2}\|_{L^{2}_{\xi_{k}}}\leq C_{L}\|f_{1}-f_{2}\|_{L^{2}_{ \nu_{k}}}.\]
Then we have
\[\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t) \leq\mathcal{N}(\mathcal{H},\|\cdot\|_{L^{2}_{\nu}},t/C_{L}),\] \[\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t) \leq\mathcal{N}(\mathcal{H},\|\cdot\|_{L^{2}_{\nu_{k}}},t/C_{L}).\]
For \(\hat{h}_{f}\in\mathcal{W}\), we have \(\|\hat{h}_{f}\|_{L^{2}_{\xi}}\leq\|\hat{h}_{f}\|_{\infty}\leq\|l_{f}-l_{f_{\rho}}-\mathbb{E}[l_{f}-l_{f_{\rho}}|Z]-\mathbb{E}[l_{f}-l_{f_{\rho}}|Z^{\prime}]+\mathbb{E}[l_{f}-l_{f_{\rho}}]\|_{\infty}\leq 4\|l_{f}-l_{f_{\rho}}\|_{\infty}\leq 4L\|f-f_{\rho}\|_{\infty}\leq 8L\eta\) and thereby \(F=\sup_{\hat{h}\in\mathcal{W}}\|\hat{h}\|_{\infty}\leq 8L\eta\). Again, by Lemma 1, one gets
\[\max\left\{\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t),\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)\right\}\leq C_{L,\eta}V\log\frac{8L\eta}{t},\]
then the integral satisfies
\[\int_{0}^{\infty}\log\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi}},t)\,dt\leq\int_{0}^{8L\eta}C_{L,\eta}V\log\frac{8L\eta}{t}\,dt\leq C_{L,\eta}V.\]
Similarly, we can show
\[\int_{0}^{\infty}\sqrt{\log\left(n\max_{k=1,\ldots,n}\{\mathcal{N}(\mathcal{W},\|\cdot\|_{L^{2}_{\xi_{k}}},t)\}\right)}\,dt\leq C_{L,\eta}\sqrt{\log(n)V}.\]
Thus, from the above estimates we have the following bounds
\[\mathbb{E}_{\epsilon}[Z_{\epsilon}]\leq C_{L,\eta}nV,\ \ \mathbb{E}_{\epsilon}[U_{ \epsilon}]\leq C_{L,\eta}n\sqrt{V},\ \ \mathbb{E}[M]\leq C_{L,\eta}\sqrt{n\log(n)V}.\]
Combining all the analysis, with probability at least \(1-\delta/2\),
\[S_{2}(\mathcal{H})\leq C_{L,\eta}\frac{V\log^{2}(2/\delta)}{n}.\]
This proves Proposition 3. \(\Box\)
|
2309.08498 | Sympathetic Mechanism for Vibrational Condensation Enabled by Polariton
Optomechanical Interaction | We demonstrate a macro-coherent regime in exciton-polariton systems, where
nonequilibrium polariton Bose--Einstein condensation coexists with
macroscopically occupied vibrational states. Strong exciton-vibration coupling
induces an effective optomechanical interaction between cavity polaritons and
vibrational degrees of freedom of molecules, leading to vibrational
amplification in a resonant blue-detuned configuration. This interaction
provide a sympathetic mechanism to achieve vibrational condensation with
potential applications in cavity-controlled chemistry, nonlinear and quantum
optics. | Vladislav Yu. Shishkov, Evgeny S. Andrianov, Sergei Tretiak, K. Birgitta Whaley, Anton V. Zasedatelev | 2023-09-15T16:03:13Z | http://arxiv.org/abs/2309.08498v2 | # Mapping polariton Bose-Einstein condensate onto vibrational degrees of freedom
###### Abstract
We demonstrate a macro-coherent regime in molecular exciton-polariton systems, where nonequilibrium polariton Bose-Einstein condensation coexists with macroscopically occupied vibrational states. Strong vibronic coupling in molecules induces an effective optomechanical interaction between cavity polaritons and vibrational degrees of freedom of molecules, leading to vibrational amplification in a resonant blue-detuned configuration. This interaction maps out properties of the condensate to vibrational states, offering a novel approach to achieve vibrational condensation with potential applications in cavity-controlled chemistry, nonlinear and quantum optics.
The formation of new eigenstates - cavity polaritons - through strong light-matter interaction grants remarkable control over the physical and chemical properties of molecular materials. This opens new ways for the modification of chemical reactions [1; 2; 3; 4], long-range energy transfer [5; 6; 7], steering of singlet/triplet dynamics [8; 9; 10; 11], and enhancement of nonlinear optical response [12; 13]. Macroscopic quantum phenomena, such as nonequilibrium polariton Bose-Einstein condensation (BEC) [14; 15] and superfluidity [16] in various organic materials at room temperature, have established light-matter BECs as a versatile platform suitable for diverse applications [17; 18; 19; 20]. Over the past decade, progress in exciton-polariton BEC has given rise to novel architectures in molecular optoelectronics, which include energy-efficient tunable coherent light sources not reliant on population inversion [21; 22; 23], ultra-fast all-optical transistors and logic gates [24; 25; 12], extreme photon nonlinearities for emerging quantum technologies [13], and room-temperature topological states immune to disorder [26; 27].
While exciton-polariton BECs are relatively well explored, vibrational polariton condensation remains experimentally elusive. If realized, it will bring molecular vibrations to a macroscopic quantum state, where all vibrations share the same quantum properties, become indistinguishable and behave as of macroscopic coherent matter wave demonstrating long range order. However, achieving this state is challenging. Factors such as fast vibrational relaxation, relatively high thermal fluctuations at room temperatures, the absence of appropriate experimental methods, and insufficient quality of cavity structures stand as current obstacles on a way towards vibrational BEC. Amidst these challenges, the significance of vibrational condensates has grown, particularly owing to the recent progress of cavity-controlled chemistry. They offer an elegant solution to the dark state problem [28], preventing undesirable decay into the dark state manifold. Recently Pannir-Sivajothi and colleagues have harnessed the quantum statistics of BECs to exploit benefits of light-matter condensates in driving chemical reactions, thereby reporting distinct energetic and entropic advantages in electron transfer [29].
Typically Raman-active molecular vibrations do not directly interact with the cavity mode in exciton-polariton systems. However, they significantly influence polariton dynamics through vibronic coupling with the excitonic component. The strong vibronic coupling in molecular systems leads to distinct replicas in absorption and emission spectra, as well as large Stokes shifts [30; 31]. In polariton systems, intense high-energy vibrational resonances enable efficient single-step relaxation [32; 33], reducing the threshold for polariton condensation [12; 13]. Recent experimental reports showcase unprecedented high nonlinearity, driven by bosonic stimulation through the coupling to intense molecular vibrations [13]. However, the quantum state of the vibrational degrees of freedom in exciton-polariton systems largely remain an uncharted territory. In this study, we investigate vibrational states in exciton-polariton systems that have strong vibronic coupling. We identify a new regime, characterized by macroscopically occupied vibrational states that coexist with a well-controlled exciton-polariton BEC. Our research introduces novel approaches to achieve vibrational condensation and could serve as a guideline for future experiments, taking advantage of strong vibronic coupling as an inherent mechanism and general feature of molecular systems.
Here we consider a nonequilibrium microscopic model that describes a large ensemble of organic molecules, each hosting a single exciton (\(\hat{H}_{\text{Exc}}=\sum_{j=1}^{N_{\text{mol}}}\hbar\omega_{\text{exc}}\hat{\sigma}_{\text{Exc}j}^{\dagger}\hat{\sigma}_{\text{Exc}j}\)
where \(N_{\rm mol}\) is the total number of molecules, \(\omega_{\rm exc}\) is the eigenfrequency of the excitons, and \(\hat{\sigma}_{\rm Excj}^{\dagger}\) (\(\hat{\sigma}_{\rm Excj}\)) is the creation (annihilation) operator for an exciton of a molecule located at the point \({\bf r}_{j}\)) coupled strongly to an optical cavity (\(\hat{H}_{\rm Cav}=\sum_{\bf k}\hbar\omega_{\rm Cav|{\bf k}}\hat{a}_{\rm Cav|{\bf k}}^{\dagger}\hat{a}_{\rm Cav|{\bf k}}\), where \(\hat{a}_{\rm Cav|{\bf k}}^{\dagger}\) (\(\hat{a}_{\rm Cav|{\bf k}}\)) is the creation (annihilation) operator for a cavity photon with in-plane wavevector \({\bf k}\) and frequency \(\omega_{\rm Cav|{\bf k}}\)) and interacting with molecular vibrations (\(\hat{H}_{\rm Vib}=\sum_{j=1}^{N_{\rm mol}}\hbar\omega_{\rm Vib}\hat{b}_{{\rm Vib}j}^{\dagger}\hat{b}_{{\rm Vib}j}\), where \(\hat{b}_{{\rm Vib}j}^{\dagger}\) (\(\hat{b}_{{\rm Vib}j}\)) is the creation (annihilation) operator of a molecular vibration with the eigenfrequency \(\omega_{\rm Vib}\) for the corresponding molecule). The electronic excitation, initially confined to a single molecule, becomes delocalised through interaction with the cavity field: \(\hat{H}_{\rm Exc-Cav}=\sum_{j=1}^{N_{\rm mol}}\sum_{\bf k}\hbar\Omega_{j{\bf k}}\left(\hat{\sigma}_{\rm Excj}^{\dagger}\hat{a}_{\rm Cav|{\bf k}}e^{i{\bf k}{\bf r}_{j}}+h.c.\right)\), where \(\Omega_{j{\bf k}}\) is the Rabi frequency of the \(j\)-th molecule coupled to the cavity mode with wavevector \({\bf k}\) [34]. This interaction implicitly involves vibronic coupling via the matter component, represented by the term in the Hamiltonian: \(\hat{H}_{\rm Exc-Vib}=\sum_{j=1}^{N_{\rm mol}}\hbar\Lambda\omega_{\rm Vib}\hat{\sigma}_{\rm Excj}^{\dagger}\hat{\sigma}_{\rm Excj}\left(\hat{b}_{{\rm Vib}j}+\hat{b}_{{\rm Vib}j}^{\dagger}\right)\), where \(\Lambda\) is the interaction constant between excitons and molecular vibrations, equal to the square root of the Huang-Rhys (HR) factor [30]. It quantifies the interaction between the electronic structure of a molecule and the displacement of its nuclei, which in turn exerts a significant influence on the polaritonic degrees of freedom. The full system Hamiltonian thus includes the molecular exciton, vibrational and cavity-mode terms, along with the aforementioned coupling terms between the subsystems (see details in Supplementary Materials, Section I).
\[\hat{H}=\hat{H}_{\rm Exc}+\hat{H}_{\rm Cav}+\hat{H}_{\rm Vib}+\hat{H}_{\rm Exc -Cav}+\hat{H}_{\rm Exc-Vib}. \tag{1}\]
The strong interaction between excitons, the cavity, and molecular vibrations necessitates a transformation to the momentum representation of the dressed states (see Supplemental Materials, Section I, for details). The transformation unfolds in several stages. Firstly, we introduce vibrationally dressed excitons and dressed molecular vibrations [35]. Subsequently, we account for the light-matter interaction of the dressed excitons with the cavity, yielding lower and upper polariton states [36]. Excitons accessible by dipole coupling from the ground state, termed "bright excitons", contrast with the remaining "dark excitons". Both bright excitons and polaritons are phase-coherent, many-body delocalized states, which have a well-defined in-plane momentum \(\hbar{\bf k}\), matching the corresponding eigenstates of the cavity and the in-plane component of the incident wavevector of the pump beam \({\bf k}_{\rm Pump||}\). Dark excitons, lacking well-defined momentum, represent a manifold of localized states. Analogously, we separate the dressed molecular vibrations into the bright and dark types. The bright vibrations are coherent and delocalized states. Similar to the bright excitons, they have a well-defined momentum \(\hbar{\bf k}_{\rm Vib}\) [37], unlike the localized dark vibrational states.
Moving forward from a static eigenstate problem to the dynamic description of the system, we deepen our analysis using the Lindblad approach, which accounts for interactions with the environment [34; 38; 39]. We establish a theoretical framework to calculate average values of observables, which includes polariton, exciton, and vibration occupation numbers. Given that interactions with the environment inevitably result in decoherence and energy relaxation, it is very important to analyse potential relaxation and thermalization processes in the system. Thus, intermolecular interaction and non-radiative decay are the main mechanisms for excitons [40; 41]. In contrast, for polaritonic states, cavity decay often dominates the energy and phase loss in practical devices [17]. The coherence of vibrational states is typically lifetime-limited by a fast inter- and intramolecular vibrational energy redistribution (IVR) [42]. Thermalization is another essential process for polariton BEC [43]. In organic polariton systems, thermalization can occur due to intermolecular energy relaxation and via nonlinear interaction with low-frequency vibrations. We describe all these relaxation and thermalization processes by Lindblad superoperators \(\hat{L}\) acting on the general density matrix operator \(\hat{\rho}\) (see details in Supplementary Materials, Section II).
We consider coherent laser excitation of the bright excitonic states with in-plane momentum \(\hbar{\bf k}_{\rm ex}=\hbar{\bf k}_{\rm Pump||}\), adhering to the resonant blue-detuned configuration \(\hbar\omega_{\rm Pump}=\hbar\omega_{\rm Exc}=\hbar\omega_{\rm Pol|{\bf k}=\mathbf{0}}+\hbar\omega_{\rm Vib}\) as illustrated in Figure 1. Here, \(\hbar\omega_{\rm Pump}\) and \(\hbar\omega_{\rm Pol|{\bf k}=\mathbf{0}}\) denote the photon energy of the pump and the energy of the ground polariton state, respectively. Next, we derive a master equation and develop a mean-field theory for the system. The time evolution
Figure 1: Schematic of the setup to generate macroscopic vibrational states within exciton-polariton Bose–Einstein condensation (BEC) in a strong vibronic regime. A coherent laser (Pump) in the blue-detuned configuration resonantly excites bright excitonic states (Exc), causing vibrational amplification through optomechanical coupling between molecular vibrations (Vib) and cavity exciton-polariton BEC satisfying the phase-matching condition: \(\hbar{\bf k}_{\rm Pump||}=\hbar{\bf k}_{\rm BEC}+\hbar{\bf k}_{\rm Vib}\).
of average occupation numbers within the system is described by the equations below, and further elaborated in Supplementary Materials, Section III. This includes polaritons \(n_{\mathrm{Pol}|\mathbf{k}}\) and bright vibrational states \(n_{\mathrm{Vib}|\mathbf{k}}\), associated with momenta \(\hbar\mathbf{k}\) and \(\hbar\mathbf{k}_{\mathrm{Vib}}\) respectively, along with the occupation for the resonantly pumped bright exciton state \(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}\), all dark excitons \(n_{\mathrm{Exc}_{\mathrm{D}}}\), and vibrations \(n_{\mathrm{Vib}_{\mathrm{D}}}\).
\[\frac{dn_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}}{dt}=-\gamma_{\mathrm{Exc}}(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}-\varkappa_{\mathrm{Pump}})+\gamma_{\mathrm{Exc}}^{\mathrm{B-D}}(n_{\mathrm{Exc}_{\mathrm{D}}}-n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}})-\sum_{\mathbf{k}}\frac{G_{\mathbf{k}}}{N_{\mathrm{mol}}}\left[n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1\right)+n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}\left(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right] \tag{2}\]
\[\frac{dn_{\mathrm{Exc}_{\mathrm{D}}}}{dt}=-\gamma_{\mathrm{Exc}}n_{\mathrm{Exc}_{\mathrm{D}}}+\frac{\gamma_{\mathrm{Exc}}^{\mathrm{B-D}}}{N_{\mathrm{mol}}}(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}-n_{\mathrm{Exc}_{\mathrm{D}}})-\sum_{\mathbf{k}}\frac{G_{\mathbf{k}}}{N_{\mathrm{mol}}}\left[n_{\mathrm{Exc}_{\mathrm{D}}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1\right)+n_{\mathrm{Vib}_{\mathrm{D}}}\left(n_{\mathrm{Exc}_{\mathrm{D}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right] \tag{3}\]
\[\frac{dn_{\mathrm{Pol}|\mathbf{k}}}{dt}=-\gamma_{\mathrm{Pol}|\mathbf{k}}n_{\mathrm{Pol}|\mathbf{k}}+G_{\mathbf{k}}\left[n_{\mathrm{Exc}_{\mathrm{D}}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1\right)+n_{\mathrm{Vib}_{\mathrm{D}}}\left(n_{\mathrm{Exc}_{\mathrm{D}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right]+\frac{G_{\mathbf{k}}}{N_{\mathrm{mol}}}\left[n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1\right)+n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}\left(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right]+\sum_{\mathbf{k}^{\prime}}\left\{\gamma_{\mathrm{therm}}^{\mathbf{k}\mathbf{k}^{\prime}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1\right)n_{\mathrm{Pol}|\mathbf{k}^{\prime}}-\gamma_{\mathrm{therm}}^{\mathbf{k}^{\prime}\mathbf{k}}\left(n_{\mathrm{Pol}|\mathbf{k}^{\prime}}+1\right)n_{\mathrm{Pol}|\mathbf{k}}\right\} \tag{4}\]
\[\frac{dn_{\mathrm{Vib}_{\mathrm{D}}}}{dt}=-\gamma_{\mathrm{Vib}} \left(n_{\mathrm{Vib}_{\mathrm{D}}}-n_{\mathrm{Vib}}^{\mathrm{th}}\right)+ \frac{\gamma_{\mathrm{Vib}}^{\mathrm{B-D}}}{N_{\mathrm{mol}}}\sum_{\mathbf{k}} \left(\right.\] \[\left.n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}-n_{ \mathrm{Vib}_{\mathrm{D}}}\right)+\sum_{\mathbf{k}}\frac{G_{\mathbf{k}}}{N_{ \mathrm{mol}}}\left[n_{\mathrm{Exc}_{\mathrm{D}}}\left(n_{\mathrm{Pol}| \mathbf{k}}+1\right)+\right.\] \[\left.n_{\mathrm{Vib}_{\mathrm{D}}}\left(n_{\mathrm{Exc}_{ \mathrm{D}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right] \tag{5}\]
\[\frac{dn_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}}{dt}=- \gamma_{\mathrm{Vib}}\left(n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k} }-n_{\mathrm{Vib}}^{\mathrm{th}}\right)+\gamma_{\mathrm{Vib}}^{\mathrm{B-D}}\] \[\left(n_{\mathrm{Vib}_{\mathrm{D}}}-n_{\mathrm{Vib}|\mathbf{k}_{ \mathrm{ex}}-\mathbf{k}}\right)+\frac{G_{\mathbf{k}}}{N_{\mathrm{mol}}}\left[n _{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}\left(n_{\mathrm{Pol}|\mathbf{k}}+1 \right)+\right.\] \[\left.n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}\left(n _{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}-n_{\mathrm{Pol}|\mathbf{k}}\right)\right] \tag{6}\]
Here, \(n_{\mathrm{Exc}|\mathbf{k}_{\mathrm{ex}}}\) refers to the number of dressed bright excitons pumped directly by the resonant excitation, \(n_{\mathrm{Pol}|\mathbf{k}}\) is the average number of lower polaritons in the state with \(\hbar\mathbf{k}\) in-plane momentum, \(n_{\mathrm{Exc}_{\mathrm{D}}}\) represents the number of dressed dark excitons, and \(n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}-\mathbf{k}}\) and \(n_{\mathrm{Vib}_{\mathrm{D}}}\) denote the number of bright and dark molecular vibrations, respectively. The term \(n_{\mathrm{Vib}}^{\mathrm{th}}=(e^{\hbar\omega_{\mathrm{Vib}}/T}-1)^{-1}\) is the vibrational population in thermal equilibrium with the environment at temperature \(T\). The parameters \(\gamma_{\mathrm{Exc}}\), \(\gamma_{\mathrm{Pol}|\mathbf{k}}\) and \(\gamma_{\mathrm{Vib}}\) are the energy dissipation rates for excitons, lower polaritons with wavevector \(\mathbf{k}\) and molecular vibrations, respectively. The thermalization rates \(\gamma_{\mathrm{therm}}^{\mathbf{k}\mathbf{k}^{\prime}}\) and \(\gamma_{\mathrm{therm}}^{\mathbf{k}^{\prime}\mathbf{k}}\) represent downward and upward energy relaxation in the polariton subsystem, respectively, linked by the Kubo-Martin-Schwinger relation [44, 45, 46, 39]. The transition rates between bright and dark states, \(\gamma_{\mathrm{Exc}}^{\mathrm{B-D}}\) and \(\gamma_{\mathrm{Vib}}^{\mathrm{B-D}}\), correspond to the relaxation of the bright excitonic and vibrational states to the manifold of localized dark states due to IVR mechanisms, leading to the dephasing rates \(\Gamma_{\mathrm{Exc}}=\gamma_{\mathrm{Exc}}^{\mathrm{B-D}}\) and \(\Gamma_{\mathrm{Vib}}=\gamma_{\mathrm{Vib}}^{\mathrm{B-D}}\), respectively. The resonant pumping rate \(\varkappa_{\mathrm{Pump}}\), divided by the total number of molecules \(N_{\mathrm{mol}}\), has the clear physical meaning of the stationary excitation number per molecule.
The constant \(G_{\mathbf{k}}\) plays a central role in polariton
Figure 2: Phase diagrams for polariton and vibrational states demonstrating the regime of coexisting polariton BEC and macroscopically occupied vibrational states. (a),(b) - Polariton and vibrational occupation numbers as a function of the Huang-Rhys factor \(\Lambda^{2}\) and the resonant pumping rate \(\varkappa_{\mathrm{Pump}}\) normalized to the number of molecules \(N_{\mathrm{mol}}\). The vibrational decoherence rate is assumed to be lifetime-limited, \(\Gamma_{\mathrm{Vib}}=\gamma_{\mathrm{Vib}}/2=5\times 10^{-3}\omega_{\mathrm{Vib}}\). (c),(d) - Polariton and vibrational occupation numbers as a function of the vibrational decoherence rate \(\Gamma_{\mathrm{Vib}}\) and the resonant pumping rate \(\varkappa_{\mathrm{Pump}}\). Here we fix the Huang-Rhys factor \(\Lambda^{2}=1\). The black dashed line shows the analytical value of the threshold for polariton and vibrational macroscopically occupied states given by Eq. (8). The parameters here are \(N_{\mathrm{mol}}=10^{8}\), \(\Omega_{R}=0.05\) eV, \(\gamma_{\mathrm{Exc}}=10^{-5}\) eV, \(\gamma_{\mathrm{Exc}}^{\mathrm{B-D}}=\Gamma_{\mathrm{Exc}}=10^{-2}\) eV, \(\gamma_{\mathrm{Vib}}^{\mathrm{B-D}}=\Gamma_{\mathrm{Vib}}\), \(\gamma_{\mathrm{therm}}^{\mathbf{k}\mathbf{k}^{\prime}}=10^{-5}\) eV for \(|\mathbf{k}|<|\mathbf{k}^{\prime}|\) and \(T=290\) K, \(\gamma_{\mathrm{Pol}|\mathbf{k}}\approx\gamma_{\mathrm{Cav}|\mathbf{k}}=2.5\cdot 10^{-3}\) eV, \(\omega_{\mathrm{Cav}|\mathbf{k}}=\omega_{\mathrm{Cav}|\mathbf{k}=\mathbf{0}}+\alpha_{\mathrm{Cav}}\mathbf{k}^{2}\) with \(\alpha_{\mathrm{Cav}}=2\cdot 10^{-3}\) eV\(\mu\)m\({}^{2}\) and \(S=500\)\(\mu\)m\({}^{2}\). These parameters are consistent with the recent experiments [12, 13].
condensation and generation of macroscopic vibrational states:
\[G_{\mathbf{k}}=\frac{\Lambda^{2}\Omega_{R}^{2}\Gamma_{\mathrm{Exc}}}{(\omega_{ \mathrm{Exc}}-\omega_{\mathrm{Pol}|\mathbf{k}}-\omega_{\mathrm{Vib}})^{2}+( \Gamma_{\mathrm{Exc}}/2)^{2}} \tag{7}\]
where \(\omega_{\mathrm{Exc}}=\omega_{\mathrm{exc}}-\Lambda^{2}\omega_{\mathrm{Vib}}\) and \(\Omega_{R}^{2}=\sum_{j=1}^{N_{\mathrm{mol}}}|\Omega_{j\mathbf{k}}|^{2}\) represents the total Rabi frequency. Serving as the effective polariton-vibration coupling constant, \(G_{\mathbf{k}}\) characterizes the interaction strength between exciton-polariton and vibrational states according to the term \(\hbar G_{\mathbf{k}}\hat{a}_{\mathrm{Pol}|\mathbf{k}}^{\dagger}\hat{a}_{ \mathrm{Pol}|\mathbf{k}}\left(\hat{b}_{\mathrm{Vib}|\mathbf{k}}+b_{\mathrm{Vib }|\mathbf{k}}^{\dagger}\right)\). Evidently, this coupling corresponds to a conventional cavity optomechanical Hamiltonian [47], except that here, we deal with a hybrid light-matter state (exciton-polariton) rather than a bare cavity mode. Within this framework, molecular vibrational modes take on the function of a mechanical oscillator. The optomechanical interaction steers population dynamics across the cavity and mechanical degrees of freedom and depends on the optomechanical back action rate, denoted as \(\sim G_{\mathbf{k}}^{2}\times N\)[47]. In our case, the blue-detuned external laser drive \(\hbar\omega_{\mathrm{Pump}}=\hbar\omega_{\mathrm{Pol}|\mathbf{k}=\mathbf{0}}+ \hbar\omega_{\mathrm{Vib}}\) imposes a negative back action rate (anti-damping), resulting in strong vibrational amplification under certain conditions, as illustrated in Figure 1.
Equations (2)-(6) provide a reduced microscopic model able to simulate dynamics of electronic and vibrational states of molecules with strong vibronic coupling in realistic cavities, given the right choice of experimental parameters. We numerically solve Eqs. (2)-(6) and find a steady-state population of polaritons at the ground state (\(\mathbf{k}=\mathbf{0}\)) as well as the population of the bright vibrational state mediating polariton condensation that fulfils phase matching condition \(\hbar\mathbf{k}_{\mathrm{Vib}}=\hbar\mathbf{k}_{\mathrm{Pump}||}-\hbar \mathbf{k}_{\mathrm{BEC}}\) (refer to the Supplementary Materials, Section IV for a discrete version of Eqs. (2)-(6)). Figure 2 presents density plots of the polariton and vibrational occupation numbers as functions of the pumping rate \(\varkappa_{\mathrm{Pump}}/N_{\mathrm{mol}}\), Huang-Rhys factor \(\sim\Lambda^{2}\), and vibrational decay rate \(\gamma_{\mathrm{Vib}}\). An analytic approximation to the steady-state solution reveals the threshold pumping rate as a function of polariton decay rate and optomechanical coupling
\[\varkappa_{\mathrm{Pump}}^{\mathrm{thresh}}=N_{\mathrm{mol}}\left.\frac{ \gamma_{\mathrm{Pol}|\mathbf{k}}}{G_{\mathbf{k}}}\right|_{\mathbf{k}=\mathbf{ 0}}, \tag{8}\]
as detailed in Supplementary Materials, Section V. Notably, the threshold value (8) \(\varkappa_{\mathrm{Pump}}^{\mathrm{thresh}}\propto\Lambda^{-2}\) does not depend on the vibrational decay rate \(\gamma_{\mathrm{Vib}}\), which makes our proposal robust against fast IVR mechanisms. Figures 2(a,c) show an average occupation number in the polariton subsystem as a function of Huang-Rhys factor and vibrational decay.
The formation of polariton BEC, in general, requires two conditions: 1 - the rate of polariton thermalization overcomes the energy dissipation \(\langle\sum_{\mathbf{k}}\gamma_{\mathrm{therm}}^{\mathbf{k0}}n_{\mathrm{Pol}| \mathbf{k}}\gtrsim\gamma_{\mathrm{Pol}|\mathbf{k}}\rangle\), and 2 - the total number of lower polariton surpasses the critical number \(\sum_{\mathbf{k}}n_{\mathrm{Pol}|\mathbf{k}}|_{\mathrm{critical}}\approx\nu k_{ B}T\)[48; 49]. In our system, the first condition is met well below the threshold (8) while the second one is achieved at the threshold \(\sum_{\mathbf{k}}n_{\mathrm{Pol}|\mathbf{k}}|_{\mathrm{thresh}}\sim\sum_{ \mathbf{k}}n_{\mathrm{Pol}|\mathbf{k}}|_{\mathrm{critical}}\sim 10^{2}-10^{3}\) (see Supplementary Materials, Section V). Followed by BEC formation of polaritons with in-plane wavevector \(\mathbf{k}=\mathbf{0}\), the system exhibits macroscopic occupation of the vibrational states. The momentum conservation law sets the in-plane wavevector of the maroscopically occupated vibrational states to be \(\mathbf{k}_{\mathrm{Vib}}=\mathbf{k}_{\mathrm{ex}}\). Figure 2b shows vibrational amplification towards the macroscopically occupied state above a certain pumping rate, consistent with the polariton BEC threshold in Eq. (8). The matching threshold values for polariton and vibrational subsystems, unaffected by vibrational decay rate, highlights the fact that macroscopic occupation of the vibrational state is a result of polariton condensation. This regime of coexisting macroscopically occupied exciton-polariton and vibrational states is a distinct feature of exciton-polariton systems with strong vibronic interaction in the resonant blue-detuned configuration. Although the decay of molecular vibrations does not explicitly influence the threshold, it does have an impact on the average vi
Figure 3: Average occupation number of the bright vibrational state (brown) with the wavevector \(\mathbf{k}_{\mathrm{ex}}\) and at the ground polariton state (green) (a). (b) - Energy distributions for polariton states below (blue) and above (red) the condensation threshold, where \(\Delta\omega_{\mathbf{k}}=\omega_{\mathrm{Pol}|\mathbf{k}}-\omega_{\mathrm{Pol}| \mathbf{k}=\mathbf{0}}\). (c,d) - Momentum distributions for vibrational and polariton states below (blue) and above (red) the condensation threshold. The parameters are the same as for Figure 2. Fitting curves in (b) and (d) represent the effective temperature determined by the Boltzmann distribution in energy and momentum spaces, respectively.
bration occupation number, as shown in Figure 2(d) and is also evidenced by the relationship \(n_{\mathrm{Vib}|\mathbf{k}_{\mathrm{ex}}}\propto\gamma_{\mathrm{Vib}}^{-1}\).
The polariton optomechanical interaction in our system allows the state of the BEC to be mapped onto vibrational degrees of freedom. We consider two regimes: below (blue) and above (red) the condensation threshold. Figure 3a shows the average occupation numbers of the bright vibrational state with the wavevector \(\mathbf{k}_{\mathrm{ex}}\) and the ground polariton state at the resonant condition. Owing to the strong light-matter interaction, exciton-polaritons show a parabolic dispersion relation near the ground state, unlike dispersionless uncoupled vibrational states. Hence, further analysis of energy distribution is restricted to the polariton subsystem. Figure 3b illustrates energy distributions below and above the condensation threshold, with polaritons exhibiting thermalization at an effective temperature \(T_{\mathrm{eff}}\approx 140\) K. Above the threshold, the system follows a Bose-Einstein distribution with a thermalized tail at \(T_{\mathrm{BEC}}=0.6T_{\mathrm{eff}}\), a distinctive cooling effect that comes from the nonequilibrium nature of polariton BEC, recently demonstrated in Ref. [48]. In momentum space (Figure 3c), vibrational states lock to the polariton distribution due to the resonant phase-matching condition. When the pumping rate exceeds the condensation threshold, the vibrational distribution collapses to the bright state with a well-defined in-plane momentum \(\hbar\mathbf{k}_{\mathrm{ex}}\), exhibiting a thermalised distribution for states at high in-plane momenta. This resembles the momentum distribution of the polariton BEC depicted in Figure 3d, effectively mapping the vibrational subsystem into a macroscopically occupied BEC-like state.
Next, we discuss evolution of the long-time vibronic coherences enabled by the blue-detuned configuration in Figure 1. Our proposal involves two coherent drives acting on the vibrational mode above condensation threshold: the laser pump and polariton BEC. Similar to coherent Raman scattering [50] both waves map out their coherence on the vibrational state. However, in contrast to Raman-based methods, which imprint coherence only on vibrational degrees of freedom, our approach enables a concurrent vibrational and exciton-polariton macro-coherent regime. This necessitates resonant electronic excitation, for which two-dimensional electronic spectroscopy serves as an accessible tool to generate macroscopically coherent electron-vibrational (vibronic) states in organic systems at room temperature [51]. Yet, ultra-broadband excitation imposes certain constraints on coherence time, causing vibronic coherence to dissipate within a hundred femtoseconds due to inherent dephasing of electronic states [52; 53]. Our approach eliminates these effects by leveraging exciton-polariton states, protected from natural decoherence by the cavity. In fact, in our case the coherence time is limited by nonlinear effects occurring at the exciton-polariton BEC [54; 55]. This enables an improvement of at least three orders of magnitude in coherence time at room temperature under narrow-band quasi-steady-state pumping conditions [23; 54]. Moreover, the coherence time can be further manipulated by engineering the nonlinear potential landscape for the condensate [56; 57], potentially extending it to the nanosecond level in future experiments, such as with novel multi-component polariton systems [58; 59].
To validate the presence of macroscopically occupied vibrational states, we propose using angle-resolved non-resonant Raman spectroscopy of polariton BEC. The respective experimental arrangement is shown in Figure 4. We calculate the intensity of the Stokes and anti-Stokes components of Raman scattering for a non-resonant probe beam with an energy \(\hbar\omega_{\mathrm{Probe}}\) and in-plane momentum \(\hbar\mathbf{k}_{\mathrm{Probe}}\). In the practical setups, the probe beam must be synchronized with the onset of polariton condensation. Figure 4 demonstrates the ratio between the anti-Stokes and Stokes components as a function of both pumping rate and detection angle. A coherent anti-Stokes Raman scattering (CARS) signal emerges at a specific "magic angle" (MA), denoted as \(\theta_{\mathrm{MA}}\) (see Eq. (9)), due to resonant vibrational amplification.
\[\sin\theta_{\mathrm{MA}}=\frac{\omega_{\mathrm{Pump}}\sin\theta_{\mathrm{Pump }}+\omega_{\mathrm{Probe}}\sin\theta_{\mathrm{Probe}}}{\omega_{\mathrm{Probe}} +\omega_{\mathrm{Vib}}} \tag{9}\]
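For orientation, Eq. (9) can be evaluated directly; the short sketch below uses the probe values of Figure 4 but assumes placeholder values for the pump photon energy, pump angle and vibrational quantum (they are not restated here), so the resulting angle is purely illustrative.

```python
import numpy as np

# Minimal sketch of the magic-angle condition, Eq. (9).
# Probe values follow Figure 4; pump and vibrational values are assumptions.
hw_pump, th_pump = 2.5, np.deg2rad(50.0)     # eV, rad (assumed)
hw_probe, th_probe = 2.0, np.deg2rad(-30.0)  # eV, rad (Figure 4)
hw_vib = 0.2                                 # eV (assumed vibrational quantum)

sin_ma = (hw_pump*np.sin(th_pump) + hw_probe*np.sin(th_probe)) / (hw_probe + hw_vib)
theta_ma = np.degrees(np.arcsin(sin_ma))
print(f"magic angle ~ {theta_ma:.1f} deg")
```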
The magic angle provides a direct measurement of the momentum of the vibrational mode \(\hbar\mathbf{k}_{\mathrm{ex}}\) corresponding to the resonant condition (see Supplementary Materials, Section VI for details). The formation of macroscopically occupied vibrational states is manifested by the CARS signal, which asymptotically approaches the intensity of the Stokes component above the condensation threshold, as depicted in Figure 4.

Figure 4: Probing macroscopic vibrational states by non-resonant Raman spectroscopy of the polariton BEC. The graph illustrates the angle-resolved and pump-dependent distribution of the intensity ratio between the anti-Stokes and Stokes components of the Raman scattering, with parameters \(\hbar\omega_{\mathrm{Probe}}=2\) eV and \(\theta_{\mathrm{Probe}}=-30^{\circ}\); the remaining parameters are the same as for Figure 2. The inset shows the proposed experimental arrangement.
In conclusion, we have introduced a new way of generating macroscopic vibrational states through exciton-polariton condensation in the strong vibronic regime. We have demonstrated that an effective strong optomechanical coupling between vibrational and polariton degrees of freedom can be achieved, characterized by the interaction strength \(G_{\mathbf{k}}\). This coupling results in parametric amplification of the resonant vibrational states above the condensation threshold, in close analogy to cavity optomechanics in the blue-detuned configuration [47]. Our findings predict a unique macro-coherent regime in strongly-coupled organic cavities. This has conceptual implications for understanding coherence in molecular systems and its compatibility with various chemical and physical processes [60]. We identified conditions where the vibronic light-matter BEC results in both macroscopic vibrational and exciton-polariton states, aligning with cutting-edge experiments. Recent studies further validate the vibronic mechanism in polariton condensation, revealing a significant energy advantage over non-resonant configurations [12; 13]. This implies the likely existence of macroscopic vibrational states in these experiments, awaiting verification. We further suggest a practical setup using a non-resonant Raman probe to identify these states, if they are present. Expanding upon the proposed scheme to include non-resonant vibrational control methods is straightforward. Together with the recently developed concept of resonantly seeded polariton condensation at room temperature, this will open up new avenues for manipulating light-matter states across a broad spectral range for nonlinear and quantum optics with molecular systems.
V.Yu.Sh. thanks Foundation for the Advancement of Theoretical Physics and Mathematics "Basis" for financial support. A.V.Z. acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101030987 (LOREN). This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility at Los Alamos National Laboratory operated for the U.S. Department of Energy (DOE) Office of Science.
|
2304.00038 | Connecting $(g-2)_μ$ to neutrino mass in the extended neutrinophilic
2HDM | One simple way to lower the scale of the seesaw mechanism that generates
neutrino masses is to attribute part of their smallness to a suppressed vacuum
expectation value of a second Higgs doublet as in the neutrinophilic 2HDM or in
the type IB seesaw model. On that structure we add one charged singlet scalar
to induce a chirally enhanced contribution to $(g-2)_\mu$ with the same
righthanded neutrinos of the seesaw. We discuss the interplay of generating the
necessary contribution to the latter with lepton flavor violation which is also
necessarily brought to low scale. We show that it is possible to explain
$(g-2)_\mu$ even for heavy neutrino masses of order of a few TeV. | A. L. Cherchiglia, G. De Conto, C. C. Nishi | 2023-03-31T18:00:05Z | http://arxiv.org/abs/2304.00038v2 | # Connecting \((g-2)_{\mu}\) to neutrino mass
###### Abstract
One simple way to lower the scale of the seesaw mechanism that generates neutrino masses is to attribute part of their smallness to a suppressed vacuum expectation value of a second Higgs doublet as in the neutrinophilic 2HDM or similar. On that structure we add one charged singlet scalar to induce a chirally enhanced contribution to \((g-2)_{\mu}\) with the same righthanded neutrinos of the seesaw. We discuss the interplay of generating the necessary contribution to the latter with lepton flavor violation which is also necessarily brought to low scale. We show that it is possible to explain \((g-2)_{\mu}\) even for heavy neutrino masses of order of a few TeV.
## I Introduction
The observation of neutrino oscillations firmly established that neutrinos have tiny masses and mix in the weak charged current [1]. These properties clearly demonstrate that family lepton numbers associated to the lepton flavors \(e,\mu,\tau\) are not conserved in nature. The origin of both family lepton number violation and of neutrino masses can be naturally attributed to the dimension five Weinberg operator [2], pointing to a natural scale of \(10^{12}\,\mathrm{GeV}\). If the new physics violating family lepton number appears only at this scale, then its effects, visible primarily in dimension six operators, are expected to be unobservably small. These effects include those of charged lepton flavor violation (CLFV) that are expected to be probed with much greater precision in the coming years.
In contrast, the discrepancy between the experimental value and the SM prediction for the muon anomalous magnetic moment, \((g-2)_{\mu}\) or \(a_{\mu}\), is a persistent anomaly that requires TeV scale (or lower) new physics coupling to the muon that may or may not violate lepton flavor. A dedicated program to decrease the experimental uncertainty by a factor of four is currently ongoing at Fermilab, with the first data analysis released in 2021. The value for \(a_{\mu}\) combined from the results obtained at Fermilab and Brookhaven is [3] \(a_{\mu}^{\rm Exp}=(11659206.1\pm 4.1)\times 10^{-10}\), while the SM prediction from the Muon \(g-2\) Theory Initiative White Paper [4] is \(a_{\mu}^{\rm SM}=(11659181.0\pm 4.3)\times 10^{-10}\), resulting in the \(4.2\sigma\) discrepancy:
\[\delta a_{\mu}^{\rm BSM}=(25.1\pm 5.9)\times 10^{-10}. \tag{1}\]
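The number in Eq. (1) follows from the two values quoted above by simple subtraction and by combining the (uncorrelated) uncertainties in quadrature; a minimal check in code:

```python
import math

# Reproduce Eq. (1) from the quoted values, in units of 1e-10.
a_exp, s_exp = 11659206.1, 4.1   # combined Fermilab + Brookhaven [3]
a_sm,  s_sm  = 11659181.0, 4.3   # SM, Theory Initiative White Paper [4]

delta = a_exp - a_sm
sigma = math.hypot(s_exp, s_sm)
print(f"delta a_mu = ({delta:.1f} +/- {sigma:.1f}) x 1e-10  ->  {delta/sigma:.1f} sigma")
```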
The SM prediction makes use of dispersion methods to calculate the hadronic contributions that are the major source of uncertainty.1
Footnote 1: There is currently no consensus between the dispersion method and lattice results reported by the BMW collaboration [5]. On the one hand, if the latter is used, the \(a_{\mu}\) discrepancy is reduced to \(1.5\sigma\) level. On the other hand, it disagrees with the R-ratio based prediction used in [4] by more than \(2.1\sigma\). Comparison between different lattice groups is under way, with consistent results among them [6]. Nevertheless, most of these results stand for specific euclidean windows, not the full determination of hadronic contributions to \(a_{\mu}\). So an investigation of the source of incompatibility among the two approaches is urgent, together with more high precision lattice results. In this work, we will adopt the result in eq. (1).
By disregarding neutrino mass generation and minimally introducing new physics that couples to the muon in a flavor conserving fashion or obeying minimal flavor violation, one can explain \(a_{\mu}\)[7; 8; 9; 10] and still easily avoid large CLFV effects. A critical review of the minimal extensions with one or two new fields can be seen in Ref. [11]. If the dominant new physics contribution occurs at 1-loop, a key ingredient is the presence of a _chiral enhancement_ that occurs by replacing one chiral flipping coupling involving the muon Yukawa \(y_{\mu}\sim 0.0006\) by an order one coupling [7; 9; 10; 12]. In the Two-Higgs-doublet model (2HDM), this feature is available for the types II/X or aligned version with contributions enhanced by \(\tan\beta\) (or, similarly, \(\zeta_{l}\) for the aligned version). However, in order to accommodate \(a_{\mu}\), some of the scalars must be at the weak scale, which leaves only type X [13] or the aligned version [14; 15; 16; 17] as possible candidates. For the muon specific 2HDM [18], it is possible to push the scalars to higher masses. In the MSSM, a similar reasoning applies, where the main contributions are also enhanced by \(\tan\beta\), allowing one to accommodate \(a_{\mu}\) (see e.g. [19] for a review). However, unlike the 2HDM versions mentioned above, in the MSSM there are numerous sources of flavor violation that need to be properly suppressed; see e.g. [11] for an updated analysis.
In this work, we aim to connect the physics responsible for \(a_{\mu}\) with neutrino masses generation so that the same mediators participate in both processes. Two general features immediately follow.
Firstly, the seesaw scale has to be lowered to the TeV scale or lower and, secondly, one certainly cannot impose family lepton number conservation, so that the interplay with CLFV is crucial. Simply lowering the mediator masses for the type I [20], type II [21] or type III [22] seesaw models does not work because their contribution to \(a_{\mu}\) is negative [8; 23]. 2 In particular, adding any number of singlet RHNs to implement the type I seesaw mechanism leads to a negative contribution to \(a_{\mu}\)[24] even in low-scale seesaw scenarios such as the inverse seesaw [25]. This situation does not change if the heavy particles do not participate in the generation of neutrino masses.
Footnote 2: Ref. [28] had considered the type II seesaw and its interplay with \(a_{\mu}\) but they did not consider the correct sign of the contribution.
Focusing on the seesaw, there are many ways that the seesaw scale can be lowered. The aforementioned inverse seesaw is the case with extended tree-level mediators that brings additional suppression due to approximate lepton number conservation. Another option is to consider radiative generation; see Ref. [26] for a review. Yet another way is to attribute part of the smallness of neutrino masses to a vacuum expectation value (VEV) of a second Higgs doublet that only couples to neutrinos. This is the neutrinophilic 2HDM (\(\nu\)-2HDM) for which the RHNs can be Majorana [27; 28] or Dirac [29; 30]. Our focus will be on the former as the Majorana mass will induce the chirality flip enhancing the \(a_{\mu}\) contribution. An earlier attempt to connect neutrino masses with \(a_{\mu}\) considered this neutrinophilic 2HDM with Majorana RHNs [28], where the interplay with CLFV was studied. However, in its simplest form, the new contributions to \((g-2)_{\mu}\) were negative. Phenomenological studies unrelated to \(a_{\mu}\) were performed in Refs. [31; 32]. A second Higgs doublet with small vev also helps explain the small neutrino mass in the recently proposed type IB seesaw [33] where two Higgs doublets couple to the two RHNs that form a pseudo-Dirac pair of an approximate \(U(1)\) symmetry. Other recent works connecting \(a_{\mu}\) with neutrino mass generation can be seen in Ref. [34].
Here we show that the shortcomings of the \(\nu\)-2HDM (and also the type IB seesaw) in solving \(a_{\mu}\) can be overcome by introducing another field --a charged scalar singlet-- to the model so that a chirally enhanced contribution can be generated. This contribution has no definite sign and allows the new fields to lie above the weak scale, at the TeV scale. Both the Majorana RHNs and the charged scalar contribute to \(a_{\mu}\) while the RHNs and the neutrinophilic Higgs doublets are responsible for neutrino masses. In particular, the RHN Majorana mass in the loop induces the necessary chirality flip for the enhancement. Since lepton flavor violation is built in with the TeV scale new physics, it is important to study the induced CLFV processes.
The outline of the paper is as follows. In Sec. II we present the models and in Sec. III we show how they generate the neutrino masses with TeV scale mediators. Section IV shows the calculation for \(a_{\mu}\) and CLFV observables. Section V discusses the interplay between obtaining the correct \(a_{\mu}\) and avoiding the constraints of CLFV processes. The summary is given in Sec. VI.
## II Models
We seek scenarios where \(a_{\mu}\) is explained with the participation of the righthanded neutrinos (RHNs) at one loop, at the same time that these heavy neutrinos generate the necessary active neutrino masses through a low scale seesaw mechanism. More specifically, we require a chiral enhanced contribution to \(a_{\mu}\) connecting \(\mu_{L}\) with \(\mu_{R}\), and then two more charged scalars --one electroweak singlet and another residing in a doublet-- are necessary to close the loop as shown in Fig. 1. From the diagram it is clear that the chiral enhancement will be proportional to the RHN Majorana mass \(M_{N}\), which we require to be at the TeV scale or below.3
Footnote 3: This is one of the minimal possibilities considered in Ref. [10] to explain \((g-2)_{\mu}\) and DM stabilized by an unbroken \(\mathbb{Z}_{2}\) leading to an inert Higgs doublet. Here the additional Higgs doublet has a small vev that helps explaining small neutrino masses.
So we extend the SM by considering two Higgs doublets \(\Phi_{1},\Phi_{2}\), two righthanded neutrino (RHN) fields4 \(N_{iR}\), \(i=1,2\), and one singlet charged scalar \(\varphi^{+}\). The relevant part of the Lagrangian involving the Higgs doublets and the righthanded neutrino fields is
Footnote 4: These two neutrinos will generate the minimal number of two nonzero light neutrino masses but three RHNs can be equally considered.
\[-\mathscr{L}\supset\bar{\ell}_{\alpha}h_{\alpha}\Phi_{e}e_{\alpha R}+\bar{N}_ {iR}\lambda^{(1)}_{i\alpha}\tilde{\Phi}_{1}^{\dagger}\ell_{\alpha}+\bar{N}_{ iR}\lambda^{(2)}_{i\alpha}\tilde{\Phi}_{2}^{\dagger}\ell_{\alpha}+\tfrac{1}{2} \bar{N}_{iR}(M_{R})_{ij}N_{jR}^{c}+h.c., \tag{2}\]
where \(\ell_{\alpha}\), \(\alpha=e,\mu,\tau\) are the lepton doublets and \(\Phi_{e}\) is \(\Phi_{1}\) or \(\Phi_{2}\) depending on the model. The relevant interaction terms involving the charged singlet scalar is
\[-\mathscr{L}\supset\mu_{\varphi}\Phi_{2}^{\mathsf{T}}\epsilon\Phi_{1}\varphi^ {-}+f_{i\alpha}\bar{N}_{iR}e^{c}_{\alpha R}\varphi^{-}+h.c., \tag{3}\]
where we choose \(\mu_{\varphi}\) real without loss of generality. The interaction term \(\overline{\ell_{\alpha}^{c}}\ell_{\beta}\varphi^{+}\) is absent owing to a symmetry that depends on the model, so there is no radiative neutrino mass generation as in the Zee model [35].5 This absence also forbids the generation of four-fermion operators \(\ell^{4}\) from tree-level \(\varphi^{+}\) exchange.

Figure 1: Chirally enhanced contribution to muon \(g-2\) involving RHN \(N_{R}\) and charged scalars.
Footnote 5: A recent paper considered the Zee model for \((g-2)_{e,\mu}\)[36], where the couplings to charged singlet are not relevant.
One of the simplest models within the type I seesaw is to suppress the Dirac mass term by attributing its origin to a Higgs doublet different from the rest, with tiny vacuum expectation value. This is the neutrinophilic 2HDM model (\(\nu\)-2HDM) [27] which is obtained from (2) by imposing a \(\mathbb{Z}_{2}\) where \(N_{iR},\Phi_{1}\) are odd. As a consequence, \(\lambda^{(2)}=0\) and \(\Phi_{e}=\Phi_{2}\). The singlet scalar \(\varphi^{+}\) is additional and we assume it is odd so that the terms in (3) are allowed. The resulting Lagrangian is
\[\begin{split}-\mathscr{L}_{\text{$\nu$-2HDM}}& \supset\bar{\ell}_{\alpha}h_{\alpha}\Phi_{2}e_{\alpha R}+\bar{N}_{ iR}\lambda^{(1)}_{i\alpha}\bar{\Phi}_{1}^{\dagger}\ell_{\alpha}+\tfrac{1}{2} \bar{N}_{iR}M_{N_{i}}N_{iR}^{c}\\ &+\ \mu_{\varphi}\Phi_{2}^{\mathsf{T}}\epsilon\Phi_{1}\varphi^{-}+f_{i \alpha}\bar{N}_{iR}e_{\alpha R}^{c}\varphi^{-}+h.c.\end{split} \tag{4}\]
The doublet \(\Phi_{2}\approx H_{\text{SM}}\) will be mostly the SM Higgs doublet while \(\Phi_{1}\approx H_{\nu}\) will be mostly composed of non-SM higgses. Neutrino masses will depend solely on \(\langle\Phi_{1}^{0}\rangle=v_{1}\) which will be suppressed.
Another minimal possibility is to combine the righthanded neutrinos into a (pseudo)Dirac pair through a \(U(1)_{N}\) symmetry with charges [33]
\[N_{1R}\sim\Phi_{1}\sim+1\,,\quad N_{2R}\sim\Phi_{2}\sim-1\,. \tag{5}\]
It leads to the Lagrangian
\[-\mathscr{L}_{\text{Ib}}\supset\bar{N}_{R1}\lambda_{1\alpha}\tilde{\Phi}_{1}^ {\dagger}\ell_{\alpha}+\bar{N}_{R2}\lambda_{2\alpha}\tilde{\Phi}_{2}^{\dagger} \ell_{\alpha}+M\overline{N_{R1}^{c}}N_{R2}+h.c. \tag{6}\]
and the seesaw mechanism is dubbed seesaw type IB. Compared to (2), we are already simplifying the notation in that \(\lambda^{(1)}_{1\alpha}=\lambda_{1\alpha}\) and \(\lambda^{(2)}_{2\alpha}=\lambda_{2\alpha}\) whereas \(\lambda^{(1)}_{2\alpha}=\lambda^{(2)}_{1\alpha}=0\). The other couplings depend on the choice of \(U(1)_{N}\) charges for \(e_{R\alpha}\) and \(\varphi\). We adopt the charges \(e_{R}\sim-1\) and \(\varphi^{-}\sim-2\) so that
\[-\mathscr{L}_{\text{Ib}}\supset h_{\alpha}\bar{\ell}_{\alpha}\Phi_{1}e_{R \alpha}+f_{2\alpha}\bar{N}_{2R}e_{\alpha R}^{c}\varphi^{-}+\ \mu_{\varphi}\Phi_{2}^{\mathsf{T}}\epsilon\Phi_{1}\varphi^{-}+h.c. \tag{7}\]
To keep \(h_{\tau}\) within perturbative values, we need \(v_{1}\gtrsim 10^{-2}v\) and most of the suppression for light neutrino masses should still come from the suppression of the Yukawas \(\lambda_{j\alpha}\). We could flip the
charges of \(e_{R}\) and \(\varphi^{-}\) and then replace \(\Phi_{1}\) by \(\Phi_{2}\) in the first term and \(N_{2R}\) by \(N_{1R}\) in the second. But even in this case we cannot suppress \(v_{1}\) too much because the dominant \(g-2\) contribution is proportional to \(v_{1}\lambda_{2\alpha}\). The third term softly breaks the \(U(1)_{N}\) symmetry 6 which cannot be exact as a global symmetry to avoid unwanted massless scalars. The model in Ref. [33, a], for example, imposes a \(\mathbb{Z}_{3}\) instead of \(U(1)_{N}\).
Footnote 6: The quadratic term \(\Phi_{1}^{\dagger}\Phi_{1}\) should be added as well. The term \(\ell\ell\varphi^{+}\) remains forbidden.
For both models, the charged scalar component that is dominant in the non-SM Higgs doublet behaves as the charged scalar of the type I 2HDM and is only constrained by LEP [37]:
\[M_{H^{+}}>75\,\text{GeV}\,. \tag{8}\]
On the other hand, the singlet component couples solely to charged leptons and the righthanded neutrinos, implying that the model-independent constraint from LEP does not apply. Since we allow for a small mixing, we will consider that both charged scalars have masses around the electroweak scale or above.
## III Light neutrino masses
Light neutrino masses are generated from the type I seesaw, i.e., through the exchange of RHNs in (2) at tree level. The generated effective Weinberg operator is
\[\mathscr{L}=\frac{1}{2}\bar{\ell}_{\alpha}^{c}\Gamma_{\alpha i}^{\mathsf{T}}(M _{R}^{-1})_{ij}\Gamma_{j\beta}\ell_{\beta}+h.c.\,, \tag{9}\]
depending on the two Higgs doublets through
\[\Gamma_{j\beta}=\lambda_{j\beta}^{(1)}\tilde{\Phi}_{1}^{\dagger}+\lambda_{j \beta}^{(2)}\tilde{\Phi}_{2}^{\dagger}\,. \tag{10}\]
### \(\nu\)-2HDM model
For the \(\nu\)-2HDM model, only \(\Phi_{1}\) participates in the seesaw and \(\lambda^{(2)}=0\). Then the neutrino mass matrix is given by
\[M_{\nu}=-v_{1}^{2}{\lambda^{(1)}}^{\mathsf{T}}M_{R}^{-1}\lambda^{(1)}\,, \tag{11}\]
where its lightness is partly explained by \(\langle\Phi_{1}^{0}\rangle=v_{1}\ll v=\sqrt{v_{1}^{2}+v_{2}^{2}}=174\,\text{ GeV}\). If \(M_{R}\sim\text{TeV}\), we need
\[v_{1}\lambda^{(1)}/v\sim\frac{\sqrt{M_{R}M_{\nu}}}{v}\lesssim\frac{\sqrt{1\, \text{TeV}\times\,0.05\,\text{eV}}}{v}\sim 10^{-6}\,. \tag{12}\]
We can choose \(v_{1}/v\sim 10^{-6}\) in order to have \(\lambda^{(1)}\) of order one to address the deviation in \(a_{\mu}\).
Since we are focusing on the minimal case of two RHNs, the heavy neutrino Yukawa coupling \(\lambda^{(1)}\) is mostly fixed by the masses and mixing of light neutrinos. Using the Casas-Ibarra parametrization, we can write for normal ordering (NO),
\[\begin{split} v_{1}\lambda^{(1)}_{1\alpha}&=i\sqrt{ M_{1}}\big{(}\sqrt{m_{2}}c_{z}V^{\dagger}_{2\alpha}-\sqrt{m_{3}}s_{z}V^{ \dagger}_{3\alpha}\big{)}\,,\\ v_{1}\lambda^{(1)}_{2\alpha}&=i\sqrt{M_{2}}\big{(} \sqrt{m_{2}}s_{z}V^{\dagger}_{2\alpha}+\sqrt{m_{3}}c_{z}V^{\dagger}_{3\alpha} \big{)}\,,\end{split} \tag{13}\]
where \(c_{z}=\cos z\) and \(s_{z}=\sin z\) depend on the free complex angle \(z\). We are taking \(M_{R}=\text{diag}(M_{1},M_{2})\) and the neutrino mass matrix in (11) is \(M_{\nu}=V^{*}\,\text{diag}(0,m_{2},m_{3})V^{\dagger}\), with \(V\) being the PMNS matrix. Therefore, besides the two CP phases in \(V\), we have five free parameters in the neutrino sector: \(M_{1},M_{2},\text{Re}(z),\text{Im}(z),\tan\beta=v_{2}/v_{1}\). Note that for \(M_{1}=M_{2}\), the real part of \(z\) is not physical.
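For concreteness, a minimal numerical sketch of the parametrization (13) is given below; the oscillation parameters are approximate best-fit values and \(M_{1,2}\), \(z\) and \(v_{1}\) are illustrative placeholders (not the benchmark adopted later), so the script only serves as a consistency check of Eq. (11) and of the expected size of the Yukawas.

```python
import numpy as np

# Sketch of Eq. (13), normal ordering. Oscillation parameters are approximate
# best-fit values; M1, M2, z and v1 are illustrative choices, not the benchmark.
th12, th13, th23, delta = 0.59, 0.15, 0.84, np.deg2rad(218.0)   # rad
dm21, dm31 = 7.4e-5, 2.5e-3                                     # eV^2
m2, m3 = np.sqrt(dm21), np.sqrt(dm31)       # lightest state massless with two RHNs

def rot(i, j, th, phase=0.0):
    r = np.eye(3, dtype=complex)
    r[i, i] = r[j, j] = np.cos(th)
    r[i, j] = np.sin(th)*np.exp(-1j*phase)
    r[j, i] = -np.conj(r[i, j])
    return r

V = rot(1, 2, th23) @ rot(0, 2, th13, delta) @ rot(0, 1, th12)  # PMNS, no Majorana phases
Vd = V.conj().T

M1 = M2 = 1e12        # eV (1 TeV)
v1 = 1e6              # eV (1e-3 GeV)
z = 0.2j
cz, sz = np.cos(z), np.sin(z)

lam1 = 1j*np.sqrt(M1)*(np.sqrt(m2)*cz*Vd[1, :] - np.sqrt(m3)*sz*Vd[2, :])/v1
lam2 = 1j*np.sqrt(M2)*(np.sqrt(m2)*sz*Vd[1, :] + np.sqrt(m3)*cz*Vd[2, :])/v1
print(np.round(np.abs(lam1), 3), np.round(np.abs(lam2), 3))     # O(0.1) couplings

# Cross-check: Eq. (11) should reproduce M_nu = V* diag(0, m2, m3) V^dagger.
lam = np.vstack([lam1, lam2])
Mnu = -v1**2 * lam.T @ np.diag([1/M1, 1/M2]) @ lam
print(np.max(np.abs(Mnu - V.conj() @ np.diag([0.0, m2, m3]) @ Vd)))  # ~ 0
```

The inverted-ordering case follows from the same routine with the replacements of Eq. (14).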
For inverted ordering (IO), we have instead
\[\begin{split} v_{1}\lambda^{(1)}_{1\alpha}&=i\sqrt{ M_{1}}\big{(}\sqrt{m_{1}}c_{z}V^{\dagger}_{1\alpha}-\sqrt{m_{2}}s_{z}V^{ \dagger}_{2\alpha}\big{)}\,,\\ v_{1}\lambda^{(1)}_{2\alpha}&=i\sqrt{M_{2}}\big{(} \sqrt{m_{1}}s_{z}V^{\dagger}_{1\alpha}+\sqrt{m_{2}}c_{z}V^{\dagger}_{2\alpha} \big{)}\,.\end{split} \tag{14}\]
The only difference in this case is that \(M_{\nu}=V^{*}\,\text{diag}(m_{1},m_{2},0)V^{\dagger}\).
### Type Ib seesaw model
For the type IB seesaw model, the neutrino mass matrix coming from (9) is
\[(M_{\nu})_{\alpha\beta}=-\frac{v_{1}v_{2}}{M}(\lambda_{1\alpha}\lambda_{2 \beta}+\lambda_{2\alpha}\lambda_{1\beta})\,, \tag{15}\]
where \(v_{a}=\langle\Phi^{0}_{a}\rangle\), \(a=1,2\), and the Yukawa couplings are defined in (6). We choose \(v_{1}<v_{2}\) because \(\Phi_{2}\) is the doublet coupling to the charged leptons in (7). Adapting (12) to this case with \(M\sim\text{TeV}\), we obtain
\[\frac{1}{v}\sqrt{v_{1}\lambda_{1\alpha}v_{2}\lambda_{2\beta}}\sim\frac{\sqrt{ M_{R}M_{\nu}}}{v}\lesssim\frac{\sqrt{1\,\text{TeV}\times 0.05\,\text{eV}}}{v}\sim 10^{-6}\,. \tag{16}\]
As in the \(\nu\)-2HDM model, the Yukawa couplings \(\lambda_{1\alpha}\) and \(\lambda_{2\alpha}\) are mostly fixed by light neutrino masses and mixing.
For NO, the Yukawa couplings can be parametrized as
\[\begin{split}\lambda_{1\alpha}&=i\kappa\left(\frac {M}{2v_{1}v_{2}}\right)^{1/2}\bigg{[}+i\sqrt{m_{2}}\,V^{\dagger}_{2\alpha}+ \sqrt{m_{3}}\,V^{\dagger}_{3\alpha}\bigg{]}\,,\\ \lambda_{2\alpha}&=i\kappa^{-1}\left(\frac{M}{2v_{1 }v_{2}}\right)^{1/2}\bigg{[}-i\sqrt{m_{2}}\,V^{\dagger}_{2\alpha}+\sqrt{m_{3}} \,V^{\dagger}_{3\alpha}\bigg{]}\,,\end{split} \tag{17}\]
where we use a different but equivalent parametrization with respect to Ref. [33]. Flipping the sign in front of \(\sqrt{m_{2}}\) is equivalent and this sign can be absorbed by flipping the sign of \(\nu_{2L}\). The parameter \(\kappa\) is free and can be chosen real by rephasing \(N_{1R},N_{2R}\) with opposite phases. For IO, we can analogously parametrize
\[\begin{split}\lambda_{1\alpha}&=i\kappa\left(\frac{M }{2v_{1}v_{2}}\right)^{1/2}\bigg{[}+i\sqrt{m_{1}}\,V_{1\alpha}^{\dagger}+ \sqrt{m_{2}}\,V_{2\alpha}^{\dagger}\bigg{]}\,,\\ \lambda_{2\alpha}&=i\kappa^{-1}\left(\frac{M}{2v_{1} v_{2}}\right)^{1/2}\bigg{[}-i\sqrt{m_{1}}\,V_{1\alpha}^{\dagger}+\sqrt{m_{2}} \,V_{2\alpha}^{\dagger}\bigg{]}\,.\end{split} \tag{18}\]
Note that in the neutrino sector, besides the unknown CP phases, there are only three free parameters: \(M,\tan\beta,\kappa\). This is the same number as the neutrinophilic case with equal masses for the RHNs.
## IV Dipole moments and CLFV
Using an effective theory approach, the operators relevant to lepton dipole moments and charged lepton flavor violation (CLFV) processes are the photonic operators
\[\begin{split}\mathscr{L}_{\gamma\text{-eff}}=&- \left(C_{\alpha\beta}^{\sigma R}\bar{e}_{\alpha L}\sigma_{\mu\nu}e_{\beta R}F^ {\mu\nu}+h.c.\right)\\ &-\ \left(C_{\alpha\beta}^{\text{ND-}L}\bar{e}_{\alpha L}\gamma_{\nu}e_{ \beta L}+C_{\alpha\beta}^{\text{ND-}R}\bar{e}_{\alpha R}\gamma_{\nu}e_{\beta R }\right)\partial_{\mu}F^{\mu\nu}\,.\end{split} \tag{19}\]
The operator in the first line is the dipole contribution whereas the ones in the second line are the non-dipole (ND) part. We can see that only the dipole part involves chirality flipping, so chiral enhancement will be possible only for this term. The Wilson coefficients at 1-loop can be obtained by matching the full theory with the effective theory through appropriate 1-loop amplitudes. The relevant amplitudes come from the dipole (left) and self-energy (right) diagrams in Fig. 2.
Figure 2: Dipole and self-energy contribution to flavor changing processes.
### Dipole moments and \(\ell_{\alpha}\to\ell_{\beta}\gamma\)
The Wilson coefficient of the dipole operator contributes to the dipole moments and \(\ell_{\alpha}\to\ell_{\beta}\gamma\) as
\[a_{\alpha} =-\frac{4m_{\alpha}}{e}\operatorname{Re}(C_{\alpha\alpha}^{\sigma R })\,, \tag{20}\] \[d_{\alpha} =-2\operatorname{Im}(C_{\alpha\alpha}^{\sigma R})\,,\] \[\operatorname{Br}[\ell_{\alpha}\to\ell_{\beta}\gamma] =\frac{m_{\alpha}^{3}}{4\pi\Gamma_{\alpha}}(|C_{\alpha\beta}^{ \sigma R}|^{2}+|C_{\beta\alpha}^{\sigma R}|^{2})\,,\]
where \(a_{\alpha},d_{\alpha}\) are the contributions to the magnetic and electric dipole moments, respectively. These formulas assume that our covariant derivative in QED is \(D_{\mu}=\partial_{\mu}+ieQA_{\mu}\), and similarly for the SM.
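As a numerical anchor for the last relation in Eq. (20), a short sketch (using the standard muon mass and lifetime to obtain \(\Gamma_{\mu}\)) shows how a given dipole coefficient translates into \(\mathrm{Br}(\mu\to e\gamma)\):

```python
import math

# Translate a dipole Wilson coefficient into Br(mu -> e gamma), last line of Eq. (20).
m_mu     = 0.1057        # GeV
tau_mu   = 2.197e-6      # s
hbar     = 6.582e-25     # GeV s
Gamma_mu = hbar / tau_mu # GeV, total muon width

def br_meg(C2_sum):
    """C2_sum = |C_mue|^2 + |C_emu|^2 in GeV^-2."""
    return m_mu**3 / (4*math.pi*Gamma_mu) * C2_sum

# Coefficients combining to ~4e-14 GeV^-1 sit near the current 4.2e-13 limit of Table 1.
print(f"{br_meg((4e-14)**2):.1e}")
```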
For the \(\nu\)-2HDM described in (4), \(\Phi_{2}\approx H_{\text{SM}}\) and \(\Phi_{1}\approx H_{\nu}\) is almost inert with a tiny vev. The chirally enhanced contribution to the dipole operator is given in Fig. 1, with \(\varphi\) being the charged singlet, \(H^{+}\) being essentially the charged Higgs residing in \(\Phi_{1}\) and \(h^{0}\) being the SM Higgs boson. The chiral enhancement comes from the \(N_{iR}\) Majorana masses, \(M_{N_{i}}\), and these heavy neutrinos only couple to \(\Phi_{1}\). The expected chirally enhanced contribution is then of the form
\[C_{\beta\alpha}^{\sigma R}\sim\frac{ev_{2}\mu_{\varphi}}{(4\pi)^{2}\Lambda^{4} }{\lambda^{(1)}_{\beta i}}^{\dagger}M_{N_{i}}f_{i\alpha}^{*}\,, \tag{21}\]
where the transferred momentum is \(q^{2}=-m_{\mu}^{2}\).
For the exact contribution we need to include the mixing between the charged higgs \(H^{+}\) in the doublet and the scalar singlet \(\varphi^{+}\) coming from the \(\mu_{\varphi}\) term in (4) or (3). The contribution in the scalar potential is
\[\mu_{\varphi}\varphi^{-}(\Phi_{2}^{+}\Phi_{1}^{0}-\Phi_{2}^{0}\Phi_{1}^{+}) \to-\mu_{\varphi}v\,\varphi^{-}H^{+}\,, \tag{22}\]
where \(vH^{+}\equiv v_{2}\Phi_{1}^{+}-v_{1}\Phi_{2}^{+}\), \(v_{i}=\langle\Phi_{i}^{0}\rangle\), is the physical charged Higgs field within the doublets when \(\mu_{\varphi}=0\). The orthogonal direction is the charged Goldstone absorbed by the \(W\). Considering arbitrary mass terms for \(H^{+}H^{-}\) and \(\varphi^{+}\varphi^{-}\), the mixig term (22) induces the mixing
\[\begin{pmatrix}H^{+}\\ \varphi^{+}\end{pmatrix}=\begin{pmatrix}c_{\gamma}&-s_{\gamma}\\ s_{\gamma}&c_{\gamma}\end{pmatrix}\begin{pmatrix}S_{1}^{+}\\ S_{2}^{+}\end{pmatrix}\,, \tag{23}\]
with angle
\[\sin 2\gamma=-\frac{2\mu_{\varphi}v}{M_{S_{2}}^{2}-M_{S_{1}}^{2}}\,, \tag{24}\]
and \(S_{i}^{+}\) are the charged scalars with masses \(M_{S_{i}}\), \(i=1,2\). We have chosen \(M_{S_{2}}>M_{S_{1}}\).
Then the complete contribution arising from the diagrams in Fig. 2 is
\[\frac{16\pi^{2}}{e}C_{\beta\alpha}^{\sigma R} =c_{\gamma}s_{\gamma}\frac{v_{2}}{v}\sum_{j}\frac{{\lambda^{(1)} _{\beta j}}^{\dagger}f_{j\alpha}^{*}}{M_{N_{j}}}\left[x_{2j}f_{S}(x_{2j})-x_{1 j}f_{S}(x_{1j})\right]\,, \tag{25a}\] \[+\;m_{\alpha}\frac{v_{2}^{2}}{v^{2}}\sum_{j}\lambda^{(1)\dagger}_ {\beta j}\left[\frac{c_{\gamma}^{2}}{M_{S_{1}}^{2}}\tilde{f}_{S}(x_{1j})+ \frac{s_{\gamma}^{2}}{M_{S_{2}}^{2}}\tilde{f}_{S}(x_{2j})\right]\lambda^{(1)}_ {j\alpha}\] (25b) \[+\;m_{\beta}\sum_{j}f_{\beta j}^{\dagger}\left[\frac{s_{\gamma}^{2}}{M_{S_{ 1}}^{2}}\tilde{f}_{S}(x_{1j})+\frac{c_{\gamma}^{2}}{M_{S_{2}}^{2}}\tilde{f}_{S }(x_{2j})\right]f_{j\alpha}^{*}\,, \tag{25c}\]
where \(x_{kj}\equiv M_{N_{j}}^{2}/M_{S_{k}}^{2}\) and the loop functions are [9; 10]
\[f_{S}(x) \equiv\frac{x^{2}-1-2x\log x}{4(x-1)^{3}}\,, \tag{26}\] \[\tilde{f}_{S}(x) \equiv\frac{2x^{3}+3x^{2}-6x+1-6x^{2}\log x}{24(x-1)^{4}}\,.\]
See appendix A for the details. We have checked that the contribution proportional to \((\lambda^{(1)})^{2}\) matches Ref. [32] for \(\gamma=0\).
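To get a feel for the size of these terms, a rough numerical sketch of Eq. (25) with the loop functions (26) can be used; the scalar spectrum, mixing and coupling products below are placeholders of the size anticipated later, so the output is only an order-of-magnitude illustration of the chiral enhancement.

```python
import numpy as np

# Loop functions of Eq. (26).
def f_S(x):
    return (x**2 - 1 - 2*x*np.log(x)) / (4*(x - 1)**3)

def ft_S(x):
    return (2*x**3 + 3*x**2 - 6*x + 1 - 6*x**2*np.log(x)) / (24*(x - 1)**4)

# Placeholder spectrum, mixing and coupling products (illustration only).
MS1, MS2, MN = 350.0, 450.0, 1000.0      # GeV
s_g = 0.1
c_g = np.sqrt(1.0 - s_g**2)
v = v2 = 174.0                           # GeV (v1 << v2, so v2 ~ v)
e = np.sqrt(4*np.pi/137.0)
m_mu = 0.1057                            # GeV
lam_f, lam_sq = 1.0, 1.0                 # assumed lambda^(1)dagger f* and |lambda^(1)|^2 products

x1, x2 = (MN/MS1)**2, (MN/MS2)**2

# Chirally enhanced piece (25a) and the non-enhanced left-left piece (25b).
C_LR = e/(16*np.pi**2) * c_g*s_g*(v2/v) * lam_f/MN * (x2*f_S(x2) - x1*f_S(x1))
C_LL = e/(16*np.pi**2) * m_mu*(v2/v)**2 * lam_sq * (c_g**2*ft_S(x1)/MS1**2 + s_g**2*ft_S(x2)/MS2**2)

# a_mu from Eq. (20); only the overall size is meaningful since the sign of lam_f is free.
print(f"chiral piece:    a_mu ~ {abs(4*m_mu/e*C_LR):.1e}")
print(f"left-left piece: a_mu ~ {abs(4*m_mu/e*C_LL):.1e}")
```

For these inputs the chirally enhanced piece comes out at the \(10^{-9}\) level in \(a_{\mu}\), two to three orders of magnitude above the non-enhanced left-left piece, in line with the estimate (27) below.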
Let us discuss the various contributions. The first contribution (25a) is the chirally enhanced one (left-right), reducing to (21) when \(M_{S_{i}}\gg M_{N_{j}}\), as \(f_{S}(0)=1/4\) and we identify \(\Lambda^{4}=4M_{S_{1}}^{2}M_{S_{2}}^{2}\). The contributions (25b) and (25c) are not chirally enhanced (left-left and right-right, respectively) as the chiral flip comes from the external lines. So the chirally enhanced contribution is larger than the non-chirally enhanced contributions by a factor
\[c_{\gamma}s_{\gamma}\frac{M_{S}^{2}}{m_{\mu}M_{N}}\sim c_{\gamma}s_{\gamma} \times 10^{4}\,, \tag{27}\]
for order one couplings and the numbers assume \(M_{N}\sim M_{S_{i}}\sim\text{TeV}\). Given the definition of the angle \(\gamma\) in (24) the trilinear coupling \(\mu_{\varphi}\) cannot be arbitrarily small if we require the chirally enhanced contribution to dominate:
\[|\mu_{\varphi}|\gtrsim 1\,\text{GeV}\;\;\text{or}\;\;|s_{\gamma}|\gtrsim 10^{-4}\,. \tag{28}\]
The dominance of the chirally enhanced contribution is important because the non-chirally enhanced contributions are positive definite and lead to a contribution to \(a_{\mu}\) which is negative definite, contrary to the experimental observation. They are similar to the contribution of \(N_{R}\) exchange in the usual seesaw models [8; 24].
For the type IB model in (6) and (7), the estimate of the chirally enhanced contribution analogous to (21) will be
\[C_{\beta\alpha}^{\sigma R}\sim\frac{v_{2}\mu_{\varphi}}{(4\pi)^{2}\Lambda^{4}} \lambda^{\dagger}_{\beta 1}Mf_{2\alpha}^{*}\,. \tag{29}\]
Then we need \(\lambda_{1\beta}\) to be of order one while \(\lambda_{2\beta}\) needs to be suppressed due to neutrino masses, cf. (16).
The complete contribution reads
\[\frac{16\pi^{2}}{e}C^{\sigma R}_{\beta\alpha} =c_{\gamma}s_{\gamma}\frac{v_{2}}{v}\frac{\lambda^{\dagger}_{ \beta 1}f^{*}_{2\alpha}}{M_{N}}\left[x_{2}f_{S}(x_{2})-x_{1}f_{S}(x_{1})\right]\,, \tag{30a}\] \[+\;m_{\alpha}\frac{v_{2}^{2}}{v^{2}}\lambda^{\dagger}_{\beta 1} \left[\frac{c_{\gamma}^{2}}{M_{S_{1}}^{2}}\tilde{f}_{S}(x_{1})+\frac{s_{\gamma }^{2}}{M_{S_{2}}^{2}}\tilde{f}_{S}(x_{2})\right]\lambda_{1\alpha}\] (30b) \[+m_{\beta}f^{\mathsf{T}}_{\beta 2}\left[\frac{s_{\gamma}^{2}}{M_{S_ {1}}^{2}}\tilde{f}_{S}(x_{1})+\frac{c_{\gamma}^{2}}{M_{S_{2}}^{2}}\tilde{f}_{ S}(x_{2})\right]f^{*}_{2\alpha}\,, \tag{30c}\]
where \(x_{i}=M_{N}^{2}/M_{S_{i}}^{2}\). We have neglected additional contributions proportional to \(\lambda^{*}_{2\beta}\lambda_{2\alpha}v_{1}^{2}\) which will be highly suppressed. Here the requirement (28) is equally necessary for order one couplings.
In Table 1 we show the current and future limits for different \(\ell_{\alpha}\to\ell_{\beta}\gamma\). Limits for other CLFV processes are also shown.
### \(\mu\)-\(e\) conversion in nuclei
A very stringent test for CLFV is the coherent \(\mu^{-}\,\)-\(\,e^{-}\) conversion in a muonic atom of nucleus \((A,Z)\) by neutrinoless muon capture
\[\mu^{-}+(A,Z)\to e^{-}+(A,Z)\,,\]
\begin{table}
\begin{tabular}{|l|c|c|} \hline Observable & Current limit & Future limit \\ \hline Br(\(\mu\to eee\)) & \(<1.0\times 10^{-12}\)[38] & \(10^{-16}\)[39] \\ Br(\(\tau\to\mu\mu\mu\)) & \(<2.1\times 10^{-8}\)[40] & \(3.4\times 10^{-10}\)[41] \\ Br(\(\tau\to\mu ee\)) & \(<8.4\times 10^{-9}\)[40] & \(2.9\times 10^{-10}\)[41] \\ Br(\(\tau\to eee\)) & \(<1.4\times 10^{-8}\)[40] & \(4.3\times 10^{-10}\)[41] \\ Br(\(\tau\to e\mu\mu\)) & \(<1.6\times 10^{-8}\)[40] & \(4.3\times 10^{-10}\)[41] \\ \hline Br(\(\mu\to e\gamma\)) & \(<4.2\times 10^{-13}\)[38] & \(6\times 10^{-14}\)[42] \\ Br(\(\tau\to\mu\gamma\)) & \(<4.4\times 10^{-8}\)[38] & \(10^{-9}\)[41] \\ Br(\(\tau\to e\gamma\)) & \(<3.3\times 10^{-8}\)[38] & \(3\times 10^{-9}\)[41] \\ \hline \(\Gamma^{\rm conv}_{\mu\to e}/\Gamma^{\rm capt}_{N}\) & \(<7.0\times 10^{-13}\)[43]\({}^{*}\) & \(3\times 10^{-17}\)[44, 45]\({}^{**}/10^{-18}\)[46]\({}^{\dagger}\) \\ \hline \end{tabular}
\end{table}
Table 1: Current and future limits for charged lepton flavor violating processes at 90% CL. For \(\mu e\) conversion, the nucleons is Au (*) and Al (**) or Ti (\(\dagger\)).
mediated in our case by the effective flavor-changing photon interactions in (19). We neglect similar \(Z\) mediated processes which are suppressed by the \(Z\) mass. It should be noticed that, contrary to \(\mu\to e\gamma\) decay, one needs to consider off-shell photon emission. The general photonic \(\mu-e\) transition amplitude is given by [47]
\[\mathcal{M}=-eA_{\mu}^{*}(q)\bar{u}_{e}(p_{e})\left[(f_{E0}+\gamma_{5}f_{M0}) \gamma_{\nu}\left(g^{\mu\nu}-\frac{q^{\mu}q^{\nu}}{q^{2}}\right)+(f_{M1}+\gamma _{5}f_{E1})\frac{i\sigma_{\mu\nu}q^{\nu}}{m_{\mu}}\right]u_{\mu}(p_{\mu})\,, \tag{31}\]
where \(p_{\mu}\) and \(p_{e}\) are the momenta of the muon and electron respectively, \(q=p_{\mu}-p_{e}\) is the transferred momentum, and \(f_{X}\equiv f_{X}(q^{2})\) are form factors. In the case of \(\mu\to e\gamma\) decay, only the dipole part (\(f_{E1}\), \(f_{M1}\)) contributes. The form factors can be written in terms of the Wilson coefficients introduced in (19):
\[f_{E0}=(C_{e\mu}^{\text{ND-}L}+C_{e\mu}^{\text{ND-}R})\frac{q^{2} }{2e},\quad f_{M0}=(C_{e\mu}^{\text{ND-}L}-C_{e\mu}^{\text{ND-}R})\frac{q^{2}} {2e}, \tag{32}\] \[f_{E1}=-(C_{e\mu}^{\sigma R}-C_{\mu e}^{\sigma R*})\frac{m_{\mu} }{e},\quad f_{M1}=-(C_{e\mu}^{\sigma R}+C_{\mu e}^{\sigma R*})\frac{m_{\mu}}{e }\,. \tag{33}\]
In the approximation pioneered by Weinberg and Feinberg [48], only the photonic contribution is included in the calculation of the branching ratio of the coherent \(\mu-e\) conversion which is given by
\[\text{Br}[\mu N\to eN]\equiv\frac{\Gamma[\mu N\to eN]}{\Gamma_{\text{ capt}}}=\frac{8m_{\mu}\alpha^{5}Z_{\text{eff}}^{4}Z|F_{p}|^{2}\xi^{2}}{ \Gamma_{\text{capt}}}\,. \tag{34}\]
Here \(Z_{\text{eff}}\) is an effective atomic charge due to averaging the muon wave function over the nuclear density, \(\Gamma_{\text{capt}}\) is the total muon capture rate, and \(\xi^{2}\) is obtained with knowledge of the form factors
\[\xi^{2}=|f_{E0}(-m_{\mu}^{2})+f_{M1}(-m_{\mu}^{2})|^{2}+|f_{E1}(-m_{\mu}^{2})+ f_{M0}(-m_{\mu}^{2})|^{2}\,, \tag{35}\]
where we use the transferred momentum \(q^{2}=-m_{\mu}^{2}\).
In general, the non-photonic part should be included as well, which is parametrized by an effective four-fermion interaction containing quarks. In our particular case, these contributions would come from box diagrams containing both the righthanded neutrinos and charged scalars. Since the coupling between quarks and the charged scalars is suppressed (it will be proportional to the ratio \(v_{1}/v_{2}\)), we can safely neglect these diagrams.
Finally, we rewrite the branching ratio in terms of the Wilson coefficients, to allow a straightforward application to our model
\[\text{Br}[\mu N\to eN]=\frac{4m_{\mu}^{3}\alpha^{4}Z_{\text{eff}}^{4}Z|F_{p}|^ {2}}{\pi\Gamma_{\text{capt}}}\left(\left|C_{e\mu}^{\sigma R}+\frac{m_{\mu}}{2 }C_{e\mu}^{\text{ND-}\text{L}}\right|^{2}+\left|C_{\mu e}^{\sigma R}+\frac{m_{ \mu}}{2}(C_{e\mu}^{\text{ND-}\text{R}})^{*}\right|^{2}\right)\,. \tag{36}\]
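A minimal numerical sketch of Eq. (36), keeping only the dipole coefficients (the dominant case discussed below), might look as follows; the gold values of \(Z_{\text{eff}}\) and \(F_{p}\) are assumed, literature-style numbers used here as placeholders rather than the entries of Table 2.

```python
import math

# Hedged sketch of Eq. (36) for gold, keeping only the dipole coefficients.
# Z_eff and F_p are assumed literature-style values (placeholders for Table 2).
alpha      = 1.0/137.0
m_mu       = 0.1057        # GeV
Z, Z_eff   = 79, 33.5      # gold; Z_eff is an assumed effective atomic charge
F_p        = 0.16          # assumed nuclear form factor
Gamma_capt = 8.7e-18       # GeV, muon capture rate in Au quoted below

def br_conv(C_emu, C_mue):
    pref = 4*m_mu**3 * alpha**4 * Z_eff**4 * Z * abs(F_p)**2 / (math.pi*Gamma_capt)
    return pref * (abs(C_emu)**2 + abs(C_mue)**2)

# Dipole coefficients near the mu -> e gamma ceiling give Br well below the current Au
# limit but above the future Al/Ti sensitivities quoted in Table 1.
print(f"Br(mu Au -> e Au) ~ {br_conv(3e-14, 3e-14):.1e}")
```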
In our case, for order one couplings and loop functions, the chiral enhanced part in the dipole contribution will dominate the non-dipole part by a factor \(M_{S_{i}}/m_{\mu}\sim M_{N_{k}}/m_{\mu}\sim 10^{4}\) for new fields at the TeV scale and \(s_{\gamma}\) not too small, as in (28). In this case of dipole dominance, the ratio \(\text{Br}[\mu N\to eN]/\,\text{Br}(\mu\to e\gamma)\) depends roughly only on the atomic number of the nucleus [49] and we have checked this for our case. As for the non-dipole part, we have found that the relative sign between \(C^{\sigma R}\) and \(C^{\text{ND}}\) in (36) is effectively opposite to Refs. [50; 51; 32].
At present, the best limit for this branching ratio comes from conversion in gold nuclei [43]: \(\Gamma^{\text{Au}}_{\text{conv}}/\Gamma^{\text{Au}}_{\text{capt}}<7.0\times 10^{-13}\), where \(\Gamma^{\text{Au}}_{\text{capt}}=8.7\times 10^{-18}\) GeV. There are, however, future experiments that aim to reduce the bounds on \(\mu-e\) conversion by several orders of magnitude [44; 45]. Their aim is to achieve \(\Gamma^{\text{Al}}_{\text{conv}}/\Gamma^{\text{Al}}_{\text{capt}}<3\times 10^{-17}\) using aluminium nuclei. The different effective parameters for different nuclei are shown in Table 2.
The Wilson coefficient of the non-dipole part can be obtained for the \(\nu\)-2HDM model as
\[\begin{split}\frac{16\pi^{2}}{e}C^{\text{ND-}L}_{\beta\alpha}& =\frac{v_{2}^{2}}{v^{2}}\sum_{j}{\lambda^{(1)}_{\beta j}}^{\dagger }\lambda^{(1)}_{j\alpha}\left[c_{\gamma}^{2}\frac{G_{S}(x_{1j})}{6M_{S_{1}}^{2} }+s_{\gamma}^{2}\frac{G_{S}(x_{2j})}{6M_{S_{2}}^{2}}\right]\,,\\ \frac{16\pi^{2}}{e}C^{\text{ND-}R}_{\beta\alpha}&= \sum_{j}f_{\beta j}^{\intercal}f_{j\alpha}^{*}\left[s_{\gamma}^{2}\frac{G_{S}( x_{1j})}{6M_{S_{1}}^{2}}+c_{\gamma}^{2}\frac{G_{S}(x_{2j})}{6M_{S_{2}}^{2}} \right]\,,\end{split} \tag{37}\]
where the loop function is
\[G_{S}(x)=\frac{2-9x+18x^{2}-11x^{3}+6x^{3}\log(x)}{6(1-x)^{4}}\,. \tag{38}\]
The latter function is the same as \(G_{2}(x)\) in Ref. [32]. See details in appendix A. As anticipated, there is no chiral enhancement for these coefficients.
For the type IB seesaw model, we analogously obtain
\[\begin{split}\frac{16\pi^{2}}{e}C^{\text{ND-}L}_{\beta\alpha}&=\frac{v_{2}^{2}}{v^{2}}\lambda_{\beta 1}^{\dagger}\lambda_{1\alpha}\left[c_{\gamma}^{2}\frac{G_{S}(x_{1})}{6M_{S_{1}}^{2}}+s_{\gamma}^{2}\frac{G_{S}(x_{2})}{6M_{S_{2}}^{2}}\right]\,,\\ \frac{16\pi^{2}}{e}C^{\text{ND-}R}_{\beta\alpha}&=f_{\beta 2}^{\mathsf{T}}f_{2\alpha}^{*}\left[s_{\gamma}^{2}\frac{G_{S}(x_{1})}{6M_{S_{1}}^{2}}+c_{\gamma}^{2}\frac{G_{S}(x_{2})}{6M_{S_{2}}^{2}}\right]\,.\end{split} \tag{39}\]
We could also consider the decay \(\mu\to eee\). In addition to the Wilson coefficients that we already have, we will need to consider box diagrams. These diagrams have righthanded neutrinos and charged scalars inside, whose contributions will be of the form
\[f_{i\mu}f_{ie}f_{je}f_{je},\quad\lambda_{i\mu}\lambda_{ie}\lambda_{je}\lambda_ {je},\quad\lambda_{i\mu}f_{ie}\lambda_{je}f_{je},\quad f_{i\mu}\lambda_{ie}f_{je }\lambda_{je}\]
In Sec. V we will see that we need suppressed \(f_{ie}\) to evade CLFV constraints and still be able to account for \((g-2)_{\mu}\). In this case, only the contribution with \(\lambda\) survives which is subdominant [32].
## V Solving \((g-2)_{\mu}\) avoiding CLFV
By considering only the dominant chirally enhanced contribution to the dipole term, we can estimate the interplay between the necessary contribution to \(a_{\mu}\) and the necessary suppression to avoid significant CLFV processes. The contribution to \(a_{\mu}\) in (20) necessary to explain the experimental deviation (1) requires
\[|C^{\sigma R}_{\mu\mu}|\sim|\operatorname{Re}C^{\sigma R}_{\mu\mu}|\sim 2 \times 10^{-9}\operatorname{GeV}^{-1}. \tag{40}\]
On the other hand, the current limit on \(\mu\to e\gamma\) requires
\[\sqrt{|C^{\sigma R}_{\mu e}|^{2}+|C^{\sigma R}_{e\mu}|^{2}}<4\times 10^{-14} \operatorname{GeV}^{-1}. \tag{41}\]
So it is necessary that
\[\frac{\sqrt{|C^{\sigma R}_{\mu e}|^{2}+|C^{\sigma R}_{e\mu}|^{2}}}{|C^{\sigma R }_{\mu\mu}|}\lesssim 2\times 10^{-5}\,. \tag{42}\]
The expected future limit will decrease this number by a factor 2.6. A similar analysis on the current limit on \(\mu e\) conversion in gold nuclei requires
\[\sqrt{|C^{\sigma R}_{\mu e}|^{2}+|C^{\sigma R}_{e\mu}|^{2}}<10^{-12} \operatorname{GeV}^{-1}, \tag{43}\]
which implies
\[\frac{\sqrt{|C^{\sigma R}_{\mu e}|^{2}+|C^{\sigma R}_{e\mu}|^{2}}}{|C^{\sigma R }_{\mu\mu}|}\lesssim 5\times 10^{-4}\,. \tag{44}\]
The expected future limit in aluminium nuclei will greatly reduce the limit in (43) to \(6\times 10^{-15}\,\mathrm{GeV}^{-1}\) and then the bound in (44) becomes \(3\times 10^{-6}\).
Now, let us see how our models can satisfy the hierarchy (42) between flavor changing and flavor conserving couplings to the muon.
We start with the type IB seesaw model. We assume the dominance of the chirally enhanced contribution and require \(|s_{\gamma}|\gg 10^{-4}\) due to (28). In this model, to have appreciable \(a_{\mu}\), we need
\[\lambda_{1\mu}^{*}f_{2\mu}^{*}\sim O(1)\,. \tag{45}\]
On the other hand, according to (42), we need \(|\lambda_{1\mu}^{*}f_{2e}^{*}|,|\lambda_{1e}^{*}f_{2\mu}^{*}|\) to be less than \(10^{-5}\). Since \(\lambda_{1\mu}\) needs to be order one, we can adopt \(f_{2e}=0\) for simplicity. In this case,
\[\frac{\sqrt{|C_{\mu e}^{\sigma R}|^{2}+|C_{e\mu}^{\sigma R}|^{2}}}{|C_{\mu\mu}^{\sigma R}|}\approx\frac{|\lambda_{1e}|}{|\lambda_{1\mu}|}\gtrsim 0.1\,, \tag{46}\]
from the dependence of \(\lambda_{1\alpha}\) on neutrino masses and mixing for NO or IO, cf. eqs. (17) or (18), and (42) is never satisfied. Turning on the coupling \(f_{2e}\) only worsens the situation and the type IB seesaw model augmented with a singly charged singlet cannot account for \((g-2)_{\mu}\) without violating current bounds on \(\mu\to e\gamma\).
We now turn to the \(\nu\)-2HDM model. For appreciable \(a_{\mu}\), assuming the dominance of chirally enhanced contribution, we need large \(\mu\mu\) couplings:
\[{\lambda_{\mu 1}^{(1)}}^{\dagger}f_{1\mu}^{*},\,{\lambda_{\mu 2}^{(1)}}^{ \dagger}f_{2\mu}^{*}\sim O(1)\,. \tag{47}\]
For equal masses \(M_{1}=M_{2}\), to suppress CLFV processes, we need suppressed \(\mu e\) couplings:
\[|{\lambda_{\mu 1}^{(1)}}^{\dagger}f_{1e}^{*}+{\lambda_{\mu 2}^{(1)}}^{ \dagger}f_{2e}^{*}|\lesssim 10^{-5}\,,\quad|{\lambda_{e 1}^{(1)}}^{\dagger}f_{1\mu}^{*}+{\lambda_{e 2}^{(1)}}^{\dagger}f_{2\mu}^{*}|\lesssim 10^{-5}\,. \tag{48}\]
Considering (47), the first combination vanishes if
\[f_{1e}=f_{2e}=0\,, \tag{49}\]
while the vanishing of the second requires the orthogonality between \(f_{i\mu}^{*}\) and \(\lambda_{ie}^{(1)}\):
\[(f_{1\mu},f_{2\mu})=\zeta(\lambda_{2e}^{(1)},-\lambda_{1e}^{(1)})\,. \tag{50}\]
This choice leads to
\[{\lambda_{\mu 1}^{(1)}}^{\dagger}f_{1\mu}^{*}+{\lambda_{\mu 2}^{(1)}}^{ \dagger}f_{2\mu}^{*}=\zeta{(\lambda_{2e}^{(1)}}^{*}{\lambda_{1\mu}^{(1)}}^{*}- {\lambda_{1e}^{(1)}}^{*}{\lambda_{2\mu}^{(1)}}^{*})\,, \tag{51}\]
which is mostly fixed from neutrino parameters. The mass degeneracy \(M_{1}=M_{2}\) and the orthogonality condition (50) could in principle be justified by flavor symmetries [53], which would require much more structure and will not be treated here. This links the coupling \(f\) with \(\lambda\) so that all the terms in (25) scale as \(1/v_{1}^{2}\). The vanishing couplings (49) also make the BSM contribution to the electron EDM negligible, making it easily compatible with the current precise measurement [54]. For the muon EDM, the combination (51) contributes but it is equally negligible compared to the current limit [55].
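A small consistency sketch of the conditions (49)-(51), with purely illustrative \(\lambda^{(1)}\) entries rather than values obtained from Eq. (13), shows that the flavor-violating chiral combinations vanish identically while the \(\mu\mu\) one remains adjustable through \(\zeta\):

```python
import numpy as np

# Placeholder lambda^(1) entries for the e and mu columns (illustration only;
# in the model they follow from Eq. (13)).
lam_e  = np.array([0.05 + 0.02j, 0.03 - 0.01j])   # (lambda^(1)_{1e},  lambda^(1)_{2e})
lam_mu = np.array([0.12 - 0.03j, 0.10 + 0.05j])   # (lambda^(1)_{1mu}, lambda^(1)_{2mu})
zeta = 60.0

# Conditions (49)-(50): electron couplings off, muon couplings orthogonal to lam_e.
f_e  = np.zeros(2, dtype=complex)
f_mu = zeta * np.array([lam_e[1], -lam_e[0]])

mue_1 = np.vdot(lam_mu, f_e.conj())    # lambda^(1)dagger_{mu i} f*_{i e}
mue_2 = np.vdot(lam_e,  f_mu.conj())   # lambda^(1)dagger_{e i}  f*_{i mu}
mumu  = np.vdot(lam_mu, f_mu.conj())   # Eq. (51), stays adjustable through zeta

print(abs(mue_1), abs(mue_2), abs(mumu))   # -> 0, 0, nonzero (tunable via zeta)
```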
Considering now the complete contribution to the dipole coefficient \(C_{\beta\alpha}^{\sigma R}\) in (25), turning off (or suppressing) the electron couplings (49) eliminates the \(f^{2}\) contribution (25c) for \(\alpha=e\) or \(\beta=e\). Analogously, the orthogonality condition (50) eliminates the mixed (left-right) contribution (25a), i.e., the chiral enhanced contribution, to the flavor transition \(\mu\to e\) if \(N_{1},N_{2}\) have equal masses. Then, only the \(\lambda^{2}\) term (left-left) in (25b), which is not enhanced, contributes to \(\mu\to e\gamma\) while the chiral enhanced part still contributes dominantly to \(a_{\mu}\). As the mass difference \(\Delta M_{N}=M_{2}-M_{1}\) increases, the chiral enhanced contribution to \(\mu\to e\gamma\) also grows rapidly. In principle, we could change the orthogonality condition (50) to keep the chiral enhanced part (25a) vanishing, at the cost of devising a mass dependent condition. This vanishing could even be extended to the whole dipole coefficient (25). We will keep the simple orthogonality condition (50), and use the mass difference \(\Delta M_{N}\) as a quantifier of the degree of tuning.
With the orthogonality condition, the couplings \(f_{i\alpha}\) are completely determined by free variables related to neutrino masses (\(\text{Im}(z)\), \(M_{1}=M_{2}=M_{N}\)), the vev \(v_{1}\), and the scaling parameter \(\zeta\). Note that the real part of \(z\) is not physical for \(M_{1}=M_{2}\). Thus, for fixed values of \(M_{N}\) and \(v_{1}\), we can impose perturbativity bounds \(|f_{i\alpha}|<4\pi\) as a function of \(\text{Im}\,z\) and \(\zeta\). If the masses of the charged scalars and their mixing are also fixed, we can obtain further bounds by requiring compatibility with \((g-2)_{\mu}\) and \(\mu\to e\gamma\). For both NO and IO, we choose
\[M_{S_{1}}=350\;\text{GeV}\,,\quad M_{S_{2}}=450\;\text{GeV}\,,\quad s_{\gamma}= 0.1\,. \tag{52}\]
Figure 3 shows allowed regions in the plane \(\text{Im}(z){-}\zeta\) for \((g-2)_{\mu}\) (blue), \(\mu\to e\gamma\) (pink) and \(\mu e\) conversion in Au (yellow). Perturbativity for \(f_{i\alpha}\) (dashed curves) is also shown. The left (right) figure is for NO (IO), for which \(M_{N}=1\,\text{TeV}\) and \(v_{1}=10^{-3}\;\text{GeV}\) (\(v_{1}=2\times 10^{-3}\;\text{GeV}\)). The mixing angles of the PMNS matrix are fixed at the best-fit values of [56] while we choose \(\delta=218^{\circ}\). Using a different value of \(\delta\) leads to different shapes and regions for this plot and others that follow, but the overall possibility of explaining \((g-2)_{\mu}\) and avoiding CLFV does not change significantly. The change is larger for NO. For IO the variation is not significant.
As illustrated in the plot, the strongest constraint on \(\text{Im}(z)\) comes from \(\mu\to e\gamma\). There is
no visible constraint on \(\zeta\) from \(\mu\to e\gamma\) because the chiral enhanced term was chosen to vanish. The constraint on \(\zeta\) will mainly come from \((g-2)_{\mu}\), whose dominant chiral enhanced contribution depends linearly on this variable. In turn, perturbativity bounds on \(f_{i\alpha}\) impose upper bounds on \(\zeta\). For each of NO and IO, we see that the following benchmark points account for \((g-2)_{\mu}\) while still evading the current CLFV constraints:
\[\begin{split}\text{(BM-NO)}&\quad M_{N}=1\,\text{ TeV}\,,\quad v_{1}=10^{-3}\,\text{GeV}\,,\quad z=0.2i\,,\quad\zeta=60\,;\\ \text{(BM-IO)}&\quad M_{N}=1\,\text{TeV}\,,\quad v _{1}=2\times 10^{-3}\,\text{GeV}\,,\quad z=-0.1i\,,\quad\zeta=60\,.\end{split} \tag{53}\]
Only \(v_{1}\) and \(z\) differ in the two points. The rest of the model parameters are fixed as (52).
For comparison and to assess the degree of tuning, we show in Fig. 4 the allowed regions analogous to Fig. 3, with the same parameters, except for the lifting of the mass degeneracy of the RHNs to \(M_{2}=1.0001\,M_{1}\). We can see that we lose compatibility between \((g-2)_{\mu}\) at \(1\sigma\) and \(\mu\to e\gamma\), as the chiral enhanced term of the latter will not be completely canceled. The region of compatibility for \((g-2)_{\mu}\) is practically unchanged. We fix \(\text{Re}(z)=0\) but we have checked that variation of \(\text{Re}(z)\) may lead at most to a difference of a factor of two.
We illustrate in figures 5-7 the interplay of the distinct contributions to \(g-2\), \(\mu\to e\gamma\), and \(\mu e\) conversion in Au, respectively, using the benchmarks defined in (52) and (53). In all the plots, we show the influence of the \(\zeta\) parameter on the various contributions for each observable.

Figure 3: Allowed regions in the plane \(\text{Im}(z)\times\zeta\) for NO (left) and IO (right). Scalar masses and \(s_{\gamma}\) are fixed as (52) while \(M_{N}=1\,\text{TeV}\) and \(v_{1}=x\times 10^{-3}\) GeV with \(x=1\) (\(x=2\)) for NO (IO). The region above the gray (green) dashed lines is excluded by perturbativity on the couplings \(f_{1\mu}\) (\(f_{2\mu}\)). The orange region is allowed by the present constraint on \(\mu N\to eN\), while the light red region is related to \(\mu\to e\gamma\). The dark (light) blue region shows the 1(2)-\(\sigma\) region allowed by \((g-2)_{\mu}\). The crosses denote the benchmark points in (53). The mixing angles of the PMNS are fixed at the best-fit of [56] whereas \(\delta=218^{\circ}\).

Starting with the contributions to \(g-2\), Fig. 5, we show that the chiral enhanced term given by (25a) is dominant, easily explaining the present anomaly. We also show in the figure that allowing for non-degenerate neutrino masses changes very little. Considering \(\mu\to e\gamma\), for non-degenerate masses the chiral contribution can easily surpass the bound on \(\mu\to e\gamma\). For the case of degenerate masses, the present bound can be avoided for this benchmark, even though for the future limit this would not be the case. The important point to notice is that all the contributions can be suppressed by increasing \(v_{1}\), since all of them are proportional to \(v_{1}^{-2}\) under the orthogonality condition. However, since only the chiral contribution is proportional to \(s_{2\gamma}\), we can still compensate by increasing the value of \(s_{\gamma}\). This reasoning was applied when choosing the benchmark for IO, where \(v_{1}\) is higher than for NO. Lastly, we comment on the present bounds on \(\mu e\) conversion in Au, Fig. 7. We show that the non-dipole contribution given by (37) is negligible compared to the dipole contribution given by (25). Moreover, as in the case of \(\mu\to e\gamma\), if the neutrino masses are non-degenerate we can easily surpass the present bound.
To illustrate how the CLFV constraints are sensitive to the mass difference \(M_{2}-M_{1}\) of the RHNs, we can choose for NO the scaling \(v_{1}=10^{-3}\,\mathrm{GeV}\sqrt{M_{1}/\mathrm{TeV}}\) such that the Yukawas \(\lambda^{(1)}\) in (13) has the overall scale fixed as
\[(\lambda^{(1)}{\lambda^{(1)}}^{\dagger})_{11}=\frac{M_{1}}{v_{1}^{2}}\Big{(} m_{2}|c_{z}|^{2}+m_{3}|s_{z}|^{2}\Big{)}\sim 0.05\times|s_{z}|^{2}\,, \tag{54}\]
and the benchmark (53) is attained for \(M_{1}=1\,\mathrm{TeV}\). For the benchmark value we can see that \(\lambda^{(1)}\sim O(0.1)\) and \(f\) is larger by \(\zeta\) if respecting the orthogonality condition. For IO we choose twice the value for \(v_{1}\) so that the benchmark (53) is also attained for \(M_{1}=1\,\mathrm{TeV}\).
We start with contour curves, shown in Fig. 8, for the current and future limit on \(\mu\to e\gamma\) as a function of \(\text{Im}(z)\) and \(M_{1}\) for different values of \(\Delta=M_{2}/M_{1}-1\). We clearly see that the curves move to the right as the mass difference increases, showing that the constraints get stronger with the mass difference. The benchmark point, marked with a cross, is already excluded for \(\Delta=10^{-4}\) for both NO and IO. In the future, this benchmark will be excluded even for degenerate masses. A curve similar to the blue curve can also be found in Ref. [32], which considers the pure \(\nu\)-2HDM without the charged singlet and is hence equivalent to our case with \(f_{j\alpha}\) turned off. The shape of the curve is not exactly the same because their treatment of the Casas-Ibarra parametrization of the minimal case is not appropriate. For certain curves, there are islands near \(M_{1}\sim 10^{2.3}\,\mathrm{GeV}\) indicating a destructive interference between the chiral enhanced contribution and the \(\lambda^{2}\) contribution in (25).

Figure 4: The same as Fig. 3, but for non-degenerate neutrino masses: \(M_{2}=1.0001\,M_{1}\). Left: NO. Right: IO.

Figure 5: The absolute value of the contributions to \((g{-}2)_{\mu}\) as defined in eq. (25) as a function of \(\zeta\), for the benchmarks defined in (52) and (53) (left is NO and right is IO). The red line stands for the dominant chiral enhanced term (25a). The blue line refers to the \(\lambda^{2}\) term, (25b), while the green line corresponds to the \(f^{2}\) term, (25c). Both the green and blue contributions are negative, while the red one is positive. The dark (light) gray band corresponds to the 1(2)-\(\sigma\) region allowed at present for \((g{-}2)_{\mu}\). The curves assume \(M_{1}=M_{2}\) while the bands around them show the variation for \(|M_{2}/M_{1}-1|\leq 0.4\).

Figure 6: The contributions to \(\mu\to e\gamma\) as defined in eq. (25) as a function of \(\zeta\), for the benchmarks defined in (52) and (53) (left is NO and right is IO). The red line stands for the chiral enhanced term (25a) while the blue line refers to the \(\lambda^{2}\) term, cf. (25b). The dashed (continuous) gray line corresponds to the present (future) limits. Here, \(\Delta=M_{2}/M_{1}-1\). The band around the blue line shows how much this contribution varies if \(\Delta\) varies within \(|\Delta|\leq 0.4\).
In Fig. 9, we show similar contours for \(\mu e\) conversion in nuclei. We can see that the future constraints will be much stronger than the current ones. Currently, a small mass difference is still allowed, but in the future the benchmark will be easily excluded.

Figure 7: The contributions to Br(\(\mu\)Au\(\to\)eAu) as defined in eqs. (25) and (37) as a function of \(\zeta\), for the benchmark defined in (52) and (53) (left is NO and right is IO). The red line stands for the dipole contribution (25), which is dominant. Moreover, if the neutrino masses are chosen non-degenerate, it can violate the present bound, given by the dashed gray line. The blue line is related to the non-dipole contribution, (37). Here, \(\Delta=M_{2}/M_{1}-1\). The band around the blue line shows how much this contribution varies if \(\Delta\) varies within \(|\Delta|\leq 0.4\).
Finally, we briefly comment on the dependence of our results on the masses of the charged scalars. The expression of the chiral enhanced contribution (25a) involves a cancellation between the contributions of the two charged scalars and it vanishes for degenerate masses. So, up to a certain point, increasing the mass difference leads to an increase in the contribution to \(g-2\). This information can be seen in Fig. 10, where we show \(1\sigma\) regions satisfying the \((g-2)_{\mu}\) constraint in the plane of \(M_{S_{1}}\) and \(M_{N}=M_{1}=M_{2}\), keeping the ratio fixed as \(M_{S_{2}}/M_{S_{1}}=450/350\approx 1.29\) (blue) and \(M_{S_{2}}/M_{S_{1}}=2\) (gray). We can see that the larger mass ratio allows a larger compatibility region with larger masses for the scalars and the RHNs. The benchmark points defined in (52) and (53) correspond to the origin of the plot in the corner of the blue region. Following this point to the right inside the blue region would still allow compatibility with \((g-2)_{\mu}\) but with a decreasing contribution to CLFV. We should remark that the mass difference of the charged scalars cannot be arbitrarily large as this would require an increasingly large \(\mu_{\varphi}\). The benchmark we have chosen, given by (52), is conservative in this sense, since it assumes \(\mu_{\varphi}\sim-30\;\mathrm{GeV}\). For higher values, for instance, \(-\mu_{\varphi}\sim\mathrm{TeV}\), it will still be possible to explain \((g-2)_{\mu}\) with masses of the charged scalars and the RHNs of \(\mathcal{O}(\mathrm{TeV})\) given all other parameters fixed as before. For even higher values of \(-\mu_{\varphi}\sim 20\) TeV, we can avoid future constraints on \(\mu e\) conversion in nuclei and still satisfy \((g-2)_{\mu}\).

Figure 9: Contours for \(\mathrm{Br}(\mu N{\rightarrow}eN)\) for fixed \(\zeta=60\), and varying \(M_{2}/M_{1}-1=0,10^{-4},10^{-3},10^{-2}\) (respectively in blue, green, red and purple). The vev \(v_{1}=x\times 10^{-3}\,\mathrm{GeV}\sqrt{M_{1}/\mathrm{TeV}}\) scales with \(\sqrt{M_{1}}\) so that the global scale for the Yukawa \(\lambda^{(1)}\) is fixed. The continuous curves denote \(N=\mathrm{Au}\) (current) and the dashed denote \(N=\mathrm{Ti}\) (future). The mass is \(M_{N}=M_{1}\) and \(z\) is the parameter in (13) for NO (similarly for IO) with \(\mathrm{Re}(z)=0\). Left: NO with \(x=1\). Right: IO with \(x=2\). The cross denotes the benchmark points defined in (52) and (53) for NO and IO.
To conclude this section, we have seen that the solution for \((g-2)_{\mu}\) within our model requires a delicate balance of parameters to evade the strong bounds coming from CLFV processes. In general, increasing \(s_{\gamma}\), increasing \(f_{j\alpha}\) or decreasing \(v_{1}\) will increase the chiral enhanced contribution, which dominates in \((g-2)_{\mu}\) but also in the CLFV processes if not properly suppressed.
## VI Summary
In order to connect the mechanism of neutrino mass generation with the \((g-2)_{\mu}\) anomaly, we proposed to add a single charged singlet to the neutrinophilic 2HDM and a variant which implements a low scale seesaw by attributing part of the smallness of neutrino masses to a small vev. We studied two models: (a) the \(\nu\)-2HDM version and (b) the type IB seesaw version, both for the minimal case of two righthanded neutrinos. A chiral enhanced contribution to \((g-2)_{\mu}\) is generated through the exchange of charged scalars and righthanded neutrinos, the latter also participating in the neutrino mass generation. As family lepton number breaking is also brought to low scale, the chiral enhanced contribution generically leads to large rates for CLFV processes. We find that the type IB seesaw implementation does not have enough freedom to circumvent the constraints from CLFV while solving the \((g-2)_{\mu}\) anomaly. The \(\nu\)-2HDM version, on the other hand, has enough
freedom to allow some special cases where the \((g-2)_{\mu}\) anomaly can be solved while still avoiding the stringent CLFV constraints. Even in these special cases, the region of compatibility between \((g-2)_{\mu}\) and current CLFV is very restricted. One region for some choices of parameters is given in Fig. 3. It is clear that our solutions are not restricted to the minimal case of two RHNs, as this limit can be mimicked in the presence of three righthanded neutrinos and more regions may open up. In the future, experiments of \(\mu e\) conversion in different nuclei are expected to drastically improve the limits and this kind of solution to the \((g-2)_{\mu}\) anomaly will be put to the test.
###### Acknowledgements.
A.C. acknowledges support from National Council for Scientific and Technological Development - CNPq through projects 166523/2020-8 and 201013/2022-3. G.D.C. acknowledges financial support by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001. C.C.N. acknowledges partial support by Brazilian Fapesp, grant 2014/19164-6, and CNPq, grant 312866/2022-4.
## Appendix A Wilson coefficients
Here we describe briefly how we obtain the Wilson coefficients of the effective photonic operators (19). The calculations are based on appendices B and C.
### \(\nu\)-2HDM model
For the dipole part, the contribution (25b) comes from the \(LL\) part of the penguin diagram leading to (24a) with the following couplings multiplying (23):
\[\begin{split} N_{k}-S_{1}^{+}:&\quad-e\lambda_{ \beta k}^{(1)\dagger}\frac{v_{2}c_{\gamma}}{v}\lambda_{k\alpha}^{(1)}\frac{v_{ 2}c_{\gamma}}{v}\,,\\ N_{k}-S_{2}^{+}:&\quad-e\lambda_{\beta k}^{(1) \dagger}\frac{v_{2}(-s_{\gamma})}{v}\lambda_{k\alpha}^{(1)}\frac{v_{2}(-s_{ \gamma})}{v}\,.\end{split} \tag{26}\]
The masses should be attributed accordingly. The contribution (25c) comes from the \(RR\) part of the penguin diagram leading to (24a), \(L\) - \(R\) exchanged, with the following couplings multiplying (23):
\[\begin{split} N_{k}-S_{1}^{+}:&\quad-ef_{\beta k}^{ \mathsf{T}}s_{\gamma}f_{k\alpha}^{*}s_{\gamma}\,,\\ N_{k}-S_{2}^{+}:&\quad-ef_{\beta k}^{\mathsf{T}}c_{ \gamma}f_{k\alpha}^{*}c_{\gamma}\,.\end{split} \tag{27}\]
The chirally enhanced contribution (25a) comes from the \(LR\) part of the penguin diagram leading to (24b), with the following couplings multiplying (100):
\[\begin{split} N_{k}-S_{1}^{+}:&\quad-e\lambda_{\beta k }^{(1)\dagger}\frac{v_{2}c_{\gamma}}{v}f_{k\alpha}^{*}s_{\gamma}\,,\\ N_{k}-S_{2}^{+}:&\quad-e\lambda_{\beta k}^{(1) \dagger}\frac{v_{2}(-s_{\gamma})}{v}f_{k\alpha}^{*}c_{\gamma}\,.\end{split} \tag{101}\]
For the non-dipole part, there are only chirality preserving contributions. The integral \(I_{LL}^{\mu}\) (24a) and the coefficient (101) should be multiplied by (100) while the analogous \(I_{RR}^{\mu}\) should be multiplied by (101), resulting in the coefficients (37).
### Seesaw type IB
The contribution (30b) comes from the \(LL\) part (24a) with the following couplings multiplying (100):
\[\begin{split} N-S_{1}^{+}:&\quad-e\lambda_{\beta 1}^{ \dagger}\frac{v_{2}c_{\gamma}}{v}\lambda_{1\alpha}\frac{v_{2}c_{\gamma}}{v}\,, \\ N-S_{2}^{+}:&\quad-e\lambda_{\beta 1}^{ \dagger}\frac{v_{2}(-s_{\gamma})}{v}\lambda_{1\alpha}\frac{v_{2}(-s_{\gamma}) }{v}\,.\end{split} \tag{102}\]
The contribution (30c) comes from the \(RR\) part of (24a), \(L\) - \(R\) exchanged, with the following couplings multiplying (101):
\[\begin{split} N-S_{1}^{+}:&\quad-ef_{\beta 2}^{ \mathsf{T}}s_{\gamma}f_{2\alpha}^{*}s_{\gamma}\,,\\ N-S_{2}^{+}:&\quad-ef_{\beta 2}^{\mathsf{T}}c_{ \gamma}f_{2\alpha}^{*}c_{\gamma}\,.\end{split} \tag{103}\]
The chirally enhanced contribution (30a) comes from the \(LR\) part of the penguin diagram leading to (24b), with the following couplings multiplying (100):
\[\begin{split} N-S_{1}^{+}:&\quad-e\lambda_{\beta 1}^{ \dagger}\frac{v_{2}}{v}c_{\gamma}f_{2\alpha}^{*}s_{\gamma}\,,\\ N-S_{2}^{+}:&\quad-e\lambda_{\beta 1}^{ \dagger}\frac{v_{2}}{v}(-s_{\gamma})f_{2\alpha}^{*}c_{\gamma}\,.\end{split} \tag{104}\]
Other combinations are forbidden by the pseudo-Dirac nature of \(N\).
For the non-dipole part, there are only chirality preserving contributions. The integral \(I_{LL}^{\mu}\) (24a) and the coefficient (101) should be multiplied by (100) while the analogous \(I_{RR}^{\mu}\) should be multiplied by (101), resulting in the coefficients (39).
## Appendix B Loop integrals
Our calculations for loop integrals and operators are similar to Refs. [57; 58]. The following loop integrals come from self-energy diagrams and penguin diagrams in Fig. 2:
\[iI_{LL} =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{R\not{k}}{(k^{2}-M_{N}^{2})[(k -p)^{2}-M_{\varphi}^{2}]}\] \[=\frac{iR}{(4\pi)^{2}}\frac{\not{p}}{2}\left\{\frac{1}{\epsilon} +\log\frac{\bar{\mu}^{2}}{M_{\varphi}^{2}}+h_{S}(x)+\frac{p^{2}}{M_{\varphi} ^{2}}8\tilde{f}_{S}(x)\right\}\,, \tag{11a}\] \[iI_{LR} =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{RM_{N}}{(k^{2}-M_{N}^{2})[(k -p)^{2}-M_{\varphi}^{2}]}\] \[=\frac{iR}{(4\pi)^{2}}M_{N}\left\{\frac{1}{\epsilon}+\log\frac{ \bar{\mu}^{2}}{M_{\varphi}^{2}}+\frac{1-x+x\log x}{1-x}+\frac{p^{2}}{M_{\varphi }^{2}}2f_{S}(x)\right\}\,, \tag{11b}\]
\[iI_{LL}^{\mu} =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{R\not{k}(p_{1}+p_{2}-2k)^{\mu }}{(k^{2}-M_{N}^{2})[(k-p_{1})^{2}-M_{\varphi}^{2}][(k-p_{2})^{2}-M_{\varphi}^ {2}]}\] \[=-\frac{iR}{(4\pi)^{2}}\biggl{\{}\tfrac{1}{2}\gamma^{\mu}\left[ \frac{1}{\epsilon}+\log\frac{\bar{\mu}^{2}}{M_{\varphi}^{2}}+h_{S}(x)\right]+ \frac{1}{M_{\varphi}^{2}}(q^{2}\gamma^{\mu}-\not{q}q^{\mu})\tfrac{1}{6}G_{S}(x)\] \[\qquad\qquad+\ \frac{1}{M_{\varphi}^{2}}[(\not{p_{1}}+\not{p_{2}})(p_{ 1}+p_{2})^{\mu}+(p_{1}^{2}+p_{2}^{2})\gamma^{\mu}]2\tilde{f}_{S}(x)\biggr{\}}\,, \tag{12a}\] \[iI_{LR}^{\mu} =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{RM_{N}(p_{1}+p_{2}-2k)^{\mu}} {(k^{2}-M_{N}^{2})[(k-p_{1})^{2}-M_{\varphi}^{2}][(k-p_{2})^{2}-M_{\varphi}^{2 }]}\] \[=-\frac{iR}{(4\pi)^{2}}\frac{M_{N}}{M_{\varphi}^{2}}(p_{1}+p_{2}) ^{\mu}2f_{S}(x)\,, \tag{12b}\]
where \(x=M_{N}^{2}/M_{\varphi}^{2}\) and \(q=p_{2}-p_{1}\). These expressions should be supplied with couplings and enclosed by spinors \(\bar{u}(p_{2})\) and \(u(p_{1})\) to give the amplitudes. Simple chirality exchange \(L\leftrightarrow R\) leads to identical expressions with the projector exchanged. We use dimensional regularization with \(d=4-2\epsilon\) and retain only terms up to \(M_{\varphi}^{-2}\) or \(M_{N}^{-2}\). The additional loop function that appears is
\[h_{S}(x)=\frac{1-4x+3x^{2}-2x^{2}\log x}{2(1-x)^{2}}\,. \tag{13}\]
The results for \(I_{LL}\) and \(I_{LL}^{\mu}\) match [57] for \(x\to 0\), since \(8\tilde{f}_{S}(0)=1/3\) and \(G_{S}(0)=1/3\).
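As a side check (not part of the derivation above), the loop function \(h_{S}\) of (13) is easily evaluated numerically; the short Python sketch below only verifies that it is regular on \((0,1)\), with \(h_{S}(x)\to 1/2\) as \(x\to 0\) and \(h_{S}(x)\to 0\) as \(x\to 1\), where the apparent pole cancels.

```python
import numpy as np

# Illustrative evaluation of the loop function h_S(x) of Eq. (13).
def h_S(x):
    return (1 - 4 * x + 3 * x**2 - 2 * x**2 * np.log(x)) / (2 * (1 - x)**2)

# h_S is regular on (0,1): it tends to 1/2 for x -> 0 and to 0 for x -> 1.
for x in (1e-6, 0.1, 0.5, 0.9, 1 - 1e-4):
    print(f"h_S({x:g}) = {h_S(x):.6f}")
```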
For the last integral in (12b), we can use the Gordon-type identity where we can replace
\[(p_{1}+p_{2})^{\mu}\to-i\sigma^{\mu\nu}q_{\nu}+\not{p_{2}}\gamma^{\mu}+ \gamma^{\mu}\not{p_{1}}\,. \tag{14}\]
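As an added consistency check (with the standard convention \(\sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]\), so that \(\gamma^{\mu}\gamma^{\nu}=g^{\mu\nu}-i\sigma^{\mu\nu}\)), this replacement follows from

\[\not{p}_{2}\gamma^{\mu}+\gamma^{\mu}\not{p}_{1}=(p_{1}+p_{2})^{\mu}+i\sigma^{\mu\nu}(p_{2}-p_{1})_{\nu}=(p_{1}+p_{2})^{\mu}+i\sigma^{\mu\nu}q_{\nu},\]

which, rearranged, is precisely (14).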
## Appendix C Operators
The operators relevant to CLFV are the photon interactions written in (19). The operator in the first line is the dipole contribution whereas the ones in the second line are the non-dipole
(ND) part. These Wilson coefficients at 1-loop can be obtained by matching the full theory with the effective theory through appropriate 1-loop amplitudes. The relevant amplitudes lead to the expressions in (11) and (12) which come from the self-energy and dipole diagrams in Fig. 2. To obtain the full expressions, one only needs to add the coupling constants and adapt the masses.
Let us focus first on the chirality preserving contributions. The coefficient of \(\not{p}\) in (11a) and of the \(\gamma^{\mu}\) part in (12a) give rise to the operator \(\bar{\psi}_{L}\not{D}\psi_{L}\), \(\psi\) being the collection of lepton fields, and it should be removed by wave function renormalization. The coefficient of \(\not{p}p^{2}\) in (11a) and the last term in the square brackets in (12a) lead to the operator \(\bar{\psi}_{L}(\not{D}D^{2}+D^{2}\not{D})\psi_{L}\)[57]. This operator can be replaced by \(\bar{\psi}_{L}\not{D}^{3}\psi_{L}\) with additional dipole contributions. The former operator does not lead to relevant physical phenomena. The result is that the last term in \(I^{\mu}_{LL}\) (12a) generates
\[\delta C^{\sigma R}_{\beta\alpha}=\frac{(-1)}{(4\pi)^{2}}\frac{1}{2M_{\varphi }^{2}}2\tilde{f}_{S}(x)m_{\alpha}\,, \tag{13}\]
while an analogous \(I^{\mu}_{RR}\) leads to
\[\delta C^{\sigma R}_{\beta\alpha}=m_{\beta}\frac{(-1)}{(4\pi)^{2}}\frac{1}{2M _{\varphi}^{2}}2\tilde{f}_{S}(x)\,. \tag{14}\]
These contributions are chirality flipping after the use of the equations of motion. Finally, the contribution proportional to \(G_{S}(x)\) in (12a) leads to the non-dipole term with
\[C^{\text{ND-}L}_{\beta\alpha}=\frac{(-1)}{(4\pi)^{2}}\frac{1}{6M_{\varphi}^{2 }}G_{S}(x)\,. \tag{15}\]
A similar term comes from \(I^{\mu}_{RR}\).
Let us now turn to the chirality flipping contributions. The coefficient of \(M_{N}\) in (11b) should be removed by lepton mass renormalization. The coefficient of \(p^{2}M_{N}\) in (11b) and the coefficient of \(\not{p}_{2}\gamma^{\mu}+\gamma^{\mu}\not{p}_{1}\) in (12b), after replacement (14), leads to the operator \(\bar{\psi}_{L}\not{D}^{2}\psi_{R}\) which is not phenomenologically relevant. The dipole contribution (14) in (12b) leads to
\[\delta C^{\sigma R}_{\alpha\beta}=\frac{1}{(4\pi)^{2}}\frac{M_{N}}{M_{\varphi }^{2}}2f_{S}(x)\,. \tag{16}\]
|
2309.14099 | Counting geodesic loops on surfaces of genus at least 2 without
conjugate points | In this paper we prove asymptotic estimates for closed geodesic loops on
compact surfaces with no conjugate points. These generalize the classical
counting results of Huber and Margulis and sector theorems for surfaces of
strictly negative curvature. We will also prove more general sector theorems,
generalizing results of Nicholls and Sharp for the special case of surfaces of
strictly negative curvature. | Mark Pollicott, Khadim War | 2023-09-25T12:50:34Z | http://arxiv.org/abs/2309.14099v1 | # Counting geodesic loops on surfaces of genus at least \(2\) without conjugate points
###### Abstract.
In this paper we prove asymptotic estimates for closed geodesic loops on compact surfaces with no conjugate points. These generalize the classical counting results of Huber and Margulis and sector theorems for surfaces of strictly negative curvature. We will also prove more general sector theorems, generalizing results of Nicholls and Sharp for the special case of surfaces of strictly negative curvature.
2020 Mathematics Subject Classification: 37C35, 37D40, 53C22
## 1. Introduction
For a closed surface \(M\) of negative curvature there are classical results which count the number of geodesic arcs starting and ending at a given reference point \(p\in M\) and whose length is at most \(t\), say. For constant curvature surfaces these were proved by Huber in 1959, and for variable curvature surfaces these were proved by Margulis in 1969. In particular, they give simple asymptotic estimates for this counting function as \(t\to+\infty\). In this brief note we will extend these results in Corollary 1.6 to the more general setting of surfaces without conjugate points.
There are refinements of the original counting results of Huber and Margulis whereby the geodesics are restricted to lie in a sector. These were shown for constant curvature surfaces by Nicholls in 1983, and for variable curvature surfaces by Sharp in 2001. We will describe generalizations of these results to surfaces without conjugate points in Corollaries 1.5 and 1.6. These will follow from a more general statement (Theorem 1.3) which appears below.
We begin with some general notation. Let \((M,g)\) be a closed Riemannian manifold, \(SM\) the unit tangent bundle of \(M\) and let \(\pi:SM\to M\) be the natural projection to the footpoint.
Let \(t,\theta,\theta^{\prime}>0\) and \(v_{0},v_{0}^{\prime}\in SM\) with \(\pi v_{0}=\pi v_{0}^{\prime}=p\), say. We want to count geodesic loops \(c:[0,\tau]\to M\) which:
1. start and finish at \(p\) (i.e., \(c(0)=c(\tau)=p\));
2. have length \(\tau\) less than \(t\);
3. leave the fibre \(S_{p}M\) at an angle at most \(\theta\) to \(v_{0}\); and
4. enter the fibre \(S_{p}M\) at an angle at most \(\theta^{\prime}\) to \(v_{0}^{\prime}\).
(see Figure 1).
**Definition 1.1**.: _Given an angle \(0<\theta\leq\pi\) and a unit tangent vector \(v_{0}\in SM\), we define the following arc in the fibre \(S_{\pi v_{0}}M\):_
\[J(v_{0},\theta):=\{w\in S_{p}M:\measuredangle_{p}(v_{0},w)\leq\theta\},\]
_i.e., the unit tangent vectors \(w\) in the same fibre as \(v_{0}\) at an angle at most \(\theta\)._
This allows us to introduce convenient notation for the collection of geodesic arcs satisfying properties (1)-(4).
**Definition 1.2**.: _We let \(\mathcal{C}(t,J(v_{0},\theta),J(v_{0}^{\prime},\theta^{\prime}))\) denote the set of geodesic loops \(c:[0,\tau]\to M\) based at \(c(0)=c(\tau)=p\in M\) of length \(\tau\leq t\) and satisfying \(c^{\prime}(0)\in J(v_{0},\theta)\) and \(c^{\prime}(\tau)\in J(v_{0}^{\prime},\theta^{\prime})\)._
We will now consider the problem of counting the number
\[\#\mathcal{C}(t,J(v,\theta),J(v^{\prime},\theta^{\prime}))\]
of such geodesic arcs.
We will work in the general setting of closed surfaces \(M\) of genus at least \(2\) that have no conjugate points, i.e., for any two points \(p,q\in M\) there is no geodesic from \(p\) to \(q\) along which there is a non-trivial Jacobi field vanishing at \(p\) and \(q\). By the Cartan-Hadamard theorem, an equivalent formulation is that there is a unique geodesic arc joining
distinct points in the universal cover \(\widetilde{M}\). Examples include the special case that \(M\) has non-positive curvature. We refer to [1] for another well known example.
Finally, using the following notation
\[S^{2}M:=\{(v,v^{\prime})\in SM\times SM:\pi v=\pi v^{\prime}\}\]
we can formulate our main result.
**Theorem 1.3**.: _Let \(M\) be a closed connected surface of genus at least \(2\) without conjugate points. Then there exists \(\theta_{0}>0\), \(h>0\) and a measurable positive function \(a:S^{2}M\times(0,\theta_{0})^{2}\to\mathbb{R}_{>0}\) such that_
\[\#\mathcal{C}(t,J(v,\theta),J(v^{\prime},\theta^{\prime}))\sim a(v,v^{\prime},\theta,\theta^{\prime})e^{ht},\text{ as }t\to+\infty \tag{1}\]
_i.e., \(\lim_{t\to+\infty}\frac{\#\mathcal{C}(t,J(v,\theta),J(v^{\prime},\theta^{ \prime}))}{a(v,v^{\prime},\theta,\theta^{\prime})e^{ht}}=1\). Moreover if the geodesic flow is expansive 2 then the function \(a(\cdot,\cdot,\cdot,\cdot)\) is continuous._
Footnote 2: A flow \(\phi_{t}:SM\to SM\) is _expansive_ if for all \(\delta>0\) there exists \(\epsilon>0\) such that if \(d(\phi_{t}(x),\phi_{s(t)}(y))<\delta\) for all \(t\in\mathbb{R}\) for \(x,y\in SM\) and a continuous map \(s:\mathbb{R}\to\mathbb{R}\) then \(y=\phi_{t}(x)\) where \(|t|<\epsilon\).
In the statement of the theorem the value \(h\) is the topological entropy of the geodesic flow on the unit tangent bundle \(SM\).
**Remark 1.4**.: _In the special case that \(M\) has constant curvature then \(a(\cdot)\) is a constant function, and when \(M\) has variable negative curvature it is known that \(a(\cdot)\) is a continuous function (not least because it is expansive)._
Theorem 1.3 has corollaries which extend several classical results from the context of negative curvature. In particular, this leads to generalizations of classical counting and sector theorems. For example, when we set \(\theta^{\prime}=\pi\) then this gives the following.
**Corollary 1.5** (Sector Theorem).: _Given \(0<\theta\leq\pi\) there exists \(a=a(p,\theta)>0\) such that the number of geodesic arcs which: start at \(p\in M\) and finish at \(q\in M\); leave \(S_{p}M\) at an angle at most \(\theta\) to \(v_{0}\); and have length at most \(t\), is asymptotic to \(ae^{ht}\) as \(t\to+\infty\)._
This generalizes results from [8], [9], [10].
Furthermore, when \(\theta=\theta^{\prime}=\pi\) then this further reduces to the original counting result:
**Corollary 1.6** (Arc counting).: _There exists \(a=a(p)>0\) such that the number of geodesic arcs which start at \(p\in M\), finish at \(q\in M\) and have length at most \(t\) is asymptotic to \(ae^{ht}\) as \(t\to+\infty\)._
This generalizes results from [5], [6], [7].
Finally, we can describe equidistribution result of a slightly different flavour. Let \(\widehat{M}\) be a finite cover for \(M\). We can associate to any
geodesic arc \(c\) on \(M\) which starts and ends at \(p\in M\) (and has length \(L_{c}\)) a lift \(\widehat{c}\) to \(\widehat{M}\). The following corollary estimates the proportion of geodesic arcs whose lifts \(\widehat{c}\) to \(\widehat{M}\) satisfy \(\widehat{c}(0)=\widehat{c}(L_{c})\).
**Corollary 1.7** (Equidistribution in finite covers).: _The proportion of geodesic arcs \(c\) which start and end at \(p\in M\), have lifts \(\widehat{c}\) which start and end at the same point in \(\widehat{M}\), and have length at most \(t\) is asymptotic to_
\[\frac{\operatorname{Area}(M)}{\operatorname{Area}(\widehat{M})}ae^{ht}\text{ as }t \rightarrow+\infty\]
This corollary can be used to prove a result related to the first homology group \(H_{1}(M,\mathbb{Z})\). Each closed loop \(c\) based at \(p\) gives rise naturally to an element \(\langle c\rangle\in H_{1}(M,\mathbb{Z})\). Let us consider a finite index subgroup \(G<H_{1}(M,\mathbb{Z})\); then to a geodesic arc \(c\) we can associate the coset \(\langle c\rangle G\in H_{1}(M,\mathbb{Z})/G\).
**Corollary 1.8** (Homological Equidistribution).: _Fix a coset \(\alpha\in H_{1}(M,\mathbb{Z})/G\). The proportion of geodesic arcs \(c\) which start and finish at \(p\in M\), satisfy \(\langle c\rangle G=\alpha\) and have length at most \(t\) is asymptotic to_
\[\#(H_{1}(M,\mathbb{Z})/G)\,ae^{ht}\text{ as }t\rightarrow+\infty\]
**Remark 1.9**.: _The theorem and each of the corollaries has a natural equivalent formulation in terms of the action \(\Gamma\times X\to X\) of the covering group \(\Gamma=\pi_{1}(M)\) on the universal cover \(X\). For example, Corollary 1.6 gives an asymptotic estimate for \(\#\{g\in\Gamma\) : \(d_{X}(\overline{p},g\overline{p})\leq t\}\) where \(\overline{p}\in X\) and \(d_{X}\) is the lifted Riemannian metric to \(X\)._
## 2. Closed arcs and isometries
The structure of the proof of Theorem 1.3 follows the lines of Margulis' original proof. However, it requires modifications using a number of recent techniques from [2], [3]. A key ingredient is the construction of the measure of maximal entropy for the geodesic flow \(\phi_{t}:SM\to SM\).
### Some Notation
Let \(X\) be the universal cover of \(M\) with the lifted metric. The covering group \(\Gamma\cong\pi_{1}(M)\) acts on \(X\) with \(M=X/\Gamma\).
Let \(SX\) denote the unit tangent bundle for \(X\) and let \(\overline{\pi}:SX\to X\) denote the canonical projection of a unit tangent vector in \(SX\) to its footpoint in \(X\). Let \(\overline{p}\in X\) be a lift of \(p\in M\) and let \(\underline{B}(\overline{p},R)\subset X\) denote a ball of radius \(R>0\) about \(\overline{p}\). We can use this to give a convenient definition of topological entropy [4].
**Definition 2.1**.: _The topological entropy \(h=h(\phi)\) is given by_
\[h=\lim_{R\rightarrow+\infty}\frac{\log\operatorname{Vol}(\underline{B}( \overline{p},R))}{R}.\]
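As an illustration (not needed for the results below), if \(M\) has constant curvature \(-1\) then \(\operatorname{Vol}(\underline{B}(\overline{p},R))=2\pi(\cosh R-1)\), and therefore

\[h=\lim_{R\rightarrow+\infty}\frac{\log\left(2\pi(\cosh R-1)\right)}{R}=1.\]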
Given \(\overline{v}\in SX\), let \(c=c_{\overline{v}}:\mathbb{R}\to X\) denote the unique geodesic such that \(c_{\overline{v}}(0)=\overline{\pi}(\overline{v})\) and \(c^{\prime}_{\overline{v}}(0)=\overline{v}\).
**Definition 2.2**.: _Let \(\partial X\) denote the ideal boundary of \(X\) consisting of equivalence classes \([c]\) of geodesics \(c:\mathbb{R}\to X\) which stay a bounded distance apart._
(See [2, Section 2] for a detailed description of the construction and properties of \(\partial X\)). In particular, every geodesic \(c_{\overline{v}}:\mathbb{R}\to X\) defines two points \(c(\pm\infty)\in\partial X\), which it is convenient to denote \(\overline{v}^{-}:=c(-\infty)\) and \(\overline{v}^{+}:=c(+\infty)\). The natural action \(\Gamma\times X\to X\) extends to an action of \(\Gamma\) on \(\partial X\) given by \(g[c]=[gc]\), where \(g\in\Gamma\).
**Definition 2.3**.: _Given \(\overline{p}\in X\), the Busemann function \(b_{\overline{p}}(\cdot,\cdot):X\times\partial X\to\mathbb{R}\) is defined by_
\[b_{\overline{p}}(\overline{q},\xi)=\lim_{t\to+\infty}d(\overline{q},c_{v}(t))-t\]
_for \(\overline{v}\in S_{\overline{p}}X\) satisfying \(\xi=c_{\overline{v}}(+\infty)\)[2, Definition 2.16]._
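For instance (a standard computation, recorded here only to illustrate the convention), in the upper half-plane model of constant curvature \(-1\), take \(\overline{p}=i\) and let \(\xi\) be the endpoint at infinity of the vertical geodesic \(c_{\overline{v}}(t)=ie^{t}\). Then for \(\overline{q}=x+iy\) one finds

\[b_{\overline{p}}(\overline{q},\xi)=\lim_{t\to+\infty}\left(d(x+iy,ie^{t})-t\right)=-\log y,\]

so the level sets of \(b_{\overline{p}}(\cdot,\xi)\) are the horizontal lines \(\{y=\mathrm{const}\}\).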
We next recall the characterization of Patterson-Sullivan measures on the boundary \(\partial X\) constructed in [2, Proposition 5.1].
**Definition 2.4**.: _The Patterson-Sullivan measures on \(\partial X\) are a family of measures \(\{\mu_{\overline{p}}:\overline{p}\in X\}\) which transform under the action of \(\Gamma\) on \(\partial X\) by_
\[\frac{d\mu_{\overline{p}}\gamma}{d\mu_{\overline{p}}}(\xi)=e^{-hb_{\overline{p }}(\gamma\overline{p},\xi)}\]
_for \(\gamma\in\Gamma\) and \(\xi\in\partial X\)_
The Busemann function is also used in defining horocycles.
**Definition 2.5**.: _The stable horocycle is defined by_
\[H_{\xi}(\overline{p})=\{\overline{q}\in X\text{ : }b_{\overline{p}}(\overline{q}, \xi)=0\}\]
_and the unstable horocycle is defined by_
\[H_{\xi}^{-}(\overline{p})=\{q\in X\text{ : }b_{\overline{p}}(\overline{q},-\xi)=0\}\]
_where \(-\xi\) is the antipodal vector to \(\xi\)._
Finally, we define a class of tangent vectors which will serve us well in the proof.
**Definition 2.6**.: _We denote by \(\mathcal{E}\subset SX\) the set of expansive vectors consisting of those unit tangent vectors whose stable and unstable horocycles intersect at exactly one point._
### The measure of maximal entropy
We begin with a correspondence which is useful in the construction of measures of maximal entropy.
**Definition 2.7**.: _The Hopf map \(H\colon SX\to\partial^{2}X\times\mathbb{R}\) is defined by_
\[H(\overline{v}):=(\overline{v}^{-},\overline{v}^{+},s(\overline{v}))\quad\text{ where }\quad s(\overline{v}):=b_{\overline{p}}(\overline{\pi}(\overline{v}),\overline{v}^{-}). \tag{2}\]
In particular, following [2, Lemma 5.5] this family of measures defines a \(\Gamma\)-invariant measure \(\overline{\mu}\) on \(\partial X\times\partial X\setminus\text{diag}\) (where \(\text{diag}\subset\partial X\times\partial X\) are the diagonal elements) characterized by
\[d\overline{\mu}(\xi,\eta)=e^{h\beta_{\overline{p}}(\xi,\eta)}d\mu_{\overline{ p}}(\xi)d\mu_{\overline{p}}(\eta),\text{ for }\xi,\eta\in\partial X, \tag{3}\]
where \(\beta_{\overline{p}}(\xi,\eta)\) is the distance in \(X\) between the horospheres \(H_{\overline{p}}(\xi)\) and \(H_{\overline{p}}(\eta)\), see Figure 2 (ii) (or [3, Figure 1]).
**Definition 2.8**.: _The Hopf transform carries \(d\overline{\mu}\times dt\) to a measure \(d\overline{m}:=H_{*}(d\overline{\mu}\times dt)\) on \(SX\)._
There is a natural projection from \(SX\) to \(SM\) (taking \(v\) to \(v\Gamma\)). The following result was proved in [2, Theorem 1.1].
**Lemma 2.9**.: _The measure \(\overline{m}\) on \(SX\) projects (after normalization) to the measure \(\underline{m}\) of maximal entropy for the geodesic flow on \(SM\) (i.e., \(\pi_{*}\overline{m}=\underline{m}\) and \(h(\underline{m})=h\)). Moreover,_
_(1) \(\underline{m}\) is unique, strongly mixing 3 and fully supported; and_
Footnote 3: \(\underline{m}(\mathcal{E})=1\) (cf. [2, Equation (2.10)]).
We now turn to the final ingredients in the proof.
Figure 2. (i) The geometric interpretation of the Busemann function as the signed distance of \(\overline{q}\) from \(H_{\xi}(\overline{p})\) corresponds to \(b_{p}(q,\xi)\); (ii) The distance between the horocycles \(H_{\xi}(\overline{p})\) and \(H_{\eta}(\overline{p})\) corresponds to \(b_{\overline{p}}(\xi,\eta)\)
### Flow boxes
For the remainder of this section, we fix a choice of \((v_{0},v_{0}^{\prime})\in S^{2}M\cap\mathcal{E}^{2}\). We can then associate to the sets \(J(v_{0},\theta),J(v_{0}^{\prime},\theta^{\prime})\subset SM\) in Definition 1.2 a choice of lifts \(\overline{J}(v_{0},\theta),\overline{J}(v_{0}^{\prime},\theta^{\prime})\subset SX\).
To proceed we want to consider the natural images of these sets in \(\partial X\):
**Definition 2.10**.: _We can associate to \(J(v_{0},\theta)\) and \(J(v_{0}^{\prime},\theta^{\prime})\) their "future" and "past" subsets of \(\partial X\) defined, respectively, by_
\[\mathbf{F}=\mathbf{F}_{\theta}:=\{\bar{w}^{+}:\bar{w}\in\overline{J}(v_{0}, \theta)\}\text{ and }\mathbf{P}=\mathbf{P}_{\theta}:=\{\bar{w}^{-}:\bar{w}\in \overline{J}(v_{0},\theta)\}\]
\[\mathbf{F}^{\prime}=\mathbf{F}_{\theta^{\prime}}:=\{\bar{w}^{+}:\bar{w}\in \overline{J}(v_{0}^{\prime},\theta^{\prime})\}\text{ and }\mathbf{P}^{\prime}=\mathbf{P}_{\theta^{ \prime}}:=\{\bar{w}^{-}:\bar{w}\in\overline{J}(v_{0}^{\prime},\theta^{\prime})\}.\]
The sets \(\mathbf{F},\mathbf{P},\mathbf{F}^{\prime},\mathbf{P}^{\prime}\subset\partial X\) will be used to construct flow boxes for the geodesic flow. Assume first that \(\epsilon>0\) is small (with respect to the injectivity radius of \(M\)) and then choose \(\theta_{1}>0\) such that for all \(\theta<\theta_{1}\) we have
\[\operatorname{diam}\left(\pi H^{-1}(\mathbf{P}\times\mathbf{F}\times\{0\}) \right)<\frac{\epsilon}{2}\]
(see [3, Lemma 3.9]). For \(\alpha\leq\frac{3}{2}\epsilon\) and \(\theta\in(0,\theta_{1})\) we define two different flow boxes 4 \(B_{\theta}^{\alpha}\) and \(B_{\theta^{\prime}}^{\epsilon^{2}}\) (of different "lengths" \(\alpha\) and \(\epsilon^{2}\), respectively) in \(SX\) by:
Footnote 4: for the geodesic flow \(\phi_{t}:SX\to SX\) on \(SX\)
\[\overline{B}_{\theta}^{\alpha} :=H^{-1}(\mathbf{P}\times\mathbf{F}\times[0,\alpha])\text{ and }\] \[\overline{B}_{\theta^{\prime}}^{\epsilon^{2}} :=H^{-1}(\mathbf{P}^{\prime}\times\mathbf{F}^{\prime}\times[0,\epsilon^{2}]). \tag{4}\]
(cf. [3, (3.11) and (3.12)]).
Let \(\underline{B}_{\theta}^{\alpha}=\pi(B_{\theta}^{\alpha})\) and \(\underline{B}_{\theta^{\prime}}^{\epsilon^{2}}=\pi(B_{\theta^{\prime}}^{ \epsilon^{2}})\) be their projections onto \(SM\).
**Remark 2.11**.: _Since the function \(\rho^{\prime}\to m(\underline{B}^{\epsilon^{2}}_{\rho^{\prime}})\) is nondecreasing, and thus has countably many discontinuities (by Lebesgue's Theorem), we can suppose without loss of generality that \(\theta^{\prime}\in(0,\theta_{1})\) is a continuity point, and so, in particular,_
\[\lim_{\rho^{\prime}\to\theta^{\prime}}m(\underline{B}^{\epsilon^{2}}_{\rho^{ \prime}})=m(\underline{B}^{\epsilon^{2}}_{\theta^{\prime}}). \tag{5}\]
In order to give a dynamical approach to the counting problem the following two definitions will prove useful. Let \(\phi^{t}:SX\to SX\) denote the geodesic flow on \(SX\).
**Definition 2.12**.: _For \(t>0\) we can define two subsets of \(\Gamma\) by:_
\[\Gamma_{\theta,\theta^{\prime}}(t):=\{\gamma\in\Gamma:\overline{B}^{\epsilon^ {2}}_{\theta^{\prime}}\cap\phi^{-t}\gamma_{*}\overline{B}^{\alpha}_{\theta} \neq\emptyset\} \tag{6}\]
\[\Gamma^{*}_{\theta,\theta^{\prime}}(t):=\{\gamma\in\Gamma_{\theta,\theta^{ \prime}}(t):\gamma\mathbf{F}\subset\mathbf{F}^{\prime}\text{ and }\gamma^{-1}\mathbf{P}\subset\mathbf{P}^{\prime}\}. \tag{7}\]
_where the sets have an implicit dependence on \(\epsilon,\alpha,v_{0},v_{0}^{\prime}\). (cf. [3, (4.4) and (4.14)].)_
By definition we have \(\Gamma^{*}_{\theta,\theta^{\prime}}(t)\subset\Gamma_{\theta,\theta^{\prime}}(t)\) and although we may not expect the reverse inclusion to be true, we have the following slightly more modest result.
**Lemma 2.13**.: _For every \(\rho^{\prime}\in(0,\theta^{\prime})\) and \(\rho\in(0,\theta)\), there exists \(t_{0}>0\) such that_
\[\Gamma_{\rho,\rho^{\prime}}(t)\subset\Gamma^{*}_{\theta,\theta^{\prime}}(t) \quad\text{ for all }\quad t\geq t_{0}.\]
We postpone the proof of Lemma 2.13 until Appendix A.
The next lemma shows there is an inclusion of the set defined in Definition 1.2 into \(\Gamma(t)\).
**Lemma 2.14**.: _We have an injection_
\[\mathcal{C}(t,J(v_{0},\theta),J(v_{0}^{\prime},\theta^{\prime}))\hookrightarrow \Gamma(t)\]
_which associates to a geodesic \(c\) the associated homotopy class \([c]\in\pi_{1}(M)\cong\Gamma\)._
We postpone the proof of Lemma 2.14 until Appendix A.
Although we may not expect the reverse inclusion in Lemma 2.14 to be true, we at least have the following partial result.
**Lemma 2.15**.: _For every \(\rho^{\prime}\in(0,\theta^{\prime})\), there exists \(t_{0}>0\) such that there is an inclusion_
\[\Gamma_{\theta,\rho^{\prime}}(t)\hookrightarrow\mathcal{C}(t\pm 2\epsilon,J(v_{0},\theta),J(v_{0}^{\prime},\theta^{\prime}))\quad\forall t>t_{0}.\]
Again we postpone the proof of Lemma 2.15 until Appendix A.
## 3. Proof of the counting results
In this section we will use results from the previous section to prove the following proposition, which easily implies Theorem 1.3.
**Proposition 3.1**.: _We have an asymptotic expression for the cardinality of \(\Gamma(t)\) of the form:_
\[\#\Gamma(t)\sim e^{ht}\overline{m}(B)\frac{\mu_{\overline{p}}(\mathbf{F}^{ \prime})}{\mu_{\overline{p}}(\mathbf{F})}\text{ as }t\to+\infty. \tag{8}\]
_Moreover, if the geodesic flow is expansive then the quantity \(m(B)\frac{\mu_{\overline{p}}(\mathbf{F}^{\prime})}{\mu_{\overline{p}}( \mathbf{F})}\) depends continuously on \(v,v^{\prime},\theta,\theta^{\prime}\)._
**Remark 3.2**.: _The constant on the righthand side of (8) depends on \(p\), but not on the choice of \(\bar{p}\in\pi^{-1}(p)\)._
We begin with a little more notation. Let
\[S_{\theta}=H^{-1}\left(\mathbf{P}\times\mathbf{F}\times[0,\epsilon^{2}]\right) \subset SX \tag{9}\]
be another flow box and let
\[\Gamma^{*}(t,\alpha):=\{\gamma\in\Gamma^{*}:\,S_{\theta}\cap\gamma_{*}\phi^{- t}B^{\alpha}_{\theta}\neq\emptyset\}.\]
The proof of Proposition 3.1 now depends on the following two technical lemmas.
**Lemma 3.3**.: _For \(\gamma\in\Gamma^{*}(t,\alpha)\), we have_
\[B^{\epsilon^{2}}_{\theta^{\prime}}\cap\phi^{-(t+2\epsilon^{\frac{3}{2}})} \gamma_{*}B^{\alpha+4\epsilon^{\frac{3}{2}}}_{\theta}=H^{-1}(\mathbf{P}^{ \prime}\times\gamma\mathbf{F}\times[0,\epsilon^{2}])=:S^{\gamma}.\]
The next lemma describes the \(\overline{m}\)-measure of the set \(S^{\gamma}\).
**Lemma 3.4**.: _For each \(\gamma\in\Gamma^{*}\), we have_
\[\overline{m}(\overline{S}^{\gamma})=\epsilon^{2}e^{\pm 4h\epsilon}e^{-ht} \mu_{p}(\mathbf{P}^{\prime})\mu_{p}(\mathbf{F}),\]
_and similarly with \(\overline{m}\) and \(\overline{S}^{\gamma}\) on \(SX\) replaced by the projections \(m\) and \(S^{\gamma}=\pi(\overline{S}^{\gamma})\) onto \(SM\)._
We postpone the proofs of both of these lemmas until Appendix B.
Proof of Proposition 3.1.: This follows the general lines of §5.2 in [3]. It follows from Lemmas 2.13 and 3.3 that given any \(\alpha\in(0,\frac{3}{2}\epsilon]\) and \(\rho^{\prime}\in(0,\theta^{\prime}),\rho\in(0,\theta)\), for all sufficiently large \(t\) we have
\[\underline{B}^{\epsilon^{2}}_{\theta^{\prime}}\cap\phi^{-t}\underline{B}^{ \alpha}_{\theta}\subset\bigcup_{\gamma\in\Gamma^{*}(t,\alpha)}\underline{S}^{ \gamma}\subset\underline{B}^{\epsilon^{2}}_{\theta^{\prime}}\cap\phi^{-(t+2 \epsilon^{2})}\underline{B}^{\alpha+4\epsilon^{2}}_{\theta}\]
by proving the corresponding result on \(SX\) and projecting to \(SM\).
Using Lemma 3.4, for all \(\gamma\in\Gamma^{*}(t)\), we have
\[e^{-4h\epsilon}\underline{m}(\underline{B}^{\epsilon^{2}}_{\rho^ {\prime}}\cap\phi^{-t}\underline{B}) \leq\epsilon^{2}\#\Gamma^{*}(t,\alpha)e^{-ht}\mu_{p}(\mathbf{P}^{ \prime})\mu_{p}(\mathbf{F})\] \[\leq e^{4h\epsilon}\underline{m}(\underline{B}^{\epsilon^{2}}_{ \theta^{\prime}}\cap\phi^{-(t+2\epsilon^{2})}\underline{B}^{\alpha+4\epsilon^ {2}}_{\theta}).\]
Sending \(t\to\infty\), using mixing, and dividing through by \(\underline{m}(\underline{B^{\epsilon^{2}}_{\theta^{\prime}}})\underline{m}( \underline{B^{\alpha}_{\theta}})=\overline{m}(B^{\epsilon^{2}}_{\theta^{\prime}}) \overline{m}(B^{\alpha}_{\theta})\), we get
\[e^{-4h\epsilon}\frac{\overline{m}(B^{\epsilon^{2}}_{\rho^{\prime}})}{\overline{m}(B^{\epsilon^{2}}_{\theta^{\prime}})}\lesssim\frac{\epsilon^{2}\#\Gamma^{*}(t,\alpha)\mu_{p}(\mathbf{P}^{\prime})\mu_{p}(\mathbf{F})}{e^{ht}\overline{m}(B^{\epsilon^{2}}_{\theta^{\prime}})\overline{m}(B^{\alpha}_{\theta})}\lesssim e^{4h\epsilon}\frac{\overline{m}(B^{\alpha+4\epsilon^{2}}_{\theta})}{\overline{m}(B^{\alpha}_{\theta})}.\]
By (5), that is, assuming that \(\theta^{\prime}\) is a point of continuity for \(\rho^{\prime}\mapsto m(B^{\prime}_{\rho^{\prime}})\), we can send \(\rho^{\prime}\nearrow\theta^{\prime}\) and obtain
\[e^{-5h\epsilon}\lesssim\frac{\#\Gamma^{*}(t,\alpha)}{e^{ht}\overline{m}(B)} \frac{\mu_{p}(\mathbf{F})}{\mu_{p}(\mathbf{F}^{\prime})}\lesssim e^{5h \epsilon}(1+4\epsilon^{2}/\alpha). \tag{10}\]
Finally we need to replace \(\#\Gamma^{*}(t,\alpha)\) by \(\#\Gamma(t)\) (compare with [3, (5.4)]).
This ends the proof of (8). Finally, if the geodesic flow is expansive then the space of geodesics is in bijection with \(\partial^{2}X\); using that the Busemann function \(b_{p}(q,\xi)\) depends continuously on \((p,q,\xi)\), we conclude that \(m(B)\frac{\mu_{\overline{p}}(\mathbf{F}^{\prime})}{\mu_{\overline{p}}(\mathbf{F})}\) depends continuously on \(v,v^{\prime},\theta,\theta^{\prime}\).
In order to allow for arbitrary \(\theta\) and \(\theta^{\prime}\) in the main theorem we can break the arcs \(J(\cdot,\cdot)\) into smaller pieces and apply the proposition.
## Appendix A Proofs of lemmas on isometries and closed arcs
This section is devoted to the proof of Lemmas 2.13, 2.14 and 2.15. The proof of Lemma 2.14 is relatively easy, while Lemmas 2.13 and 2.15 both use a geometric feature of surfaces without conjugate points that we first recall here.
**Definition A.1**.: _A simply connected Riemannian manifold \(X\) without conjugate points is a (uniform) visibility manifold if for every \(\epsilon>0\) there exists \(L>0\) such that whenever a geodesic \(c:[a,b]\to X\) stays at a distance at least \(L\) from some point \(p\in X\), then the angle sustained by \(c\) at \(p\) is less than \(\epsilon\), that is_
\[\measuredangle_{p}(c)=\sup_{a\leq s,t\leq b}\measuredangle_{p}(c(s),c(t))<\epsilon.\]
Proof of Lemma 2.13.: The proof uses [3, Lemma 4.9] with the choices \(R=\mathbf{F}^{\prime}_{\rho^{\prime}}\), \(Q=\mathbf{P}^{\prime}_{\rho^{\prime}}\), \(V=int(\mathbf{F}^{\prime}_{\theta^{\prime}})\) and \(U=int(\mathbf{P}^{\prime}_{\theta^{\prime}})\).
Proof of Lemma 2.14.: Let \(\underline{c}\in\mathcal{C}(t,J(v_{0},\theta),J(v^{\prime}_{0},\theta^{\prime}))\) and \(c\) be the lift of \(\underline{c}\) on \(X\) with \(\underline{c}(0)=p\). There exists \(\gamma\in\Gamma\) such that \(c(t)=\gamma p=\gamma c(0)\). Let \(\mathrm{pr}_{*}:SX\to SM\) be the map associated to \(\pi:X\to M\) then by definition of \(\mathcal{C}(t,J(v_{0},\theta),J(v^{\prime}_{0},\theta^{\prime}))\), for \(w=c^{\prime}(t)\), \(\mathrm{pr}_{*}\,w\in B^{\epsilon^{2}}_{\theta^{\prime}}\) and \(\phi^{-t}w=c^{\prime}(0)\in B^{\alpha}_{\theta}\) implies that \(\bar{w}:=\gamma_{*}w\in B^{\epsilon^{2}}_{\theta^{\prime}}\) for some \(\gamma\in\Gamma\). Therefore \(\bar{w}\in B^{\epsilon^{2}}_{\theta^{\prime}}\cap\phi^{-t}\gamma_{*}B^{\alpha}_ {\theta}\).
Proof of Lemma 2.15.: Let \(\gamma\in\Gamma_{\theta,\rho^{\prime}}(t)\) and \(w\in B_{\rho^{\prime}}^{\varepsilon^{2}}\cap\phi^{-t}\gamma_{*}B_{\theta}^{\alpha}\). By the triangle inequality
\[d(p,\gamma p)\leq d(p,\pi w)+d(\pi w,\pi\phi^{t}w)+d(\pi\phi^{t}w,\gamma p).\]
By [3, Lemma 3.10 ], we have \(d(p,\pi w)\leq\operatorname{diam}(B^{\prime})\leq 2\epsilon\) and \(d(\pi\phi^{t}w,\gamma p)\leq\operatorname{diam}(B)\leq 2\epsilon\). Substituting these into the above display inequality gives
\[d(p,\gamma p)\leq t+4\epsilon.\]
We are left to prove that the geodesic \(c:=c_{p,\gamma p}\) connecting \(p\) to \(\gamma p\) satisfies \(c^{\prime}(0)\in J(v_{0},\theta)\) and \(c^{\prime}(d(p,\gamma p))\in J(v_{0}^{\prime},\theta^{\prime})\).
Let \(v\in S_{p}X\) be such that \(v^{+}=w^{+}\in\mathbf{F}\); in particular, there exists \(R>0\) such that \(d(c_{v}(t),c_{w}(t))\leq R\) and therefore the geodesic connecting \(\gamma p\) to \(c_{v}(t)\) stays at distance at least \(t-2R\). Then, using the uniform visibility, there exists \(t_{0}\) such that for all \(t>t_{0}\) we have \(\measuredangle_{p}(v,c^{\prime}(0))\leq\theta-\rho\), which implies that \(c^{\prime}(0)\in\mathbf{F}\). Therefore, by the uniform visibility, we have \(\measuredangle_{p}(c^{\prime}_{p,\gamma p}(0),c^{\prime}_{v}(0))\leq\theta-\rho\), in particular \(c^{\prime}_{p,\gamma p}(0)\in J(v_{0},\theta)\). Similarly we use the same visibility condition for the point \(\gamma p\) and the geodesic joining \(p\) and \(c_{v}(-t)\), where \(v\in S_{\gamma p}X\) with \(v^{-}=w^{-}\). Thus the geodesic \(c_{p,\gamma p}\) belongs to \(\mathcal{C}(t\pm 2\epsilon,J(v_{0},\theta),J(v_{0}^{\prime},\theta^{\prime}))\).
## Appendix B Counting
This section is devoted to the proof of Lemmas 3.3 and 3.4. The proof uses some geometric quantities that we will define first.
**Definition B.1**.: _For \(\xi\in\partial X\) and \(\gamma\in\Gamma\), we let \(b_{\xi}^{\gamma}:=b_{\xi}(\gamma p,p)\)_
**Lemma B.2**.: _Given any \(\gamma\in\Gamma^{*}=\{\gamma\in\Gamma\) : \(\gamma\mathbf{F}\subset\mathbf{F}\) and \(\gamma^{-1}\mathbf{P}\subset\mathbf{P}\}\) and any \(t\in\mathbb{R}\), we have_
\[B_{\theta^{\prime}}^{\varepsilon^{2}}\cap\phi^{-t}\gamma_{*}B_{\theta}^{ \alpha}=\{w\in E^{-1}(\mathbf{P}^{\prime}\times\gamma\mathbf{F}):s(w)\in[0, \epsilon^{2}]\cap(b_{w^{-}}^{\gamma}-t+[0,\alpha])\}.\]
Proof of Lemma B.2.: To prove that \(B_{\theta^{\prime}}^{\epsilon^{2}}\cap\phi^{-t}\gamma_{*}B_{\theta}^{\alpha}\subset E^{-1}(\mathbf{P}^{\prime}\times\gamma\mathbf{F})\), we observe that if \(E(w)\notin\mathbf{P}^{\prime}\times\gamma\mathbf{F}\), then either \(w^{-}\notin\mathbf{P}^{\prime}\), so \(w\notin B_{\theta^{\prime}}^{\epsilon^{2}}\), or \(w^{+}\notin\gamma\mathbf{F}\), so \(w\notin\phi^{-t}\gamma_{*}B_{\theta}^{\alpha}\).
It remains to show that given \(w\in E^{-1}(\mathbf{P}^{\prime}\times\gamma\mathbf{F})\), we have
\[w\in B_{\theta^{\prime}}^{\epsilon^{2}} \Leftrightarrow s(w)\in[0,\epsilon^{2}],\text{ and } \tag{11}\] \[w\in\phi^{-t}\gamma_{*}B_{\theta}^{\alpha} \Leftrightarrow s(w)\in b_{w^{-}}^{\gamma}-t+[0,\alpha]. \tag{12}\]
The first of these is immediate from the definition of \(B^{\prime}\). For the second, we observe that \(s(v)=b_{v^{-}}(\pi v,p)=b_{\gamma v^{-}}(\gamma\pi v,\gamma p)\), and thus
\[\gamma_{*}B =\{\gamma_{*}v:v\in E^{-1}(\mathbf{P}\times\mathbf{F})\text{ and }b_{v^{-}}(\pi v,p)\in[0,\alpha]\}\] \[=\{w\in E^{-1}(\gamma\mathbf{P}\times\gamma\mathbf{F}):b_{w^{-}}( \pi w,\gamma p)\in[0,\alpha]\}\]
By [3, Equation (3.1)] and [3, Equation (3.2)], we have
\[b_{w^{-}}(\pi w,\gamma p)=b_{w^{-}}(\pi w,p)+b_{w^{-}}(p,\gamma p)=s(w)-b_{w^{-}}^ {\gamma};\]
moreover, since \(s(\phi^{t}w)=s(w)+t\) by [3, Equation (3.8)], we see that \(\phi^{t}w\in\gamma_{*}B\) if and only if \(s(w)-b_{w^{-}}^{\gamma}+t\in[0,\alpha]\), which proves (12) and completes the proof of the lemma.
Proof of Lemma 3.3.: By Lemma B.2, the fact that \(B_{\theta^{\prime}}^{\epsilon^{2}}\cap\phi^{-t}\gamma_{*}B_{\theta}^{\alpha}\neq\emptyset\) implies existence of \(\eta\in\mathbf{P}^{\prime}\) such that
\[(b_{\eta}^{\gamma}-t+[0,\alpha])\cap[0,\epsilon^{2}]\neq\emptyset\]
from which we deduce that
\[b_{\eta}^{\gamma}-t-\epsilon^{\frac{3}{2}}+[0,\alpha+2\epsilon^{\frac{3}{2}}] \supset[0,\epsilon^{2}]\]
By [3, Lemma (4.11)], it follows that every \(\xi\in\mathbf{P}^{\prime}\) has
\[(b_{\xi}^{\gamma}-t-\epsilon^{\frac{3}{2}}+[0,\alpha+2\epsilon^{\frac{3}{2}}] )\cap[0,\epsilon^{2}]\neq\emptyset\]
which in turn implies that
\[b_{\xi}^{\gamma}-t-2\epsilon^{\frac{3}{2}}+[0,\alpha+4\epsilon^{\frac{3}{2}}] \supset[0,\epsilon^{2}].\]
By Lemma B.2, this completes the proof.
Proof of Lemma 3.4.: By definition of \(\underline{m}\), we have \(\underline{m}(\underline{S}^{\gamma})=\overline{m}(\overline{S}^{\gamma})=\epsilon^{2}\bar{\mu}(\mathbf{P}^{\prime}\times\gamma\mathbf{F})\). Then we need to prove that \(\bar{\mu}(\mathbf{P}^{\prime}\times\gamma\mathbf{F})=e^{\pm 4h\epsilon}e^{-ht}\mu_{p}(\mathbf{P}^{\prime})\mu_{p}(\mathbf{F})\).
Given \((\xi,\eta)\in\mathbf{P}^{\prime}\times\gamma\mathbf{F}\), we can take \(q\) to lie on a geodesic connecting \(\xi\) and \(\eta\), with \(b_{\xi}(q,p)=0\); then we have
\[|\beta_{p}(\xi,\eta)|:=|b_{\xi}(q,p)+b_{\eta}(q,p)|\leq d(q,p)<\epsilon/2,\]
where the last inequality uses [3, Lemma 3.9]. Using this together with (3) gives
\[\bar{\mu}(\mathbf{P}^{\prime}\times\gamma\mathbf{F})=e^{\pm h\epsilon/2}\mu_{ p}(\mathbf{P}^{\prime})\mu_{p}(\gamma\mathbf{F}),\]
Using [2, Proposition 5.1 (a)] gives
\[\mu_{p}(\gamma\mathbf{F})=\mu_{\gamma^{-1}p}(\mathbf{F}),\]
and [2, Proposition 5.1 (b)] gives
\[\frac{d\mu_{\gamma^{-1}p}}{d\mu_{p}}(\eta)=e^{-hb_{\eta}(\gamma^{-1}p,p)}.\]
Let \(\eta=c(-\infty)\), where \(c:=c_{p,\gamma^{-1}p}\). Using the visibility condition as in the proof of Lemma 2.15, for \(t\) large enough \(\eta\in\mathbf{F}^{\prime}_{\theta^{\prime}+\iota}\) for some \(\iota>0\) very small. Using Lemma 2.15, \(b_{\eta}(p,\gamma p)=t\pm 4\epsilon\). By [3, Lemma 4.11], for \(\xi\in\mathbf{F}^{\prime}\), \(b_{\xi}(\gamma^{-1}p,p)\) varies by at most \(\epsilon^{2}\). We conclude that \(\mu_{p}(\gamma\mathbf{F})=e^{\pm 5\epsilon}e^{-ht}\mu_{p}(\mathbf{F})\), and this proves the lemma.
2305.19668 | Quasars: standard candles up to z=7.5 with the precision of Supernovae
Ia | Currently, the $\Lambda$ Cold Dark Matter model, which relies on the
existence of cold dark matter and a cosmological constant $\Lambda$, best
describes the Universe. However, we lack information in the high-redshift ($z$)
region between Type Ia Supernovae (SNe Ia) (up to $z=2.26$) and the Cosmic
Microwave Background ($z=1100$), an interval crucial to test cosmological
models and their possible evolution. We have defined a sample of 983 Quasars up
to $z=7.54$ with reduced intrinsic dispersion $\delta=0.007$ which determines
the matter density parameter $\Omega_M$ with the same precision as SNe Ia.
Although previous analyses have used Quasars as cosmological tools (e.g.
Risaliti and Lusso 2019), this is the first time that high-redshift sources, in
this case Quasars, as standalone cosmological probes yield such tight
constraints on $\Omega_M$. Our results show the importance of correcting
cosmological relationships for selection biases and redshift evolution and how
the choice of a golden sample reduces considerably the intrinsic scatter. This
proves the reliability of Quasars as standard candles. | Maria Giovanna Dainotti, Giada Bargiacchi, Aleksander Łukasz Lenart, Shigehiro Nagataki, Salvatore Capozziello | 2023-05-31T09:07:49Z | http://arxiv.org/abs/2305.19668v1 | # Quasars: standard candles up to z=7.5 with the precision of Supernovae Ia
###### Abstract
Currently, the \(\Lambda\) Cold Dark Matter model, which relies on the existence of cold dark matter and a cosmological constant \(\Lambda\), best describes the Universe. However, we lack information in the high-redshift (\(z\)) region between Type Ia Supernovae (SNe Ia) (up to \(z=2.26\)) and the Cosmic Microwave Background (\(z=1100\)), an interval crucial to test cosmological models and their possible evolution. We have defined a sample of 983 Quasars up to \(z=7.54\) with reduced intrinsic dispersion \(\delta=0.007\) which determines the matter density parameter \(\Omega_{M}\) with the same precision as SNe Ia. Although previous analyses have used Quasars as cosmological tools (e.g. Risaliti and Lusso, 2019), this is the first time that high-redshift sources, in this case Quasars, as standalone cosmological probes yield such tight constraints on \(\Omega_{M}\). Our results show the importance of correcting cosmological relationships for selection biases and redshift evolution and how the choice of a golden sample considerably reduces the intrinsic scatter. This proves the reliability of Quasars as standard cosmological candles.
Quasars
M. G. Dainotti, G. Bargiacchi, A. Ł. Lenart, S. Nagataki, S. Capozziello
distances (up to \(z=7.54\)). To tackle such an approach we need a standardizable candle and, in this regard, a relation exists between the X-ray and the Ultraviolet (UV) luminosities of Quasars (known as the Risaliti-Lusso relation, hereafter called RL). To use this, Quasar emission mechanisms need to be very well understood, and the relation at play should not be affected by redshift evolution (a change in redshift) or selection biases. Namely, such a relation should remain the same at all redshifts, or, if there is a redshift evolution, this should be properly accounted for before its use as a cosmological tool. Dainotti et al. (2022) have already demonstrated that the RL relation is not induced by selection biases or redshift evolution, but is intrinsic to the physics of Quasars. Here we present two main gold samples of Quasars. One is built from a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) and is composed of 983 Quasars up to \(z=7.54\), which constrains the parameter \(\Omega_{M}\) in the assumed cosmology with the same precision as SNe Ia in the Pantheon sample. The other one is instead obtained through a circularity-free procedure and consists of 975 sources with the same maximum redshift of \(z=7.54\). The paper is structured as follows: Sect. 2 presents the data and methodology, where we detail the sample and all methods; Sects. 3 and 4 present the results, where the choice of the golden sample assuming a given cosmology is explained and the solution for a golden sample completely independent of the circularity problem, including the Markov Chain sampling uncertainty results, is presented. We discuss our findings in Sect. 5.
## 2 Data set and methodology
Our initial Quasar sample is the most recent one released for cosmological studies (Lusso et al., 2020). It consists of 2421 sources in the redshift range between \(z=0.009\) and \(z=7.54\)(Banados et al., 2018) collected from eight catalogues (Nardini et al., 2019; Salvestrini et al., 2019; Vito et al., 2019) and archives (Menzel et al., 2016; Paris et al., 2018; Webb et al., 2020; Evans et al., 2010), with the addition of a subsample of low redshift Quasars that present UV observations from the International Ultraviolet Explorer and X-ray data in archives. To obtain this Quasar sample suitable for cosmological analyses, as many as possible observational biases have been meticulously inspected and removed (Risaliti and Lusso, 2015; Lusso and Risaliti, 2016; Risaliti and Lusso, 2019; Salvestrini et al., 2019; Lusso et al., 2020). We note here some differences between our sample and the samples used in previous works. Here, we study this final sample of 2421 sources without any additional selection, such as the cut at redshift \(z=0.7\) previously used in some works (Lusso et al., 2020; Bargiacchi et al., 2022), to avoid any possible induced bias due to the reduction on the redshift sample (Dainotti et al., 2022; Lenart et al., 2023).
### Correction for the redshift evolution of the luminosities
We clarify that previous works have not considered selection biases and redshift evolution with the exception of our works in the literatures (Dainotti et al., 2022; Lenart et al., 2023; Bargiacchi et al., 2023). Since Quasars are high-redshift sources, we need to account for selection biases and redshift evolution effects. These factors could induce artificial correlations between intrinsic physical quantities of sources (Dainotti et al., 2013), such as the relation between X-ray and UV luminosities for Quasars. To correct for these effects, we have applied the statistical Efron and Petrosian method (Efron and Petrosian, 1992) assuming that the luminosities evolve with redshift as \((1+z)^{k}\). This method has already been employed for GRBs (Dainotti et al., 2013, 2015, 2017, 2021, 2023) and Quasars (Dainotti et al., 2022; Lenart et al., 2023). The choice of a more complex function for the redshift evolution would not affect the results (Singal et al., 2011; Dainotti et al., 2021, 2022). Thus, the de-evolved UV and X-ray luminosities are computed as \(L^{\prime}_{UV}=L_{UV}/(1+z)^{k_{UV}}\) and \(L^{\prime}_{X}=L_{X}/(1+z)^{k_{X}}\) by using \(L_{UV}\) and \(L_{X}\) obtained from the measured flux densities \(F_{UV}\) and \(F_{X}\) (in units of \(\mathrm{erg\,s^{-1}\,cm^{-2}\,Hz^{-1}}\)) according to \(L_{X,UV}=4\,\pi\,d_{I}^{2}\,F_{X,UV}\) where \(d_{I}\) is the luminosity distance in cm and the K-correction is assumed to be equal to 1 for Quasars (Lusso et al., 2020). Here we consider a flat \(\Lambda\)CDM model. Our main gold Quasar sample of 983 sources is obtained by fixing \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) in the distance luminosity. The values of \(k_{UV}\) and \(k_{X}\) used to determine \(L^{\prime}_{UV}\) and \(L^{\prime}_{X}\) are \(k_{UV}=4.36\pm 0.08\) and \(k_{X}=3.36\pm 0.07\)(Dainotti et al., 2022), which have been obtained within a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), consistently with our assumption. The evolutionary parameter \(k\) depends on \(\Omega_{M}\), but not on \(H_{0}\)(Dainotti et al., 2022, 2023; Lenart et al., 2023). Thus, when we change the value of \(\Omega_{M}\) to \(\Omega_{M}=0.1\) and \(\Omega_{M}=1\) to test the dependence of our results on the cosmological assumptions, we accordingly use the values of \(k_{UV}\) and \(k_{X}\) corresponding to these cosmologies. More precisely, these values are obtained from the functions \(k_{UV}(\Omega_{M})\) and \(k_{X}(\Omega_{M})\) reported in Dainotti et al. (2022) and shown in their Fig. 4. The resulting values are \(k_{UV}=4.79\pm 0.08\) and \(k_{X}=3.81\pm 0.06\) for \(\Omega_{M}=0.1\) and \(k_{UV}=3.89\pm 0.08\) and \(k_{X}=2.88\pm 0.06\) for \(\Omega_{M}=1\).
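As a purely illustrative sketch of this step (not the authors' actual pipeline), the de-evolved luminosities can be obtained from the measured fluxes as follows; the flux values in the example are arbitrary, SciPy is assumed only for the numerical integration of the luminosity distance, and the \(k\) values are those quoted above for \(\Omega_{M}=0.3\).

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light [km/s]
CM_PER_MPC = 3.0857e24       # centimetres per megaparsec

def lum_distance_cm(z, omega_m=0.3, h0=70.0):
    """Luminosity distance in cm for a flat LambdaCDM model."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + 1 - omega_m)
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C_KM_S / h0) * comoving * CM_PER_MPC

def deevolved_luminosity(flux, z, k, omega_m=0.3, h0=70.0):
    """L' = 4 pi d_L^2 F / (1+z)^k, with the K-correction taken equal to 1."""
    d_l = lum_distance_cm(z, omega_m, h0)
    return 4.0 * np.pi * d_l**2 * flux / (1 + z)**k

# Example with illustrative fluxes (erg s^-1 cm^-2 Hz^-1) for a source at z = 3.
z, f_uv, f_x = 3.0, 1.0e-27, 1.0e-31
L_uv = deevolved_luminosity(f_uv, z, k=4.36)   # k_UV for Omega_M = 0.3
L_x  = deevolved_luminosity(f_x,  z, k=3.36)   # k_X  for Omega_M = 0.3
print(np.log10(L_uv), np.log10(L_x))
```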
The choice of a specific \(\Omega_{M}\) and thus, a value of \(k\) for the correction for evolution automatically induces the circularity problem. This issue can be completely overcome in the cosmological fit if we do not fix \(k\) a-priori to compute the luminosities, but we apply the functions \(k_{UV}(\Omega_{M})\) and \(k_{X}(\Omega_{M})\) in the fitting procedure while leaving also the cosmological parameters free to vary, as already done in Lenart et al. (2023) for Quasars and in Dainotti et al. (2023) for GRBs. This allows us to avoid assuming a-priori an underlying cosmology and to leave the evolutionary parameters free to vary together with the parameters of the fit. In our work, we also employ this circularity-free methodology when we fit \(\Omega_{M}\) with the gold sample of 975 sources selected
by applying the \(\sigma\)-clipping procedure to the relation between X-ray and UV fluxes, instead of luminosity. Indeed, the selection of the sample based on measured fluxes does not require any assumption on the cosmological model, contrary to the one based on luminosities, and, thus enables us to apply this circularity-free procedure to fit \(\Omega_{M}\) with a sample that is not biased by any cosmological assumption.
### Fitting procedure
We have performed all fits with the Bayesian D'Agostini method (D'Agostini, 2005). The likelihood function (\(\mathcal{L}\)) employed to constrain parameters with Quasars is given by Khadka and Ratra (2020, 2020); Lusso et al. (2020); Khadka and Ratra (2021); Colgain et al. (2022); Bargiacchi et al. (2022); Lenart et al. (2023) as:
\[\ln\mathcal{L}=-\frac{1}{2}\sum_{i=1}^{N}\left[\frac{(y_{i}-\phi_{i})^{2}}{s_{ i}^{2}}+\ln(s_{i}^{2})\right] \tag{1}\]
where "\(\ln\)" is the natural logarithm and \(N\) is the number of sources. When we fit the luminosities, \(y_{i}=\log_{10}L^{\prime}_{X}\) of the Quasar at redshift \(z_{i}\), while \(\phi_{i}\) is the logarithmic X-ray luminosity predicted from the fitted model. The quantity \(s_{i}^{2}=(\Delta\log_{10}L^{\prime}_{X})_{i}^{2}+\gamma^{2}(\Delta\log_{10} L^{\prime}_{UV})_{i}^{2}+\delta^{2}\) includes the statistical 1 \(\sigma\) uncertainties (\(\Delta\)) on both luminosities and the intrinsic dispersion \(\delta\) of the RL relation. The free parameters of the fit are \(\gamma\), \(\beta\), \(\delta\), and the ones of the cosmological model studied. The same methodology is applied when we fit fluxes instead of luminosities, just replacing \(\log_{10}L^{\prime}_{UV}\) and \(\log_{10}L^{\prime}_{X}\) with the measured \(\log_{10}F_{UV}\) and \(\log_{10}F_{X}\). In this case, \(\gamma_{\rm F}\) and \(\delta_{\rm F}\) are the slope and the intrinsic dispersion of the linear relation.
### \(\sigma\)-Clipping Technique
Before applying the \(\sigma\)-clipping procedure, we have searched for possible outliers. Thus, we have computed the maximum value of \(\Delta\log_{10}L^{\prime}_{UV}\)/\(\log_{10}L^{\prime}_{UV}\) and \(\Delta\log_{10}L^{\prime}_{X}\)/\(\log_{10}L^{\prime}_{X}\). Points with large uncertainties on luminosities would not be removed by the \(\sigma\)-clipping and would have a higher weight in the cosmological fits compared to points with smaller uncertainties. This check has revealed that our sample does not present any outliers of this kind. Independently of the \(\Omega_{M}\) value, the maximum values of \(\Delta\log_{10}L^{\prime}_{UV}\)/\(\log_{10}L^{\prime}_{UV}\) and \(\Delta\log_{10}L^{\prime}_{X}\)/\(\log_{10}L^{\prime}_{X}\) are 0.022 and 0.013, respectively, with only one source with \(\Delta\log_{10}L^{\prime}_{UV}\)/\(\log_{10}L^{\prime}_{UV}>0.02\) and four sources with \(\Delta\log_{10}L^{\prime}_{UV}\)/\(\log_{10}L^{\prime}_{UV}>0.01\). Because these values are lower than \(\sim 1-2\%\), we do not remove any Quasar from the initial sample. Hence, we apply the \(\sigma\)-clipping selection to all 2421 sources.
The \(\sigma\)-clipping allows us to reduce the intrinsic scatter of the RL relation by removing the sources with a vertical distance from the best-fit relation greater than a chosen threshold value by assuming a given cosmological model. It is used when dealing with relations presenting an intrinsic dispersion to remove possible outliers in the sample and assure a better determination of the free parameters of the relation. This procedure has already been successfully applied for Quasars to constrain cosmological parameters (Lusso et al., 2020; Bargiacchi et al., 2021, 2022). We detail the method for our case. First, we fit the RL relation with the whole Quasar sample assuming a flat \(\Lambda\)CDM model with \(H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}\) and \(\Omega_{M}=0.3\) or \(\Omega_{M}=0.1\) or \(\Omega_{M}=1\), to compute the X-ray and UV luminosities. The model used is the RL relation, thus \(\phi_{i}\) of Equation 1 is \(\phi_{i}=\gamma\,\log_{10}L^{\prime}_{UV,i}+\beta\) for the source at redshift \(z_{i}\). From this fit, we obtain the best-fit values of \(\gamma\), \(\beta\), and \(\delta\) and their 1 \(\sigma\) uncertainties. Then, we evaluate for each Quasar at \(z_{i}\) the quantity
\[\Sigma=\frac{|\log_{10}L^{\prime}_{X,i}-(\gamma\,\log_{10}L^{\prime}_{UV,i}+ \beta)|}{\sqrt{\Delta^{2}\!\log_{10}}L^{\prime}_{X,i}+\gamma^{2}\,\Delta^{2} \!\log_{10}L^{\prime}_{UV,i}+\delta^{2}} \tag{2}\]
where we use the best-fit values previously obtained for \(\gamma\), \(\beta\), and \(\delta\). This quantity is exactly the one that is minimized in the fitting algorithm (i.e. the first term in square bracket of Equation 1) to determine the parameters of the RL relation, and thus it is the most appropriate to estimate the discrepancy between the measured X-ray luminosity and the one predicted from the RL relation. Once we have computed \(\Sigma\) for each source, we select only Quasars with \(\Sigma\leq\sigma_{\rm clipping}\) and we repeat the fit of the RL relation with this reduced sample. Since the fit on this new sample yields best-fit values of \(\gamma\), \(\beta\), and \(\delta\) different from the ones of the previous fit, and thus different \(\Sigma\) values for each source, there will be Quasars in the sample considered at this step that do no fulfil the requirement \(\Sigma\leq\sigma_{\rm clipping}\) anymore. Hence, we iterate this procedure until all sources in the selected sample verify the requisite. After this \(\sigma\)-clipping process, we obtain a final Quasar sample, with the corresponding best-fit values and 1 \(\sigma\) uncertainties of \(\gamma\), \(\beta\), and \(\delta\). We have chosen several \(\sigma_{\rm clipping}\) values between 0.6 and 2 to investigate how the assumed \(\sigma_{\rm clipping}\), the best-fit value of \(\delta\), and the number of survived sources are related. This method selects the gold samples (983 Quasars) within a flat \(\Lambda\)CDM model shown in the left panel of Fig. 1 with its corresponding best-fit RL relation (purple line). We have also applied the same method to select the Quasar samples using the observed fluxes instead of the luminosities. We fit the relation between \(\log_{10}F_{X}\) and \(\log_{10}F_{UV}\) for which \(\gamma_{\rm F}\), \(\beta_{\rm F}\), and \(\delta_{\rm F}\) are the slope, intercept, and intrinsic dispersion, respectively. The gold sample of 975 Quasars obtained with the \(\sigma\)-clipping on fluxes is shown in the left panel of Fig. 4 with the best-fit linear relation (purple line).
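A minimal sketch of this iterative selection is given below; `fit_rl` is a placeholder for any routine returning the best-fit \((\gamma,\beta,\delta)\) on the current subsample (for instance by maximising Equation 1), so the names and structure are illustrative rather than the original analysis code.

```python
import numpy as np

def sigma_clip(x, y, dx, dy, fit_rl, threshold=1.5):
    """Iteratively remove sources with Sigma > threshold (Equation 2)."""
    keep = np.arange(len(x))                    # indices of surviving sources
    while True:
        gamma, beta, delta = fit_rl(x[keep], y[keep], dx[keep], dy[keep])
        resid = np.abs(y[keep] - (gamma * x[keep] + beta))
        sigma_tot = np.sqrt(dy[keep]**2 + (gamma * dx[keep])**2 + delta**2)
        ok = (resid / sigma_tot) <= threshold   # the quantity Sigma of Eq. (2)
        if ok.all():                            # every survivor now passes
            return keep, (gamma, beta, delta)
        keep = keep[ok]                         # drop the outliers and refit
```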
### Cosmological fits
For each Quasar sample produced by the \(\sigma\)-clipping with a specific threshold \(\sigma_{\rm clipping}\) we have fitted a flat \(\Lambda\)CDM model. Following the fitting procedure described above, if we use the Quasar sample selected through the \(\sigma\)-clipping on the luminosities corrected for a fixed redshift evolution, the quantities \(y_{i}\) and \(\phi_{i}\) of Equation 1 for a source at \(z_{i}\) are respectively \(y_{i}=\log_{10}F_{X,i}+\log_{10}(4\,\pi\,d_{l}^{2}(z_{i}))-k_{X}\log_{10}(1+z _{i})\) and \(\phi_{i}=\gamma\left[\log_{10}F_{UV,i}+\log_{10}(4\,\pi\,d_{l}^{2}(z_{i}))-k_{ UV}\log_{10}(1+z_{i})\right]+\beta\), where \(k_{X}\) and \(k_{UV}\) are the evolutions corresponding to the cosmological model assumed. Instead, if we perform the fit on the sample obtained from the \(\sigma\)-clipping on the measured fluxes, we require \(y_{i}=\log_{10}F_{X,i}+\log_{10}(4\,\pi\,d_{l}^{2}(z_{i}))-k_{X}(\Omega_{M}) \log_{10}(1+z_{i})\) and \(\phi_{i}=\gamma\left[\log_{10}F_{UV,i}+\log_{10}(4\,\pi\,d_{l}^{2}(z_{i}))-k_ {UV}(\Omega_{M})\log_{10}(1+z_{i})\right]+\beta\), thus avoiding the circularity problem. The luminosity distance \(d_{l}\) is computed fixing \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) and considering \(\Omega_{M}\) as a free parameter with a wide uniform prior between 0 and 1. We also leave \(\gamma\) and \(\beta\) free to vary. Hence, we obtain the best-fit values of \(\Omega_{M}\), \(\gamma\), \(\beta\), and \(\delta\) with their associated 1 \(\sigma\) uncertainty.
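A hedged sketch of the corresponding log-posterior for the circularity-free fit on fluxes is given below. The input arrays are hypothetical, the evolutionary terms \(k_{UV}\) and \(k_{X}\) are passed as fixed numbers rather than as functions of \(\Omega_{M}\) for simplicity, and the function can be fed to any MCMC or nested-sampling package.

```python
import numpy as np
from scipy.integrate import quad

C_KMS, H0 = 299792.458, 70.0          # speed of light (km/s) and fixed H0 (km/s/Mpc)
MPC_CM = 3.0857e24                     # Mpc in cm

def lum_dist_cm(z, om):
    """Luminosity distance in cm for a flat LambdaCDM model."""
    ez = lambda zp: 1.0 / np.sqrt(om * (1.0 + zp)**3 + (1.0 - om))
    dc, _ = quad(ez, 0.0, z)
    return (1.0 + z) * (C_KMS / H0) * dc * MPC_CM

def log_likelihood(theta, z, logFUV, logFX, dFUV, dFX, kUV=0.0, kX=0.0):
    """Gaussian likelihood with intrinsic scatter for the flux-based RL relation."""
    om, gamma, beta, delta = theta
    logdl2 = np.array([np.log10(4.0 * np.pi * lum_dist_cm(zi, om)**2) for zi in z])
    y = logFX + logdl2 - kX * np.log10(1.0 + z)
    phi = gamma * (logFUV + logdl2 - kUV * np.log10(1.0 + z)) + beta
    var = dFX**2 + gamma**2 * dFUV**2 + delta**2
    return -0.5 * np.sum((y - phi)**2 / var + np.log(2.0 * np.pi * var))

def log_posterior(theta, *data):
    om, gamma, beta, delta = theta
    if not (0.0 < om < 1.0) or delta <= 0.0:   # uniform prior on Omega_M in (0, 1)
        return -np.inf
    return log_likelihood(theta, *data)
```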
## 3 Results
### Golden samples with an assumed cosmology
Our initial Quasar sample is the most up-to-date for cosmological studies (Lusso et al., 2020) and is composed of 2421 sources between \(z=0.009\) and \(z=7.54\). This sample is studied to define the gold Quasar sample, that is, to obtain the tightest possible relation to be used as an efficient cosmological tool with the same precision achieved with SNe Ia. Differently from recent works (Lusso et al., 2020; Bargiacchi et al., 2021, 2022), we use the full sample of Quasars at all redshifts and we correct for the redshift evolution of the sample, as detailed before. Thus, we also retain sources at small redshifts, which were instead excluded in most previous analyses. Differently from Risaliti and Lusso (2019), the RL relation is already corrected for selection biases and redshift evolution, as shown in Dainotti et al. (2022), when the \(\sigma\)-clipping is applied. This procedure allows us to obtain the smallest intrinsic scatter of an already bias-free and evolution-free relation. Nevertheless, to constrain cosmological parameters, such as \(\Omega_{M}\), we are not only interested in reaching the smallest dispersion, but also in relying on a statistically sufficient number of sources. Thus, we need to find a compromise between these two antagonistic factors. Indeed, increasing the data set results in a larger intrinsic dispersion and vice versa. We divide our results into the search for two main golden samples, one assuming a given cosmological model and the second one without any cosmological assumption, thus completely overcoming the so-called circularity problem.
Figure 1: Left panel: The golden sample of 983 Quasars obtained with the \(\sigma\)-clipping on the \(L_{\rm X}-L_{\rm UV}\) relation assuming a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\). The resulting best-fit values of the parameters are: the slope \(\gamma=0.576\pm 0.004\), the normalization \(\beta=8.7\pm 0.2\), and the dispersion \(\delta=0.007\pm 0.004\). Blue points are the sources with error bars representing the statistical 1 \(\sigma\) uncertainties and the best-fit linear relation is drawn as a purple line. The black dashed line is the best-fit line for the 2036 Quasars used in Lusso et al. (2020), for which \(\gamma=0.562\pm 0.011\), \(\beta=9.2\pm 0.3\), and \(\delta=0.221\pm 0.004\) after correction for evolution. Right panel: Cosmological results in a flat \(\Lambda\)CDM model from our golden sample shown in the left panel. This shows the values of \(\Omega_{M}\), \(\gamma\), \(\beta\), and \(\delta\). The contour levels at 68% and 95% are represented by the inner dark and light blue regions, respectively.
If we assume a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), the optimal number of sources is 983, obtained by requiring a threshold for the \(\sigma\)-clipping \(\sigma_{\mathrm{clipping}}=1.788\), with a scatter \(\delta=0.007\) of the RL relation. The 983 Quasars created in such a manner define the golden sample shown with the corresponding best-fit RL relation in the left panel of Fig. 1.
Then, we use this sample to derive \(\Omega_{M}\) with a Monte Carlo Markov Chain (MCMC) computation, simultaneously leaving free the value of \(\Omega_{M}\) in the range from 0 to 1 with a uniform prior, together with the parameters of the RL relation: \(\gamma\) (the slope), \(\beta\) (the normalization) and \(\delta\) (the dispersion). We fix \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) assuming a standard flat \(\Lambda\)CDM model. The results of this analysis are shown in the right panel of Fig. 1, where the corner plot of the RL relation parameters and \(\Omega_{M}\) is obtained when we perform the correction for the redshift evolution. The best-fit value yields \(\Omega_{M}=0.268\pm 0.022\), which carries the same uncertainty on \(\Omega_{M}\) found with the Pantheon sample (1048 SNe Ia) assuming the same cosmology (Scolnic et al., 2018), as shown with the horizontal blue line in Fig. 2. We also recover the \(\Omega_{M}\) we have assumed to build this sample within 0.68 \(\sigma\).
Figure 2: Upper panel: The uncertainty of \(\Omega_{M}\) as a function of the number of sources and as function of the sigma-clipping showed with the colour bar on the right. The solid black line shows the best-fit of the decreasing trend of the uncertainty on \(\Omega_{M}\) vs. the number of sources, while the horizontal blue line denotes the uncertainty on \(\Omega_{M}\) reached with the Pantheon SNe Ia sample. This is obtained for our golden sample under the assumption of a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\), and \(H_{0}=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\). Lower panel: The intrinsic dispersion of the RL relation as a function of the number of sources and as a function of the sigma-clipping indicated on the right as a color bar. Black lines mark the 1 \(\sigma\) uncertainty on the \(\delta\) values.
To assess to what extent the number of sources impacts the value and the uncertainty of \(\Omega_{M}\), we show in the upper panel of Fig. 2 the values of \(\sigma_{\Omega_{M}}\) vs. the number of sources, with the varying \(\sigma\)-clipping shown as a colour bar. In the upper panel of Fig. 2, we start from the total sample of sources and apply a very restrictive criterion of \(\sigma\)-clipping = 0.6, and then we continue with larger values of the \(\sigma\)-clipping until arriving at 2.0. When we change the values of the \(\sigma\)-clipping, the smaller the \(\sigma\)-clipping, the smaller the sample size obtained if we consider sources from 300 to 1000 and from 1700 to 2000, but
Figure 3: Upper panel: The values of \(\Omega_{M}\) and their associated uncertainties vs. the number of Quasars. The color bar on the right shows the normalized probability density, indicating for each sample size the most probable value of \(\Omega_{M}\), thus the smallest uncertainty on \(\Omega_{M}\). This Fig. indicates that the smallest error bar on \(\Omega_{M}\) (the red contour) is achieved for \(N\approx 2000\), which yields \(\Omega_{M}=0.119\pm 0.019\). This is obtained for our golden sample assuming a flat \(\Lambda\)CDM model. Bottom panel: Values of \(\Omega_{M}\) with corresponding 1 \(\sigma\) uncertainties as a function of the \(\sigma\)-clipping threshold and the probability distribution function (PDF) showed with the colour bar on the right side. The red line is the best-fit of \(\Omega_{M}\) points.
when the \(\sigma\)-clipping is too small (0.045 for example, see the red point in the upper panel of Fig. 2), the uncertainties on \(\Omega_{M}\) become larger. Since the \(\sigma\)-clipping and the number of sources also determine the intrinsic dispersion of the RL relation, we show in the bottom panel of Fig. 2 the intrinsic scatter as a function of the number of sources and of the \(\sigma\)-clipping shown as a colour bar. It is clear from this figure that the dispersion of the RL relation increases monotonically with the sample size starting from around 1000 sources, while to achieve a smaller dispersion the trend is rather flat from 300 to 1000 sources. This increasing trend of the dispersion of the RL relation as a function of the number of sources is reflected by the increasing values of the uncertainties on \(\Omega_{M}\) in the range between 1000 and 1300. However, the highly non-linear process of obtaining cosmological parameters, in this case the value of \(\sigma_{\Omega_{M}}\), and the cut that the \(\sigma\)-clipping induces in the initial sample do not allow a straightforward comparison between the upper and lower panels of Fig. 2. To conclude, the number of 975 sources is the optimal compromise to obtain the smallest uncertainty on \(\Omega_{M}\) taking the dispersion into consideration. Indeed, if we were to enlarge the \(\sigma\)-clipping further, the number of sources would become larger together with the dispersion of the RL relation. This is the reason why we have mentioned before the relevance of reaching a compromise between the number of sources used and the dispersion of the RL relation, and consequently the uncertainty on \(\Omega_{M}\), before the increase we observe between 1000 and 1700 sources. We stress that the subsamples shown in Fig. 2 are not subsamples of the Gold Sample, but samples drawn independently, with the same \(\sigma\)-clipping procedure, starting from the full sample and assuming a flat \(\Lambda\)CDM model. To evaluate how many sources are needed to obtain the most probable value of \(\Omega_{M}\), we show the colour map in the upper panel of Fig. 3, where the probability density function (PDF) of \(\Omega_{M}\) is plotted as a function of the number of sources. We note from this figure that 983 sources are the best sample, as it provides closed cosmological contours with the highest corresponding probability. To complement this information, we also plot in the bottom panel of Fig. 3 the \(\Omega_{M}\) values as a function of the \(\sigma\)-clipping and of the probability density for \(\Omega_{M}\) shown in the colour bar. From this figure it is clear that a \(\sigma\)-clipping of 1.8 is the optimal value to choose our Gold sample, since the scatter on \(\Omega_{M}\) becomes small enough to reach the precision of the SNe Ia Pantheon sample. To check this result against the cosmological settings, we also test the assumptions of \(\Omega_{M}=1\) (a Universe filled by matter in which \(\Lambda=0\)) and \(\Omega_{M}=0.10\) (very close to the De Sitter Universe). We obtain that the best-fit values of \(\Omega_{M}\) are consistent within less than 3 \(\sigma\) with the a-priori assumption.
In fact, when we choose the golden sample of 980 Quasars derived with the \(\sigma_{\rm clipping}=1.788\) assuming \(\Omega_{M}=0.10\), we obtain \(\Omega_{M}=0.083\pm 0.009\), while when we select the sample of 968 Quasars assuming \(\Omega_{M}=1\), we obtain \(\Omega_{M}=0.910\pm 0.055\) for \(\sigma_{\rm clipping}=1.785\).
### The Anderson-Darling test for different Gold samples and the parent population
To check the similarity between the Gold sample and the parent population, we have applied the Anderson-Darling test to compare both the flux-flux distributions and the luminosity-luminosity distributions of the parent population and the Gold sample. The result of the test shows that the Gold sample distributions of the luminosities and fluxes are not compatible, both in X-rays and UV, with the parent population. In addition, the sample from the parent population used by Lusso et al. (2020) is also not compatible with our Gold sample. It is not surprising that we can obtain different results when other probes are added to Quasars, since the probes with smaller uncertainties (\(s\) in Eq. 1) weigh more than the ones with larger uncertainties, the \(s\) values being at the denominator of the likelihood function. In addition, since our Gold sample is already a sample for which the selection biases have been removed, the cosmological results from this sample should not necessarily be the same as those of the enlarged sample (the parent population) if the parent population undergoes selection biases and redshift evolution. Indeed, results on cosmological parameters may change if evolutionary effects are not considered (Dainotti et al., 2013, 2022, 2023; Lenart et al., 2023). Regarding instead our Gold samples, both in X-rays and UV, derived from a given cosmology (\(\Omega_{M}=0.3\) and \(H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}\)), these are compatible with the Gold samples obtained assuming other cosmologies (\(\Omega_{M}=0.1\), \(H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}\), and \(\Omega_{M}=1\), \(H_{0}=70\,\rm km\,s^{-1}\,Mpc^{-1}\)). This ensures that the selection of our Gold sample does not depend on cosmological models.
### A circularity-free golden sample
To further guarantee that we completely avoid the circularity problem for the choice of the golden sample of Quasars, we use the \(F_{X}-F_{UV}\) relation, the observer frame relation corresponding to the RL relation. As previously, we apply the same \(\sigma_{clipping}\) procedure to reduce the scatter of the \(F_{X}-F_{UV}\) relation. A \(\sigma_{\rm clipping}=1.78\) identifies an optimal sample of 975 sources, shown in the left panel of Fig. 4. We then use this sample (free from any circularity problem) to derive \(\Omega_{M}\) and the RL parameters (see Methods), and we obtain \(\Omega_{M}=0.107\pm 0.047\), as reported in the right panel of Fig. 4.
Additionally, to assure that our findings are not driven by the low-\(z\) Quasars (\(z<0.7\)), which according to Lusso et al. (2020) could be affected by host galaxy contamination and lower data quality, we have removed these sources (47 Quasars) from the golden sample obtaining \(\Omega_{M}=0.125\pm 0.040\).
## 4 Monte Carlo Markov chain (MCMC) sampling uncertainty
To further guarantee that our results are not due to the use of a single run in the MCMC calculation and that the sampling procedure is stable, we show the results of the computation when it is run 100 times. With this procedure, we obtain \(<\Omega_{M}>=0.112\pm 0.048\), where the symbol \(<>\) denotes the average value. We have investigated the reliability of the best-fit values and 1 \(\sigma\) uncertainties on \(\Omega_{M}\) obtained in each of the cosmological fits. Our results are derived by fitting the free parameters of the models studied with only one MCMC run. We test these results against the sampling error on the parameters derived in the sampling procedure. To this end, we have looped all the MCMC samplings 100 times for each model, and then computed the mean values of \(\Omega_{M}\) and of its uncertainty. The results obtained with this method for both quantities and all the cosmological cases investigated in our analysis are shown in Table 1. These results are completely consistent with the ones obtained from only one run of the MCMC, with a maximum discrepancy of 0.08 \(\sigma\).
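A minimal sketch of this stability check, assuming a hypothetical `run_fit()` that performs one complete MCMC fit and returns the best-fit \(\Omega_{M}\) and its 1 \(\sigma\) uncertainty:

```python
import numpy as np

def repeated_runs(run_fit, n_runs=100):
    """Repeat the sampling n_runs times and summarize Omega_M and its error."""
    results = [run_fit() for _ in range(n_runs)]
    om = np.array([r[0] for r in results])
    om_err = np.array([r[1] for r in results])
    # mean values and their scatter over the repeated runs (cf. Table 1)
    return om.mean(), om.std(), om_err.mean(), om_err.std()
```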
## 5 Discussion and conclusions
In the choice of the golden samples with a fixed cosmology, we have first assumed a flat \(\Lambda\)CDM model with \(\Omega_{M}=0.3\) and \(H_{0}=70\,\mathrm{km\;s^{-1}\,Mpc^{-1}}\). This is a necessary starting cosmological model and value of \(\Omega_{M}\), because our aim is to compare our uncertainties with the ones obtained using the Pantheon sample for the same cosmological model. Regarding the sample size, we note that the 1048 Pantheon SNe Ia have been slimmed down from an original sample of 3473 events, cutting 70% of the starting data set (Scolnic et al., 2018). Instead, in our work, we reduce the initial sample of 2421 Quasars
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\sigma_{\mathrm{clipping}}\) & \(\Omega_{\mathrm{M,start}}\) & \(<\Omega_{M}>\) & \(S_{<\Omega_{M}>}\) & \(<\sigma_{\Omega_{M}}>\) & \(S_{<\sigma_{\Omega_{M}}>}\) \\ \hline \hline \multicolumn{6}{|c|}{Luminosities} \\ \hline
1.788 & 0.3 & 0.2681 & 0.0010 & 0.0223 & 0.0007 \\ \hline
1.788 & 0.1 & 0.0836 & 0.0004 & 0.0086 & 0.0004 \\ \hline
1.785 & 1 & 0.9141 & 0.0024 & 0.0536 & 0.0013 \\ \hline \hline \multicolumn{6}{|c|}{Fluxes} \\ \hline
1.78 & - & 0.1124 & 0.0021 & 0.0475 & 0.0021 \\ \hline \end{tabular}
\end{table}
Table 1: The results of the looped computations for 100 iterations. \(S\) denotes the standard deviation of a given parameter, \(\Omega_{\mathrm{M,start}}\) is the value of \(\Omega_{M}\) assumed to obtain the corresponding golden sample with the threshold value \(\sigma_{\mathrm{clipping}}\). The cases in which the \(\sigma\)-clipping is applied to the relation in luminosities or in fluxes are also shown separately.
Figure 4: Left panel: The gold sample of 975 Quasars generated with the \(\sigma\)-clipping on the \(F_{\mathrm{X}}-F_{\mathrm{UV}}\) relation with the best-fit parameters being \(\gamma_{\mathrm{F}}=0.625\pm 0.007\), \(\beta_{\mathrm{F}}=-14.3\pm 0.2\), and \(\delta_{\mathrm{F}}=0.053\pm 0.002\). Blue points are the sources with error bars representing the statistical 1 \(\sigma\) uncertainties and the best-fit linear relation is drawn as a purple line. Right panel: Cosmological results from the golden sample shown in the left panel. This corner plot shows the values of \(\Omega_{M}\), \(\gamma\), \(\beta\), and \(\delta\). The contour levels at 68% and 95% are represented by the inner dark and light blue regions, respectively.
by \(\sim 60\%\) to build all the golden samples studied. Thus, the slimming of our sample is even less severe than that of the SNe Ia. We here compare the SNe Ia and Quasar samples from a purely statistical point of view, and not with the purpose of comparing them from a physical and an observational point of view, since they differ in both aspects.
Our results highlight that the uncertainties on \(\Omega_{M}\) depend on the assumed cosmology, namely the uncertainties become smaller the closer we are to the most likely value of the cosmological parameters. The uncertainty on \(\Omega_{M}\) assuming \(\Omega_{M}=0.10\) is 6.7 times smaller than the one in the case of \(\Omega_{M}=1\). With the evolutionary parameters included and with no information about the underlying cosmology, we still obtain a value of \(\Omega_{M}\) compatible with the De Sitter Universe at the 2.3 \(\sigma\) level, and not compatible with the current value of \(\Omega_{M}=0.338\pm 0.018\) found in Brout et al. (2022) at the 4.58 \(\sigma\) level. We here clarify that this estimate of \(\Omega_{M}\) found by us is larger (by more than 1 \(\sigma\)) than the observed baryonic matter density today if we consider the value found by Planck measurements of the CMB (Planck Collaboration et al., 2020), which is \(0.0500\pm 0.0002\). Although the value of the uncertainty in the gold sample identified by the \(F_{X}-F_{UV}\) relation is 2.14 times larger than the one obtained with the Pantheon sample of SNe Ia (Scolnic et al., 2018), we can still obtain uncertainties comparable with those of the 740 SNe Ia shown in Betoule et al. (2014), where \(\sigma_{\Omega_{M}}=0.042\).
The result on \(\Omega_{M}\) obtained with the golden sample of 975 quasars from the \(F_{X}-F_{UV}\) relation (i.e. \(\Omega_{M}=0.107\pm 0.047\)) is compatible with \(\Omega_{M}=0.125\pm 0.040\) derived from the same golden sample from which we have removed the low-\(z\) sources affected by the contamination of the host galaxies (47 Quasars). However, when a larger sample is available, it will be necessary to check if the current results still hold within 1 \(\sigma\). Although there have been several studies that have measured \(\Omega_{M}\) with Quasars with lower precision (e.g. Khadka and Ratra, 2020, 2022; Colgain et al., 2022), we here for the first time obtain a value with higher precision with QSOs alone, which is not due to a circular argument, since it is based on a flux-flux relation. In addition, the analysis performed using the luminosities and assuming a given cosmological model is meant to show the great potential of Quasars to be used as standardizable candles even currently, when an appropriate sample size and reduced uncertainties are used. Indeed, the analysis we have shown here is similar to the analysis we performed in Dainotti et al. (2023), where we were not interested in knowing the value of the cosmological parameters, but were focused on how many sources, in this case Quasars, are needed to reach the same precision as the SNe Ia Pantheon sample.
In conclusion, we have shown that Quasars alone with the RL relation can now be upgraded to reliable standard candles to measure cosmological parameters such as \(\Omega_{M}\) with the same precision as SNe Ia, but at a large redshift, up to 7.5, when a golden sample of Quasars is chosen.
## 6 Acknowledgement
We thank Beta Lusso and Guido Risaliti for the discussion on the role of selection biases in the sample and Biagio De Simone for help with running a couple of notebooks for the MCMC sampling.
|
2306.17473 | An Orbital Solution for WASP-12 b: Updated Ephemeris and Evidence for
Decay Leveraging Citizen Science Data | NASA Citizen Scientists have used Exoplanet Transit Interpretation Code
(EXOTIC) to reduce 40 sets of time-series images of WASP-12 taken by privately
owned telescopes and a 6-inch telescope operated by the Center for Astrophysics
| Harvard & Smithsonian MicroObservatory (MOBs). Of these sets, 24 result in
clean transit light curves of WASP-12 b which are included in the NASA
Exoplanet Watch website. We use priors from the NASA Exoplanet Archive to
calculate the ephemeris of the planet and combine it with ETD (Exoplanet
Transit Database), ExoClock, and TESS (Transiting Exoplanet Survey Satellite)
observations. Combining these datasets gives an updated ephemeris for the
WASP-12 b system of 2454508.97923 +/- 0.000051 BJDTDB with an orbital period of
1.09141935 +/- 2.16e-08 days which can be used to inform the efficient
scheduling of future space telescope observations. The orbital decay of the
planet was found to be -6.89e-10 +/- 4.01e-11 days/epoch. These results show
the benefits of long-term observations by amateur astronomers that citizen
scientists can analyze to augment the field of Exoplanet research. | Avinash S. Nediyedath, Martin J. Fowler, A. Norris, Shivaraj R. Maidur, Kyle A. Pearson, S. Dixon, P. Lewin, Andre O. Kovacs, A. Odasso, K. Davis, M. Primm, P. Das, Bryan E. Martin, D. Lalla | 2023-06-30T08:38:44Z | http://arxiv.org/abs/2306.17473v5 | New Citizen Science Light Curves of WASP-12 b and Updated Ephemeris by Combining with ETD and ExoClock Datasets
###### Abstract
NASA Citizen Scientists have used the EXOplanet Transit Interpretation Code (EXOTIC) to reduce 42 sets of time-series images of WASP-12 taken by privately owned telescopes and the 6-inch telescope operated by the Center for Astrophysics | Harvard & Smithsonian MicroObservatory (MOBs). Of these sets, 24 result in clean transit light curves of WASP-12 b which are included in the NASA Exoplanet Watch website. We use priors from the NASA Exoplanet Archive to calculate the ephemeris of the planet and combine it with ETD (Exoplanet Transit Database) and ExoClock observations. Combining these datasets gives an updated ephemeris for the WASP-12 b system of \(2454508.97895\pm 0.000054\) with an orbital period of \(1.0914196\pm 2.3847455\)e-08 days which can be used to inform the efficient scheduling of future space telescope observations.
Transit Photometry -- WASP-12 b -- Citizen Science
## 1 Introduction
WASP-12 b was discovered by Hebb et al. (2009); it has 1.465 times the mass of Jupiter and 1.937 times the radius of Jupiter, and orbits its F9V host star about every 1.09 days. The extreme gravity of its host star stretches the hot gas giant into an ovoid body, while slowly cannibalizing the planet and resulting in a decrease in its orbital period (Yee et al., 2019).
The transit method has been used to monitor the brightness of the exoplanet's host star. This method tracks the brightness of the combined system (exoplanets and host star) with time, looking for changes caused when a planet passes in front of its star blocking some light from reaching the Earth. The method tells us about the size of the planets and the angle at which they orbit the host star relative to our line of sight. From the observation of multiple transits it also provides information on the orbital period to update the ephemeris. This method has become a reliable way of obtaining the mid transit times of exoplanets and is within the reach of amateurs with small telescopes, as has been shown by (Zellem et al., 2020) and (Hewitt et al., 2023).
Since the discovery of WASP-12 b, there have been new ways of investigating exoplanets, such as the James Webb Space Telescope (JWST) which is being used to study the planets' atmospheric chemistry (Seidel et al., 2023). This leads to the need to update the ephemerides of exoplanets to make maximum use of the expensive space telescope time to characterize their atmospheres, etc. As of July 2023, we have seen a total of 457 transit observations of WASP-12 b by professional and amateur astronomers in the datasets of ETD (Exoplanet Transit Database) (Poddany et al., 2010), ExoClock (Kokori et al., 2022) and Exoplanet Watch. In this paper we study 24 transits for WASP-12 b from NASA's _Exoplanet Watch_, a citizen science project which enables members around the world to use their time and effort to observe and reduce observation data and produce light curves, and combine them with observations from the ETD (Poddany et al., 2010) and ExoClock (Kokori et al., 2022) databases which can be used to update the ephemeris of the planet.
## 2 Observations
Thirty-three observations were made with 60-second, unfiltered exposures at a 3-minute cadence collected by a 6-inch aperture MicroObservatory telescope located at Mount Hopkins (latitude 31.675\({}^{\circ}\), longitude -110.952\({}^{\circ}\), 1,268m altitude above sea level) in Arizona, using a KAF-1403 ME CCD camera with a pixel scale of 5.2" per pixel and 2\(\times\)2 binning to reduce noise. In addition, nine observations were taken from privately owned telescopes by citizen scientists, yielding a total of 42 observation sets from 03 January 2015 to 06 March 2023. All of the data were analyzed using the EXOplanet Transit Interpretation Code (EXOTIC), which is a python-based tool developed by the Jet Propulsion Laboratory's _"Exoplanet Watch"_ program for reducing exoplanet transit data. This software can run locally as well as on the cloud via Google's online "Collaboratory" tool (Zellem et al., 2020). Priors for WASP-12 b used in the nested sampling fitting by EXOTIC are automatically scraped from the NASA Exoplanet Archive (Akeson et al., 2013). EXOTIC generates estimates of mid transit times along with 1\(\sigma\) uncertainties based on the resulting posterior distributions.
Observations of WASP-12 for reduction were provided by Exoplanet Watch from the MicroObservatory archive for citizen scientists who did not have their own telescope. Using the AAVSO (American Association of Variable Star Observers) finder chart for WASP-12 (see Fig 1), we identified up to seven non-variable comparison stars: AUID 000-BKG-164, AUID 000-BKG-165, AUID 000-BKG-166, AUID 000-BKK-420, AUID 000-BMX-310, AUID 000-BKG-167, and AUID 000-BKG-168. They were selected based on the AAVSO VSP (Variable Star Plotter) and were used for EXOTIC's reduction of the light curves. EXOTIC aligns the images and determines the optimal inner and outer photometric apertures. The inner aperture encompasses the star's point spread function (PSF) without including the sky background, which fills the space between the outer and inner apertures.
EXOTIC determines the optimal aperture sizes by fitting to a Gaussian PSF model (Mizrachi et al., 2021). To account for changes in sky brightness affecting the measured flux, EXOTIC subtracts the background photon count from the star's flux. Finally, the change in flux of the target star is compared to the light emitted by each of the selected comparison stars, and a "quick fit" is performed. Nested sampling is used to fit the modeled transit to the observations and produces a triangle plot showing the distribution of posteriors to see whether they are Gaussian (see Fig 3). It is a technique commonly used for posterior exploration and parameter estimation in both ephemeris and light-curve fitting, because of its ability to handle complex parameter spaces and efficiently explore regions of high likelihood. From the sampling, estimates of the full posterior distribution of the parameters are calculated, which is valuable for understanding the uncertainties and correlations between the estimated quantities.
EXOTIC's output included a light curve for each series along with the scatter in the residuals, the midpoint time, transit depth, transit duration, semi-major axis relative to the stellar radius, and planetary versus stellar radius. Example light curves are shown (see Fig 4).
These results from EXOTIC were uploaded to the AAVSO Exoplanet Database and were then processed by JPL using the CITISENS (Citizen Initiated Transit Information Survey Enabling NASA Science) pipeline to give the results that are shown on the Exoplanet Watch website and which were used in this study.
Figure 1: AAVSO VSP view of WASP 12 Star field.
Figure 2: WASP-12 labeled Star Field in AstroImageJ. Green annotations are used to indicate comparison stars and red annotations are used to indicate the target star.
Figure 3: Nested sampling Posterior triangle plots using EXOTIC. The data points are color-coded to the likelihood of each fit, with darker colors indicating a higher likelihood. Not all posteriors are shown for reasons of space
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Parameter & Value & Uncertainty & Units & Reference \\ \hline RA & 97.624645 & & Decimal & \\ DEC & 29.6722662 & & Decimal & \\ Host Star Metallicity & 0.3 & 0.05 & & (Öztürk \& Erdem, 2019) \\ Host Star log(g) & 4.17 & 0.03 & Log10(cgs) & (Öztürk \& Erdem, 2019) \\ Host Star Radius & 1.57 & 0.07 & Sol & (Kokori et al., 2022) \\ Host Star Effective Temperature & 6300.0 & 200.0 & K & (Kokori et al., 2022) \\ a/R\({}_{\rm s}\) & 3.0 & 0.016 & & (Chakrabarty \& Sengupta, 2019) \\ Eccentricity & 0.0 & 0.01 & & (Öztürk \& Erdem, 2019) \\ Inclination & 83.52 & 0.03 & Deg & (Chakrabarty \& Sengupta, 2019) \\ Omega & 272.7 & 2.4 & Deg & (Bechter et al., 2014) \\ Orbital Period & 1.09141911 & 6e-08 & Day & (Ivshina \& Winn, 2022) \\ R\({}_{\rm p}\) & 21.71 & 0.63 & R\_Earth & (Chakrabarty \& Sengupta, 2019) \\ R\({}_{\rm p}\)/R\({}_{\rm s}\) & 0.1307 & 0 & & \\ Ephemeris [JD] & 2457010.512173 & 7e-05 & BJD\({}_{\rm TDB}\) & (Ivshina \& Winn, 2022) \\ \hline \end{tabular}
\end{table}
Table 1: Assumed Priors by NASA’s Exoplanet Archive for Exoplanet Watch.
## 3 Data
There were 14 priors from previously published papers that were used by CITISENS (see Table 1) for the transit fitting from EXOTIC. EXOTIC's reduction process produced 42 new light curves of WASP-12 b transits (see examples in Fig 4). Of these, there were 7 duplicate transits taken on the same night by MicroObservatory but which were reduced by different people. Two transits that were observed by TESS were excluded from the analysis since an objective of this paper is to demonstrate the contribution that citizen scientist observations can make to exoplanet ephemeris refinement. There were 9 transits that were consistently showing null detection, which means:
\[(\rm{R_{p}/R_{s}})^{2}\ \text{-}\ 3\sigma=0 \tag{1}\]
where \(\rm{R_{p}/R_{s}}\) is the ratio of the radius of the planet and its star and \(\sigma\) is the uncertainty. Therefore, a total of 24 observations were taken into account for the Observed-Calculated (O-C) plot (see Table 2). Each point on the plot shows the observed mid-transit time minus the expected mid transit time calculated from the ephemeris along with the combined 1\(\sigma\) uncertainty (see Fig 5).
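For illustration, a minimal sketch of how such O-C residuals can be computed from observed mid-transit times and a linear ephemeris; the example inputs simply reuse a few mid-times from Table 2 and the period and reference mid-time from Table 1.

```python
import numpy as np

def oc_minutes(t_obs, period, t0):
    """Observed minus calculated mid-transit times (in minutes) for a
    linear ephemeris t_calc = t0 + epoch * period."""
    epoch = np.rint((t_obs - t0) / period)   # nearest integer orbit number
    t_calc = t0 + epoch * period
    return epoch, (t_obs - t_calc) * 24.0 * 60.0

# example with a few BJD_TDB mid-transit times from Table 2
t_obs = np.array([2459604.81036, 2459616.8209, 2459962.8037])
epoch, oc = oc_minutes(t_obs, 1.09141911, 2457010.512173)
```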
The literature value of \(1.657\pm 0.046\) solar radii (\(11.527\times 10^{5}\) km) for WASP-12 (Chakrabarty & Sengupta, 2019) is used for \(\rm{R_{S}}\) to calculate the radius of the planet in Jupiter radii:
\[\rm{r_{j}=R_{S}\ *(R_{p}/R_{S})\ /\ Rj\pm SEM} \tag{2}\]
Here, the planetary size is calculated to be \(1.937\pm 0.056\) Jupiter radii. Using the MicroObservatory image sets of WASP-12 b transits, we were able to update the ephemeris using following equation:
\[\rm{t_{f}=n\ *P+T_{m}} \tag{3}\]
where \(\rm{t_{f}}\) is a future mid-transit time, P is the period, n is the orbital epoch, and \(\rm{T_{m}}\) is a reference mid- transit time. The linear ephemeris is optimized using nested sampling to derive posterior distributions for the mid-time and period.
NASA Exoplanet Watch's observations gave a \(\rm{T_{mid}=2460009.73115\pm 0.00011\ BJD_{TDB}}\) with an orbital period of \(1.09141889\pm 3.81064507\)e-08 days. This is a clear indication of how advanced and easily accessible the reduction of transit data has become from the perspective of a citizen scientist. Updating the ephemeris of WASP-12 b using amateur observations from Exoplanet Watch will ensure the maximum use is made of expensive non-terrestrial assets such as JWST and ARIEL (Zellem et al., 2020). The ExoClock observations from 12 February 2008 to 20 December 2020 give a \(\rm{T_{mid}=2457024.706177\pm 5.5e-05\ BJD_{TDB}}\) with an orbital period of \(1.091419179\pm 4.3e-08\) days (Kokori et al., 2022). Likewise, ETD observations from 12 February 2008 to 27 December 2021 gave a \(\rm{T_{mid}=2456594.6766}\) with an orbital period of \(1.09141964\) days (Poddany et al., 2010). The ephemerides of the ExoClock and ETD datasets were then forward propagated using the formula (Zellem et al., 2020):
\[\Delta T_{\rm mid}=\left(n_{\rm orbit}^{2}\,\Delta P^{2}+2\,n_{\rm orbit}\,\Delta P\,\Delta T_{0}+\Delta T_{0}^{2}\right)^{1/2} \tag{4}\]
Figure 4: Example transit light curves on the WASP-12 b. The gray points represent data from each image in the data set. The blue points represent the average of a set of binned data points, used to fit the light curve. The red lines show the expected variation based on the best fitting EXOTIC model for each transit. Not all transits are shown for reasons of space; all the light curves can be seen at: Exoplanet watch results – Exoplanet Exploration: Planets Beyond our Solar System (nasa.gov)
This was done to match the same epoch as Exoplanet Watch to combine the updated ephemeris. Posteriors were then derived for the updated ephemeris of the combined data using nested sampling (see fig 6) (Pearson et al., 2022).
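A minimal sketch of this forward propagation, combining Equations 3 and 4; the numerical values in the example are placeholders of the same order as the combined ephemeris reported in the Results.

```python
import numpy as np

def propagate_ephemeris(n_orbit, period, t0, dP, dT0):
    """Predict a future mid-transit time (Eq. 3) and its propagated
    uncertainty (Eq. 4) for a linear ephemeris."""
    t_future = t0 + n_orbit * period
    dt = np.sqrt(n_orbit**2 * dP**2 + 2.0 * n_orbit * dP * dT0 + dT0**2)
    return t_future, dt

# e.g. 5000 orbits after the reference mid-transit time
t, dt = propagate_ephemeris(5000, 1.0914196, 2454508.97872, 2.38e-8, 5.4e-5)
```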
## 4 Results
Combining the Exoplanet Watch, ETD and ExoClock datasets gives an updated ephemeris for the WASP-12 b system of \(2454508.97872\pm 0.000054\) with an orbital period of \(1.0914196\pm 2.3847455\)e-08 days which is 1.164 minutes different from the original ExoClock dataset, implying that there is a 2-fold improvement in the precision of the period (see fig 7). It is clear that the Exoplanet Watch O-C differs from those of ExoClock and ETD in that it appears to have a linear, rather than a non-linear spread of data points. This is possibly because of the shorter time frame that is covered by the majority of the Exoplanet Watch observations which extend back only around 500 epochs, compared with the ExoClock and ETD data which cover a longer period of observations (i.e., over 4000 epochs). Whilst this is a linear ephemeris that does not take into account the observed changes over the past 5000 epochs, it is nevertheless considered sufficiently accurate to inform the efficient scheduling of future space telescope observations.
This ephemeris was then analyzed with the Lomb-Scargle periodogram to search for periodic signals, since the data is irregularly sampled with non-uniform intervals (see Fig 8). The periodogram shows that there is a known system of orbital change, which suggests an orbital decay. Using this, it was possible to predict the planet's future behavior within 1\(\sigma\) and 3\(\sigma\) confidence intervals.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Transit & Date(UTC) & Mid-transit & Mid-transit & Observer & Observer \\ Number & & (BJD\({}_{\text{TDB}}\)) & Uncertainty & & Code \\ & & & (days) & & \\ \hline
1 & 2015-01-03 & 2457025.7867 & 0.0038 & Ken Davis & DKEB \\
2 & 2015-11-21 & 2457347.7655 & 0.0058 & Unknown & RFCA \\
3 & 2016-11-02 & 2457694.8263 & 0.0054 & Prithwis Das & DPRA \\
4 & 2017-12-25 & 2458112.8501 & 0.0022 & Martin J. Fowler & FMAA \\
5 & 2019-01-09 & 2458492.6454 & 0.0065 & Unknown & FGIC \\
**6** & **2021-11-14** & **2459532.7834** & **0.0024** & **Douglas Lalla** & **LDJC** \\
**7** & **2022-01-25** & **2459604.81036** & **0.00091** & **Unknown** & **CMIA** \\
**8** & **2022-01-25** & **2459604.81379** & **0.00096** & **Douglas Lalla** & **LDJC** \\
**9** & **2022-02-04** & **2459614.6365** & **0.0022** & **Bryan E. Martin** & **MBEB** \\
**10** & **2022-02-06** & **2459616.8209** & **0.0018** & **Scott Dixon** & **DSC** \\
**11** & **2022-03-13** & **2459651.7466** & **0.002** & **Pablo Lewin** & **LPAC** \\
**12** & **2022-11-20** & **2459903.8588** & **0.0021** & **Anthony Norris** & **NANF** \\
13 & 2022-12-01 & 2459914.7782 & 0.0066 & Unknown & KMUA \\
14 & 2022-12-02 & 2459915.8686 & 0.0037 & Unknown & KNAC \\
15 & 2022-12-02 & 2459925.6921 & 0.0071 & Andre Kovacs & KADB \\
16 & 2022-12-26 & 2459939.8793 & 0.0031 & Unknown & KMUA \\
17 & 2022-12-26 & 2459949.7043 & 0.0042 & Alessandro Odasso & GAS \\
**18** & **2023-01-14** & **245995.5249** & **0.0021** & **Andrew Smith** & **SAJB** \\
19 & 2023-01-18 & 2459962.8037 & 0.0028 & Martin J. Fowler & FMAA \\
20 & 2023-01-29 & 2459973.7129 & 0.0023 & Martin J. Fowler & FMAA \\
21 & 2023-01-30 & 2459974.798 & 0.0039 & Martin J. Fowler & FMAA \\
22 & 2023-02-11 & 2459986.8057 & 0.0071 & Alessandro Odasso & OAS \\
23 & 2023-02-18 & 2459994.4482 & 0.0014 & Andrew Smith & SAJB \\
**24** & **2023-03-06** & **2460009.731** & **0.0056** & **Michael Primm** & **PMIF** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Exoplanet Watch Results for T\({}_{\text{mid}}\) after reduction. Transits in bold indicate that they did not use MicroObservatory for observations. All transits were used for the O-C plot
Figure 5: O-C plot of WASP-12 b by Exoplanet Watch
## 6 Conclusions
This paper presents 24 new mid-transit values and light curves from citizen scientists of Exoplanet Watch for WASP-12 b using MicroObservatory and individual observations. This confirmed the parameters for the planet's size and orbit, supporting its classification as a hot Jupiter-type exoplanet. It demonstrates the functionality of EXOTIC and CITISENS and the accessibility of their advanced capabilities for use by citizen scientists. We combined Exoplanet Watch, ETD (Poddany et al., 2010) and ExoClock (Kokori et al., 2022) datasets to give an updated ephemeris for the WASP-12 b system of \(2454508.97895\pm 0.000054\) with an orbital period of \(1.0914196\pm 2.3847455\)e-08 days, which can be used to inform the efficient scheduling of future terrestrial and non-terrestrial observations. Further observations can be used to refine this technique and may be used to more precisely determine the causes of variations of exoplanet orbits.
## 7 Acknowledgements
Data used here come from the MicroObservatory telescope archives maintained by Frank Sienkiewicz, who also provides information on weather and delta temperature measurements. MicroObservatory is maintained and operated as an educational service by the Center for Astrophysics \(|\) Harvard & Smithsonian and is a project of NASA's Universe of Learning, supported by NASA Award NNX16AC65A. Additional MicroObservatory sponsors include the National Science Foundation, NASA, the Arthur Vining Davis Foundations, Harvard University, and the Smithsonian Institution.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This publication makes use of the EXOTIC data reduction package from Exoplanet Watch, a citizen science project managed by NASA's Jet Propulsion Laboratory on behalf of NASA's Universe of Learning. This work is supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute.
Figure 8: Lomb-Scargle Periodogram fitting on the ephemeris to look for periodic signals.
Figure 6: Nested Posterior triangle plot using UltraNest for the updated ephemeris. The data points are color-coded to the likelihood of each fit, with darker colors indicating a higher likelihood.
Figure 7: Combined O-C plot data with ExoClock, ETD and Exoplanet Watch datasets |
2307.16450 | Fourier transformation based analysis routine for intermixed
longitudinal and transversal hysteretic data for the example of a magnetic
topological insulator | We present a symmetrization routine that optimizes and eases the analysis of
data featuring the anomalous Hall effect. This technique can be transferred to
any hysteresis with (point-)symmetric behaviour. The implementation of the
method is demonstrated exemplarily using intermixed longitudinal and
transversal data obtained from a chromium-doped ternary topological insulator
revealing a clear hysteresis. Furthermore, by introducing a mathematical
description of the anomalous Hall hysteresis based on the error function
precise values of the height and coercive field are determined. | Erik Zimmermann, Michael Schleenvoigt, Alina Rupp, Gerrit Behner, Jan Karthein, Justus Teller, Peter Schüffelgen, Hans Lüth, Detlev Grützmacher, Thomas Schäpers | 2023-07-31T07:16:54Z | http://arxiv.org/abs/2307.16450v1 | Fourier transformation based analysis routine for intermixed longitudinal and transversal hysteretic data for the example of a magnetic topological insulator
###### Abstract
We present a symmetrization routine that optimizes and eases the analysis of data featuring the anomalous Hall effect. This technique can be transferred to any hysteresis with (point-)symmetric behaviour. The implementation of the method is demonstrated exemplarily using intermixed longitudinal and transversal data obtained from a chromium-doped ternary topological insulator revealing a clear hysteresis. Furthermore, by introducing a mathematical description of the anomalous Hall hysteresis based on the error function precise values of the height and coercive field are determined.
## I Introduction
Magnetic topological insulators (MTIs) are characterized by their unique properties in band structure [1; 2; 3]. MTIs can be formed by incorporating magnetic atoms such as chromium, vanadium or manganese into the topological insulator (TI) lattice [4; 5; 6; 7; 8]. Another possibility is proximity inducing magnetic moments with a ferromagnetic insulator in vicinity of a TI [9]. These material platforms are considered to be suitable candidates for Majorana physics [10; 11; 12; 13; 14; 15]. Using MTIs the quantum anomalous Hall effect (QAHE) was first detected experimentally in 2013 by Chang et al. [2]. It is characterized by vanishing bulk conductance and a single spin-polarized edge mode that surrounds the sample [16; 1]. When increasing the temperature or detuning the Fermi energy the (not quantized) anomalous Hall effect is observed in MTIs [17; 18; 19; 20].
A common problem when dealing with experimentally gained magnetotransport data from Hall bars is the intermixing of longitudinal (\(R_{\text{xx}}\)) and transversal (\(R_{\text{xy}}\)) resistance data [21; 22; 23; 24; 25]. Even when excluding errors, there are internal origins for an overlay of the signals that cannot be solved experimentally. Possible reasons are sketched in Fig. 1 on transversal contact pairs of a Hall bar. On the left contact pair inhomogeneous potential fluctuations due to charge puddles are shown [26; 27; 28]. In the middle a geometrical displacement (\(\Delta S\)) of the contacts is sketched that gains importance when approaching smaller structures dependent on the fabrication technique [2]. In this case, a longitudinal signal in the order of \(\Delta S/L\cdot R_{\text{xx}}\) would contribute to the Hall signal \(R_{\text{xy}}\). On the right contact pair possible grain boundaries causing inhomogeneous potential drops are depicted. The consequence of the presented mechanisms is comparable to the one of a diagonal measurement over the Hall bar [29]. Despite these irregularities, the data could hold valuable information about the sample.
In the following, the data processing of intermixed conventional and anomalous Hall data with respect to their symmetries is explained. Subsequently, the analysis routine is demonstrated using the data of the MTI Hall bar. Here, the focus lies on the symmetrization of the Hall data, the longitudinal data can be treated analogously.
Figure 1: Imperfect Hall bar as origin of intermixed longitudinal and transversal magnetotransport data. The orientation of the external magnetic field \(B\) is indicated with an arrow. The examples of charge puddles (left), misaligned contacts (middle) or potential drops at grain boundaries (right) are sketched at opposing contact pairs.
## Data Processing
### Symmetries of Magnetic Topological Insulators
When analyzing measurement data that shows an intermixing of longitudinal and Hall signal, symmetrizing the data (see supplementary material) is a common method to separate the signals from each other, as the longitudinal data is expected to show axial symmetry with respect to the ordinate while the transversal data is point symmetric. Thus, a fast Fourier transformation (FFT) can help separating the respective contributions. In order to do so, the signal is Fourier transformed to the frequency space giving complex values in general. For the longitudinal data all odd contributions given by the imaginary part are filtered out while for the transversal data all even ones given by the real part are omitted. Then, the remaining quantities are transformed back giving the unperturbed signal using inverse FFT.
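A minimal sketch of this filtering step is given below, assuming the resistance is sampled on an equidistant magnetic-field grid that is symmetric about \(B=0\) with the zero-field point at the central index (odd number of points); the result is equivalent to forming \((R(B)\pm R(-B))/2\).

```python
import numpy as np

def fft_symmetrize(resistance):
    """Split a magnetotransport trace into its even (axially symmetric,
    R_xx-like) and odd (point symmetric, R_xy-like) parts via FFT filtering."""
    shifted = np.fft.ifftshift(resistance)        # put the B = 0 sample at index 0
    spectrum = np.fft.fft(shifted)
    # keep the real part (even contributions) or the imaginary part (odd contributions)
    even = np.fft.fftshift(np.fft.ifft(spectrum.real).real)
    odd = np.fft.fftshift(np.fft.ifft(1j * spectrum.imag).real)
    return even, odd
```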
For MTIs showing the anomalous Hall effect or even the quantum anomalous Hall effect there are major differences. Figure 2 shows such signatures in the longitudinal (a) and Hall resistance (b) for two magnetic field sweeps: the blue curve shows data of the sweep from negative to positive magnetic field and the orange curve shows the data for the sweep in the opposite direction [30]. One can see that, compared to e.g. nonmagnetic TIs, the sweeps differ around zero magnetic field. The origin lies in the magnetic moments \(M\) introduced by the doping material, which align in ferromagnetic order with the external magnetic field \(B\), so that an internal magnetization is created inside the MTI [31], similar to diluted magnetic semiconductors [32]. At the coercive magnetic field \(B_{c}\), the external magnetic field is strong enough to switch the orientation of the opposingly aligned internal magnetic moments. In the longitudinal signal a shifted peak for each sweep direction is visible around the switching point. In the Hall signal a hysteretic behaviour is seen that is characterized by the coercive magnetic field \(B_{c}\) and the height in resistance \(R_{\mathrm{AH}}\).
Due to the hysteretic behaviour, it becomes apparent, that the symmetrization procedure mentioned before is not directly transferable. Intuitively, one could think of shifting the data by \(\pm B_{c}\) and do the same procedure, but especially when considering the longitudinal data one can see that this would only fit well for the peak position but not for larger magnetic fields. Indeed, one would also loose information about the individual curvature of the anomalous Hall hysteresis. A better but more complex solution is the reallocation of the data points in the data set to again find axial and point symmetry.
Figure 3 illustrates, how the data needs to be restructured in order to have data sets that show a certain symmetry. In Fig. 3 a) the longitudinal data is shown. Colored in blue and orange, respective data set pairs are marked that are symmetric with respect to the ordinate shown with a black mirror line. The Hall hysteresis is depicted in Fig. 3 b). Compared to the longitudinal signal that shows an axial symmetry with respect to the ordinate, the Hall data is point symmetric to the middle of the hysteresis at the coordinate origin as indicated by a black dot. When combining the orange and blue symmetries, it becomes apparent that the two sweep directions are not symmetric in itself but with respect to each other. Thus, for many systems merging both sweep directions to one data set is an equivalent alternative with the same symmetries. Having this new symmetry in mind, the
Figure 2: Anomalous Hall effect. a) The longitudinal data \(R_{\mathrm{xx}}\) of two magnetic field sweeps is shown in arbitrary units (a.u.), where the blue data corresponds to a sweep from negative to positive magnetic field and the orange curve vice versa. The peak indicating zero total magnetic field is shifted in \(B\) by the coercive magnetic field with respect to the zero position. b) The corresponding Hall data \(R_{\mathrm{xy}}\) shows a hysteresis with height \(R_{\mathrm{AH}}\) and width \(B_{c}\).
Fourier symmetrization discussed above can be done.
### Mathematical Description of the Hall Hysteresis
The parameters \(B_{c}\) and \(R_{\mathrm{AH}}\) (cf. Fig. 2) are taken as a suitable measure for the hysteresis. Thus, their accurate determination is crucial. Hence, after the symmetrization process described before, the cleared data set is re-sorted following the process depicted in Fig. 2 b) and a fit is performed. Even after symmetrization the ideal data still consists of the hysteresis arising from the anomalous Hall effect superimposed on the classical Hall slope. As the slope may be non-negligible, the data is corrected by the point symmetric conventional Hall slope determined far away from the hysteresis. Now, just the anomalous hysteresis remains.
As one can see in the transversal signal, the switching of the magnetic moments does not result in a step-like behaviour. Instead, the tails are slightly curved. The reason is that not all the magnetic moments are bound exactly the same way but are assumed to follow approximately a Gaussian distribution, for instance due to defects or due to the inhomogeneity of the energy at the edges of the sample. Therefore, e.g. a Heaviside step function as basic model would neglect these factors. Instead, we employed an error function as a mathematical model that describes the normalized integration of a Gaussian function. The complete model that describes the development of each branch of the hysteresis results in
\[R(B)=R_{\mathrm{AH}}\cdot\mathrm{erf}(A(B\pm B_{c})). \tag{1}\]
Here, \(R_{\mathrm{AH}}\) scales the height of the normalized error function, the term \(B\pm B_{c}\) takes the shift of the curve by the coercive magnetic field \(B_{c}\) with respect to zero magnetic field into account and the parameter \(A\) is a measure of the switching curvature. Further discussion regarding the model can be found in the supplementary material. Figure 4 shows the data from Fig. 2 b). One least square fit for each side using equation 1 is performed and shown with a dotted line. One can see that the fits describe the data quite accurately. Moreover, three error functions with varying parameter \(A\) are plotted in the insert to illustrate how the curvature of the function develops. In the following the method is demonstrated exemplarily using the intermixed anomalous Hall data of a chromium-based MTI.
## III Exemplary analysis
The following data is obtained from a 6 \(\mu\)m wide and 300 \(\mu\)m long MTI Hall bar with a stoichiometry of Cr\({}_{0.15}\)Bi\({}_{0.35}\)Sb\({}_{1.5}\)Te\({}_{3}\). Further information on the fabrication and measurement technique is provided in
Figure 4: Model for the Hall hysteresis. Using the least square method, the error function is fitted to the data taken from Fig. 2 b). The results are shown with dotted lines. Error functions with different \(A\) parameters are sketched in the insert.
Figure 3: Symmetries of MTIs. Compared to Fig. 2, here, the colors mark the parts of the data sets that are supposed to be symmetric to each other. a) The longitudinal data of two magnetic field sweeps is shown in arbitrary units (a.u.). A symmetry axis at \(B=0\) T is shown. b) The corresponding Hall hysteresis shows a point symmetric behaviour. The symmetry point is indicated by a black dot at coordinate origin in the middle of the hysteresis.
the supplementary material. The sample produced slightly asymmetric data. Using the exact same setup, similar samples have been found to have no intermixing of the signal. Thus, the origin of the intermixing seen in this sample is attributed to an internal issue. The analysis is divided into two parts. First, the classical Hall effect is analyzed, where high magnetic fields are beneficial for a clear determination of the slope. After that, a precise measurement around the hysteresis feature is used for the investigation of the AHE.
### Classical Hall Analysis
Figure 5 a) and b) show the longitudinal data and the raw Hall data, respectively. The peaks in the \(R_{\mathrm{xx}}\) signal are slightly affected by the Hall hysteresis. Furthermore, an offset in resistance between the values for high negative and high positive magnetic field is seen. In the transversal signal the intermixing of longitudinal data is even more pronounced due to the ratio of their magnitudes. In the \(R_{\mathrm{xy}}\) signal a dip around zero magnetic field, the typical (inverse) longitudinal curvature and an offset to negative values on the ordinate are observed as disruptive factors. To conclude, the signal is highly intermixed and especially the shape of the Hall signal does not correspond to the expectations.
The aim of this part of the analysis is to get an accurate estimation of the classical Hall slope. Therefore, the Hall data around zero magnetic field is excluded. This allows us to handle the symmetrization of the data similarly to that of conventional TIs without the need to restructure the data sets for symmetry reasons. After removing the even contributions in the frequency space the resulting transversal signal is free from longitudinal contributions. With this data, a charge carrier concentration of \(n_{\mathrm{2D}}=1.86\cdot 10^{13}\,\mathrm{cm}^{-2}\) and a mobility of \(\mu=143\,\mathrm{cm}^{2}/\mathrm{Vs}\) are derived from the slope at base temperature using classical Hall analysis and Drude theory. As a check, the value of the charge carrier concentration is also calculated to be \(n_{\mathrm{2D}}=1.85\cdot 10^{13}\,\mathrm{cm}^{-2}\) from the non-linear raw data. The values do not really differ, as the exclusion of the symmetric data arising from an intermixture of the longitudinal data does not affect the point symmetric, linear slope.
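A hedged sketch of this standard single-band evaluation; the function and variable names are illustrative only, and the Hall-bar geometry between the voltage probes enters as an input.

```python
import numpy as np

E = 1.602176634e-19  # elementary charge in C

def classical_hall(dRxy_dB, Rxx, width, length):
    """Single-band Drude analysis: sheet carrier density from the Hall slope
    (Ohm/T) and mobility from the zero-field sheet resistance."""
    n2d = 1.0 / (E * dRxy_dB)                # sheet carrier density in m^-2
    r_sheet = Rxx * width / length           # sheet resistance in Ohm per square
    mu = 1.0 / (E * n2d * r_sheet)           # mobility in m^2 / (V s)
    return n2d * 1e-4, mu * 1e4              # converted to cm^-2 and cm^2 / (V s)
```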
### Anomalous Hall Analysis
The inserts of Fig. 5 a) and b) show precise measurements around zero magnetic field of the longitudinal and transversal data. Only the data up to \(30\,\mathrm{K}\) is taken into account, as for higher temperatures no hysteretic behavior is observed. First, the slope determined in the previous large magnetic field measurement is subtracted, as this is caused by the classical Hall effect and the purpose of the precise measurement around zero magnetic field is the determination of the
Figure 5: Perturbed magnetoresistance signal of an MTI. The legend is shown in b). a) The raw longitudinal data recorded for different temperatures shows a small influence from the Hall signal. b) The corresponding raw Hall data is shown. The inserts show zoom-ins of precise measurements around zero magnetic field.
anomalous Hall effect properties. As the slope is a point symmetric feature, it has to be excluded separately. In order to remove the contributions from \(R_{\rm xx}\) in the Hall signal, the data of both sweep directions is split at \(B=0\) T and restructured following the technique shown in Fig. 3. The redistribution of the values is indicated in Fig. 6 a) for base temperature. Next, the data is interpolated to ensure an equidistant spacing of the data points for the Fourier transformation. Making use of the point symmetry of the restructured \(R_{\rm xy}\) data, a Fourier analysis is performed that removes all axial symmetric contributions.
The result of the symmetrization process is shown in Fig. 6 b) for multiple temperatures. One can see that the signal is cleared from the intermixed perturbations. The symmetric data that is excluded during the symmetrization process is plotted in Fig. S2 in the supplementary material. The point-symmetric data is fitted using equation 1 for both branches of each hysteresis. The fits are plotted in Fig. 6 c) together with the data marked with a dashed box in Fig. 6 b). The fit describes the data well, but a remaining curvature of the data at small magnetic fields for the higher temperatures causes a slight difference between fit and data, which results in an uncertainty in the value of \(A\) at elevated temperatures. The corresponding fit parameters are shown in Fig. 6 d) - f). For the determination of the values, the average of both fits for each temperature is taken. As the curves are similar, also as a consequence of the symmetrization, only differences below 0.01 % between the parameters of both fits are observed. For base temperature, values of \(R_{\rm AH}=463\,\Omega\), \(B_{c}=134\) mT and \(A=59.5\) T\({}^{-1}\) are obtained. All parameters decrease with increasing temperature. The decrease of \(R_{\rm AH}\) and \(B_{c}\) indicates a weakening of the AHE towards the Curie temperature \(T_{c}\). For \(T=20\) K the width of the hysteresis is already close to zero, so that for the measurement at \(T=30\) K no meaningful value for \(B_{c}\) could be determined. As the parameter \(A\) scales inversely with the width of the transition, the transition region broadens with increasing temperature. For \(T\geq 20\) K a larger decrease is observed, as the bending of the curve also influences this parameter.
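A corresponding sketch of the branch-wise fit is given below. Since the exact form of equation 1 is given earlier in the paper, the error-function model used here, with the height \(R_{\rm AH}\), the (signed) coercive field \(B_{c}\) and the width parameter \(A\), should be read as an illustrative parametrization rather than the paper's exact expression; the initial guesses are also arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def branch_model(B, R_AH, B_c, A):
    """Illustrative error-function description of one sweep branch of the AHE
    hysteresis: height R_AH, signed coercive field B_c of that branch, and a
    parameter A that scales inversely with the width of the transition."""
    return R_AH * erf(A * (B - B_c))

def fit_branch(B, R_xy_odd, p0=(500.0, 0.1, 50.0)):
    """Fit one branch of the symmetrized Hall data; returns (R_AH, B_c, A)."""
    popt, _ = curve_fit(branch_model, B, R_xy_odd, p0=p0)
    return popt

# Averaging the parameters obtained from the up- and down-sweep branches
# (using |B_c|) then yields one set of values per temperature.
```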
Figure 6: Symmetrization and analysis of the AHE. The legend is shown in b). a) The redistribution for the symmetrization process of the signal at a base temperature of 1.3 K is shown. In b) the symmetrized data is shown. c) Fits of equation 1 are performed for all temperatures and the temperature dependences of the averaged fit parameters \(R_{\rm AH}\), \(B_{c}\) and \(A\) are plotted in d) - f), respectively.
Conclusion
In this article the analysis of intermixed conventional and anomalous Hall data was discussed. First, different reasons for intermixed data were listed. Then, a possible symmetrization process for conventional data using an FFT was shown. Thereafter, the axial symmetry of the longitudinal data and the point symmetry of the transversal anomalous Hall data were discussed. A method for restructuring the data by splitting and recombining it at zero magnetic field was suggested in order to maintain these symmetries, followed by the symmetrization process using the FFT. Furthermore, a mathematical description based on the error function was introduced in order to describe and fit the hysteresis.
Next, the data of an MTI sample that showed intermixed anomalous Hall data was analyzed. The method is carried out exemplarily for the transversal data of the MTI Hall bar. The result is a clear, symmetric anomalous Hall hysteresis whose height and width are precisely determined using the model based on the error function. Slight deviations in curvature from the fitting model are found at elevated temperatures because, in addition to the approximately Gaussian-distributed binding of the magnetic moments, another temperature-dependent component, expected to be Fermi-Dirac distributed, contributes.
Besides clearing the hysteretic data from perturbations, the presented approach offers a precise determination of the width and the height of the hysteresis that is comparable to the estimates from the raw data. This approach can not only be used for magnetic-field-dependent MTI measurements but may be transferred easily to any perturbed hysteretic behaviour that is based on symmetries.
Finally, it is pointed out that the method needs to be handled with care, as it may artificially impose symmetries on data where no symmetry is expected. Thus, a close comparison between the resulting data, the raw data and the underlying concepts always needs to be made.
###### Acknowledgements.
We thank Herbert Kertz for technical assistance and Jonas Buchhorn for fruitful discussion. All samples have been prepared at the Helmholtz Nano Facility [33]. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769, by the German Federal Ministry of Education and Research (BMBF) via the Quantum Futur project 'MajoranaChips' (Grant No. 13N15264) within the funding program Photonic Research Germany and by the QuantERA grant MAGMA via the German Research Foundation under grant 491798118.
|
2309.06489 | The $q-$state Potts model from the Nonperturbative Renormalization Group | We study the $q$-state Potts model for $q$ and the space dimension $d$
arbitrary real numbers using the Derivative Expansion of the Nonperturbative
Renormalization Group at its leading order, the local potential approximation
(LPA and LPA'). We determine the curve $q_c(d)$ separating the first
($q>q_c(d)$) and second ($q<q_c(d)$) order phase transition regions for
$2.8<d\leq 4$. At small $\epsilon=4-d$ and $\delta=q-2$ the calculation is
performed in a double expansion in these parameters and we find $q_c(d)=2+a
\epsilon^2$ with $a\simeq 0.1$. For finite values of $\epsilon$ and $\delta$,
we obtain this curve by integrating the LPA and LPA' flow equations. We find
that $q_c(d=3)=2.11(7)$ which confirms that the transition is of first order in
$d=3$ for the three-state Potts model. | Carlos A. Sánchez-Villalobos, Bertrand Delamotte, Nicolás Wschebor | 2023-09-12T18:03:12Z | http://arxiv.org/abs/2309.06489v1 | # The \(q-\)state Potts model from the Nonperturbative Renormalization Group
###### Abstract
We study the \(q\)-state Potts model for \(q\) and the space dimension \(d\) arbitrary real numbers using the Derivative Expansion of the Nonperturbative Renormalization Group at its leading order, the local potential approximation (LPA and LPA'). We determine the curve \(q_{c}(d)\) separating the first (\(q>q_{c}(d)\)) and second (\(q<q_{c}(d)\)) order phase transition regions for \(2.8<d\leq 4\). At small \(\epsilon=4-d\) and \(\delta=q-2\) the calculation is performed in a double expansion in these parameters and we find \(q_{c}(d)=2+a\epsilon^{2}\) with \(a\simeq 0.1\). For finite values of \(\epsilon\) and \(\delta\), we obtain this curve by integrating the LPA and LPA' flow equations. We find that \(q_{c}(d=3)=2.11(7)\) which confirms that the transition is of first order in \(d=3\) for the three-state Potts model.
## I Introduction
Together with the clock models, the \(q-\)state Potts model [1; 2] is the most natural and famous generalization of the Ising model in terms of discrete degrees of freedom. It consists of lattice models where, at each site, a "spin" can be in \(q\) possible states and the Hamiltonian is symmetric under any permutation of these states.
Beyond its academic interest, the \(q-\)state Potts model is relevant in several physical situations. For instance, for \(q=3\) in dimension \(d=3\), it describes the liquid crystal nematic-isotropic transition [3], a structural cubic to tetragonal crystal transition [4] as well as the confinement/deconfinement phase transition in pure Yang-Mills theory at finite temperature [5; 6; 7]. In \(d=2\), it describes the lattice gas transition of \({}^{4}\)He atoms adsorbed on Grafoil [8; 9; 10]. It has also been suggested that the 4-state model could be relevant to phase transitions in some antiferromagnets [11]. The analytic continuation to \(q=1\) and \(q=0\) enables the study, respectively, of the bond percolation and spanning forest universality classes [12; 13; 14; 15; 16; 17].
From a theoretical point of view, the \(q\)-state Potts model is both a much-studied model for which many exact results are known in dimension \(d=2\), and a model for which the physics in \(d>2\) is poorly understood. For example, the mean field analysis [2; 3; 18; 19; 20] predicts a first order transition in all dimensions for all \(q>2\) which contradicts an exact result by Baxter showing that the transition is of second order in \(d=2\) for \(q\leq 4\)[13]. On the other hand, a simple dimensional argument suggests that for \(q>2\), the upper critical dimension of the model is six, yet the \(\epsilon=6-d\) expansion to the order of two loops [21; 22; 23] leads to physically absurd results with regard to the critical properties of the model.
The origin of these disturbing results is probably that for \(q>2\), the Hamiltonian of the model involves a cubic term which on one hand allows for a systematic perturbative expansion in \(\epsilon=6-d\) but, on the other hand, yields a thermodynamic potential which is unbounded from below. Notice that this \(\epsilon\)-expansion is under control for \(q=0\) and \(q=1\) for which the instability of the potential is unlikely to be a problem. The scaling found in the \(\epsilon\)-expansion for \(q>2\) probably corresponds only to scaling in a metastable state [24; 25] and not to a true second order transition. This is the signal that the critical physics of the Potts model for \(q>2\) is particularly subtle and that nonperturbative methods are needed. It also explains why this subject has been almost abandoned for decades except for some isolated studies using approximate methods [26; 27; 28] and for the recent study based on the conformal bootstrap approach [29].
Most previous studies have focused on the three-dimensional case and indicate that for \(q=d=3\) the phase transition is of first order [26; 27; 28; 30; 31; 32]. Note that most of these results come from numerical simulations or studies of particular models, and are therefore only valid for those models. However, finding systems with \(q=d=3\) that undergo a first order transition is not proof that all systems undergo such a transition. It only means that these systems lie outside the parameter region, if any, where the transition is of second order. The only way to decide whether or not a second-order transition is possible for \(q=d=3\) is to prove or disprove that scale-invariance is possible in this case. This is what the renormalization group and the conformal bootstrap method allow for.
As for the conformal bootstrap, it has been extended to non-integer values of \(d\) and suggests that for \(q=3\) the dimension where the transition goes from second to first order is \(d_{c}(q=3)\simeq 2.5\), so that for \(q=3\), the transition is of first order in \(d=3\). It should be noted that the unitarity-based bounds used in the conformal bootstrap approach are not rigorous for non-integer \(d\) values. However, previous works suggest that these unitarity violations for non-integer \(d\) have minor effects [29; 33].
For what follows, it is important to note that a non-perturbative definition of the Potts model for arbitrary real values of \(q\) exists [12] and the model can therefore be formulated for all real values of both \(q\) and \(d\). It is therefore natural to try to determine in the \((d,q)\) plane, the \(d_{c}(q)\) curve separating the small-\(q\) second-order region from the large-\(q\) first-order region. This curve cannot be obtained perturbatively because, as explained above, there is no value of \(d\) where the perturbative expansion is under control. Our aim is to revisit this problem using the nonperturbative renormalization group (NPRG) which is a modern version of Wilson's RG and to compute the \(d_{c}(q)\) curve at least down to \(d=3\). This method has been used previously to study the \(q-\)state Potts model [17], but controlled results have so far only been obtained for \(q=0\) and \(q=1\). In the present work, we study the very different case \(q\geq 2\).
Compared with Wilson's RG, the NPRG shows several technical advantages when approximations are implemented which allows us to better control them (for a recent review of the NPRG and its applications, see [34]). The derivative expansion (DE) is one of these approximation schemes and we use it in the following. It has been widely used in the last twenty five years with undeniable successes both in high energy physics and in statistical mechanics, at and out of equilibrium (see [34] and Sec. III). In recent years, it has been possible to explain the reason for all these successes, that previously remained rather obscure, by exhibiting a "small parameter" associated with the DE [35; 36; 37; 38]. As a by-product, this also explains why this method is so versatile and robust.
Despite what has been stated above and depending on the model, the implementation of the DE can be technically involved. This is precisely the case for the 3-state Potts model, which turns out to be a very difficult case for a variety of reasons, some of them technical and others related to the physics of the problem. The technical reasons are detailed later in this article, but there are some physical reasons that should be mentioned at the outset. First, as already mentioned, there are no limits in which the \(q=3\) case can be treated in a perturbative way. This makes it extremely difficult to test the quality of the approximations. Second, all the known results about the Potts model suggest that \(d_{c}(q=3)<2.5\)[29]. However, the leading order of the DE (usually dubbed the Local Potential Approximation, or LPA) has been tested as a function of \(d\) for a great variety of models and is usually no longer reliable in low dimensions, typically \(d\lesssim 2.5\)[39]. This means that the dimensions in which we expect to find a fixed point (FP) of the RG associated with a second-order phase transition are precisely those for which the application of the LPA becomes doubtful. The LPA is therefore not an option to compute \(d_{c}(q=3)\) which implies going directly to the next order. However, the second order of the DE is technically and numerically very difficult and goes far beyond this first study of \(d_{c}(q)\).
Fortunately, as was observed a long time ago by Newman _et al._, near \(d=4\) and \(q=2\), although the \(q-\)state Potts model is not perturbative in the usual sense, a modified perturbative theory makes it possible to determine the shape of the curve \(d_{c}(q)\)[40]. Indeed, for small \(\delta=q-2\), the model is close to the Ising model which can be controlled by perturbation theory in \(\epsilon=4-d\). This allowed these authors to prove under very mild assumptions that the curve \(q_{c}(d)\) behaves near \(d=4\) as \(q_{c}(d=4-\epsilon)=2+a\epsilon^{2}\) with \(a\) a constant that is not determined by perturbation theory. This semi-perturbative regime is an ideal starting point for implementing approximate but nonperturbative methods which is what we are doing below.
In fact, the authors of Ref. [40] have implemented a nonperturbative approximation to compute the curve \(q_{c}(d)\) in the context of Wilson's RG. They have truncated the exact RG flow by projecting it onto a restricted space of coupling terms involving at most eleven couplings, that is, up to terms of the potential of order 6 in the fields. Unfortunately, this truncation is too restricted to achieve a converged determination of \(q_{c}(d)\) below \(d\sim 3.4\) and the most interesting case, corresponding to \(d=3\), has so far remained inaccessible. In the present work, we implement a similar scheme in the context of the NPRG and we include thirty couplings, that is, up to terms of the potential of order 9 in the fields. This allows us to reach the important \(d=3\) case. We obtain \(q_{c}(d=3)=2.11(7)\). Compared with previous works, our study of \(q_{c}(d)\) has the double advantage of starting from a fully controlled point around \(d=4\) and \(q=2\) and of being able to reach \(d=3\).
The article is organized as follows. We present the \(q-\)state Potts model, its symmetries and the associated mean-field analysis in Sec. II. In Sec. III, we present the NPRG method and the approximation scheme (the DE) that we implement at leading order in the present work. In Sec. IV, we build the tensors and scalars of the permutation group \(\mathcal{S}_{q}\) relevant for the \(q-\)state Potts model and necessary to derive the RG flow equations. In Sec. V, the main result of the article is presented: the curve \(q_{c}(d)\). We conclude by a summary and an outlook. Some technical details are given in Appendices.
## II The model and its main properties
### Two equivalent lattice formulations of the Potts model
The \(q\)-state Potts model is one of the simplest generalizations of the Ising model in which each spin can have \(q\) possible states, all playing equivalent roles [1; 2]. As a result, the model is \(\mathcal{S}_{q}\) symmetric, that is, is invariant under all permutations of the states. The simplest lattice Hamiltonian showing this symmetry is:
\[H=-J\sum_{\langle i,j\rangle}\delta_{\sigma_{i},\sigma_{j}} \tag{1}\]
where the spins \(\sigma_{i}\in\{1,2,\ldots,q\}\), the sum is performed on nearest neighbor sites of a \(d\)-dimensional lattice and the model is ferromagnetic (anti-ferromagnetic) if \(J>0\) (\(J<0\)). In the following, we are only interested in the ferromagnetic Potts model.
Figure 1: Vectors \(\vec{e}^{(\alpha)}\) describing the possible values of \(\vec{S}_{i}\) for \(q=2,3\) and 4 in the Hamiltonian of Eq. (2).

The model can be written equivalently in terms of vector spins \(\vec{S}_{i}\) with \(n=q-1\) components. At each lattice site \(i\), the spin \(\vec{S}_{i}\) belongs to the set \(\{\vec{e}^{(1)},\vec{e}^{(2)},\ldots,\vec{e}^{(q)}\}\) where the \(\vec{e}^{(\alpha)}\) are vectors joining the barycenter of an \(n\)-dimensional regular hyper-tetrahedron to its vertices, see Fig. 1. The Hamiltonian is then:
\[\tilde{H}=-\tilde{J}\sum_{\langle i,j\rangle}\vec{S}_{i}\cdot\vec{S}_{j}. \tag{2}\]
and is therefore very similar to the ferromagnetic O(\(n\)) model up to the difference that the spins \(\vec{S}_{i}\) point in a discrete set of directions. The \(q\) vectors \(\vec{e}^{(\alpha)}\) are not independent since:
\[\sum_{\alpha=1}^{q}\vec{e}^{(\alpha)}=0. \tag{3}\]
They satisfy:
\[\vec{e}^{(\alpha)}\cdot\vec{e}^{(\beta)}=A\delta_{\alpha\beta}+B \tag{4}\]
because all vectors play a symmetric role. This relation shows that the two Hamiltonians \(H\) and \(\tilde{H}\) are equivalent (up to an additive constant) under the condition that \(J=A\tilde{J}\).
### The Ginzburg-Landau model
In the field theoretical approach to critical phenomena, it is convenient to work with fields \(\vec{\varphi}(x)\) that are unconstrained, that is, whose direction is not necessarily one of the \(\vec{e}^{(\alpha)}\) and whose components vary between \(-\infty\) and \(+\infty\). A potential \(U(\vec{\varphi})\) contributing to the Hamiltonian replaces the "hard constraints" satisfied by the vectors \(\vec{S}\) in Eq. (2) with "soft constraints" that penalize configurations of the \(\vec{\phi}\) different from those of the \(\vec{S}\). The resulting Hamiltonian is called the Ginzburg-Landau (GL) Hamiltonian. It reads on the lattice:
\[H_{\rm GL}[\vec{\varphi}]=-J\sum_{\langle i,j\rangle}\vec{\varphi}_{i}\cdot \vec{\varphi}_{j}+\sum_{i}U(\vec{\varphi}_{i}) \tag{5}\]
and, after rescalings, its continuum version is:
\[H_{\rm GL}[\vec{\varphi}]=\int d^{d}x\left(\frac{1}{2}\big{(}\partial_{\mu} \vec{\varphi}(x)\big{)}^{2}+U(\vec{\varphi}(x))\right). \tag{6}\]
In \(H_{\rm GL}\), the potential \(U\) must have its \(q\) minima pointing in the direction of the vertices of a \((q-1)\)-dimensional tetrahedron. The problem of building \(H_{\rm GL}\) thus boils down to that of building a general \({\cal S}_{q}\)-invariant potential \(U\).
Notice that in general the hard constraints satisfied by the spins \(\vec{S}_{i}\) can be recovered on the \(\vec{\varphi}\) in the limit where \(\exp(-U(\vec{\varphi}))\) becomes a Dirac function that selects only the configurations of the \(\vec{S}_{i}\). The original model, Eq. (2), and the Ginzburg-Landau model are thus expected to be in the same universality class when they both undergo a continuous transition. In most cases, a truncation of \(U(\vec{\varphi})\) keeping only the nontrivial terms of lowest degree in the fields is sufficient to pick one model belonging to the universality class. However, as we show below, the RG flow we are interested in couples all invariants and it is therefore mandatory to build all of them.
### Scalars and tensors
In the following, we need the construction of the invariant tensors and of the scalars of the model which requires the explicit construction of the vectors \(\vec{e}^{(\alpha)}\). These different constructions have been done in the literature [41; 17; 42] and we recall them below for the sake of completeness. Let us first show that the normalization of the vectors \(\vec{e}^{(\alpha)}\) can be important from a practical point of view.
The constants \(A\) and \(B\) in Eq. (4) are not independent. Taking the square of the identity (3), one finds that for \(\alpha\neq\beta\):
\[\vec{e}^{(\alpha)}\cdot\vec{e}^{(\beta)}=-\frac{1}{q-1}|\vec{e}^{(\alpha)}|^{2} \tag{7}\]
and thus
\[\vec{e}^{(\alpha)}\cdot\vec{e}^{(\beta)}=\frac{|\vec{e}^{(\alpha)}|^{2}}{q-1} (q\,\delta_{\alpha\beta}-1). \tag{8}\]
Whenever the limit \(n\to 0\) has to be taken, it is convenient to choose the normalization: \(|\vec{e}^{(\alpha)}|^{2}=q-1=n\)[17]. Since we are interested in finite values of \(n\), we choose:
\[|\vec{e}^{(\alpha)}|=\sqrt{\frac{2n}{n+1}} \tag{9}\]
from which follows
\[\vec{e}^{(\alpha)}\cdot\vec{e}^{(\beta)}=2\big{(}\delta_{\alpha\beta}-\frac{1 }{n+1}\big{)}. \tag{10}\]
The general construction of the vectors \(\vec{e}^{(\alpha)}\) is presented in Appendix A together with some of their properties.
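As an illustration (not part of the original text), the construction can be checked numerically. The explicit components depend on the choice of orthonormal basis, and therefore may differ from the convention of Appendix A, but the relations (3) and (10) are basis independent.

```python
import numpy as np

def simplex_vectors(q):
    """q vectors e^(alpha) in R^n (n = q-1) from the barycenter of a regular
    simplex to its vertices, normalized so that |e|^2 = 2n/(n+1), Eq. (9)."""
    n = q - 1
    c = np.eye(q) - np.ones((q, q)) / q        # centered vertices, living in R^q
    basis = np.linalg.svd(c)[2][:n]            # orthonormal basis of the hyperplane they span
    return np.sqrt(2.0) * c @ basis.T          # shape (q, n)

q = 5
n = q - 1
e = simplex_vectors(q)
assert np.allclose(e.sum(axis=0), 0)                          # Eq. (3)
assert np.allclose(e @ e.T, 2 * (np.eye(q) - 1.0 / (n + 1)))  # Eq. (10)
```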
We can now build the \({\cal S}_{q}\)-invariants contributing to \(U(\vec{\varphi})\), that is, invariants that do not include any derivative of the fields. As any permutation of \(q\) objects can be decomposed into a succession of permutations between two objects, it is sufficient to require the invariance of the potential \(U\) under all permutations \(R^{(\alpha,\beta)}\) interchanging the vectors \(\vec{e}^{(\alpha)}\) and \(\vec{e}^{(\beta)}\) without modifying the others.
A general polynomial in the coordinates \((\varphi_{i_{1}},\ldots,\varphi_{i_{n}})\) of \(\vec{\varphi}\) involving only terms of degree \(p\) can be written:
\[\tilde{T}^{(p)}_{i_{1}i_{2}\ldots i_{p}}\varphi_{i_{1}}\ldots\varphi_{i_{p}}, \tag{11}\]
where Einstein's convention is used, as will be done in the rest of the article. Without loss of generality, \(\tilde{T}^{(p)}_{i_{1}i_{2}\ldots i_{p}}\) can be taken completely symmetric. The previous polynomial is invariant under \({\cal S}_{q}\) if and only if \(\tilde{T}^{(p)}\) is a completely symmetric invariant tensor of \({\cal S}_{q}\) of rank \(p\). In this case, the transformation of the \(\varphi_{i}\) under \({\cal S}_{q}\) is compensated by the invariance of \(\tilde{T}^{(p)}\) and the polynomial is invariant.
Obviously, all O(\(n\))-invariant tensors are invariant under \({\cal S}_{q=n+1}\) since it is a subgroup of O(\(n\)). For the O(\(n\)) group, the invariant tensors are all linear combinations of tensor products of the Kronecker delta. We call them the isotropic tensors. For example, the completely symmetric tensors of order two and four are:
\[\tilde{T}^{(2)}_{ij}=\delta_{ij},\qquad S^{(4)}_{ijkl}=\delta_{ij}\delta_{kl}+\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}. \tag{12}\]
Once contracted with the fields, they yield powers of the unique O(\(n\)) invariant:
\[\rho=\frac{1}{2}(\varphi_{1}^{2}+\varphi_{2}^{2}+\cdots+\varphi_{n}^{2}) \tag{13}\]
since
\[\tilde{T}^{(2)}_{ij}\varphi_{i}\varphi_{j}=2\rho,\qquad\quad S^{(4)}_{ijkl} \varphi_{i}\varphi_{j}\varphi_{k}\varphi_{l}=12\rho^{2}. \tag{14}\]
For the \(\mathcal{S}_{q}\) group, there are many other algebraically independent invariant tensors. We call them anisotropic tensors because their presence in a Hamiltonian is the signature of the explicit O(\(n\)) symmetry breaking down to the \(\mathcal{S}_{q}\) symmetry. We now show how to build the simplest one which is of rank 3.
From the basis vectors \(\vec{e}^{(\alpha)}\) it is easy to build a rank-3 completely symmetric tensor:
\[\bar{T}^{(3)}_{ijk}=\frac{1}{2}\sum_{\alpha=1}^{q}e^{(\alpha)}_{i}e^{(\alpha) }_{j}e^{(\alpha)}_{k}. \tag{15}\]
A tensor is invariant under \(\mathcal{S}_{q}\) if by applying any permutation \(R^{(\beta,\gamma)}\) it remains unchanged. This is obvious for \(\bar{T}^{(3)}\) from its definition since \(R^{(\beta,\gamma)}\) acts only on the two terms of the sum in Eq. (15) where \(\alpha=\beta\) and \(\alpha=\gamma\) and exchange them. An invariant polynomial of order three is therefore:
\[\bar{\tau}_{3}=\frac{1}{2}\bar{T}^{(3)}_{ijk}\varphi_{i}\varphi_{j}\varphi_{k}. \tag{16}\]
Let us notice that \(\bar{T}^{(3)}\equiv 0\) for \(q=2\) because in this case, \(\vec{e}^{(1)}=-\vec{e}^{(2)}\) are one-component vectors and \(\bar{T}^{(3)}_{111}=0\). This is expected since \(q=2\) corresponds to the Ising model which is known to have \(\rho\) as only invariant. We show in Appendix B that for any value of \(q\), \(\bar{T}^{(3)}\) must have an even number of indices equal to 1 to be non-zero.
For any \(q\geq 3\), \(\bar{T}^{(3)}\neq 0\). We compute in Appendix B.1 two of its components, valid for any \(q\geq 3\):
\[\bar{T}^{(3)}_{112}=-\bar{T}^{(3)}_{222}=-\frac{1}{\sqrt{3}} \tag{17}\]
which implies that
\[\bar{\tau}_{3}\Big|_{\varphi_{3}=\cdots=\varphi_{n}=0}=\frac{1}{2}\big(3\bar{T}^{(3)}_{112}\varphi_{1}^{2}\varphi_{2}+\bar{T}^{(3)}_{222}\varphi_{2}^{3}\big)=\frac{1}{2\sqrt{3}}(\varphi_{2}^{3}-3\varphi_{1}^{2}\varphi_{2}) \tag{18}\]
independently of the values of \(n\). This expression contains all the terms of \(\bar{\tau}_{3}\) when \(q=3\) but, of course, for \(q>3\) other terms including \(\varphi_{3},\varphi_{4},\cdots\) contribute to \(\bar{\tau}_{3}\). Whether the projection of \(\bar{\tau}_{3}\) onto the \((\varphi_{1},\varphi_{2})\) plane is independent of \(q\) depends crucially on the choice of normalization condition, Eq. (9). With other normalizations of \(\vec{e}^{(\alpha)}\) or of \(\bar{T}^{(3)}\) this projection may depend on \(q\) via a multiplicative factor or through a permutation of indices (see [17], for example).
Let us now generalize the construction above to general tensors. The obvious generalization of Eq. (15) is:
\[\bar{T}^{(p)}_{i_{1}i_{2}\ldots i_{p}}=\frac{1}{2}\sum_{\alpha=1}^{q}e^{( \alpha)}_{i_{1}}e^{(\alpha)}_{i_{2}}\ldots e^{(\alpha)}_{i_{p}}. \tag{19}\]
and the proof that it is invariant under \(\mathcal{S}_{q}\) is identical: any permutation \(R^{(\alpha,\beta)}\) leaves the sum in Eq. (19) unchanged because it only exchanges two of its terms. It follows that
\[\bar{\tau}_{p}=\frac{1}{2}\bar{T}^{(p)}_{i_{1}i_{2}\ldots i_{p}}\varphi_{i_{1 }}\ldots\varphi_{i_{p}}. \tag{20}\]
is invariant under \(\mathcal{S}_{q}\). The explicit construction made in Appendix B shows that for integer values of \(q\) the tensors \(\{\bar{T}^{(2)},\bar{T}^{(3)},\ldots,\bar{T}^{(q)}\}\) are independent. It is important to notice that this is also a complete set of independent tensors because one cannot construct, for any group, more than \(n\) independent invariants out of an \(n\)-component vector. This implies that all higher order invariant terms are sums of products of the \(\bar{\tau}_{p}\) with \(p\leq q\).
The tensors \(\bar{T}^{(p)}\) enjoy many algebraic properties reviewed in Sect. IV.
### The mean-field approximation
We recall below that the Potts model undergoes a first order transition at mean field level for all values of \(q>2\)[2; 3; 18; 19; 20]. To show this, it is sufficient to consider the GL Hamiltonian in its continuum version:
\[H_{\text{GL}}[\vec{\phi}]=\int d^{d}x\,\Big{(}\frac{1}{2}(\partial_{\mu}\vec{ \phi}(x))^{2}+r\rho(x)+\frac{v}{3!}\bar{\tau}_{3}(x)+\frac{u}{6}\rho^{2}(x)+ \cdots\Big{)} \tag{21}\]
and to show that the transition cannot be of second order.
The spirit of the mean-field approximation is either to neglect all fluctuations or, at least, to neglect long wavelength fluctuations. In this approximation, the Gibbs free energy is a smooth function of \(\vec{\phi}=\langle\vec{\phi}\rangle\) that can be expanded as \(H_{\text{GL}}\) in Eq. (21), but with effective couplings. Therefore, at small magnetization, the free energy per unit volume evaluated for a constant field is:
\[\frac{1}{V}\Gamma(\vec{\phi},T)=r_{\text{eff}}(T)\,\rho+\frac{v_{\text{eff}}(T )}{3!}\bar{\tau}_{3}+\mathcal{O}(|\vec{\phi}|^{4}), \tag{22}\]
where \(V\) is the space volume, \(r_{\text{eff}}(T)\) and \(v_{\text{eff}}(T)\) are effective parameters depending smoothly on the temperature and \(\rho\) and \(\bar{\tau}_{3}\) are given by Eqs. (13) and (16) with \(\vec{\varphi}\) replaced by \(\vec{\phi}\).
If the transition were continuous, the magnetization would go to zero as \(T\) goes to the transition temperature \(T_{c}\) which requires \(r_{\text{eff}}(T_{c})=0\). Now, if \(v_{\text{eff}}(T_{c})\neq 0\), the free energy does not have a minimum at \(\vec{\phi}=0\) for \(T=T_{c}\) because the trilinear term dominates at small fields and behaves for arbitrary values of \(q\) as:
\[\frac{1}{V}\Gamma(\phi_{1}=\phi_{3}=\phi_{4}=\cdots=0,T_{c})=\frac{1}{2\sqrt{3 }}\frac{v_{\text{eff}}(T_{c})}{3!}\phi_{2}^{3}+\mathcal{O}(\phi_{2}^{4}). \tag{23}\]
Therefore, the free-energy shows an inflexion point at zero field and not a minimum and the transition cannot be continuous, except if \(v_{\text{eff}}(T_{c})=0\). However, this would not correspond to a critical point but to a tricritical point because two control parameters must be tuned at the same time to impose both \(r_{\text{eff}}(T_{c})=0\) and \(v_{\text{eff}}(T_{c})=0\). Therefore, at mean-field,
the model cannot be critical and if there is a transition corresponding to the tuning of one parameter only, it must be of first order.
Of course, the expansion in Eq. (22) does not yield a free energy bounded from below, but this is nothing but the consequence of the Taylor expansion made at small fields. Including higher powers of the field, such as a \(|\vec{\phi}|^{4}\) term, the free energy can become stable. We show in Fig. 2 how it typically deforms when \(r_{\rm eff}\) is varied and how the first order transition occurs at mean field level.
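As a short consistency check of Fig. 2 (added here for illustration, not part of the original discussion), restricting the quartic truncation to the direction \(\phi_{1}=\phi_{3}=\cdots=0\), using Eq. (18) and the normalization of Eq. (21) for the quartic term, the free energy per unit volume reads

\[\frac{1}{V}\Gamma=\frac{r_{\rm eff}}{2}\,\phi_{2}^{2}+\frac{v_{\rm eff}}{12\sqrt{3}}\,\phi_{2}^{3}+\frac{u_{\rm eff}}{24}\,\phi_{2}^{4}.\]

The minima at \(\phi_{2}=0\) and \(\phi_{2}\neq 0\) become degenerate when the discriminant condition of the reduced quadratic polynomial is met, that is, at

\[r_{\rm eff}^{\rm trans}=\frac{v_{\rm eff}^{2}}{36\,u_{\rm eff}}\simeq 2.8\quad\text{for}\ v_{\rm eff}=-10,\ u_{\rm eff}=1,\]

in agreement with the statement of Fig. 2 that the first order transition takes place for \(2<r_{\rm eff}<3\).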
### Critical behavior in \(d=2\) and upper critical dimensions
The Potts model has been solved exactly in \(d=2\)[13] and it has been proven that the transition is of first order for \(q>4\) and continuous for \(q\leq 4\). This shows that, at least in low dimensions, the model shows important fluctuations that invalidate the mean-field analysis near the transition. Conversely, one can expect that for any \(q\), the mean field approximation should be a reasonable approximation for large enough \(d\). Usually, one defines the upper critical dimension \(d_{c}\) of a given model as the dimension above which the universal critical properties are exactly taken into account by the mean field approximation. This implies that for \(d>d_{c}\), the critical fluctuations are Gaussian. The scaling analysis is therefore performed around the Gaussian FP and is dominated by the couplings of largest engineering dimensions. For example, for the Ising model, the most relevant coupling is \(u\), whose scaling dimension is \(4-d\). The critical dimension is thus \(d_{c}=4\) and the critical theory is described by the Gaussian FP for \(d\geq 4\).
For \(q>2\), the situation is very different. The most relevant interaction coupling with respect to the Gaussian FP is the trilinear one, whose Gaussian scaling dimension is \((6-d)/2\). As a consequence, if the transition were of second order the upper critical dimension of the model would be six, because above this dimension there are no relevant interactions at the Gaussian FP. A perturbative expansion in \(\epsilon=6-d\) has been devised in [21; 22; 23]. Nevertheless, since the only relevant coupling gives rise to a potential that is not bounded from below, the results are not related to a critical transition, but to an expansion around a metastable state [24; 25].
Obviously, the potential needs to be bounded from below and this requires an even operator, that is, in the simplest case, a quartic term. For this term to be relevant, \(d\) cannot be greater than \(4\) and we therefore expect that it is only below four dimensions that the transition can be of second order 1. A schematic representation of the curve that determines the boundary between a first-order and a second-order transition summarising the information known in the literature so far is shown in Fig. 3.
Footnote 1: This observation only applies when \(q\in\mathbb{N}\) and it is unclear whether requiring the potential to be bounded from below should apply to the analytic extension of the Potts model to noninteger values of \(d\) or \(q<2\). In fact, the perturbative expansion performed in \(\epsilon=6-d\) seems to be under control for \(q<2\)[43; 17] whereas it is not for \(q>2\)[24; 25]
Figure 2: Free energy per unit volume in the mean-field approximation for \(q=3\) and \(\phi_{1}=0\). Units are chosen to make the magnetization and the free energy dimensionless. Three couplings have been retained: \(r_{\rm eff},v_{\rm eff}\) and \(u_{\rm eff}\), which is the coupling of the \(|\vec{\phi}|^{4}\) term. This term has been included to make the free energy bounded from below. At fixed \(v_{\rm eff}=-10\) and \(u_{\rm eff}=1\), the first order transition is induced by the variations of \(r_{\rm eff}\) and occurs for \(2<r_{\rm eff}<3\).

A very interesting result was found by Newman _et al._ on this subject in Ref. [40]. Using an extension of the model to non-integer values of both \(q\) and \(d\), these authors proposed to study perturbatively the vicinity of the Ising model, that is, \(q=2\), by making a double expansion in \(\epsilon=4-d\) and \(\delta=q-2\). They find a line \(q_{c}(d)\) (or, equivalently, \(d_{c}(q)\)) below which the transition is of second order and above which it is of first order. It starts at \(q=2\) and \(d=4\) and \(d_{c}(q)\) can be interpreted as the upper critical dimension for a given, generically non-integer, value of \(q\) because for \(d>d_{c}(q)\) the transition is of first order as predicted by the mean-field approximation. For a given \(d\), they find two FPs for \(q<q_{c}(d)\): a critical and a tricritical one. These two FPs collide when \(q=q_{c}(d)\) and become complex for \(q>q_{c}(d)\): the transition then becomes of first order. Notice that this scenario of switching from a second to a first order transition is compatible with what is known in \(d=2\) where, at \(q=4\), the critical and tricritical FPs coincide and the transition becomes of first order for \(q>4\)[13; 44; 45]. It turns out that the calculation performed in [40] becomes unreliable for \(d\lesssim 3.4\) and thus cannot address the case \(q=3\) in \(d=3\), which is the most important one. We discuss Newman's results in some detail in Sect. V.1.
The calculation of \(d_{c}(q=3)\) has been recently addressed with the Conformal Bootstrap approach in Ref. [29]. The annihilation of the critical and tricritical FPs is also observed and it has been shown that \(d_{c}(q=3)\lesssim 2.5\). This result agrees with the common wisdom that for \(q=d=3\) the transition is of first order [2]. Another interesting piece of information about the \(q_{c}(d)\) curve concerns the approach to \(d=1\), given by \(q_{c}(1+\epsilon)\approx\exp(2/\epsilon)\)[26; 46].
We are interested in the following in computing the line \(q_{c}(d)\) defined above and in particular the value of \(q_{c}(d=3)\). Since the perturbative method, that is, the \(\epsilon=6-d\) expansion does not work for the Potts model we rely on a nonperturbative renormalization group method which is the modern version of Wilson's RG.
## III The nonperturbative renormalization group
In this section, we give a very brief overview of the Nonperturbative Renormalization Group (NPRG) and of the approximation scheme used in this paper, the Local Potential Approximation, which is the leading order of the Derivative Expansion. Although this material is presented in much greater detail in several reviews (such as [34]), we include it for completeness.
### Nonperturbative Renormalization Group Equations
The NPRG is based on Wilson's idea of integrating progressively short-distance degrees of freedom, that is, modes with a wavenumber larger than some scale \(k\) while keeping the long-distance modes frozen. This is done by adding to the Hamiltonian (or euclidean action) of the model a quadratic term that acts as an infrared regulator [47], \(H[\vec{\varphi}]\to H[\vec{\varphi}]+\Delta H_{k}[\vec{\varphi}]\) with:
\[\Delta H_{k}[\vec{\varphi}]=\frac{1}{2}\int_{q}\varphi_{i}(-q)R_{k}(q^{2}) \varphi_{i}(q). \tag{24}\]
Here and below \(\int_{q}=\int\frac{d^{d}q}{(2\pi)^{d}}\). To act as a well-behaved infrared regulator, \(R_{k}(q^{2})\) must satisfy:
* \(R_{k}(q^{2})\) is a \(C^{\infty}\) function of the momentum squared;2 Footnote 2: This requirement can be relaxed in certain approximations.
* \(R_{k}(q^{2})\sim Z_{k}k^{2}\) for \(q\ll k\), where \(Z_{k}\) is a field renormalization factor to be specified below;
* \(R_{k}(q^{2})\to 0\) very fast when \(q\gg k\).
The infrared regularized free-energy \(W_{k}[J]\) can be defined as usual [48; 49; 50]:
\[e^{W_{k}[J]}=\int\mathcal{D}\varphi\ e^{-S[\vec{\varphi}]-\Delta H_{k}[\vec{\varphi}]+\int_{x}J_{i}(x)\varphi_{i}(x)} \tag{25}\]
with \(\int_{x}=\int d^{d}x\). Notice that the free energy \(W[J]\) of the original model is recovered in the limit \(k\to 0\) since \(R_{k=0}\equiv 0\): \(W_{k=0}[\vec{J}]=W[\vec{J}]\).
The regularized effective action \(\Gamma_{k}[\vec{\phi}]\) is defined as a slightly modified Legendre transform of \(W_{k}[J]\):
\[\Gamma_{k}[\vec{\phi}]=\int_{x}\phi_{i}(x)J_{i}^{0}(x)-W_{k}[\vec{J}^{0}]-\Delta H_{k}[\vec{\phi}] \tag{26}\]
where \(\vec{J}^{0}\) is a function of \(\vec{\phi}\), determined implicitly by inverting the relation:
\[\phi_{i}(x)=\frac{\delta W_{k}}{\delta J_{i}(x)}\bigg|_{\vec{J}=\vec{J}^{0}}. \tag{27}\]
From the properties of the regulator \(R_{k}(q^{2})\) listed above and Eq. (26), it can then be shown [34] that at a microscopic scale \(k=\Lambda\), that must be much higher than any other dimensionful scale in the problem, \(\Gamma_{\Lambda}[\vec{\phi}]\sim H[\vec{\phi}]\). This provides the initial condition of the exact RG flow given below in Eq. (30).
The Gibbs free energy \(\Gamma_{k}[\vec{\phi}]\) is the generating functional of infrared-regularized one-particle irreducible (1PI) proper vertices. In the following, we omit the \(k\)-dependence of the propagator and proper vertices to alleviate the notation. Once evaluated in a constant field \(\vec{\phi}\), the Fourier transform of these vertices is defined by:
\[\Gamma_{i_{1}\ldots i_{n}}^{(n)}(p_{1},\ldots,p_{n-1};\vec{\phi})=\int_{x_{1},\ldots,x_{n-1}}e^{i\sum_{a=1}^{n-1}x_{a}\cdot p_{a}}\,\Gamma_{i_{1}\ldots i_{n}}^{(n)}(x_{1},\ldots,x_{n-1},0;\vec{\phi}), \tag{28}\]
where
\[\Gamma_{i_{1}\ldots i_{n}}^{(n)}(x_{1},\ldots,x_{n};\vec{\phi})=\left.\frac{ \delta^{n}\Gamma_{k}[\vec{\phi}]}{\delta\phi_{i_{1}}(x_{1})\ldots\delta\phi_{ i_{n}}(x_{n})}\right|_{\vec{\phi}(x)\equiv\vec{\phi}}. \tag{29}\]
The dependence of \(\Gamma_{k}[\vec{\phi}]\) on \(k\) or, equivalently, on the RG "time" \(t=\log(k/\Lambda)\)[48; 49; 50] is given by:
\[\partial_{t}\Gamma_{k}[\vec{\phi}]=\frac{1}{2}\int_{x,y}\partial_{t}R_{k}(x-y )G_{ii}[x,y;\vec{\phi}]. \tag{30}\]
Here \(R_{k}(x-y)\) is the inverse Fourier transform of \(R_{k}(q^{2})\) and \(G_{ij}[x,y;\vec{\phi}]\) is the full propagator in an arbitrary external field defined as the inverse of the two-point vertex function:
\[\int_{z}G_{il}[x,z;\vec{\phi}]\Big[\frac{\delta^{2}\Gamma_{k}[\vec{\phi}]}{\delta\phi_{l}(z)\delta\phi_{j}(y)}+R_{k}(z-y)\delta_{lj}\Big]=\delta(x-y)\delta_{ij}. \tag{31}\]
The scale-dependent effective potential is defined as the Gibbs free energy per unit volume evaluated in a constant field \(\vec{\phi}\):
\[U_{k}(\vec{\phi})=\frac{1}{V}\Gamma_{k}(\vec{\phi}) \tag{32}\]
It follows from Eq. (30) that it satisfies an exact flow equation:
\[\partial_{t}U_{k}(\vec{\phi})=\frac{1}{2}\int_{q}\partial_{t}R_{k}(q^{2})G_{ii}( q,\vec{\phi}) \tag{33}\]
where \(G_{ij}(q,\vec{\phi})\) is the Fourier transform of the propagator evaluated in the constant field \(\vec{\phi}\).
Equations for the \(n\)-point vertices in a constant external field can be obtained from Eq. (30) by applying \(n\) functional derivatives. The flow equation of \(\Gamma^{(n)}\) is then expressed in terms of all the vertices up to \(\Gamma^{(n+2)}\) which results in an infinite hierarchy of coupled NPRG equations. Solving this hierarchy requires approximations for most interacting theories (for a counter-example, see [51]).
Compared with other common approaches to field theory, the NPRG framework has the advantage of allowing approximations beyond perturbation theory. We now present the most widely used approximation in the context of NPRG, the derivative expansion.
### The derivative expansion and the issue of the infinite number of invariants
The DE is an approximation scheme consisting in replacing \(\Gamma_{k}[\vec{\phi}(x)]\) by its series expansion in the gradient of the field truncated at a finite order. For instance, for the Ising model, the DE truncated at its lowest order, called the Local Potential Approximation (LPA), consists in approximating \(\Gamma_{k}[\phi(x)]\) by:
\[\Gamma_{k}^{\rm LPA}[\phi]=\int_{x}\Bigl{(}U_{k}(\phi)+\frac{1}{2}(\partial_{ \mu}\phi)^{2}\Bigr{)}. \tag{34}\]
At fourth order of the DE and again for the Ising model it consists in approximating it by:
\[\Gamma_{k}^{\partial^{4}}[\phi]=\int_{x}\Bigl(U_{k}(\phi)+\frac{1}{2}Z_{k}(\phi)(\partial_{\mu}\phi)^{2}+\frac{1}{2}W_{k}^{a}(\phi)(\partial_{\mu}\partial_{\nu}\phi)^{2}+\frac{1}{2}\phi W_{k}^{b}(\phi)(\partial^{2}\phi)(\partial_{\mu}\phi)^{2}+\frac{1}{2}W_{k}^{c}(\phi)\Bigl((\partial_{\mu}\phi)^{2}\Bigr)^{2}+\mathcal{O}(\partial^{6})\Bigr). \tag{35}\]
To order \(\partial^{2}\), it consists in keeping both \(U_{k}\) and \(Z_{k}\) and neglecting all other functions of the expansion. Due to the \(\mathbb{Z}_{2}\) symmetry, the functions \(U_{k},Z_{k},W_{k}^{a,b,c},\cdots\) are functions of \(\rho=\phi^{2}/2\) only. Once one of these _ansatze_ is plugged into Eq. (30), the NPRG equation boils down to a set of coupled partial differential equations for the functions involved in the _ansatz_.
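For illustration (a standard result, added here for concreteness): inserting the LPA ansatz (34) into the exact flow (33) for a one-component field, where \(\Gamma_{k}^{(2)}(q;\phi)=q^{2}+U_{k}^{\prime\prime}(\phi)\), gives the closed equation

\[\partial_{t}U_{k}(\phi)=\frac{1}{2}\int_{q}\frac{\partial_{t}R_{k}(q^{2})}{q^{2}+R_{k}(q^{2})+U_{k}^{\prime\prime}(\phi)}.\]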
It has recently been demonstrated, both theoretically and empirically, that the DE is controlled by a small parameter for quantities defined at zero external momenta such as \(U_{k},Z_{k},\dots\), from which thermodynamical quantities such as the correlation length, the magnetic susceptibility, the critical exponents, the universal equation of state, etc., can be computed [35]. Corrections to the leading order (LPA) are typically suppressed by a factor of the anomalous dimension \(\eta\)[35]. This makes the LPA a very well-suited approximation in cases where \(\eta\) is small. Moreover, successive orders of the DE are suppressed by an expansion parameter which is rather small, of order \(1/4\). Empirically the DE shows a rapidly converging behavior at least up to order \(\partial^{6}\). This is to be contrasted with the usual perturbative expansions, which are at best asymptotic Borel-summable expansions requiring resummation techniques.
The convergence of the DE has been tested on the O(\(N\)) models in \(d=3\) by calculating critical exponents [36; 52] and universal amplitude ratios [37]. In many cases, it gives the most accurate results available for these quantities. The NPRG has also been used in many other contexts, such as disordered [53; 54] or non-equilibrium systems [55; 56; 57], almost systematically giving highly accurate results. It also makes it possible to calculate quantities that are beyond the reach of perturbative methods. Examples are nonuniversal quantities [58; 59; 60; 56] or FPs that are nonperturbative, such as the strong coupling FP of the Kardar-Parisi-Zhang equation in \(d=2\) and \(d=3\)[57] or the breaking of supersymmetry in the random field Ising model in \(d\simeq 5\)[53; 54]; see the review [34] for an extensive bibliography.
Our aim is to apply the DE at its leading order to the \(q\)-state Potts model. It is convenient to use both the LPA and a variant of the LPA, called the LPA', that consists in implementing a nontrivial field renormalization \(Z_{k}\) on top of the LPA. The LPA' ansatz is
\[\Gamma_{k}^{\rm LPA'}[\vec{\phi}]=\int d^{d}x\left(U_{k}(\vec{\phi})+\frac{1}{ 2}Z_{k}(\partial_{\mu}\vec{\phi})^{2}\right) \tag{36}\]
where \(Z_{k}\) is approximated by a field-independent quantity. The LPA is a simplification of the LPA' in which \(Z_{k}\) is constrained to remain \(1\) all along the RG flow.
A difficulty specific to the \(q\)-state Potts model is that the number of invariants is \(n=q-1\) whereas it remains equal to one for the O(\(N\)) models independently of the value of \(N\). The potential \(U_{k}\) in Eq. (36) is therefore a function of \(n\) variables, a much more complicated situation than for the O(\(N\)) models.
As stated before, \(q=2\) corresponds to the Ising universality class, which has been extensively studied in the literature, including at high orders of the DE. The next integer value, \(q=3\), in which we are most interested, presents a major difficulty: the value of \(d_{c}(q=3)\) is expected to be around \(2.5\)[29]. Below this dimension, the LPA is expected to be a poor approximation because \(\eta^{\rm LPA}=0\) whereas \(\eta\) is probably not small for \(d<2.5\). This is particularly visible in the behavior of the effective potential at criticality, that is, \(U_{k=0}(\vec{\phi})\) at \(T=T_{c}\), which is a power-law with exponent \(2d/(d-2+\eta)\). This power-law is clearly incompatible with \(\eta=0\) in \(d=2\) and the LPA is therefore invalid in this dimension.
The LPA' improves this situation since \(\eta^{\rm LPA'}\neq 0\), but this approximation is not under full control. The second order of the DE would be necessary to estimate the confidence level of the LPA, but its implementation is challenging and thus, for integer values of \(q\) larger than two, a reliable analysis within the DE is very difficult. Moreover here, as in reference [17], we restrict ourselves to the simplest implementation of the DE, which consists in performing a field expansion of \(U_{k}(\vec{\phi})\) on top of the LPA or LPA'. A drawback of the field expansion of the LPA or LPA' is that it may fail to converge in small dimensions even if it converges in \(d=3\)[61]. We study this in detail in the following. Our aim is to show that we can push the expansion of \(U_{k}(\vec{\phi})\) to orders high enough for our results to converge in \(d=3\).
A difficulty with the program described above comes from the continuation of \(q\) to real values. Although all algebraic properties of the tensors defined in Sect. II.3 and IV can be straightforwardly continued to real values of \(q\), the RG flows for noninteger values of \(q\) are tremendously more complicated than for integer values of \(q\). The reason is simple: for \(q\in\mathbb{N}\), the number of independent invariants \(\tau_{p}\) is finite and equal to \(n=q-1\), which implies that the couplings associated with the invariants \(\tau_{p>q}\) decouple from the flows of the other couplings with \(p\leq q\) [Note that this is only explicit if we choose a well-adapted parametrization of the tensors, which requires the definition of improved tensors, see Section IV.]. For \(q\notin\mathbb{N}\), the above decoupling does not occur and it is necessary to keep in the _ansatz_ for \(\Gamma_{k}[\vec{\phi}]\) the infinity of invariants of the model whatever the order of the DE. At the LPA for instance, the potential \(U_{k}\) for a noninteger value of \(q\) is a function of infinitely many invariants and it is therefore impossible to work functionally, even in principle. Fortunately, an expansion of \(U_{k}\) in powers of the fields, similar to the expansion of the Hamiltonian in Eq. (21), does not show such a difficulty because a monomial of a given order in the fields involves only a finite number of invariants.
## IV Invariants, tensors and improved tensors
We show below that the NPRG flow of the coupling constants involved in the field expansion of \(U_{k}\) requires the computation of contractions of several \(\bar{T}^{(p)}\) tensors. These contractions are computed in Appendix B and we review them below for completeness. We also show that since the number of independent tensors for a given \(q\in\mathbb{N}\) is finite and equal to \(n=q-1\), it is possible and convenient to build a set of improved tensors \(T^{(p)}\) such that \(T^{(p>q)}\equiv 0\) for any given \(q\in\mathbb{N}\). The extension of these tensors to noninteger values of \(n\) is also given below.
### Tensors and improved tensors
As proven in Appendix B, the contraction of two tensors \(\bar{T}\) is given by:
\[\bar{T}^{(p)}_{i_{1}i_{2}\cdots i_{p-1}k}\,\bar{T}^{(p^{\prime})}_{j_{1}j_{2}\cdots j_{p^{\prime}-1}k}=\bar{T}^{(p+p^{\prime}-2)}_{i_{1}i_{2}\cdots i_{p-1}j_{1}j_{2}\cdots j_{p^{\prime}-1}}-\frac{2}{n+1}\bar{T}^{(p-1)}_{i_{1}i_{2}\cdots i_{p-1}}\bar{T}^{(p^{\prime}-1)}_{j_{1}j_{2}\cdots j_{p^{\prime}-1}} \tag{37}\]
which implies for instance that:
\[\bar{T}^{(3)}_{ijm}\bar{T}^{(3)}_{klm}=\bar{T}^{(4)}_{ijkl}-\frac{2}{n+1}\delta_{ij}\delta_{kl}. \tag{38}\]
Thus, the contraction of two tensors yields in general a higher rank tensor. However, for \(q\in\mathbb{N}\), the tensors with \(p+p^{\prime}-2>q\) in Eq. (37) cannot be independent of the lower rank tensors and we show below that they are sums of products of these tensors.
Another useful property shown in Appendix B is:
\[\bar{T}^{(p)}_{i_{1}i_{2}\cdots i_{p-2}jj}=\frac{2n}{n+1}\bar{T}^{(p-2)}_{i_{1}i_{2}\cdots i_{p-2}} \tag{39}\]
An important feature of identities (37) and (39) is that they can be extended to non-integer values of \(q\) as done in [17]. We use this extension in Sec. IV.3.
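These identities can also be checked numerically. The short sketch below (an illustration, not part of the original derivation) builds the non-improved tensors from an explicit, basis-dependent realization of the vectors \(\vec{e}^{(\alpha)}\) and verifies Eqs. (38) and (39).

```python
import numpy as np

q = 5; n = q - 1                                    # any integer q >= 3 works here
c = np.eye(q) - np.ones((q, q)) / q
e = np.sqrt(2.0) * c @ np.linalg.svd(c)[2][:n].T    # the q vectors of Sec. II.3, shape (q, n)

T2 = 0.5 * np.einsum('ai,aj->ij', e, e)             # equals delta_ij with this normalization
T3 = 0.5 * np.einsum('ai,aj,ak->ijk', e, e, e)
T4 = 0.5 * np.einsum('ai,aj,ak,al->ijkl', e, e, e, e)

# Eq. (38): T3_ijm T3_klm = T4_ijkl - 2/(n+1) delta_ij delta_kl
lhs = np.einsum('ijm,klm->ijkl', T3, T3)
rhs = T4 - 2.0 / (n + 1) * np.einsum('ij,kl->ijkl', np.eye(n), np.eye(n))
assert np.allclose(lhs, rhs)

# Eq. (39): tracing two indices of T4 gives 2n/(n+1) times T2
assert np.allclose(np.einsum('ijkk->ij', T4), 2.0 * n / (n + 1) * T2)
assert np.allclose(T2, np.eye(n))
```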
As said above in Section II.3, the \(\bar{T}^{(p)}\) tensors can be divided into isotropic and anisotropic tensors depending on whether they are O(\(n\))-invariant tensors or not. However, this classification does not entirely fix what an anisotropic tensor is because to any of these tensors can be added an isotropic one while remaining anisotropic. For instance, \(\bar{T}^{(4)}\) can be modified by adding a multiple of \(S^{(4)}\). It is therefore possible to modify the anisotropic tensors in such a way that they satisfy some extra properties. Following Ref. [40], we employ traceless tensors and more generally, we define the improved tensors \(T^{(p)}\) by requiring that their full contraction with lower rank tensors is zero. For instance, for \(p>2\), \(T^{(p)}\) must satisfy:
\[T^{(p)}_{i_{1}i_{2}\cdots i_{p-2}kl}\,T^{(2)}_{kl}=0, \tag{40}\]
that is, any partial trace must be zero. Notice that for \(\bar{T}^{(3)}\):
\[\bar{T}^{(3)}_{iij}=\frac{1}{2}\sum_{\alpha=1}^{q}e^{(\alpha)}_{i}e^{(\alpha)}_{i}e^{(\alpha)}_{j}=\frac{2n}{n+1}\,\frac{1}{2}\sum_{\alpha=1}^{q}e^{(\alpha)}_{j}=0 \tag{41}\]
and thus \(T^{(3)}=\bar{T}^{(3)}\). However this is no longer true for \(\bar{T}^{(p>3)}\), as can be seen on Eq. (39). The construction of \(T^{(4)}\) is simple and we find that the traceless condition (40) imposes that
\[T^{(4)}_{ijkl}=\bar{T}^{(4)}_{ijkl}-\frac{2n}{(n+1)(n+2)}S^{(4)}_{ijkl}. \tag{42}\]
Notice that \(T^{(4)}_{ijkl}T^{(3)}_{ijk}=0\) and \(T^{(4)}\) is therefore the improved tensor of rank 4.
It can be shown that for \(p\leq 5\), the traceless condition (40) is sufficient to fully determine the improved tensors. That is, all other constraints coming from the contraction of \(T^{(p\leq 5)}\) with \(T^{(p^{\prime}<p)}\) are automatically satisfied when Eq. (40) is. Starting from \(T^{(6)}\) this is no longer true and the contractions with \(T^{(3)},T^{(4)},\dots\) have to be taken into account to fully determine the improved tensors.
The \(T^{(p)}\) defined above have many good properties. For example, we show in Appendix B that for \(n=1\) or \(n=2\), \(T^{(4)}\) has, at most, only three nonvanishing components \(T^{(4)}_{1111},T^{(4)}_{1122}\) and \(T^{(4)}_{2222}\) that are all proportional whatever the values of \(n\):
\[T^{(4)}_{1111}=T^{(4)}_{2222}=3T^{(4)}_{1122}=\frac{(n-1)(n-2)}{(n+1)(n+2)}. \tag{43}\]
This implies the important property that \(T^{(4)}\equiv 0\) for \(n=1\) and \(n=2\). More generally, for any given \(q\in\mathbb{N}\), any improved tensor \(T^{(p)}\equiv 0\) for integer \(p>q\).
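A similar numerical check (again illustrative, using a basis-dependent realization of the vectors) confirms that the rank-4 tensor improved according to Eq. (42) is traceless and vanishes identically for \(n=1\) and \(n=2\).

```python
import numpy as np

def improved_T4(q):
    """Improved rank-4 tensor of Eq. (42) for the q-state model."""
    n = q - 1
    c = np.eye(q) - np.ones((q, q)) / q
    e = np.sqrt(2.0) * c @ np.linalg.svd(c)[2][:n].T
    T4 = 0.5 * np.einsum('ai,aj,ak,al->ijkl', e, e, e, e)
    eye = np.eye(n)
    S4 = (np.einsum('ij,kl->ijkl', eye, eye) + np.einsum('ik,jl->ijkl', eye, eye)
          + np.einsum('il,jk->ijkl', eye, eye))
    return T4 - 2.0 * n / ((n + 1) * (n + 2)) * S4

assert np.allclose(np.einsum('ijkk->ij', improved_T4(5)), 0)   # traceless, Eq. (40)
for q in (2, 3):
    assert np.allclose(improved_T4(q), 0)                      # T^(4) vanishes for n = 1, 2
```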
### Invariants
Once the tensors \(T^{(p)}\) or \(\bar{T}^{(p)}\) have been defined, the monomial in \(\phi_{i}\) invariant under \(\mathcal{S}_{q}\) can be constructed as in Eq. (16). For instance, the improved invariants are:
\[\tau_{p}=\frac{1}{2}T^{(p)}_{i_{1}i_{2}\cdots i_{p}}\phi_{i_{1}}\phi_{i_{2}}\cdots\phi_{i_{p}}. \tag{44}\]
The field expansion of the potential \(U_{k}\) is the sum of the products of these invariants weighted by coupling constants, see Eq. (47) below for the expansion truncated to order 9 in powers of the fields. Notice that using either the \(\tau_{p}\) or \(\bar{\tau}_{p}\) invariants in this field expansion boils down to a linear redefinition of the couplings in front of them, which is immaterial for the calculation of physical quantities. The only advantage of using improved invariants is to make manifest the fact that when \(n\in\mathbb{N}\), only a finite set of invariants survive. This property is not explicit in the non-improved version and is not easy to check.
An interesting property of both improved and non improved invariants generalizes the one given for \(\tau_{3}\) in Eq. (18). It is due to the normalization conventions that we have employed. Consider two Potts models having \(q\) and \(q^{\prime}\) states respectively, with \(q^{\prime}>q\). Then the invariants corresponding to the \(q^{\prime}-\)state Potts model projected onto the space \(\phi_{q}=\phi_{q+1}=\cdots=\phi_{q^{\prime}-1}=0\) are all identical to the invariants of the \(q-\)state Potts model. This is trivial for the O(\(n\))-invariant \(\rho\) defined in Eq. (13):
\[\rho^{(q)}=\rho^{(q^{\prime})}\big{|}_{\phi_{q}=\phi_{q+1}=\cdots=\phi_{q^{ \prime}-1}=0} \tag{45}\]
and it can be shown to hold for all invariants:
\[\tau_{p}^{(q)}=\tau_{p}^{(q^{\prime})}\big{|}_{\phi_{q}=\phi_{q+1}=\cdots=\phi _{q^{\prime}-1}=0}. \tag{46}\]
### Flow equations in the Local Potential Approximation
As said above, the study of the \(q\)-state Potts model for noninteger values of \(q\) requires performing a field expansion. For the LPA or LPA' defined in Eq. (36), this amounts to expanding the potential \(U_{k}(\vec{\phi})\equiv U_{k}(\phi_{1},\phi_{2},\dots)\) in powers of the fields \(\phi_{i}\). To order 9, the potential reads:
\[U_{k}(\vec{\phi})=u_{2}\rho+\frac{u_{4}}{6}\rho^{2}+\frac{u_{6}}{90}\rho^{3}+\frac{u_{8}}{2520}\rho^{4}\] \[+\frac{v_{3}}{3}\tau_{3}+\frac{v_{5}}{30}\rho\tau_{3}+\frac{v_{6}}{180}\tau_{3}^{2}+\frac{v_{7}}{630}\rho^{2}\tau_{3}+\frac{v_{8}}{5040}\rho\tau_{3}^{2}\] \[+\frac{v_{9a}}{22680}\rho^{3}\tau_{3}+\frac{v_{9b}}{45360}\tau_{3}^{3}+\frac{w_{4}}{12}\tau_{4}+\frac{w_{6}}{180}\rho\tau_{4}\] \[+\frac{w_{7}}{1260}\tau_{3}\tau_{4}+\frac{w_{8a}}{5040}\rho^{2}\tau_{4}+\frac{w_{8b}}{10080}\tau_{4}^{2} \tag{47}\] \[+\frac{w_{9}}{45360}\rho\tau_{3}\tau_{4}+\frac{x_{5}}{60}\tau_{5}+\frac{x_{7}}{1260}\rho\tau_{5}+\frac{x_{8}}{10080}\tau_{3}\tau_{5}\] \[+\frac{x_{9a}}{45360}\rho^{2}\tau_{5}+\frac{x_{9b}}{90720}\tau_{4}\tau_{5}+\frac{y_{6}}{360}\tau_{6}+\frac{y_{8}}{10080}\rho\tau_{6}\] \[+\frac{y_{9}}{90720}\tau_{3}\tau_{6}+\frac{z_{7}}{2520}\tau_{7}+\frac{z_{9}}{90720}\rho\tau_{7}+\frac{s_{8}}{20160}\tau_{8}\] \[+\frac{c_{9}}{181440}\tau_{9}\]
where the \(u_{a}\)'s are for terms involving only \(\rho\) and the index \(a\) is the power of the fields, the \(v_{a}\)'s for terms involving \(\tau_{3}\) and \(\rho\), the \(w_{a}\)'s for terms involving \(\tau_{4}\), \(\tau_{3}\) and \(\rho\), and so on. Notice that there are two terms of order 8 involving \(\tau_{4}\) and we have called them \(w_{8a}\) and \(w_{8b}\). Below, as usual, we call \(r=u_{2}\) and \(u=u_{4}\).
The flow of \(U_{k}(\vec{\phi})\), given in Eq. (33), requires the computation of the full propagator, that is, of \(\left(\Gamma_{k}^{(2)}(q,\vec{\phi})+R_{k}(q)\right)^{-1}\). In the LPA or LPA', \(\Gamma_{k}^{(2)}(q,\vec{\phi})\) is computed from Eqs. (36) and (47):
\[\Gamma_{ij}^{(2)}(q;\vec{\phi})=Z_{k}q^{2}\delta_{ij}+\frac{\partial^{2}U_{k}( \vec{\phi})}{\partial\phi_{i}\partial\phi_{j}}=(Z_{k}q^{2}+r)\delta_{ij}+ \frac{\partial^{2}U_{k}^{\rm int}(\vec{\phi})}{\partial\phi_{i}\partial\phi_{j}} \tag{48}\]
where we have separated in \(U_{k}\) the quadratic term and the interaction part \(U_{k}^{\rm int}(\vec{\phi})\). The latter includes at least a cubic term and its second derivative involves at least one field. Thus, expanding the propagator in powers of the field is equivalent to expanding in powers of
\[U_{ij}^{(2),{\rm int}}(\vec{\phi})=\frac{\partial^{2}U_{k}^{\rm int}(\vec{\phi} )}{\partial\phi_{i}\partial\phi_{j}}. \tag{49}\]
Defining the propagator at zero field by:
\[G_{ij}(q)=\frac{\delta_{ij}}{Z_{k}q^{2}+R_{k}(q^{2})+r}\equiv\delta_{ij}G(q), \tag{50}\]
the expansion in powers of the field of the LPA flow equation of \(U_{k}\) consists in inserting in Eq. (33) the expansion
\[G_{ij}(q;\vec{\phi})=\delta_{ij}G(q)-G^{2}(q)U_{ij}^{(2),{\rm int}}+G^{3}(q)U_{il}^{(2),{\rm int}}U_{lj}^{(2),{\rm int}}\] \[-G^{4}(q)U_{il}^{(2),{\rm int}}U_{lm}^{(2),{\rm int}}U_{mj}^{(2),{\rm int}}+\cdots. \tag{51}\]
The flow equations of all the coupling constants can then be obtained from the flow of \(U_{k}\) by projection onto the invariants \(\tau_{p}\) or \(\bar{\tau}_{p}\), depending on whether we want to work with ordinary or improved invariants. This requires tensor contractions which are straightforward using Eqs. (37) and (39) but which become increasingly tedious as the order of the truncation increases. Notice that the tensor contractions are simple for the nonimproved tensors and more involved for the improved tensors. Thus, for practical purposes, it is simpler to work first with nonimproved tensors and to switch to improved couplings only at the end of the calculation, if necessary.
Before discussing the flow equations of the coupling constants involved in Eq. (47), let us define their dimensionless and renormalized counterparts. They are defined by [34]:
\[U_{k}(\vec{\phi}) =4\omega_{d}\,k^{d}\tilde{U}_{k}(\vec{\tilde{\phi}})\] \[\vec{\phi} =2\sqrt{\omega_{d}}\,Z_{k}^{-1/2}k^{(d-2)/2}\vec{\tilde{\phi}} \tag{52}\]
where \(\omega_{d}=\left(2^{d}\pi^{d/2}\Gamma(d/2)d\right)^{-1}\) and \(\eta_{k}=-\partial_{t}\log Z_{k}\) is the running anomalous dimension. A rescaling by the factor \(\omega_{d}\) has been implemented so as to cancel large numbers coming from
angular integration. The corresponding equation for the dimensionless potential is:
\[\partial_{t}\tilde{U}_{k}(\tilde{\vec{\phi}})+d\tilde{U}_{k}(\tilde{\vec{\phi}})-\frac{d-2+\eta_{k}}{2}\tilde{\phi}_{i}\frac{\partial\tilde{U}_{k}(\tilde{\vec{\phi}})}{\partial\tilde{\phi}_{i}}=\frac{k^{-d}}{2\times 4\omega_{d}}\int_{q}\partial_{t}R_{k}(q^{2})G_{ii}(q,\vec{\phi}). \tag{53}\]
The field expansion of \(\tilde{U}_{k}(\tilde{\vec{\phi}})\) to order \(9\) is similar to the one of Eq. (47) with the invariants \(\rho,\tau_{3},\tau_{4},\dots\) and the coupling constants \(u_{a},v_{a},w_{a},\dots\) replaced by their dimensionless counterparts. By collectively calling \(g_{m}\) a coupling constant of a term involving \(m\) fields, its dimensionless counterpart \(\tilde{g}_{m}\) is given by:
\[g_{m}=\tilde{g}_{m}k^{d-m(d-2)/2}Z_{k}^{m/2}(4\omega_{d})^{(2-m)/2}. \tag{54}\]
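As a side remark, Eq. (54) is straightforward to evaluate numerically; the short sketch below (Python, with helper names of our own choosing) simply converts a dimensionless coupling of an \(m\)-field term back to its dimensionful counterpart for given \(d\), \(k\) and \(Z_{k}\).

```python
import math

def omega_d(d):
    # omega_d = (2^d * pi^(d/2) * Gamma(d/2) * d)^(-1), cf. the text below Eq. (52)
    return 1.0 / (2**d * math.pi**(d / 2) * math.gamma(d / 2) * d)

def dimensionful_coupling(g_tilde, m, d, k, Z_k=1.0):
    """Eq. (54): dimensionful coupling g_m of an m-field term obtained
    from its dimensionless counterpart g~_m."""
    return g_tilde * k**(d - m * (d - 2) / 2) * Z_k**(m / 2) * (4 * omega_d(d))**((2 - m) / 2)

# example: a quartic coupling (m = 4) in d = 3 at scale k = 1
print(dimensionful_coupling(g_tilde=1.0, m=4, d=3.0, k=1.0))
```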
The flows of the dimensionless couplings \(\tilde{u}_{a},\tilde{v}_{a},\tilde{w}_{a},\dots\) are obtained by expanding both sides of Eq. (53) in powers of the invariants. They only involve the integrals
\[I_{n}(r)=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\partial_{t}R_{k}(q^{2})}{\left(Z _{k}q^{2}+R_{k}(q^{2})+r\right)^{n}} \tag{55}\]
or their dimensionless counterparts defined by
\[I_{n}(r)=\frac{4\omega_{d}k^{d+2-2n}}{Z_{k}^{n-1}}\tilde{I}_{n}(\tilde{r}). \tag{56}\]
From now on, we work only with dimensionless couplings and integrals and omit the tildes for simplicity.
For the sake of concreteness and because we need them in the following, we give below the flow equations for the couplings corresponding to four fields or less. The other ones are given in a supplementary material. These equations are:
\[\partial_{t}r=(\eta-2)r-\frac{(n+2)}{6}uI_{2}+\frac{2(n-1)}{n+1}v_{3}^{2}I_{3} \tag{57}\]
\[\partial_{t}v_{3} =\frac{1}{2}(d+3\eta-6)v_{3}+\left[2u+\frac{6(n-2)}{n+2}w_{4} \right]v_{3}I_{3}\] \[-\frac{n+6}{20}v_{5}I_{2}-\frac{6(n-2)}{n+1}v_{3}^{3}I_{4} \tag{58}\]
\[\partial_{t}u =(d+2\eta-4)u-\left[\frac{n+4}{10}u_{6}+\frac{6(n-1)}{5(1+n)(2+n )}v_{6}\right]I_{2}\] \[+\left[\frac{n+8}{3}u^{2}+\frac{12(n-1)(n+6)}{5(n+1)(n+2)}v_{3}v _{5}\right.\] \[+\left.\frac{24(n-2)(n-1)}{(n+1)(n+2)^{2}}w_{4}^{2}\right]I_{3}\] \[-\frac{12(n-1)\Big{(}(n+2)(n+6)u_{4}+12(n-2)w_{4}\Big{)}}{(n+1)(n +2)^{2}}v_{3}^{2}I_{4}\] \[+\frac{48(n-1)(3n-4)}{(n+1)^{2}(n+2)}v_{3}^{4}I_{5} \tag{59}\]
\[\partial_{t}w_{4} =(d+2\eta-4)w_{4}+\Bigg{[}v_{3}\bigg{(}\frac{8(n-3)(n+2)}{(n+1)(n +6)}x_{5}+\frac{12}{5}v_{5}\bigg{)}\] \[+6\frac{n^{2}-3n-2}{(1+n)(2+n)}w_{4}^{2}+4uw_{4}\bigg{]}I_{3}\] \[+12\Bigg{[}3\Big{(}\frac{2n+4-n^{2}}{(n+1)(n+2)}\Big{)}w_{4}-u \Bigg{]}v_{3}^{2}I_{4}\] \[-\frac{1}{30}((n+8)w_{6}+9v_{6})I_{2}+\frac{24(n-3)}{n+1}v_{3}^{4 }I_{5}. \tag{60}\]
These flow equations must be completed by the expression of \(\eta_{k}\). At LPA, \(\eta_{k}=0\) since \(Z_{k}=1\) for all \(k\). At LPA', the value of \(\eta_{k}\) depends on the value of the field where it is computed. We choose here \(\vec{\phi}=\vec{0}\). Then, the value of \(\eta_{k}\) is obtained from the flow equation of \(\Gamma_{ij}^{(2)}(p^{2};\vec{\phi})\) expanded at order \(p^{2}\) and evaluated at \(\vec{\phi}=\vec{0}\). It is given by:
\[\eta_{k}=2\frac{n-1}{n+1}v_{3}^{2}I_{\eta}. \tag{61}\]
Eq. (61) involves a new dimensionless integral:
\[I_{\eta} =\frac{Z_{k}^{2}k^{6-d}}{4\omega_{d}}\int_{q}\partial_{t}R_{k}(q^ {2})G^{4}(q)\Big{\{}Z_{k}+R_{k}^{\prime}(q^{2})\] \[+2\frac{q^{2}}{d}\Big{[}R_{k}^{\prime\prime}(q^{2})-2G(q)(Z_{k}+ R_{k}^{\prime}(q^{2}))^{2}\Big{]}\Big{\}}. \tag{62}\]
After some redefinitions of the couplings, we have checked that our equations coincide at order \(\phi^{6}\) with those of Ref. [17] except for some typos in this reference, confirmed by the authors.
A nice property of our flow equations is manifest in Eqs. (57) to (60). When \(n=1\), the flows of \(r=u_{2}\) and \(u=u_{4}\) no longer depend on \(v_{3},v_{5}\) or \(w_{4}\), and when \(n=2\), the flows of \(r,u\) and \(v_{3}\) no longer depend on \(w_{4}\). One can check that this is a general phenomenon, independent of the LPA': when \(n=1\) (Ising model) the flows of all the \(u\)'s are independent of the \(v\)'s, \(w\)'s, \(x\)'s, etc; when \(n=2\) the flows of all the \(u\)'s and \(v\)'s are independent of the \(w\)'s, \(x\)'s, etc, and so on. This property is on one hand trivial because for \(n=2\), for instance, the potential depends only on \(\rho\) and \(\tau_{3}\) and their flow equations cannot involve other couplings. On the other hand, this property, which is independent of the choice of tensors, is manifest in the RG flow equations only with improved tensors, which is the advantage of working with these tensors.
It is important to realize that the argument above does not imply that for \(n=1\) for instance, the couplings \(v\)'s, \(w\)'s, \(x\)'s, etc, vanish. They only decouple from the flows of the \(u\)'s in the limit \(n\to 1\) because in these flows they always contribute together with a prefactor proportional to \(n-1\).
In the following, we use the \(\Theta\)-regulator [62]:
\[R_{k}^{\theta}(q)=Z_{k}(k^{2}-q^{2})\Theta(1-q^{2}/k^{2}). \tag{63}\]
This regulator is particularly convenient because it allows us to analytically compute the integrals:
\[I_{n} =\left(1-\frac{\eta}{d+2}\right)\frac{1}{(1+r)^{n}}\] \[I_{\eta} =\frac{1}{2}\frac{1}{(1+r)^{4}}. \tag{64}\]
Moreover, it has been shown empirically on the O(\(N\)) models that at the LPA order, the best critical exponents are obtained with this regulator [61; 62] and although it does not regularize the DE from order \(\partial^{4}\), it is optimal in this sense at LPA.
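To make the structure of these flow equations concrete, the sketch below (written in Python with NumPy/SciPy, an implementation choice of ours and not part of the original analysis) assembles Eqs. (57)-(61) with the \(\Theta\)-regulator integrals of Eq. (64) and searches numerically for a fixed point. The higher-order couplings \(v_{5}\), \(x_{5}\), \(u_{6}\), \(v_{6}\) and \(w_{6}\) that enter Eqs. (57)-(60) are set to zero, as in the quartic truncation used in Sec. V.1.1 below, so the output is only meant to illustrate the procedure rather than reproduce the quantitative results of the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def beta(couplings, d, n):
    """Quartic-truncation LPA' beta functions of Eqs. (57)-(60), with the
    Theta-regulator integrals of Eq. (64) and eta from Eq. (61).
    The higher-order couplings v5, x5, u6, v6 and w6 are set to zero."""
    r, v3, u, w4 = couplings
    I_eta = 0.5 / (1 + r)**4                        # Eq. (64)
    eta = 2 * (n - 1) / (n + 1) * v3**2 * I_eta     # Eq. (61)
    I = lambda m: (1 - eta / (d + 2)) / (1 + r)**m  # Eq. (64)

    beta_r = (                                                      # Eq. (57)
        (eta - 2) * r - (n + 2) / 6 * u * I(2)
        + 2 * (n - 1) / (n + 1) * v3**2 * I(3)
    )
    beta_v3 = (                                                     # Eq. (58)
        0.5 * (d + 3 * eta - 6) * v3
        + (2 * u + 6 * (n - 2) / (n + 2) * w4) * v3 * I(3)
        - 6 * (n - 2) / (n + 1) * v3**3 * I(4)
    )
    beta_u = (                                                      # Eq. (59)
        (d + 2 * eta - 4) * u
        + ((n + 8) / 3 * u**2
           + 24 * (n - 2) * (n - 1) / ((n + 1) * (n + 2)**2) * w4**2) * I(3)
        - 12 * (n - 1) * ((n + 2) * (n + 6) * u + 12 * (n - 2) * w4)
          / ((n + 1) * (n + 2)**2) * v3**2 * I(4)
        + 48 * (n - 1) * (3 * n - 4) / ((n + 1)**2 * (n + 2)) * v3**4 * I(5)
    )
    beta_w4 = (                                                     # Eq. (60)
        (d + 2 * eta - 4) * w4
        + (6 * (n**2 - 3 * n - 2) / ((1 + n) * (2 + n)) * w4**2 + 4 * u * w4) * I(3)
        + 12 * (3 * (2 * n + 4 - n**2) / ((n + 1) * (n + 2)) * w4 - u) * v3**2 * I(4)
        + 24 * (n - 3) / (n + 1) * v3**4 * I(5)
    )
    return np.array([beta_r, beta_v3, beta_u, beta_w4])

# example: Wilson-Fisher-like fixed point of the n = 1 (Ising) case in d = 3
fp = fsolve(beta, x0=[-0.1, 0.0, 0.5, 0.0], args=(3.0, 1.0))
print("fixed point (r*, v3*, u*, w4*):", fp)
```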
## V The critical behavior of the \(q\)-state Potts model and the line \(d_{c}(q)\)
With the flow equations obtained to order \(\phi^{9}\), we can study the existence and stability of the FPs of the \(q-\)state Potts model for continuous values of both \(d\) and \(q\). At order 9 of the field expansion of the LPA and LPA' flow equations, we calculate numerically the shape of the \(q_{c}(d)\) curve separating, in the \((d,q)\) plane, the region of first-order phase transition that lies above this curve from the second-order region that lies below it, see Fig. 3. We show that below this curve a critical and a tricritical FP coexist; they collide when \(q\to q_{c}(d)^{-}\) and disappear (more precisely, become complex) for \(q>q_{c}(d)\).
Before doing that, following Ref. [40], we start this study by first analyzing the FPs existing in the neighborhood of \(q=2\) and \(d=4\), a case that, in many respects, can be treated exactly, that is, without requiring the LPA or LPA' approximations.
### Fixed points in \(d=4-\epsilon\) and \(q=2+\delta\)
The critical behavior of the Ising model is associated with the Wilson-Fisher FP in \(d<4\). When \(d\to 4^{-}\), the Wilson-Fisher FP approaches the Gaussian FP, which is tricritical, and both FPs collide in \(d=4\), which is therefore the upper critical dimension of this model. We can therefore expect that \(q_{c}(d=4)=2\). We should of course retrieve this from our flow equations except for one subtlety: when embedded in a set of more general models, here the \(q-\)state Potts models, a critical FP can become multicritical because the other couplings can be relevant at this FP. We therefore have to restudy the stability of the Ising FP, in particular when \(n\) is close to 1 and \(d\) close to 4, to determine \(q_{c}(d)\) in the vicinity of \(d=4\). This requires determining the set of all FPs, and we start with the perturbative ones, that is, those that are close to the Gaussian FP when \(d\to 4\). It is interesting to note that perturbative multicritical FPs for \(q=0\) and \(q=1\) have been found in a perturbative analysis performed in \(d=10/3-\epsilon\) [63].
#### v.1.1 Perturbative fixed points in \(d=4-\epsilon\)
As is well known [34], the LPA' flow of the potential is one-loop exact when \(d\to 4\) and the couplings that are irrelevant with respect to the Gaussian FP can be neglected at leading order in \(\epsilon=4-d\). The relevance of the couplings near the Gaussian FP is given by dimensional analysis. By inspection, we find that only \(r\), \(v_{3}\), \(u\) and \(w_{4}\), with respective dimensions 2, \(1+\epsilon/2\), \(\epsilon\) and \(\epsilon\), are relevant with respect to the Gaussian FP in \(d=4-\epsilon\). This FP is therefore pentacritical. All other perturbative FPs are at least tricritical because the scaling dimensions of \(r\) and \(v_{3}\) cannot become negative when moving from the Gaussian FP to a perturbative FP which, by definition, is at a distance of order \(\epsilon\) from the Gaussian FP. As a consequence, no perturbative FP can control a second order phase transition near \(d=4\).
Neglecting all couplings associated with terms of degree higher than 4, we find that the FP equation for \(v_{3}\) is given by:
\[v_{3}^{*}\left(\frac{1}{2}+\left(2u^{*}+\frac{6(n-2)}{n+2}w_{4}^{*}\right)I_ {3}-\frac{6(n-2)}{n+1}(v_{3}^{*})^{2}I_{4}\right)=0. \tag{65}\]
Its only solution with couplings at most of order \(\epsilon\) is \(v_{3}^{*}=0\). Notice that around the Gaussian FP, \(v_{3}\) is the only relevant coupling constant associated with an odd term in the fields. Therefore, all perturbative FPs have an extra \(\mathbb{Z}_{2}\) symmetry consisting in changing the sign of all fields. The total symmetry group is therefore enlarged to \(S_{q}\times\mathbb{Z}_{2}\) at these FPs.
We have found three non-Gaussian FPs with couplings of order \(\epsilon\). The first one is the usual O(\(n\))-invariant Wilson-Fisher FP with \(v_{3}=w_{4}=0\) and \(u\neq 0\). Notice that it is only for \(n=1\) that this FP is in the Ising universality class: for a generic noninteger value of \(n\) it is the extension to real values of \(n\) of the Wilson-Fisher FP. Two other FPs with \(w_{4}\neq 0\) exist and are of order \(\epsilon\). We call them P\({}_{1}\) and P\({}_{2}\). Their stability depends on the value of \(n\). In all cases, there is one tricritical FP and two tetracritical ones. The tricritical FP is the Wilson-Fisher FP for \(n\leq 4\), P\({}_{1}\) for \(4<n\leq 5\) and P\({}_{2}\) for \(n>5\). In \(d=3.9\), the flows in the coupling constant space \((u,w_{4})\) are shown in Fig. 4 for various values of \(n\). It is shown how the P\({}_{1}\) and \(O(n)\) FPs exchange their stability at \(n=4\) and the same for P\({}_{1}\) and P\({}_{2}\) at \(n=5\) (see footnote 3).
Footnote 3: To avoid the most relevant direction, we have replaced \(r\) by zero in the flow equations to represent this figure. This does not modify the leading behavior of the flow equations for \(u\) and \(w_{4}\) near the FPs for \(d\simeq 4\).
We conclude from the above discussion that if there exists a second order transition it cannot be controlled by a purely perturbative FP. It is shown in the next section that near \(d=4\) and \(n=1\) one can prove by a double expansion in \(\epsilon=4-d\) and \(\delta=n-1\) the existence of a critical FP. Although, as just discussed, this FP is not fully perturbative, several of its properties can be analysed perturbatively, so we will refer to it as "semi-perturbative".
#### v.1.2 Semi-perturbative fixed points in \(d=4-\epsilon\) and \(q=2+\delta\)
As we have seen above in Eqs. (57) to (60), the flows of the couplings of the Ising model are recovered from the general flow equations of the \(q-\)state Potts model because in the \(n\to 1\) limit, the \(v,w,x,\dots\) couplings decouple from the flows of the \(u\)'s as they are always accompanied by a factor \(n-1\). This is also the case for all isotropic couplings, not only those included in the LPA', see Sec. IV.3. As shown by Newman, this is sufficient to derive a double expansion in \(\epsilon=4-d\) and \(\delta=n-1\) for all FPs, including the critical one, where the isotropic sector can be analyzed perturbatively, except for some non-perturbative constants that come from the anisotropic sector, see below.
Figure 4: Renormalization group flow in \(d=3.9\) in the plane \((u,w_{4})\) for \(n\in\{1,3,4,4.5,5,8\}\) and \(k=4\). For the sake of simplicity, we have imposed \(r=0\) and \(v_{3}=0\). In the flow for \(n=4\), the \(O(n)-\)invariant and P\({}_{1}\) FPs coincide. In the flow for \(n=5\), the P\({}_{1}\) and P\({}_{2}\) FPs coincide.
We show now that in this double expansion the flows of the O(\(n\))-invariant couplings, that is, of the \(u\)'s, become perturbative without having recourse to the LPA or LPA'. In the limit \(\delta\to 0\), they become the flows of the couplings of the Ising model in \(d=4-\epsilon\) and for \(\delta\) nonvanishing and small these flows are modified by terms of order \(\delta\). For \(n=1\), the flow of \(u\) reads:
\[\partial_{t}u=-\epsilon u+3u^{2}I_{3}+\mathcal{O}(\epsilon^{3}) \tag{66}\]
where \(I_{3}\) is
\[I_{3}=1+\mathcal{O}\left(\epsilon\right) \tag{67}\]
independently of the choice of regulator \(R_{k}(q)\). Eq. (66) follows from the scaling in \(\epsilon\) of the couplings near the Wilson-Fisher FP: \(r,u\sim\mathcal{O}(\epsilon)\), the other isotropic couplings such as \(u_{6}\) are, at least, of order \(\epsilon^{3}\), \(\eta\sim\mathcal{O}(\epsilon)^{2}\) and \(\Gamma^{(4)}\sim u+\mathcal{O}(\epsilon)^{2}\).
For \(n=1+\delta\) with \(0<\delta\ll 1\), the flow of \(u\) depends on all the couplings and is therefore modified by a term proportional to \(\delta\):
\[\partial_{t}u=-\epsilon u+3u^{2}I_{3}+A\,\delta+\mathcal{O}(\delta^{2})+ \mathcal{O}(\epsilon^{3})+\mathcal{O}(\epsilon\delta). \tag{68}\]
where \(A\) depends on both isotropic and anisotropic couplings. From Eq. (68), we thus find two possible FPs that we call SP\({}_{\pm}\) and that correspond to:
\[u_{\pm}^{*}=\frac{\epsilon\pm\sqrt{\epsilon^{2}-12I_{3}A_{*}\delta}}{6I_{3}}. \tag{69}\]
Let us assume that at the FP, \(A_{*}>0\) which is what we find at LPA and LPA'. Then, these FPs can only exist if
\[\epsilon^{2}\geq 12I_{3}A_{*}\delta, \tag{70}\]
or, equivalently,
\[q\leq q_{c}(d)=2+\frac{\epsilon^{2}}{12I_{3}A_{*}}+\mathcal{O}(\epsilon^{3}) \equiv 2+a\epsilon^{2}+\mathcal{O}(\epsilon^{3}). \tag{71}\]
It is easy to compute the stability of these two FPs and we find that SP\({}_{+}\) is once unstable and is thus critical, while SP\({}_{-}\) is twice unstable and is thus tricritical. This is consistent with the cartoon of the LPA' Renormalization Group flow shown in Fig. 5 and with the fact that when \(\delta\to 0\) at fixed \(\epsilon\), SP\({}_{+}\) merges with the Wilson-Fisher FP of the Ising model and SP\({}_{-}\) with the Gaussian FP.
As expected, the mechanism for switching from a first to a second order transition is the collision between two FPs, one being critical (SP\({}_{+}\)) and the other one (SP\({}_{-}\)) tricritical. This occurs when \(u_{+}^{*}=u_{-}^{*}\) which determines the equation of the curve \(q_{c}(d)\).
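The collision of SP\({}_{+}\) and SP\({}_{-}\) encoded in Eq. (69) is easy to visualize numerically. The fragment below (Python; the numerical values are illustrative, with \(I_{3}=1\) taken from Eq. (64) in \(d=4\) and \(A_{*}\) reconstructed from the LPA' estimate \(a\simeq 0.104\) of Table 1 via Eq. (71)) tracks the two roots as \(\delta=q-2\) grows at fixed \(\epsilon\).

```python
import numpy as np

eps = 0.1                      # eps = 4 - d, close to the upper critical dimension
I3 = 1.0                       # Theta-regulator value in d = 4 (Eq. (64) with eta = r = 0)
a = 0.104                      # LPA' estimate of the coefficient in Eq. (71), Table 1
A_star = 1.0 / (12 * I3 * a)   # inverted relation a = 1 / (12 I3 A*)

delta_c = a * eps**2           # collision point, i.e. q_c - 2 of Eq. (71)
for delta in np.linspace(0.0, 1.05 * delta_c, 8):
    disc = eps**2 - 12 * I3 * A_star * delta
    if disc >= 0:
        u_plus = (eps + np.sqrt(disc)) / (6 * I3)    # critical FP SP+
        u_minus = (eps - np.sqrt(disc)) / (6 * I3)   # tricritical FP SP-
        print(f"delta = {delta:.2e}:  u+* = {u_plus:.4f},  u-* = {u_minus:.4f}")
    else:
        print(f"delta = {delta:.2e}:  fixed points are complex (first-order region)")
```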
Up to now, the determination of \(q_{c}(d)\) is exact in the infinitesimal neighborhood of \(d=4\) and \(q=2\), except for the value of \(A_{*}\) that we have assumed to be positive. However, to calculate \(a\) or \(A_{*}\), an approximation going beyond perturbation theory must be performed. Here we use the LPA' to compute them. Doing so requires obtaining the leading behavior of the right-hand side of Eq. (60) in both \(\epsilon\) and \(\delta\). We have shown in Sec. V.1.1 that for all the perturbative FPs in \(\epsilon\), \(v_{3}^{*}=0\) and all of them are multicritical. Therefore, the critical FP, if any, cannot be fully perturbative. We assume now, and this will be checked below, that this critical FP corresponds to \(v_{3}^{*}\neq 0\). The exact analysis performed above shows that for SP\({}_{\pm}\), \(r,u\sim\mathcal{O}(\epsilon)\), \(u_{6}^{*}\sim\delta\) and \(v^{*},w^{*},\cdots\sim\mathcal{O}(1)\).
Figure 5: Renormalization group flow in \(d=3.6\) in the plane \((v_{3},u)\) for \(n\in\{1,1.009,1.01\}\) and \(k=4\). For the sake of simplicity, we have imposed \(r=0\) and \(\partial_{t}w_{4}=0\). The latter equation has two solutions and we choose the only one involving a critical FP.
Inserting these scalings in Eq. (60) and performing the double expansion in \(\epsilon\) and \(\delta\), we obtain:
\[A_{\text{LPA'}}(v) =-\frac{1}{5}v_{6}I_{2}+\big{(}\frac{14}{5}v_{3}v_{5}-\frac{4}{3}w_ {4}^{2}\big{)}I_{3}+8v_{3}^{2}w_{4}I_{4}-4v_{3}^{4}I_{5}\] \[\quad-\frac{1}{2}\lim_{\delta\to 0}\frac{u_{6}}{\delta}I_{2}. \tag{72}\]
It is important to notice that although the LPA is one-loop exact, the calculation of \(A\) is not controlled by a one-loop analysis because it depends on anisotropic couplings that are not small near \(d=4\). To overcome this difficulty, we use here the LPA and LPA', see Sec. IV.3, which makes the calculation of \(A\) in Eq. (72) approximate. Another source of error in our calculation of \(A\) comes from the field expansion that we have to implement when \(n\) is not an integer. This error is however under control as can be seen in Table 1 where it is manifest that the coupling constants of lowest orders involved in Eq. (72) converge rather fast with the order of the field truncation. It is also important to notice that since the couplings \(v^{*},w^{*},\ldots\) are of order 1, SP\({}_{\pm}\) are not fully perturbative FPs even with respect to the double expansion in \(\epsilon\) and \(\delta\) and we call them for this reason semi-perturbative FPs, hence their names SP\({}_{\pm}\).
We have computed the coefficient \(a\) in Eq. (71) up to order 9 in the field expansion, both by using Eqs. (71) and (72) in \(d=4\) and by extrapolating the curve \(q_{c}(d)\) to \(d=4\). This curve is obtained as the location of the collision of SP\({}_{+}\) and SP\({}_{-}\) when \(q\) is varied. When they collide, the first irrelevant eigenvalue of the linearized flows around these FPs vanishes. From a numerical point of view, we find it more convenient and accurate to characterize the curve \(q_{c}(d)\) as the value of \(q\) at fixed \(d\) where this eigenvalue vanishes rather than looking for the value of \(q\) where both FPs have disappeared. The two methods used to compute \(A\), namely extrapolating the curve \(q_{c}(d)\) to \(d=4\) and using Eq. (72) directly in \(d=4\), yield the same results up to numerical errors. This shows that the scalings in \(\delta\) of the different coupling constants assumed to derive Eq. (72) are indeed correct.
The constant \(I_{3}A_{\text{LPA}}(v)\) in Eq. (69) is dimensionless and it is easy to check that it is therefore the same when expressed in terms of dimensionful or dimensionless quantities. This allows us to use the FP values of the couplings to compute it. Moreover, all integrals involved in Eq. (72) can be computed in \(d=4\) by taking \(\eta=r=0\) and with the \(\Theta-\)regulator defined in Eq. (63), they are all equal to one.
We show in Table 1 the values of the non-O(\(n\))-invariant couplings in \(d=4\) and of the parameter \(a\) defined in Eq. (71) for different orders \(k\) of the field truncation. The evolution of \(a\) with \(k\) clearly indicates that this number converges to \(0.104(2)\), where the error bar only takes into account the error induced by the field truncation and not the one coming from neglecting the higher orders of the derivative expansion.
Let us finally discuss the case \(d>4\). For \(q>2\), imposing \(u^{*}\) to be real again requires that Eq. (70) be fulfilled. However, both \(u^{*}_{\pm}\) are then negative. While the meaning of FP potentials for noninteger values of \(q\) is not obvious, it is reasonable to assume that negative values of \(u\) are unacceptable, which would imply that the phase transition is of first order. On the other hand, if \(\delta<0\), both \(u^{*}_{\pm}\) are real, only \(u^{*}_{+}\) is positive, and the transition is of second order. We conclude that for \(d>4\), the transition is probably of second order if and only if \(q\leq 2\), as previously suggested in the literature [2; 17; 40].
### The critical line \(q_{c}(d)\)
In this subsection, we extend the analysis performed above around \(d=4\) to lower dimensions. As discussed in the previous section, we use the LPA' and a field expansion truncated to order \(k\leq 9\). The resulting equations are Eqs. (57) to (60) for the lower-order couplings; those for the higher-order couplings can be found in the supplementary material.
As said above, the curve \(q_{c}(d)\) is the location in the \((d,q)\)-plane where the critical and tricritical FPs collide. Equivalently, it is for each \(d\), the value of \(q\) for which both the first irrelevant eigenvalue of the flow at the critical FP vanishes and
| order \(k\) | 4 | 5 | 6 | 7 | 8 | 9 | LPA' |
|---|---|---|---|---|---|---|---|
| \(v_{3}\) | 0.921 | 1.020 | 1.021 | 1.003 | 0.996 | 0.999 | 0.999(3) |
| \(w_{4}\) | 0.772 | 1.237 | 1.325 | 1.278 | 1.250 | 1.256 | 1.256(6) |
| \(v_{5}\) | | -1.027 | -1.520 | -1.543 | -1.484 | -1.482 | -1.482(2) |
| \(x_{5}\) | | 1.143 | 1.664 | 1.640 | 1.556 | 1.560 | 1.560(6) |
| \(v_{6}\) | | | -1.584 | -2.266 | -2.339 | -2.308 | -2.31(3) |
| \(w_{6}\) | | | -2.483 | -3.087 | -2.802 | -2.696 | -2.7(1) |
| \(y_{6}\) | | | 1.294 | 1.524 | 1.309 | 1.253 | 1.25(6) |
| \(v_{7}\) | | | | 0.857 | 0.452 | -0.002 | 0.0(5) |
| \(w_{7}\) | | | | -7.462 | -9.819 | -9.651 | -9.7(1) |
| \(x_{7}\) | | | | -0.971 | 0.873 | 2.196 | 2(1) |
| \(z_{7}\) | | | | -0.142 | -1.132 | -1.678 | -1.7(6) |
| \(u_{6}/\delta\) | | | -0.809 | -1.336 | -1.104 | -1.105 | -1.105(1) |
| \(a\) | 0.053 | 0.084 | 0.113 | 0.098 | 0.106 | 0.104 | 0.104(2) |
Table 1: Anisotropic coupling constants and \(\lim_{\delta\to 0}u_{6}/\delta\) for \(d=4\) and \(q=2\) at SP\({}_{\pm}\). The coefficient \(a\) defined in Eq. (71) is computed at successive orders of the field truncation. The column LPA’ represents an estimate of the nontruncated LPA’ result obtained from the highest implemented order \(k=9\) and the error is the difference with the previous order \(k=8\). Notice that this error bar takes care only of the error coming from the field expansion and not from the truncation of the DE at LPA’.
Figure 6: Second most relevant eigenvalue as a function of \(q\) for the \(k=4\) truncation in \(d=3.9\) for the critical (red) and the tricritical (orange) FPs.
the second most relevant eigenvalue of the flow at the tricritical FP vanishes. For \(q>q_{c}(d)\), the two FPs have disappeared and, hence, the transition is of first order. More precisely, for \(q>q_{c}(d)\), the two FPs are complex. An example of the variation of the second most relevant eigenvalue \(e_{2}\) with \(q\) is shown in Fig. 6 for \(d=3.9\) and \(k=4\). The curve \(q_{c}(d)\) is given in Fig. 7 for \(k=4,\cdots,9\).
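As a purely illustrative sketch of this criterion, the fragment below reuses the `beta` function defined in the sketch at the end of Sec. IV (quartic truncation only, so the numbers it produces should not be trusted and the initial guess may need tuning) and follows the second most relevant eigenvalue of the numerically differentiated stability matrix as \(q=n+1\) is increased at fixed \(d\).

```python
import numpy as np
from scipy.optimize import fsolve

def stability_eigenvalues(fp, d, n, h=1e-6):
    """Eigenvalues of the stability matrix -d(beta_i)/d(g_j) at a fixed point,
    obtained by central finite differences; positive eigenvalues correspond
    to relevant directions."""
    dim = len(fp)
    jac = np.zeros((dim, dim))
    for j in range(dim):
        dg = np.zeros(dim)
        dg[j] = h
        jac[:, j] = (beta(fp + dg, d, n) - beta(fp - dg, d, n)) / (2 * h)
    return np.sort(np.linalg.eigvals(-jac).real)[::-1]

d = 3.9
guess = np.array([-0.05, 0.5, 0.3, 0.1])   # rough guess for the critical FP (v3* != 0)
for q in np.linspace(2.0, 2.001, 6):       # near d = 4 the collision sits very close to q = 2, cf. Eq. (71)
    n = q - 1.0
    fp, infodict, ier, msg = fsolve(beta, guess, args=(d, n), full_output=True)
    if ier != 1:
        print(f"q = {q:.4f}: no real fixed point found (first-order region)")
        continue
    e2 = stability_eigenvalues(fp, d, n)[1]
    print(f"q = {q:.4f}: second most relevant eigenvalue e2 = {e2:+.5f}")
    guess = fp                             # follow the fixed-point branch as q increases
```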
At each order \(k\) of the field expansion, we have found numerically that at sufficiently small values of \(d\) the collision of FPs no longer occurs (see footnote 4). We consider that our calculation is no longer under control in these dimensions because the field expansion becomes unreliable there. As expected, the dimension below which the field expansion stops working typically decreases when \(k\) increases: the larger \(k\), the better the convergence of the field expansion. Quite unexpectedly, we observe, without being able to explain it, that the convergence of the expansion is much better for odd values of \(k\). For instance, even though \(k=8\) yields results for \(d\in[2.9,3]\cup[3.4,4]\) compatible with those obtained both for \(k=7\) and \(k=9\), the mechanism of annihilation of FPs for \(d\in[3,3.4]\) does not take place, which is clearly an anomaly. This kind of anomaly does not occur for odd values of \(k\), and moreover the dimension where the expansion stops working is systematically much smaller for odd \(k\) than for even \(k\).
Footnote 4: More precisely, we find that the curve \(q_{c}(d)\) can either show at small \(d\) an unphysical jump at a given order \(k\) (e.g. for \(d=3\) and \(k=6\)) or can vary so much from order \(k\) to order \(k+1\) that it is a clear indication of the nonconvergence of the field expansion, at least for the values of \(k\) we are able to implement. We also observe that the convergence is better when we consider only the results obtained for odd values of \(k\).
Our results clearly show that our determination of \(q_{c}(d)\) at the level of the LPA' is under control at least for \(d\in[2.9,4]\). We find in particular at LPA': \(q_{c}(3)=2.10\) for \(k=7\) and \(q_{c}(3)=2.11\) for \(k=9\) and thus \(q_{c}(3)=2.11(1)\). It is important to realize that the error bar given previously only takes into account the error induced by the field truncation to order \(k=9\) and not the error coming from truncating the DE at LPA', which produces a supplementary error.
A rough estimate of the error coming from the truncation of the DE at its lowest order is the difference between the determinations of \(q_{c}(d)\) with either the LPA (see Fig. 8) or the LPA'. The rationale behind this choice is that the LPA' includes the RG evolution of \(Z_{k}\) whereas the LPA does not. This is of course only an indication of the impact of the renormalization of the derivative terms on \(q_{c}(d)\) and should not be taken as a precise value of the error bar. In particular, the \(Z_{k}\) term that differentiates the LPA and LPA' is an isotropic (\(O(n)-\)invariant) term. Therefore, when \(d\) approaches four, its contribution is suppressed for reasons discussed in Sect. V.1. This implies that for \(d\to 4^{-}\), the error coming from neglecting higher orders of the DE is underestimated for quantities that are sensitive to the anisotropic sector.
Figure 8: Curve \(q_{c}(d)\) for each order \(k\) of the field expansion in LPA. The lower dimensions that can be reached for each order of the field expansion are: \(d=3.4\) for \(k=4\), \(d=2.7\) for \(k=5\), \(d=3.2\) for \(k=6\), \(d=3.0\) for \(k=7\), \(d=3.1\) for \(k=8\) and \(d=2.9\) for \(k=9\). Notice that for \(k=8\) no reliable determination of \(q_{c}(d)\) can be obtained in the range \(3.1<d<3.5\).
Figure 7: Curve \(q_{c}(d)\) for each order \(k\) of the field expansion in LPA’. The lower dimensions that can be reached for each order of the field expansion are: \(d=3.4\) for \(k=4\), \(d=2.6\) for \(k=5\), \(d=3.1\) for \(k=6\), \(d=2.7\) for \(k=7\), \(d=2.9\) for \(k=8\) and \(d=2.5\) for \(k=9\). Notice that for \(k=8\) no reliable determination of \(q_{c}(d)\) can be obtained in the range \(3.0<d<3.4\).
Figure 9: Curve \(q_{c}(d)\) in LPA’ and \(k=9\) (red triangles). The red region represents an estimate of the confidence intervals of our results. The black points are previous results: \(q_{c}(d=4-\epsilon)=2+O(\epsilon^{2})\)[40] and \(q_{c}(d=3)=2.15\)[28], \(2.2\)[26], \(2.45\)[30] and \(2.57\)[64].
A general estimate of the errors generated by successive orders of the DE has been proposed and tested successfully for both \(O(N)\) models [35; 36; 37] and for a model with \(\mathbb{Z}_{4}\) anisotropies [39]. This requires taking into account at least the second order of the DE, which is beyond the scope of the present work. We can, however, expect that our rough error bar estimate is appropriate for quantities dominated by the isotropic sector, such as the exponents \(\nu\) or \(\eta\).
We show in Fig. 9 our final estimate of the curve \(q_{c}(d)\). The central values are obtained for \(k=9\) with LPA'. An estimate of the confidence intervals is represented by a red region obtained by summing the errors coming from the field expansion and the DE. For \(k=9\), we find \(q_{c}^{\text{LPA}}(d=3)-q_{c}^{\text{LPA'}}(d=3)=0.06\) which is much larger than the error coming from the field truncation which is only \(0.01\). Our final estimate is therefore \(q_{c}(d=3)=2.11(7)\).
For \(d<3\), the difference between the LPA and the LPA' can be qualitative. In particular, for \(k=9\), the procedure to estimate the curve \(q_{c}(d)\) no longer works for the LPA below \(d=2.9\), whereas the LPA' works down to \(d=2.5\). For \(d\simeq 2.9\), we consider that our approximation scheme is no longer reliable even if \(q_{c}(d)\) can be computed. Another indication of the limitations of our approximations at low dimension comes from the fact that the anomalous dimension along the curve \(q_{c}(d)\) grows rapidly for dimensions \(d<3\), as can be seen in Fig. 10.
Let us finally notice that since for \(q=2\) and \(q=3\) only the invariants \(\rho\) and \(\tau_{3}\) play a role, we could have naively expected them to continue to play a dominant role for all values of \(q\) in this range. This turns out to be wrong: for \(k=9\), for instance, it is quantitatively important to include all invariants up to \(\tau_{9}\).
## VI Summary and outlook
In the present paper, we study the \(q-\)state Potts model for arbitrary real values of \(q\) and \(d\) at leading order of the Derivative Expansion (Local Potential Approximation) of the Non-Perturbative Renormalization Group. We also implement an improved variant, commonly referred to as the LPA', which allows for an anomalous dimension of the field. Our main goal is to compute the curve \(q_{c}(d)\), which is the boundary between the first-order and second-order phase transition regions, respectively for \(q>q_{c}(d)\) and \(q<q_{c}(d)\).
For \(q\in\mathbb{N}\), the free energy associated with the Potts model depends on \(q-1\) independent invariants under the permutation symmetries. On the other hand, for non-integer \(q\), it depends on infinitely many invariants. The implementation of the LPA and LPA' therefore requires an additional approximation, so as to deal with only a finite number of them. The field expansion of the effective potential is such an approximation because it keeps only a finite number of these invariants when truncated to a finite order [40; 17]. In a previous work [40], such an expansion was implemented in the context of Wilson's RG up to the sixth power of the field. This did not allow the authors to go reliably to dimensions below \(d\sim 3.4\). We extend this result with NPRG up to the ninth order which allows us to reach \(d=3\) in a controlled way. We obtain \(q_{c}(d=3)=2.11(7)\), which is in line with previous studies and which confirms that the phase transition for \(q=3\) in \(d=3\) is of first order [28; 29; 30; 32; 64].
The study of successive orders of the field expansion allows us to test its convergence. We observe that it deteriorates progressively as the dimension is decreased, as expected. In addition, the comparison between the LPA and the LPA' allows us to obtain a rough estimate of the influence of the renormalization of derivative terms and thus to analyze how reliable these approximations are. This analysis shows that at this level of approximation our estimate of \(q_{c}(d)\) is no longer under control below \(d\lesssim 2.8\) and that it is possible neither to reach \(d=2\) in a controlled way nor to reach the dimension where \(q_{c}=3\).
A possible extension of the present work is to analyze the \(q=0\) and \(q=1\) cases which were analyzed by similar methods in Ref. [17]. The most interesting dimension, \(d=3\), was out of reach of this study that was performed at order \(k=6\) of the field expansion. It would be interesting to see whether the expansion up to order \(k=9\) that we have implemented in the present work allows us to reliably study the three dimensional case also for these values of \(q\).
In addition to these physical applications, there are two different ways of going beyond the present analysis.
First, if we want to keep \(q\) arbitrary, we have to deal with an infinite number of invariants, which forces us to perform a field expansion. Even if we could imagine including second-order DE terms, which is most probably extremely tedious, it is not at all clear that continuing to perform a field expansion would allow us to reach \(d=2\), because of the non-convergence of this expansion in low dimensions. A possible way out of this difficulty could be to work fully functionally with the isotropic invariant \(\rho\) and to expand in the anisotropic couplings (see, for example, [65; 39]). This intermediate procedure seems feasible and we plan to implement it in the near future.
Figure 10: Anomalous dimension \(\eta(q=q_{c})=\eta_{c}\) as a function of \(d\) for all implemented orders of the field expansion. These values of \(\eta_{c}\) suggest that the LPA' is not sufficiently reliable to compute \(q_{c}\) for \(d<3\).
Second, we can avoid the problem of the infinite number of invariants by considering only integer values of \(q\) and, at first, \(q=3\), where there are only two invariants, \(\rho\) and \(\tau_{3}\). As explained above, a reliable determination of \(d_{c}(q=3)\) surely requires implementing the second order of the derivative expansion. We are currently analyzing the corresponding flow equations using the same techniques developed in [39] for \(\mathbb{Z}_{4}\)-invariant systems and in [66] for the study of frustrated magnetic systems. We expect that the implementation of the second order of the DE will enable us not only to reliably calculate \(d_{c}(q=3)\) and study the two-dimensional case, but ideally also to determine error bars in dimensions where LPA' results are available.
###### Acknowledgements.
We are very grateful to Alessandro Codello for valuable comments on the manuscript. C. S. and N. W. acknowledge the support of the Programa de Desarrollo de las Ciencias Básicas (PEDECIBA). This work received support from the French-Uruguayan Institute of Physics project (IFUUR) and from grant FCE-1-2021-1-166479 of the Agencia Nacional de Investigación e Innovación (Uruguay).
|
2309.09782 | Modulation to the Rescue: Identifying Sub-Circuitry in the Transistor
Morass for Targeted Analysis | Physical attacks form one of the most severe threats against secure computing
platforms. Their criticality arises from their corresponding threat model: By,
e.g., passively measuring an integrated circuit's (IC's) environment during a
security-related operation, internal secrets may be disclosed. Furthermore, by
actively disturbing the physical runtime environment of an IC, an adversary can
cause a specific, exploitable misbehavior. The set of physical attacks consists
of techniques that apply either globally or locally. When compared to global
techniques, local techniques exhibit a much higher precision, hence having the
potential to be used in advanced attack scenarios. However, using physical
techniques with additional spatial dependency expands the parameter search
space exponentially. In this work, we present and compare two techniques,
namely laser logic state imaging (LLSI) and lock-in thermography (LIT), that
can be used to discover sub-circuitry of an entirely unknown IC based on
optical and thermal principles. We show that the time required to identify
specific regions can be drastically reduced, thus lowering the complexity of
physical attacks requiring positional information. Our case study on an Intel
H610 Platform Controller Hub showcases that, depending on the targeted voltage
rail, our technique reduces the search space by around 90 to 98 percent. | Xhani Marvin Saß, Thilo Krachenfels, Frederik Dermot Pustelnik, Jean-Pierre Seifert, Christian Große, Frank Altmann | 2023-09-18T13:59:57Z | http://arxiv.org/abs/2309.09782v1 | # Modulation to the Rescue: Identifying Sub-Circuitry in the Transistor Morass for Targeted Analysis
###### Abstract.
Physical attacks form one of the most severe threats against secure computing platforms. Their criticality arises from their corresponding threat model: By, e.g., passively measuring an integrated circuit (IC)'s environment during a security-related operation, internal secrets may be disclosed. Furthermore, by actively disturbing the physical runtime environment of an IC, an adversary can cause a specific, exploitable misbehavior. The set of physical attacks consists of techniques that apply either globally or locally. When compared to global techniques, local techniques exhibit a much higher precision, hence having the potential to be used in advanced attack scenarios. However, using physical techniques with additional spatial dependency expands the parameter search space exponentially. In this work, we present and compare two techniques, namely laser logic state imaging (LISI) and lock-in thermography (LIT), that can be used to discover sub-circuitry of an entirely unknown IC based on optical and thermal principles. We show that the time required to identify specific regions can be drastically reduced, thus lowering the complexity of physical attacks requiring positional information. Our case study on an Intel Ho10 Platform Controller Hub showcases that, depending on the targeted voltage rail, our technique reduces the search space by around 90 % to 98 %.
Hardware Security, Reverse Engineering, Integrated Circuits, ASIC
showed for a given micro-controller unit (MCU), that flip-flops (FFs) can be identified, which represent lucrative targets for LFI in general. While previous work successfully identified areas of interest in specific circumstances, the general identification of regions on a black box silicon die still poses a hard task (Bartos et al., 2017).
In this work, we propose the identification of sub-circuitry based on the modulation of specific, physically isolated voltage supplies. The modulation of a particular circuitry of interest via its voltage supply causes local physical effects, which can be measured by techniques commonly encountered in the IC failure analysis (FA) domain. By modulating a single voltage rail while leaving the others unmodified, the external modulation manifests, e.g., in local temperature variation or a change in amplitude and phase of the reflected light when scanning over the chip with a laser.
**Our contributions.** We propose lock-in thermography (LIT) and laser logic state imaging (LLSI) as techniques for fast and targeted reverse engineering to simplify and speed up subsequent analysis and attacks. As a case study, we evaluate our approach on a recent and highly complex technology, i.e., a system-on-chip (SoC) manufactured by Intel alongside its 12th Gen processor series. In this regard, we build a custom printed circuit board (PCB) in order to be able to precisely control the individual power rails in an isolated manner. Based on our prototype, we show that the position of isolated functional blocks can be identified on the die. Finally, we compare LIT and LLSI concerning their reverse engineering capabilities, resolution, and acquisition time.
## 2. Background
Failure analysis (FA) represents one of the last steps of the overall application-specific integrated circuit (ASIC) manufacturing process. After a wafer of ICs has been manufactured by the semiconductor fabrication plant, the so-called yield determines the ratio of functional and non-functional ICs. For a semiconductor product to be profitable and manufacturers to remain competitive, the yield must be maximized at all costs. However, the semiconductor manufacturing process of advanced ICs is tremendously complex, i.e., not every part of the process can be controlled in its entirety. While this so-called process variation may be utilized positively to build intrinsic physical unclonable functions (PUFs) (Kleiner et al., 2016), it also implies that a certain percentage of manufactured silicon malfunctions once the variation exceeds a given threshold.
FA is centered around localizing and characterizing a single IC's malfunction to tweak future production parameters, thus increasing the yield of future production runs. A variety of FA techniques exists, each exhibiting advantages and disadvantages in localizing or characterizing a specific kind of fault. In this work, we utilize two such FA methods, namely lock-in thermography (LIT), which is based on thermal principles, and laser logic state imaging (LLSI), which is based on laser scanning microscopy. In this section, both techniques are briefly introduced. Moreover, power delivery networks (PDNs) of modern SoCs are briefly discussed, as they are key to our approach.
### Thermal Analysis of Integrated Circuits
In FA, LIT is employed to localize thermally active regions, which indicate resistive defects in ICs. Failure analysts use this method to, for example, localize resistive shorts between different metal lines, gate oxide breakdowns, and other faults that cause an increase in contact resistance. These resistive defects lead to higher power dissipation and, thus, to a _local_ temperature increase. As the local temperature increase implies an increase in mid-range infrared (IR) emanation (i.e., \(\lambda\in[3..5]\,\mathrm{\SIUnitSymbolMicro m}\)), it can be captured by an IR-sensitive camera with high resolution. LIT is based on capturing the thermal radiation in the mid-range IR spectrum emitted by an object.
Resistive defects usually cause power dissipation in the mW range, which translates to local temperature differences in the mK range. However, the sensitivity of high-end IR sensors lies in the 10 mK range. Hence, to be able to measure the small temperature differences, lock-in amplification is mandatory. In LIT, we inject a periodic signal into the DuT, which is also fed into the lock-in amplifier as a reference. The lock-in amplifier then relates the thermal signal captured by the IR camera to the reference, filtering and amplifying the thermal information correlating to the induced modulation.
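The lock-in principle described above can be summarized in a few lines of code. The following minimal digital lock-in demodulation (Python/NumPy; all numbers are arbitrary illustrative values rather than measured data) correlates a noisy time trace with in-phase and quadrature references at the modulation frequency and averages over many periods, yielding an estimate of the amplitude and phase of a periodic response that is far below the single-sample noise level.

```python
import numpy as np

fs, f_mod, n_periods = 1000.0, 50.0, 20_000   # sampling rate [Hz], modulation freq. [Hz], integration length
t = np.arange(int(fs / f_mod * n_periods)) / fs

rng = np.random.default_rng(0)
thermal_amplitude = 1e-3                      # weak periodic response of the DuT (arbitrary units)
phase_shift = 0.3                             # thermal lag behind the electrical stimulus
trace = (thermal_amplitude * np.sin(2 * np.pi * f_mod * t + phase_shift)
         + 0.1 * rng.standard_normal(t.size))  # sensor noise ~100x larger than the signal

# correlate with in-phase and quadrature references and average over all samples
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
X = 2 * np.mean(trace * ref_i)
Y = 2 * np.mean(trace * ref_q)

print(f"recovered amplitude ~ {np.hypot(X, Y):.2e}, phase ~ {np.arctan2(Y, X):.2f} rad")
```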
Moreover, it is worth noting that even a fully powered-off IC may exhibit a strong IR contrast in emissivity at room temperature due to the difference in emissivity of different materials and structures used in manufacturing. Hence, thermographic sensors can be used to record an IC's pattern through the backside.
Figure 2 depicts a typical LIT setup. In this work we exclusively consider complex SoCs exhibiting multiple voltage supplies as DuT. Further, a high-resolution mid-range IR camera is required to capture temperature deviations based on a fine scale. Different lenses can be used to increase the spatial resolution of the IR camera. Every LIT setup requires an external electrical stimulus fed into the DuT. This is commonly achieved using a switchable power supply unit (PSU) that provides the external modulation in the form of a square wave of a given amplitude. The lock-in amplifier detects a low-amplitude thermal signal that correlates with the induced signal by performing integration over time. Finally, the PC receives temperature amplitude and phase information and stores the results for later analysis.
### Laser-Based Analysis of Integrated Circuits
Figure 2. Typical LIT setup.
Modern ICs comprise numerous metal layers on the chip's front side, making any analysis through the front side impossible. Therefore, analysis is commonly executed through the chip's backside. Since silicon is transparent to near-infrared (NIR) light, laser scanning microscopes (LSMs) can be used to access the active area containing the transistors without preparing the silicon backside of the chip. One approach to localizing faults is stimulating the DuT with a laser and measuring the change in resistance, voltage, or current consumption at the device's terminals. On the other hand, some part of the laser irradiation is modulated by the electrical characteristics in the chip and reflected at metal interfaces, see Fig. 3a. Consequently, this reflected light contains information about the internal voltages of the chip. In LSM, a detector captures the reflected light and translates its magnitude and phase into a corresponding signal. The approach is part of a family of FA methods, referred to as _optical probing_ techniques. When pointing the laser at one location of interest, a waveform depicting voltage over time can be acquired. The corresponding technique is called electro-optical probing (EOP). Besides, an activity map can be created when scanning the laser over a larger area of interest and analyzing the reflected light at each point. The technique is called electro-optical frequency mapping (EOFM), and due to its spatial capabilities, we will focus on EOFM in the following.
#### 2.2.1. Electro-Optical Frequency Mapping
EOFM is an optical probing technique that allows the creation of a two-dimensional activity map of a circuit area. Provided a particular frequency and a bandwidth, EOFM analyzes the reflected light using a narrow-band frequency filter and maps the resulting amplitude onto the scanning position. In this way, all transistors switching at the frequency of interest appear as bright spots in the activity map. To not influence the electrical behavior of the DuT, wavelengths above \(1.1\,\mathrm{\SIUnitSymbolMicro m}\) are used for optical probing techniques. Apart from debugging internal signals in ICs, optical probing can be used to attack devices. For instance, EOFM in combination with EOP has been used to extract sensitive data from a field-programmable gate array (FPGA) (Kumar et al., 2016) or to break logic locking schemes (Kumar et al., 2016).
#### 2.2.2. Laser-Logic State Imaging
LLSI is an extension of EOFM proposed by Niu et al. (Niu et al., 2017). Instead of setting the frequency of EOFM to the frequency of a logic signal generated by the device, a periodic signal is injected into the DuT's power supply, as depicted in Fig. 3b. In other words, the DuT's power supply is modulated around the nominal supply voltage with a small peak-to-peak sine signal. EOFM is then used to search for activity based on the introduced modulation frequency. Using LLSI, the logic states of combinatorial and sequential logic can be extracted under the constraint that the clock is stopped for the duration of the measurement (Bauer et al., 2013; Bauer et al., 2013). Apart from transistor states, LLSI measurements reveal the location of capacitive elements, such as decoupling capacitors. Consequently, LLSI can be used to localize circuitry connected to the power supply rail under modulation.
### Power Delivery Networks in ASIC Design
The PDN of an ASIC is responsible for transmitting current from the package pads to the logic blocks and single transistors. Its design poses a special difficulty since it is responsible for maintaining a stable voltage during load, voltage fluctuations, and spikes. Several other factors, such as the prevention of abrasion effects, overly excessive heat in single spots, and parasitic effects, make the design of PDNs a hard task.
Since modern SoCs consist of a vast number of different components and all of these components have different characteristics w.r.t. their power consumption, hardware designers decided to supply different components with different physically isolated voltage rails. Furthermore, a SoC might require different voltages, where I/O cells operate at a different voltage level than internal logic cells. It is further possible to perform power gating on specific supplies during low power sleep, while only powering the wake-up logic. Other reasons might be that only one component on the SoC consumes excessive power, such as in modern desktop processors, where the high-performance power network is cut off from other maintenance logic. All these requirements lead to modern complex SoCs having complex PDNs with multiple voltage rails.
## 3. Experimental Setup
### Device under Test
In order to thoroughly evaluate our novel approach, we decided to utilize a complex, recent-technology SoC manufactured by Intel, which is referred to as the Platform Controller Hub (PCH) (Bauer et al., 2013). In the past, an Intel mainboard's chipset was defined by a north bridge and a south bridge, which determined the interconnection between different components. The north bridge was handling high-frequency signaling, whereas the south bridge was taking care of lower-frequency communication. Due to the constant increase of integration in microelectronics, the north bridge has been integrated into the central processing unit (CPU) silicon die, whereas the south bridge's functionality as well as other communication protocols (e.g., USB-3 or PCIe) have been merged into another silicon die, referred to as the PCH. It is worth noting that Intel's root of trust is a sub-component of the PCH, whereas, for AMD-based systems, the root of trust is placed within the CPU silicon. Because of the high degree of integrated components, Intel's PCH exposes 12 physically isolated voltage rails, which need to be supplied by five different voltage levels. For saving space and resources, the rails requiring the same voltage level are typically tied together on a PCB level whenever possible. While this holds true for all commercially available mainboards, tying together the supply of multiple voltage rails prevents isolated modulation.
### Custom Printed Circuit Board
Figure 3. Principle of optical probing (a) and electrical setup for LLSI (b). The supply voltage modulation leads to a detectable pattern in the reflected light, mapped onto the scanning position and shown as a 2D activity map.
As the goal of this work is to detect several regions of interest by modulating their supply voltages, we have placed our DuT on a custom-designed PCB, which grants us isolated access to each of the voltage rails. Our custom PCB is depicted in Fig. 4. The PCH must be supplied with 5 different voltage levels, which are used to supply power to 12 different, physically isolated voltage rails. Different voltage levels may be provided via the SMA connectors [(1)]; a jumper [(2)] then either connects a specific voltage rail to the external voltage or disconnects it. A set of specific voltage rails has further been connected indirectly via shunt resistors and current sense amplifiers [(3)] to the DuT [(3)]. By this, power-based SCA attacks are possible for a selected number of voltage rails. However, we leave performing SCA on the different voltage rails of the PCH as future work. Moreover, different boot configurations can be chosen by configuring the jumpers in [(5)].
### Measurement Setup
#### 3.3.1. LIT Setup
The LIT setup is equal to the one depicted in Fig. 2. The DuT is represented by Intel's PCH, which is mounted on our custom PCB. By exposing each voltage rail in a physically isolated fashion, we are able to modulate each rail without affecting the others. Here, the modulation takes place based on a periodic square wave signal in the \(40-60\) Hz range, which can be generated directly by a software controlled PSU. The silicon die's mid-IR emanation in the field of view of the optical lens is sampled by the camera. The recorded data is forwarded to the lock-in amplifier, which is also provided with the switched power supply as a reference.
#### 3.3.2. LLSI Setup
While using the same DuT (i.e., Intel's H610 PCH on our custom PCB), each voltage rail can be modulated at a much higher frequency than is possible for LIT, thus reducing noise. As common PSUs are incapable of providing modulation in the MHz range, a bias-tee in combination with a function generator and a DC PSU has been used to generate a \(2\,\mathrm{MHz}\) sine-modulated voltage supply signal. For conducting the LLSI measurements, we use a Hamamatsu PHEMOS-1000 FA microscope, which offers lenses of \(5\times\), \(20\times\), and \(50\times\) magnification.
## 4. Evaluation
In this section, we showcase the effectiveness of our technique. By modulating different voltage rails of the PCH utilizing our PCB design, we can clearly distinguish between different regions. We present the results of two different measurements, namely modulating vcc_core_prim_0p82 as well as vcc_usb_0p82 (see footnote 1). We have selected these two scenarios as a representative subset, as they highlight the different outcomes of our measurements.
As a metric to quantify the reduction in search space achieved by our technique, we compute the area that responds to the external modulation. The evaluation takes place based on thresholding, i.e., if a signal within a region exceeds a threshold, we classify it as being affected by our modulation, otherwise, it is classified as unaffected. As without modulation an adversary is required to scan the entire die, we compare the die's overall area to our identified regions to quantify the area reduction using our technique.
Footnote 1: Following Intel’s nomenclature: [https://www.intel.com/content/www/us/en/products/sku/218829/intel-h610-chipset/specifications.html](https://www.intel.com/content/www/us/en/products/sku/218829/intel-h610-chipset/specifications.html)
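Assuming the measurement has already been reduced to a per-pixel amplitude map, the area metric described above boils down to a simple threshold comparison; the sketch below (Python/NumPy, with random placeholder data and an arbitrary threshold of our own choosing) illustrates it.

```python
import numpy as np

def responding_area_fraction(amplitude_map, threshold):
    """Fraction of the imaged die area whose LIT/LLSI amplitude exceeds the
    threshold, i.e. the remaining search space after the measurement."""
    return np.count_nonzero(amplitude_map > threshold) / amplitude_map.size

rng = np.random.default_rng(1)
amplitude_map = rng.random((1200, 800))        # placeholder for a real amplitude image

fraction = responding_area_fraction(amplitude_map, threshold=0.9)
print(f"responding area: {fraction:.1%} -> search space reduced by {1 - fraction:.1%}")
```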
The minimum time a physical attack with spatial information requires can be approximated by considering the number of positions to be tested, the time per attempt, and the number of attempts per position. In addition to the aforementioned parameters, when considering fault injection (FI), all combinations of the fault's parameters (e.g., offset and strength) have to be considered as well.
As an example, when considering LFI, a magnification of \(50\times\) is commonly required to induce enough energy within a spatially limited radius for the photoelectric effect to cause logical misbehavior at the transistor level. A \(50\times\) lens commonly corresponds to a transistor-focused laser spot size of about \(1\,\mathrm{\SIUnitSymbolMicro m}\). Hence, a step size in either x or y of \(1\,\mathrm{\SIUnitSymbolMicro m}\) must not be exceeded. In our case study, the silicon die is \(8\,\mathrm{mm}\) wide and \(12\,\mathrm{mm}\) high, which - based on a step size of \(1\,\mathrm{\SIUnitSymbolMicro m}\) in x and y - results in \(96,000,000\) possible positions. Even when considering a single attempt per position (\(n=1\)), a single combination of fault parameters (\(\mathrm{comb}_{\mathrm{params}}=1\)) and a time per attempt of \(0.1\,\mathrm{s}\) (\(t_{\mathrm{attempt}}=0.1\)), \(111\) days would be required to scan the whole die area. For this simplified approximation, the time required to move the stage and re-focus the laser along the Z-axis is neglected.
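The back-of-the-envelope estimate above can be reproduced directly; the parameter names in the following sketch are ours.

```python
die_width_um, die_height_um = 8_000, 12_000   # 8 mm x 12 mm die
step_um = 1                                   # ~1 um laser spot at 50x magnification
t_attempt_s = 0.1                             # time per attempt
n_attempts = 1                                # attempts per position
comb_params = 1                               # combinations of fault parameters

positions = (die_width_um // step_um) * (die_height_um // step_um)
total_s = positions * n_attempts * comb_params * t_attempt_s
print(f"{positions:,} positions -> {total_s / 86_400:.0f} days "
      "(stage movement and re-focusing neglected)")
```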
### Modulation of vcc_core_prim_0p82
As the name implies, vcc_core_prim_0p82 appears to power the primary core logic contained inside the PCH, whereas \(\mathtt{0p82}\) indicates an electrical potential of \(0.82\,\mathrm{V}\). In the following, we present the results of performing LIT as well as LLSI based on a modulation of vcc_core_prim_0p82. As the regions identified in this way represent core logic components, they form potentially lucrative areas for further physical attacks.
#### 4.1.1. LIT
The results of modulating vcc_core_prim_0p82 and performing LIT as described in Section 3.3.1 are depicted in Fig. 4(a). Here, a yellow overlay indicates that the LIT process detected a strong temperature increase in the corresponding region, whereas purple indicates that only a minor temperature deviation matching the induced modulation frequency was noted. The remaining regions are completely unaffected by the external modulation. By modulating vcc_core_prim_0p82, we obtained a LIT signal that covers about 18.9 % of the chip area. This corresponds to a search-space reduction of 81.1 % compared to an exhaustive scan. However, in order to narrow down the search space even further, we also analyze the emissivity characteristics of the different structures. As depicted in the thermal image, areas of different intensity were captured. While the solid yellow areas, where the highest intensity is observed, can be expected to belong to power supply circuitry (i.e., PDN structures), the yellow-purple speckled areas are promising candidates for synthesized logic cores. The difference is depicted in more detail in Fig. 5(a). By visual inspection, the remaining search space can therefore be cut again, leading to a potential target chip area of only 15.4 %.
Figure 4. DuT mounted on a custom designed PCB in order to physically isolate the voltage supplies.
#### 4.1.2. LISI
The results of modulating vcc_core_prim_0p82 and scanning over the die as described in Section 3.3.2 are depicted in Fig. 4(b). Again, yellow indicates that the modulation in the reflected light shows a strong correlation in amplitude with our injected stimulus, whereas purple indicates that the modulation of the reflected light diminishes slightly. All remaining regions are not affected by the modulation at all. It is worth noting that the regions appearing speckled in the LIT measurements show up as speckled here as well. However, the regions identified by LISI, as well as the speckle pattern, are much more precise and sharp. Their difference in the same region as before is depicted in Fig. 5(b). By modulating vcc_core_prim_0p82, we obtained an LISI signal that covers about 16.3 % of the chip area, i.e., 2.6 % less area than measured by LIT. This corresponds to a search-space reduction of 83.7 % compared to an exhaustive scan. As before, by distinguishing between the solid PDN areas and the speckled logic areas, the search space can this time even be reduced to 10.9 %, i.e., 4.5 % less than with LIT.
### Modulation of vcc_usb_0p82
While the previous measurement revealed that LIT and LISI are both capable of identifying PDNs as well as their supplied logic, with this experiment we would like to show that these techniques can also be used to uniquely identify regions that are right next to each other without any interference. The vcc_usb_0p82 appears to power the USB logic contained inside the PCH, whereas 0p82 indicates an electrical potential of 0.82 V. In the following, we present the results of performing LIT by modulating vcc_usb_0p82. While the results of performing LISI are similar, they have been omitted due to space constraints. However, high-resolution images of applying LISI are provided in the appendix in Fig. 13.
The results of modulating the vcc_usb_0p82 voltage and performing LIT as described in Section 3.3.1 are depicted in Fig. 7. It is important to note that, compared to the previous measurement, only a relatively small area of the die shows a thermal correlation to the modulation. As before, a yellow overlay indicates a strong increase in local IR emissivity correlating with the modulation, whereas purple indicates a weaker emissivity. All other regions are unaffected by the external modulation of vcc_usb_0p82. By modulating the USB supply voltage, we successfully identified the part of the SoC that handles the USB protocol communication. It covers only 1.2 % of the die area. Moreover, when superimposing the results of the previous measurement (i.e., the modulation of vcc_core_prim_0p82), the spatial proximity of the two regions becomes apparent. This demonstrates that our reverse-engineering technique offers high spatial resolution.
Figure 5. LIT and LISI amplitudes overlaid on the optical image for the vcc_core_prim_0p82 rail.
Figure 6. Comparison of LIT and LISI in one region of interest to show the possibility of distinguishing between power supply and logic areas.
## 5. Discussion
In this work, we have utilized LIT and LISI to discover the position of specific circuitry on our target. For both setups, we provided external modulation at a given frequency to discover regions connected to physically isolated PDNs. Since LIT and LISI exhibited similar capabilities and results during our evaluation, we discuss the main differences between both techniques before concluding this work.
### Spatial Resolution and Acquisition Time
In this work, the spatial resolution of LISI was much higher than that of LIT. This is because, for LIT, commonly only low-magnification lenses with sufficiently good optical properties are available; due to the poor optical properties of higher-magnification lenses, increasing the magnification drastically increases the measurement time. During our measurements, only weak signals were recorded with lenses of 10\(\times\) magnification. Nevertheless, the LIT images presented in this work, captured with a 1\(\times\) lens, could compete with the results obtained by applying LISI. Vice versa, LISI measurements with a reasonable signal-to-noise ratio could only be obtained with the 20\(\times\) lens and above, making the scan comparably slow. While for LIT the scanning time was in the range of a few hours for the entire chip, scanning the die in an automated fashion using LISI with the 20\(\times\) lens took roughly one day. Consequently, for a first overview, LIT can deliver sufficient results quickly. When higher magnification is required for a more detailed analysis, LISI should be considered.
### Setup Cost and Availability
While the LIT setup used in this work can be acquired for around $200K, a setup for optical probing costs at least $1M. Consequently, LIT can be considered the more cost-efficient solution. However, there is always the possibility of renting FA equipment, or even hiring a failure analyst, at a much more affordable price.
### Backside Silicon Access
Direct access to the silicon surface is a strict requirement for optical probing methods. Moreover, the silicon substrate must fulfill specific properties (e.g., a polished surface, no highly-doped silicon). Although flip-chip packages have become more common over the past years, less complex ICs are still packaged by other means, which often encapsulate the IC in a plastic or ceramic case. Hence, to perform optical inspection, the IC has to be decapsulated and polished, which is a tedious and risky process, as it may result in a broken DuT. The methods used range from chemical to mechanical processes, and each step must be taken carefully to leave the device operable after decapsulation.
In this regard, LIT has an advantage over LISI: it is an FA method that does not strictly require the silicon backside to be exposed. LIT measurements are typically also possible through the package, though the spatial resolution decreases compared to the case where the silicon is directly accessible. We expect LIT to deliver results that are acceptable for EMFI, as it is a less location-dependent physical attack than, e.g., LFI. Although we did not perform experiments for this scenario, it is an intriguing direction for further investigation.
## 6. Conclusion
In this paper, we presented a novel method leveraging LIT and LISI to identify specific parts of the circuitry on a large, fully unknown SoC. Advanced high-performance ICs always expose multiple voltage rails, which power different sub-circuits. Modulating the different voltage supplies allows optical as well as thermal techniques to map a voltage rail to the specific regions that are powered by the corresponding supply. As voltage rails commonly need to be labeled, an adversary may deduce semantic information about the identified circuitry. While not introducing a specific attack, we provide a building block that makes physical attacks requiring spatial information feasible in the first place.
Moreover, we have demonstrated that our method works well on a recent-technology Intel PCH, where we were able to identify subcircuits with ease. Using our novel approach, it was possible to identify the exact positions and sizes of the USB, RTC, and core logic, thus drastically reducing the search space of a subsequent attack.
|
2310.20273 | Geometric phase and wave-particle duality of the photon | The concepts of geometric phase and wave-particle duality are interlinked to
several fundamental phenomena in quantum physics, but their mutual relationship
still forms an uncharted open problem. Here we address this question by
studying the geometric phase of a photon in double-slit interference. We
especially discover a general complementarity relation for the photon that
connects the geometric phase it exhibits in the observation plane and the
which-path information it encases at the two slits. The relation can be seen as
quantifying wave-particle duality of the photon via the geometric phase, thus
corroborating a foundational link between two ubiquitous notions in quantum
physics research. | Elvis Pillinen, Atri Halder, Ari T. Friberg, Tero Setälä, Andreas Norrman | 2023-10-31T08:40:24Z | http://arxiv.org/abs/2310.20273v1 | # Geometric phase and wave-particle duality of the photon
###### Abstract
The concepts of geometric phase and wave-particle duality are interlinked to several fundamental phenomena in quantum physics, but their mutual relationship still forms an uncharted open problem. Here we address this question by studying the geometric phase of a photon in double-slit interference. We especially discover a general complementarity relation for the photon that connects the geometric phase it exhibits in the observation plane and the which-path information it encases at the two slits. The relation can be seen as quantifying wave-particle duality of the photon via the geometric phase, thus corroborating a foundational link between two ubiquitous notions in quantum physics research.
_Introduction._--The geometric phase [1], which is the phase that a physical system acquires as it evolves along a curved trajectory in the underlying parameter space, is a universal concept within the physical sciences [2]. It is encountered in particle physics, condensed-matter physics, fluid dynamics, and astrophysics, among other branches [3], and offers unique opportunities for emerging quantum technologies [4; 5]. Wave-particle duality is another central notion in modern physics that restricts the coexistence of interferometric "which-path information" (particle behavior) and fringe visibility (wave behavior) of quantum objects [6]. It is perhaps the most recognized manifestation of quantum complementarity [7; 8; 9] and has been observed in a wide variety of quantum physical systems, such as elementary particles [10; 11; 12], atoms [13; 14; 15], molecules [16; 17], and even antimatter [18].
In optical physics, the geometric phase arises from the change in the light's polarization state [19; 20; 21; 22; 23] and it has found numerous applications in advanced light manipulation [24]. Even a single photon can carry the geometric phase [25], which can be seen as a specific wave facet of the photon. For classical light fields, the geometric phase was very recently observed in the continuous, periodic polarization-state pattern in two-slit interference [26; 27]. The recognition that light has the ability to exhibit such interferometric polarization-state modulation has further revealed fundamental aspects of wave-particle duality of the photon [28; 29]. These facts hint that the geometric phase is profoundly linked to the dual wave-particle nature of light at the single-photon level, but exactly how has remained an unresolved physical problem.
In this Letter, we investigate the geometric phase of the photon in double-slit interference and show that it is directly connected, in a deeply complementary manner, to the photon's particle characteristics at the two slits. In particular, we formulate a fundamental wave-particle duality relation for the photon in terms of the geometric phase that it displays in the interference plane and the which-path information that it carries in the slit plane. Our work thus establishes a link between two elementary notions in physics and provides foundational insights into the nature of the photon.
_System under study._--Let us consider monochromatic quantum light impinging on two identical slits (pinholes) located at \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) in an opaque screen \(\mathcal{A}\) (see Fig. 1). The light emerging from the slits is observed on another screen \(\mathcal{B}\) far from \(\mathcal{A}\) at position \(\mathbf{r}\) in the paraxial domain. Under these circumstances, the electric field operator at \(\mathcal{B}\) is [28; 30]
\[\hat{\mathbf{E}}(\mathbf{r})=K\Big{[}\hat{\mathbf{E}}(\mathbf{r}_{1})\frac{e ^{ikr_{1}}}{r_{1}}+\hat{\mathbf{E}}(\mathbf{r}_{2})\frac{e^{ikr_{2}}}{r_{2}} \Big{]}, \tag{1}\]
where \(K\) is a constant, \(k\) is the wave number in free space, and \(r_{m}=|\mathbf{r}-\mathbf{r}_{m}|\) with \(m\in\{1,2\}\). Moreover, each of the electric field operators \(\hat{\mathbf{E}}(\mathbf{r}_{1})\) and \(\hat{\mathbf{E}}(\mathbf{r}_{2})\) in the slit plane \(\mathcal{A}\) contains two orthogonal polarization modes (\(x\) and \(y\)), characterized by the annihilation operators \(\hat{a}_{1x},\hat{a}_{1y}\) and \(\hat{a}_{2x},\hat{a}_{2y}\). We are especially interested in the case where the light field is in an arbitrary, pure single-photon state
\[\begin{split}|\Psi\rangle=c_{1x}\left|1,0,0,0\right\rangle+c_{1y }\left|0,1,0,0\right\rangle\\ +c_{2x}\left|0,0,1,0\right\rangle+c_{2y}\left|0,0,0,1\right\rangle.\end{split} \tag{2}\]
Here \(|n_{1x},n_{1y},n_{2x},n_{2y}\rangle\) is a four-mode Fock state, with \(n_{m\mu}\) denoting the photon number in the mode \(\mu\in\{x,y\}\) at slit \(m\in\{1,2\}\), and \(|c_{1x}|^{2}+|c_{1y}|^{2}+|c_{2x}|^{2}+|c_{2y}|^{2}=1\), with \(|c_{m\mu}|^{2}\) giving the probability to find the photon in the corresponding mode.
Figure 1: System under study. A photon in a four-mode state \(|\Psi\rangle=c_{1x}|1,0,0,0\rangle+c_{1y}|0,1,0,0\rangle+c_{2x}|0,0,1,0\rangle +c_{2y}|0,0,0,1\rangle\) strikes screen \(\mathcal{A}\) with two small slits at positions \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\). The interfering light is observed at position \(\mathbf{r}\) in plane \(\mathcal{B}\) where periodic intensity and polarization-state fringes appear after repeated experimental runs.
The average intensity and polarization-state distributions of the one-photon light at \(\mathcal{B}\) are completely specified by the four Stokes parameters [31]
\[S_{j}(\mathbf{r})=\left\langle\Psi|\hat{\mathbf{E}}^{\dagger}(\mathbf{r}) \boldsymbol{\sigma}_{j}\hat{\mathbf{E}}(\mathbf{r})|\Psi\right\rangle,\ \ j\in\{0,1,2,3\}, \tag{3}\]
where the dagger stands for the adjoint, \(\boldsymbol{\sigma}_{0}\) is the \(2\times 2\) identity matrix, and \(\{\boldsymbol{\sigma}_{1},\boldsymbol{\sigma}_{2},\boldsymbol{\sigma}_{3}\}\) are the Pauli matrices. From Eqs. (1)-(3) we obtain
\[\begin{split} S_{j}(\mathbf{r})=&\ S_{j}^{\prime}(\mathbf{r})+S_{j}^{\prime\prime}(\mathbf{r})+2[S_{0}^{\prime}(\mathbf{r})S_{0}^{\prime\prime}(\mathbf{r})]^{1/2}\\ &\times|s_{j}(\mathbf{r}_{1},\mathbf{r}_{2})|\cos(\alpha_{j}-k\Delta r),\ \ j\in\{0,1,2,3\},\end{split} \tag{4}\]
with \(S_{j}^{\prime}(\mathbf{r})\) and \(S_{j}^{\prime\prime}(\mathbf{r})\) being the Stokes parameters on \(\mathcal{B}\) when only the slit at \(\mathbf{r}_{1}\) or \(\mathbf{r}_{2}\) is open, respectively, and \(\Delta r=r_{1}-r_{2}\). Furthermore, \(|s_{j}(\mathbf{r}_{1},\mathbf{r}_{2})|\) and \(\alpha_{j}\) are the magnitudes and phases of
\[s_{j}(\mathbf{r}_{1},\mathbf{r}_{2})=\frac{S_{j}(\mathbf{r}_{1},\mathbf{r}_{2 })}{[S_{0}(\mathbf{r}_{1})S_{0}(\mathbf{r}_{2})]^{1/2}},\ \ j\in\{0,1,2,3\}, \tag{5}\]
which in turn are the intensity-normalized versions of
\[S_{j}(\mathbf{r}_{1},\mathbf{r}_{2})=\left\langle\Psi|\hat{\mathbf{E}}^{ \dagger}(\mathbf{r}_{1})\boldsymbol{\sigma}_{j}\hat{\mathbf{E}}(\mathbf{r}_{2 })|\Psi\right\rangle,\ \ j\in\{0,1,2,3\}. \tag{6}\]
The quantities in Eq. (6) are the coherence (two-point) Stokes parameters that contain all the information on the first-order vector-field correlations between \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) in the slit plane \(\mathcal{A}\)[32, 33, 34, 29]. The conventional (one-point) Stokes parameters at the slits are obtained from Eq. (6) as \(S_{j}(\mathbf{r}_{1},\mathbf{r}_{1})=S_{j}(\mathbf{r}_{1})\) and \(S_{j}(\mathbf{r}_{2},\mathbf{r}_{2})=S_{j}(\mathbf{r}_{2})\).
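For the single-photon state of Eq. (2), the expectation values in Eqs. (3) and (6) reduce to bilinear forms in the amplitudes \(c_{m\mu}\), since \(\langle\Psi|\hat{a}_{1\mu}^{\dagger}\hat{a}_{2\nu}|\Psi\rangle=c_{1\mu}^{*}c_{2\nu}\). The following sketch evaluates them numerically, absorbing the constant prefactors of Eq. (1) into the normalization; the Pauli-matrix ordering follows the usual polarization-optics convention and the function names are ours.

```python
import numpy as np

# sigma_0 is the identity; sigma_1..3 in the polarization-optics ordering.
SIGMA = [np.eye(2, dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

def coherence_stokes(cm, cn):
    """Two-point Stokes parameters S_j(r_m, r_n) of Eq. (6), up to constant prefactors,
    with c_m = (c_mx, c_my) the slit-m amplitudes of the state in Eq. (2)."""
    cm, cn = np.asarray(cm, complex), np.asarray(cn, complex)
    return np.array([np.vdot(cm, s @ cn) for s in SIGMA])   # c_m^dagger sigma_j c_n

# Example: photon equally likely at both slits, x-polarized at slit 1, y-polarized at slit 2.
c1 = np.array([1.0, 0.0]) / np.sqrt(2)
c2 = np.array([0.0, 1.0]) / np.sqrt(2)
print(coherence_stokes(c1, c2))   # s_0 component vanishes: no intensity fringes
print(coherence_stokes(c1, c1))   # one-point Stokes parameters at slit 1
```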
Equation (4) particularly shows that the polarization state (and intensity) on \(\mathcal{B}\) varies periodically when moving the observation point \(\mathbf{r}\) transversally along the screen. In addition, since any such one-photon light is completely first-order coherent at \(\mathcal{A}\)[28] and thereby fully polarized on \(\mathcal{B}\)[35], so that \(S_{1}^{2}(\mathbf{r})+S_{2}^{2}(\mathbf{r})+S_{3}^{2}(\mathbf{r})=S_{0}^{2}( \mathbf{r})\), the polarization-state evolution in the observation plane occurs on the surface of the associated Poincare sphere (cf. Fig. 2) [31, 34]. These observations will be of central importance when we next assess the geometric phase on screen \(\mathcal{B}\).
_Geometric phase_.--To ascertain the geometric phase exhibited by the photon, we employ the quantum kinematic approach introduced by Mukunda and Simon [36]. Let us consider a generic pure quantum state that evolves along some smooth curve \(\gamma\) within the Hilbert space, i.e., \(\left|\psi(\gamma_{1})\right\rangle\rightarrow\left|\psi(\gamma_{2})\right\rangle\), where \(\left|\psi(\gamma_{1})\right\rangle\) is the initial state and \(\left|\psi(\gamma_{2})\right\rangle\) is the final state. In this case, the geometric phase \(\Phi_{\mathrm{G}}\) can be expressed as the difference
\[\Phi_{\mathrm{G}}=\Phi_{\mathrm{T}}-\Phi_{\mathrm{D}}, \tag{7}\]
where the first term
\[\Phi_{\mathrm{T}}=\arg\left\langle\psi(\gamma_{1})|\psi(\gamma_{2})\right\rangle \tag{8}\]
is the total phase between the initial and final states, and the second term
\[\Phi_{\mathrm{D}}=\mathrm{Im}\left[\int_{\gamma_{1}}^{\gamma_{2}}\left\langle \psi(\gamma)|\frac{d}{d\gamma}|\psi(\gamma)\right\rangle d\gamma\right] \tag{9}\]
is the dynamic phase along the path.
We recall from our above discussion that light in the pure single-photon state given by Eq. (2) is fully polarized in the observation plane, with its polarization state evolving on the surface of the Poincare sphere. It has further been shown that for any such polarized light field the evolution path has the form of a circle [37]. We can therefore describe the polarization-state evolution of the photon on screen \(\mathcal{B}\) in terms of the following qubit:
\[\left|\psi(\gamma)\right\rangle=\cos(\theta/2)\left|R\right\rangle+e^{i\gamma \phi}\sin(\theta/2)\left|L\right\rangle. \tag{10}\]
Here \(\theta\in[0,\pi]\) and \(\phi\in[0,2\pi]\) are the polar and azimuthal angles on the Poincare sphere as depicted in Fig. 2, \(\left|R\right\rangle\) and \(\left|L\right\rangle\) are right-handed and left-handed circular polarization bases, respectively, and \(\gamma\in[0,1]\) is a continuous real parameter. Our choice of a constant angle \(\theta\) simplifies but does not reduce the generality of the analysis, since circular paths of any other orientation can always be transformed into the form of Eq. (10) by a suitable rotation. The chosen value range for \(\phi\), corresponding to one circular loop (cycle) on the Poincare sphere, spans over a single spatial period in the interference pattern. We note, however, that the relation between \(\phi\) and \(\Delta r\) is in general not linear.
Figure 2: Polarization-state evolution of the photon on the Poincaré sphere spanned by the intensity-normalized Stokes parameters \(\{s_{1},s_{2},s_{3}\}\). The polarization states are specified by the spherical polar angles \(0\leq\theta\leq\pi\) and \(0\leq\phi\leq 2\pi\), with \(\theta=0\) and \(\theta=\pi\) representing right-handed and left-handed polarization states \(\left|R\right\rangle\) and \(\left|L\right\rangle\), respectively. The blue curve represents the dynamical phase \(\Phi_{\mathrm{D}}\) between the initial state \(\left|\psi_{1}\right\rangle\) and final state \(\left|\psi_{2}\right\rangle\), whereas the red curve is the geodesic between these states. The geometric phase \(\Phi_{\mathrm{G}}\) equals half the green surface area enclosed by the curves.
We can now connect the polarization-state evolution on the Poincare sphere with the quantum kinematic approach. On substituting Eq. (10) into Eqs. (7)-(9), with \(\gamma_{1}=0\) and \(\gamma_{2}=1\), we find that the geometric phase displayed by the photon in the observation plane \(\mathcal{B}\) is
\[\begin{split}\Phi_{\mathrm{G}}=&\arctan\left[\frac{\sin^{ 2}(\theta/2)\sin\phi}{\cos^{2}(\theta/2)+\sin^{2}(\theta/2)\cos\phi}\right]\\ &-\frac{\phi}{2}\big{(}1-\cos\theta\big{)}.\end{split} \tag{11}\]
Equation (11) is fully general in the sense that it covers both cyclic (\(\phi=2\pi\)) and noncyclic (\(\phi<2\pi\)) evolution. It is formally similar to the classical geometric phase [27] and consistent with the so-called geodesic rule [38; 39; 40; 41]: the start and end points in any noncyclic evolution should be connected with a geodesic, and the geometric phase is specified by half the enclosed solid angle (surface area). Some important features can be concluded from Eq. (11). The phase magnitude is bounded as \(0\leq|\Phi_{\mathrm{G}}|\leq\pi\), and in any cyclic evolution it reduces to \(|\Phi_{\mathrm{G}}|=\pi(1-\cos\theta)\). For the special cases \(\theta=0\) and \(\theta=\pi\), corresponding to the north and south poles on the Poincare sphere, respectively, we have \(|\Phi_{\mathrm{G}}|=0\). Likewise, when the polarization path is on the equator, \(\theta=\pi/2\), the geometric phase is zero for any trajectory within the range \(0\leq\phi<\pi\), while it suddenly changes to \(|\Phi_{\mathrm{G}}|=\pi\) for \(\pi<\phi\leq 2\pi\).
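As a consistency check, Eqs. (7)-(9) can be evaluated numerically for the circular path of Eq. (10) and compared with the closed form of Eq. (11). The sketch below does this on a discretized path; the step count is an arbitrary choice, and arctan2 is used to select the branch of the arctangent implied by Eq. (8).

```python
import numpy as np

def qubit(theta, phase):
    """State of Eq. (10) in the (|R>, |L>) basis, with 'phase' = gamma * phi."""
    return np.array([np.cos(theta / 2), np.exp(1j * phase) * np.sin(theta / 2)])

def geometric_phase_numeric(theta, phi, steps=2000):
    """Phi_G = Phi_T - Phi_D of Eqs. (7)-(9), evaluated on a discretized path gamma in [0, 1]."""
    gammas = np.linspace(0.0, 1.0, steps + 1)
    states = np.array([qubit(theta, g * phi) for g in gammas])
    total = np.angle(np.vdot(states[0], states[-1]))                    # Eq. (8)
    overlaps = np.einsum('ij,ij->i', states[:-1].conj(), states[1:])    # <psi_k|psi_{k+1}>
    dynamic = np.sum(np.angle(overlaps))                                # discretized Eq. (9)
    return total - dynamic

def geometric_phase_closed(theta, phi):
    """Closed form of Eq. (11); arctan2 fixes the branch of the arctangent."""
    num = np.sin(theta / 2) ** 2 * np.sin(phi)
    den = np.cos(theta / 2) ** 2 + np.sin(theta / 2) ** 2 * np.cos(phi)
    return np.arctan2(num, den) - 0.5 * phi * (1 - np.cos(theta))

theta, phi = 2 * np.pi / 5, 1.3 * np.pi
print(geometric_phase_numeric(theta, phi), geometric_phase_closed(theta, phi))
# The two values agree up to the discretization error of the dynamic-phase integral.
```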
_Which-path information_.--To quantify the which-path information (WPI) carried by the photon at the two slits, we utilize the following two measures (applicable to any mixed state represented by a density operator \(\hat{\rho}\)) [28]:
\[D_{0}=\frac{|S_{0}(\mathbf{r}_{1})-S_{0}(\mathbf{r}_{2})|}{S_{0}(\mathbf{r}_{ 1})+S_{0}(\mathbf{r}_{2})},\ \ D_{S}=\frac{|\mathbf{S}(\mathbf{r}_{1})-\mathbf{S}(\mathbf{r}_{2})|}{S_{0}( \mathbf{r}_{1})+S_{0}(\mathbf{r}_{2})}. \tag{12}\]
The quantity \(D_{0}\) is called the intensity distinguishability and it describes the intensity difference between the slits. The quantity \(D_{S}\), with \(\mathbf{S}(\mathbf{r}_{m})=[S_{1}(\mathbf{r}_{m}),S_{2}(\mathbf{r}_{m}),S_{3} (\mathbf{r}_{m})]\) in slit \(m\in\{1,2\}\), is the Stokes (or polarization) distinguishability that characterizes the polarization-state difference in the slit plane. The denominators in Eq. (12) ensure that \(0\leq D_{0}\leq 1\) and \(0\leq D_{S}\leq 1\).
For the single-photon state in Eq. (2), the two general measures in Eq. (12) turn into
\[D_{0}=|p_{1}-p_{2}|,\ \ D_{S}=\sqrt{1-4|c_{1x}^{*}c_{2x}+c_{1y}^{*}c_{2y}|^{2}}, \tag{13}\]
where \(p_{m}=|c_{mx}|^{2}+|c_{my}|^{2}\) is the path probability of the photon to pass via the slit \(m\in\{1,2\}\). At this one-photon level \(D_{0}\) represents the path predictability [6; 28], i.e., the possibility to correctly guess which of the slits the photon traverses based on its initial state preparation. For example, when \(p_{1}\gg p_{2}\) the probability of detecting the photon in slit 1 is much larger than finding it in slit 2, yielding a high path predictability (\(D_{0}\approx 1\)). In contrast, for \(p_{1}\approx p_{2}\) the path predictability is negligible (\(D_{0}\approx 0\)). Likewise, in this single-photon case \(D_{S}\) translates into the path distinguishability [6; 28], which describes one's ability to discriminate the photon's path with respect to its polarization state at the two slits. Especially, maximum path distinguishability (\(D_{S}=1\)) is reached whenever the photon is orthogonally polarized in the slit plane. For instance, if we were to measure \(x\)-polarized (\(y\)-polarized) light when \(c_{1y}=c_{2x}=0\), then a count signal in the detection plane would directly reveal that the photon has passed the first (second) slit. On the other hand, minimum path distinguishability (\(D_{S}=0\)) occurs if \(\mathbf{S}(\mathbf{r}_{1})=\mathbf{S}(\mathbf{r}_{2})\). Generally, however, the WPI of the photon is characterized by partial path predictability (\(0<D_{0}<1\)) and distinguishability (\(0<D_{S}<1\)).
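Equation (13) translates directly into a small routine; a minimal sketch (our function names), assuming normalized amplitudes as in Eq. (2), is:

```python
import numpy as np

def which_path_measures(c1, c2):
    """Path predictability D_0 and path distinguishability D_S of Eq. (13),
    for normalized single-photon amplitudes c_m = (c_mx, c_my) of Eq. (2)."""
    c1, c2 = np.asarray(c1, complex), np.asarray(c2, complex)
    p1, p2 = np.sum(np.abs(c1) ** 2), np.sum(np.abs(c2) ** 2)   # path probabilities
    d0 = abs(p1 - p2)
    ds = np.sqrt(max(0.0, 1.0 - 4.0 * np.abs(np.vdot(c1, c2)) ** 2))
    return d0, ds

# Orthogonally polarized slits (c_1y = c_2x = 0) with equal path probabilities:
print(which_path_measures([1 / np.sqrt(2), 0], [0, 1 / np.sqrt(2)]))   # -> (0.0, 1.0)
```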
We further introduce the quantity
\[d=\frac{D_{0}}{D_{S}}, \tag{14}\]
which is the ratio between the two different WPI species. For any polarized light, including the one-photon state of Eq. (2), the two measures in Eq. (12) obey \(D_{0}\leq D_{S}\)[28]. The WPI ratio in Eq. (14) therefore satisfies \(0\leq d\leq 1\), physically meaning that the photon carries more WPI in terms of its polarization state than in terms of its path probabilities. The lower limit \(d=0\) is reached only if the path predictability is zero (\(D_{0}=0\)), whereas the upper bound \(d=1\) is saturated only for scalar light (\(D_{0}=D_{S}\)). In particular, for any polarized light field under cyclic evolution the geometric phase is related to the intensity and polarization-state differences at the slits according to \(|\Phi_{\mathrm{G}}|=\pi[1-|S_{0}(\mathbf{r}_{1})-S_{0}(\mathbf{r}_{2})|/| \mathbf{S}(\mathbf{r}_{1})-\mathbf{S}(\mathbf{r}_{2})|]\)[26; 37]. On connecting this expression with Eqs. (11) and (14), we then find that the polar angle \(\theta\) in the Poincare sphere representation is directly linked to the WPI ratio \(d\) as
\[d=\cos\theta. \tag{15}\]
Equation (15) covers also the noncyclic case as \(\theta\) remains unaffected when varying the azimuthal angle \(\phi\).
_Wave-particle duality_.--We are now in a position to present the main result of this Letter. By using Eq. (15) and standard trigonometric identities, we can first write the geometric phase in Eq. (11) as
\[\Phi_{\mathrm{G}}=\arctan\left(\frac{\sin\phi}{\eta+\cos\phi}\right)-\frac{ \phi}{2}(1-d), \tag{16}\]
where \(\eta=(1+d)/(1-d)\). We observe that the geometric phase in Eq. (16) is now expressed solely in terms of the WPI ratio \(d\) and the azimuthal angle \(\phi\) (corresponding to the distance along the observation screen). Equation (16) thus provides an exact link between the WPI ratio at the slits and the geometric phase in the detection plane.
By eventually introducing the \(\pi\)-normalized geometric phase \(\Phi_{\mathrm{G}}^{\prime}=\Phi_{\mathrm{G}}/\pi\), which is bounded as \(0\leq|\Phi_{\mathrm{G}}^{\prime}|\leq 1\), we discover from Eq. (16) the complementarity relation
\[|\Phi_{\mathrm{G}}^{\prime}|+d\leq 1. \tag{17}\]
This result is obtained by first considering the maximum possible value of \(|\Phi_{\mathrm{G}}^{\prime}|\) when \(0<d<1\), and then separately analyzing the special scenarios of \(d=0\) and \(d=1\).
When \(0<d<1\), we find the maximum by taking the derivative of Eq. (16) with respect to \(\phi\). The corresponding zero yields \(\phi=2\pi\), which stands for cyclic evolution. Substituting this value into Eq. (16) results in a strict complementarity identity for any cyclic evolution, i.e.,
\[|\Phi^{\prime}_{\rm G}|+d=1,\ \ {\rm if}\ \phi=2\pi. \tag{18}\]
For noncyclic evolution we then necessarily have
\[|\Phi^{\prime}_{\rm G}|+d<1,\ \ {\rm if}\ \phi<2\pi. \tag{19}\]
The special case of \(d=0\) is encountered solely if the path predictability is zero (\(D_{0}=0\)). In this situation Eq. (16) leads to \(\Phi^{\prime}_{\rm G}=-\lfloor(\phi+\pi)/2\pi\rfloor\), where \(\lfloor\cdots\rfloor\) denotes the floor function. We thus find \(|\Phi^{\prime}_{\rm G}|=0\) if \(0\leq\phi<\pi\) and \(|\Phi^{\prime}_{\rm G}|=1\) if \(\pi<\phi\leq 2\pi\), both of which satisfy Eq. (17). The last case with \(d=1\) is the trivial scalar-light scenario (\(D_{0}=D_{S}\)) for which Eq. (16) directly gives \(|\Phi^{\prime}_{\rm G}|=0\), which is also encompassed by Eq. (17).
Equation (17) constitutes the main result of our work. It can be viewed as a fundamental quantifier of wave-particle duality of the photon in terms of the geometric phase in the observation plane \(\mathcal{B}\) (wave aspect) and the WPI ratio in the slit plane \(\mathcal{A}\) (particle aspect). As underlined by Eq. (18), for cyclic evolution these two attributes are strictly mutually exclusive, i.e., reducing (increasing) \(d\) increases (reduces) \(|\Phi^{\prime}_{\rm G}|\) such that their combined sum equals exactly one.
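The relations (17) and (18) can also be checked numerically by sweeping Eq. (16) over the azimuthal angle; a brief verification sketch (grid sizes and names are arbitrary choices) is:

```python
import numpy as np

def norm_geometric_phase(d, phi):
    """|Phi'_G| from Eq. (16) with eta = (1 + d) / (1 - d); for 0 < d < 1 the
    denominator eta + cos(phi) is positive, so arctan2 matches the arctan of the text."""
    eta = (1 + d) / (1 - d)
    phase = np.arctan2(np.sin(phi), eta + np.cos(phi)) - 0.5 * phi * (1 - d)
    return np.abs(phase) / np.pi

phis = np.linspace(0.0, 2 * np.pi, 4001)
for d in np.linspace(0.05, 0.95, 19):
    s = norm_geometric_phase(d, phis) + d
    assert np.all(s <= 1 + 1e-9)        # Eq. (17): |Phi'_G| + d <= 1 for all phi
    assert np.isclose(s[-1], 1.0)       # Eq. (18): equality for cyclic evolution (phi = 2*pi)
```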
_Conclusions_.--In summary, we have explored the geometric phase of a single photon in the double-slit setup. In particular, we discovered a fundamental wave-particle duality relation for the photon that interconnects the geometric phase it exhibits in the observation plane and the which-path information it encompasses in the slit plane. This complementarity relation sets no restrictions on the polarization state at the slits and covers both cyclic and noncyclic polarization evolution in the interference plane. In the cyclic case, the general inequality turns into a tight identity which states that the geometric phase and which-path information are of strictly complementary nature. Due to their universal physical characters, we expect that similar features between the geometric phase and wave-particle duality exist in other quantum systems as well. Our work thereby unifies fundamental notions in physics, reveals uncharted facets of the dual wave-particle nature of light, and identifies future directions towards research on the geometric phase.
_Acknowledgments_.--The authors would like to thank Robert Fickler, Rafael Barros, and Jaime Moreno for fruitful discussions. This research was supported by the Research Council of Finland (Grant Nos. 354918, 349396, and 346518).
|
2310.20515 | LoRa Multi-Hop Networks for Monitoring Underground Mining Environments | Internet of Things applications have gained widespread recognition for their
efficacy in typical scenarios, such as smart cities and smart healthcare.
Nonetheless, there exist numerous unconventional situations where IoT
technologies have not yet been massively applied, though they can be extremely
useful. One of such domains is the underground mining sector, where enhancing
automation monitoring through wireless communications is of essential
significance. In this paper, we focus on the development, implementation, and
evaluation of a LoRa-based multi-hop network tailored specifically for
monitoring underground mining environments, where data traffic is sporadic, but
energy efficiency is of paramount importance. We hence define a synchronization
framework that makes it possible for the nodes to sleep for most of the time,
waking up only when they need to exchange traffic. Notably, our network
achieves a sub 40us proven synchronization accuracy between parent-child pairs
with minimum overhead for diverse topologies, rendering it highly viable for
subterranean operations. Furthermore, for proper network dimensioning, we model
the interplay between network's throughput, frame size, and sampling periods of
potential applications. Moreover, we propose a model to estimate devices' duty
cycle based on their position within the multi-hop network, along with
empirical observations for its validation. The proposed models make it possible
to optimize the network's performance to meet the specific demands that can
arise from the different subterranean use cases, in which robustness, low power
operation, and compliance with radio-frequency regulations are key requirements
that must be met. | Luca Scalambrin, Andrea Zanella, Xavier Vilajosana | 2023-10-31T14:55:55Z | http://arxiv.org/abs/2310.20515v1 | # LoRa Multi-Hop Networks for Monitoring Underground Mining Environments
###### Abstract
Internet of Things applications have gained widespread recognition for their efficacy in typical scenarios, such as smart cities and smart healthcare. Nonetheless, there exist numerous unconventional situations where IoT technologies have not yet been massively applied, though they can be extremely useful. One of such domains is the underground mining sector, where enhancing automation monitoring through wireless communications is of essential significance. In this paper, we focus on the development, implementation, and evaluation of a LoRa-based multi-hop network tailored specifically for monitoring underground mining environments, where data traffic is sporadic, but energy efficiency is of paramount importance. We hence define a synchronization framework that makes it possible for the nodes to sleep for most of the time, waking up only when they need to exchange traffic. Notably, our network achieves a sub 40 \(\mu\)s proven synchronization accuracy between parent-child pairs with minimum overhead for diverse topologies, rendering it highly viable for subterranean operations. Furthermore, for proper network dimensioning, we model the interplay between network's throughput, frame size, and sampling periods of potential applications. Moreover, we propose a model to estimate devices' duty cycle based on their position within the multi-hop network, along with empirical observations for its validation. The proposed models make it possible to optimize the network's performance to meet the specific demands that can arise from the different subterranean use cases, in which robustness, low power operation, and compliance with radio-frequency regulations are key requirements that must be met.
synchronized wireless networks, underground mining monitoring, IoT, energy efficiency, multi-hop LoRa
## I Introduction
In recent years, the Internet of Things (IoT) has witnessed remarkable growth and impact across conventional domains, such as smart agriculture, smart cities and smart healthcare [1]. However, there are numerous unexplored applications where IoT could make a significant positive contribution, as is the case with monitoring in the underground mining sector [2]. This sector poses challenging working conditions for laborers, and the automation of monitoring processes could not only accelerate on-site measurements, including those of air quality, pressure, temperature, and structural vibrations, but also considerably reduce the risks faced by workers, which have resulted in numerous fatalities over time [3]. One of the primary challenges in implementing IoT-based monitoring in such scenarios is the communication problem, as they predominantly consist of rock wall tunnels, where radio propagation is extremely challenging due to high signal loss [4]. Consequently, monitoring devices must exhibit reliable communication capabilities, and they should be easily deployable, considering that the locations can be difficult to access.
Among the various IoT solutions available, LoRaWAN stands out as one of the most commonly used for monitoring rugged environments with low data rate requirements, as is the case for underground domains. This is mainly due to its robust long-range coverage, achieved through a spread spectrum technique denominated chirp spread spectrum (CSS), where the signal is modulated by chirp pulses [5]. Nevertheless, LoRaWAN's star topology architecture, in which sensing nodes wirelessly connect to a central point known as _Gateway_, presents significant limitations in subterranean galleries, primarily because a single hop is insufficient to cover the long distances that can be found in such scenarios. To overcome this challenge, a possible solution is to implement a multi-hop network, a concept extensively utilized in the past with the advent of technologies such as WirelessHART and 802.15.4-6TISCH [6, 7]. However, these technologies were not engineered to face the challenges of harsh underground environments, and they were optimized for higher capacities than those provided by LoRa. As a consequence, fitting all the required signaling within LoRa data rates while remaining compliant with duty cycle regulations becomes unfeasible, which leads to the need for the development of a new LoRa-based multi-hop framework.
Building an energy-efficient multi-hop structure requires coordination between devices so communication processes can occur in a synchronized manner. This coordination is usually rooted in clock synchronization approaches that ensure a common notion of time among the entire network [8]. A usual method to achieve synchronization is to rely on a network protocol in which parent nodes act as time reference for their children, using data and/or control traffic to minimize the clock drift between peers [9]. However, maintaining network synchronization often comes at the expense of sending extra packets, which can negatively impact the network. Consequently, the LoRa multi-hop protocol must keep overhead to
a minimum, so as to run efficiently on extremely constrained devices whilst adhering to regional regulations.
In this work, we propose a Time-Division Multiple Access (TDMA) scheme explicitly designed to support multi-hop LoRa-based communication in tree-shaped wireless networks. The proposed protocol has been implemented and tested on real devices to prove its effectiveness and practicality. To summarize, the primary contributions of this work are:
1) Design, implementation and testing of a TDMA multi-hop protocol for LoRa-based wireless networks, with a sub \(40\)\(\mu\)s father-child synchronization accuracy;
2) Theoretical analysis of the inter-dependencies of frame size, throughput and sampling period in a multi-hop network;
3) Model to estimate devices duty cycle depending on their position within the multi-hop network, jointly with measurements for validation.
All these contributions together aim to provide a wider perspective on the feasibility of LoRa multi-hop for industrial scenarios, considering the limiting factors for its practical adoption, such as the band regulations. As far as we know, this is the first study that includes such a perspective. The remainder of this paper is organized as follows: the related work is briefly commented on Sec. II. In Sec. III we detailed the TDMA architecture designed together with a theoretical analysis of the more important parameters, whilst the experimental setup is presented in Sec. IV and the results are provided in Sec. V. Finally, conclusions and future research lines are discussed in Sec. VI.
## II Related Work
In recent years, there has been a growing interest in the design and implementation of LoRa multi-hop networks, due to their high versatility in monitoring applications. In [10], the authors proposed a LoRa multi-hop architecture where the gateway is in charge of sending queries to the child nodes when a sampling operation is required. A significant limitation of this asynchronous design is that end nodes must be kept in a reception state continuously, making it unsuitable for battery-powered devices. Subsequently, Abrardo _et al._[11] proposed an extension of LoRaWAN architecture for monitoring underground infrastructures in Italy. However, the solution only provides a line topology, which is not representative of the most common real-world underground scenarios, and there is also a lack of experimentation, as all the reported results were obtained entirely from simulations. In [12], Ebi _et al._ proposed an interesting synchronous mechanism for monitoring urban drainage systems centered on a relay node that collects data from the other devices. However, the relay requires some special hardware, _i.e._, a double front-end capable of handling both the LoRaWAN layer and the mesh protocol, impacting directly on the device's cost. In addition, the achieved synchronization accuracy is not optimal for the hardware employed, leading to longer reception times that require more energy.
A recent study by Mugerwa _et al._[13] addresses the challenge of packet loss in LoRaWAN networks for devices that are distant from the gateway. Their approach is similar to the one proposed by the LoRa-Alliance, in which devices closer to the gateway take on the role of relays. However, a notable limitation of the method is that it only provides an additional hop, which proves insufficient for underground mining environments. In [14], a library that facilitates the integration of LoRa-enabled devices into a mesh network with a routing protocol of distance-vector type is presented. Unfortunately, this library is exclusively suitable for end devices that can be held in the listening state indefinitely, as it is also the case of the work carried out in [15, 16], rendering it impractical for most underground monitoring applications.
Given the inadequacy of the mentioned approaches for monitoring in underground scenarios, this study proposes a TDMA protocol specifically designed for battery-powered LoRa-enabled devices to be deployed in underground tunnels/galleries, where only one permanent-power sourced gateway is needed at the periphery of the network, as exemplified in Fig. 1.
## III TDMA Architecture
In this section, we provide a detailed explanation of the TDMA protocol we designed, together with a theoretical analysis of the relationship between frame size and throughput, which is fundamental to correctly dimensioning the system. Then, we provide a model that estimates the duty cycle of devices within the multi-hop network based on their position.
### _Principle of Operation_
The multi-hop LoRa protocol we have developed consists of a TDMA system built at the Medium Access Control (MAC) layer on top of the LoRa physical layer (PHY). Fig. 1 illustrates the fundamental operational principle, which consists of a tree architecture composed of one relay node and many other regular nodes. The protocol allocates the time dimension, dividing it into distinct time slots, each designated for specific nodes within the network. To establish seamless wireless connections, every node within the network executes the LoRa multi-hop protocol, ensuring successful communication with both its parent and child nodes, if any. Conversely, the relay node, being the unique gateway-reachable device, must also implement the LoRaWAN stack. Therefore, the relay node undertakes the vital role of receiving all data packets transmitted within the network under the multi-hop system and subsequently uploading them to the Gateway through LoRaWAN communication.
Fig. 1: Time dimension division technique employed between the different devices in the network. This ensures that nodes of the system transmit and receive only in their corresponding slots.
The multi-hop LoRa MAC layer employs a versatile framework consisting of five packet types, each encapsulated within the LoRa PHY layer. Among these packet types, JoinRequest and JoinAccept serve the purpose of enabling devices to join the network. The former is utilized by a joining node to indicate its chosen parent node, while the latter is dispatched by the relay node containing the assigned time slot number within the TDMA framework, upon a successful joining procedure. The next two packets, named UpData and DownData, correspond to the data packets used for sending application information in both the uplink and downlink directions, which take place only once per frame for each device.
As it was mentioned previously, the notion of time is of paramount importance for every device in the system. In this regard, each device is equipped with a clock oscillator, whose oscillation frequency can be customized to get a certain _Tick_ time. Ideally, the tick duration should be the same for all devices, but real oscillators do not behave perfectly due to internal and external factors, preventing all the network's nodes from achieving the same tick period. To overcome this challenge, the multi-hop LoRa system adopts a beacon-based technique that periodically corrects the reference time across the network elements. As exemplified in Fig. 2, the TDMA framework defines a frame period composed of \(N\) slots, which are assigned to specific nodes for data or beacon exchange. Each of the mentioned frames begins with the reception of the fifth packet type, denoted as Beacon, which provides the reference to re-synchronize clocks. This packet contains essential information such as NetworkID and SenderID. It can also be noticed that one dedicated slot within each frame is reserved for the LoRaWAN link, which is exclusively utilized by the relay node to transmit the collected data to the Gateway. Consequently, the throughput of the entire network is limited to one LoRaWAN packet per frame.
Simultaneously, each slot within the frame consists of various distinct time elements, as depicted in Fig. 3, categorized based on the actions to be executed.1 When transmitting a data packet, devices initiate the transmission and then await the corresponding acknowledgment (ACK) packet. Conversely, during packet reception, devices first handle the reception process and subsequently transmit the corresponding acknowledgment message. \(T_{offset}\) is employed as a buffer to start the reception or transmission, as radio chips always require extra time to reach the ReadyToReceive state. A slot \(T_{data}\) (\(T_{x}\) and \(R_{x}\)) enables nodes to transmit or receive data packets of a predefined maximum size (64 bytes in this study) through the PHY LoRa layer, with all devices operating under the multi-hop protocol with the same spreading factor (set to \(SF=9\) in our case).2\(T_{bcn}\) and \(T_{ack}\) represent beacon and acknowledgement message duration, and they are kept as compact as possible with the goal of minimizing radio usage.
Footnote 1: Fig. 3 reports the parameter values used in this study, but the framework can be adapted to other design choices.
Footnote 2: The framework can potentially be adjusted to host links of different capacities, but this would increase the framework complexity and, in turn, the nodes’ energy consumption. Given the typically light traffic demand of the target applications, simplicity and energy efficiency are preferred to capacity.
The final element contained within the slot is referred to as _Guard Time_, \(T_{g}\), which represents an extension of the reception window necessary in TDMA systems to compensate for clock imperfections. If \(T_{F}\) denotes the frame time, which is the time between two consecutive synchronization events, and \(D_{R}\) the relative drift between father and child, \(T_{g}\) window must satisfy
\[T_{g}/2\geq D_{R}T_{F} \tag{1}\]
in order to keep the framework working at all times. It is noticeable that devices with a larger \(D_{R}\) will require a larger \(T_{g}\), which yields lower energy efficiency. To illustrate the most typical synchronization cases, in Fig. 4 we depict different scenarios that can occur when a node is receiving a beacon signal from its parent. The ideal case \(A\) occurs if no drift is present between the devices, whilst \(B\) and \(D\) illustrate the early and late cases in which the child is still able to get the synchronization signal, with \(B\) being the worst case scenario in terms of energy, as the node needs to be in \(R_{x}\) state for the entire \(T_{g}\) interval before the actual reception starts. On the other hand, \(C\) is too early and \(E\) is too late to get the beacon.
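As an illustration of Eq. (1), the minimum guard time can be estimated from the relative clock drift; the drift figure used below is a generic crystal-tolerance value, not a measurement of our hardware.

```python
def min_guard_time(relative_drift_ppm: float, frame_time_s: float) -> float:
    """Smallest guard time T_g (seconds) satisfying Eq. (1): T_g / 2 >= D_R * T_F."""
    return 2.0 * relative_drift_ppm * 1e-6 * frame_time_s

# Example: a 40 ppm parent-child relative drift over a 58.5 s frame requires
# roughly 4.7 ms of additional receive window per synchronization event.
print(min_guard_time(40, 58.5))   # -> ~0.00468 s
```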
### _Frame Size and Network Throughput_
Fig. 3: Slot structure for the different types of slots used for receiving (top) or transmitting (middle) a data packet, or transmitting a beacon (bottom) in the TDMA multi-hop architecture.
Fig. 2: Frames are composed of N slots. To keep synchronization, each node receives and transmits one beacon at the beginning of the frame.
Once the TDMA framework is established, critical parameters, such as the number of slots \(N\) within a frame or the application period, must be carefully selected in order to dimension the network properly. To analyze their impact in terms of energy consumption, it is required to consider the different power consumption states of the nodes, which heavily depend on the hardware employed. Generally, nodes with radio chips can operate in three main states: _Transmission State_, _Reception State_, and _Sleep State_. Note that we do not distinguish between the power state of the micro-controller (\(\mu\)C) and that of the radio interface because, with the boards considered in this study, the \(\mu\)C is activated any time the radio transceiver leaves the sleep state. Therefore, we classify the state of the whole node, and we denote by \(P_{s}\), \(P_{Tx}\) and \(P_{Rx}\) the overall power consumption in Sleep, Transmission and Reception states, respectively. Additionally, we account for the energy consumed by an application running on top of the TDMA system, indicating by \(P_{app}\) its power consumption and by \(\tau_{app}\) its execution time. We assume the application is executed once every \(k\) frames, where \(k\) is an integer parameter. If \(T_{SL}\) denotes the slot duration, the application period \(T_{app}\) can hence be expressed as
\[T_{app}=kT_{F}=kT_{SL}N, \tag{2}\]
where \(T_{SL}\) depends on the maximum number of transmitted bytes per slot and the selected spreading factor \(SF\). As both \(T_{app}\) and \(T_{SL}\) remain constant during execution, their product \(kN\) is also constant (the longer the frame, the lower the number of frames between two application executions).
The mean power \(P_{tot}\) absorbed by a device that receives and sends a beacon, executes the application, and enters the sleep state during the rest of the time, can be written as:
\[\begin{split}& P_{tot}=P_{s}+(P_{Rx}+P_{Tx}-2P_{s})\frac{T_{ bcn}}{T_{SL}N}+\\ &(P_{Rx}-P_{s})\frac{T_{g}}{T_{SL}N}+(P_{app}-P_{s})\frac{\tau_{ app}}{T_{app}}.\end{split} \tag{3}\]
Recalling (1), the guard time is also proportional to the frame duration. Replacing \(T_{g}\) in (3) with its lower bound \(2D_{R}T_{F}\), we get
\[\begin{split}& P_{tot}=P_{s}+(P_{Rx}+P_{Tx}-2P_{s})\frac{T_{ bcn}}{T_{SL}N}+\\ &(P_{Rx}-P_{s})2D_{R}+(P_{app}-P_{s})\frac{\tau_{app}}{kT_{SL}N}. \end{split} \tag{4}\]
From (4), it is evident that, to minimize power consumption, \(N\) should be increased and, since \(kN\) must remain constant, \(k\) should be set to \(1\).
This result suggests prolonging the frame duration as much as possible, such that \(T_{app}=T_{F}=NT_{SL}\), allowing the application to be executed once per frame. However, as mentioned earlier, the TDMA architecture can handle only one LoRaWAN transmission from the relay node to the Gateway per frame. Therefore, if the entire network generates more than one packet to be sent to the Gateway within a single frame, it will result in an overload situation. Hence, if \(n\) represents the number of devices within the network, \(T_{app}\) is limited by
\[nT_{F}\leq T_{app}, \tag{5}\]
which, by (2), can be expressed as \(n\leq k\). These expressions signify that reducing the parameter \(k\), _i.e._, keeping the frame as long as \(T_{app}\), yields better energy efficiency, but at the cost of imposing a limitation on the maximum number of devices that the network can accommodate without collapsing. In a practical setting, then, the frame size is limited by \(T_{app}/n\). Alternatively, it is possible to allocate more LoRaWAN slots in the frame, slightly changing the framework.
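The mean-power model of Eq. (4) is straightforward to evaluate for a given hardware profile; the sketch below is a direct transcription (our variable names), with the guard time already replaced by its lower bound \(2D_{R}T_{F}\) and the overload condition of Eq. (5) noted as a comment.

```python
def mean_power(P_s, P_rx, P_tx, P_app, T_bcn, T_sl, N, D_r, tau_app, k):
    """Mean device power of Eq. (4): sleep baseline plus beacon exchange,
    guard-time reception (T_g at its lower bound 2 * D_r * T_F) and application terms."""
    frame = T_sl * N                      # frame duration T_F = T_SL * N
    return (P_s
            + (P_rx + P_tx - 2 * P_s) * T_bcn / frame
            + (P_rx - P_s) * 2 * D_r
            + (P_app - P_s) * tau_app / (k * frame))

# Overload condition of Eq. (5): with one LoRaWAN slot per frame, the number of
# devices n must satisfy n <= k (equivalently, n * T_F <= T_app).
```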
### _Duty Cycle_
The duty cycle is another critical parameter that restricts radio usage in sub-GHz unlicensed bands. In Europe, for instance, ETSI established a maximum duty cycle of 1% per channel per device. In multi-hop networks, nodes closer to the relay tend to be the root of larger sub-trees, resulting in increased data transmissions. Consequently, duty cycle restrictions become significant for these devices within dense networks.
If we denote the number of available (orthogonal) radio channels by \(c\) and the application period by \(T_{app}\), and \(m_{i}\) counts for the number of devices within the sub-tree rooted by the \(i\)-th device, its duty cycle \(D_{C_{i}}\) is given by
\[D_{C_{i}}=\frac{m_{i}T_{ack}+(1+m_{i})T_{data}+kT_{bcn}}{T_{app}}\frac{1}{c}\,. \tag{6}\]
From this expression, the duty cycle of different devices within the multi-hop network can be easily estimated. Additionally, as in some cases the duty cycle \(D_{C}\) restriction can be the main constraint for \(T_{app}\), the minimum application period will be given by the most restrictive condition between (5) and
\[T_{app}\geq\frac{m_{i}T_{ack}+(1+m_{i})T_{data}+kT_{bcn}}{D_{C}}\frac{1}{c}\,.\]
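Both Eq. (6) and the resulting lower bound on \(T_{app}\) map one-to-one onto code; the functions below are a direct transcription with our parameter names, where the timing constants are to be filled in with the slot parameters of Fig. 3.

```python
def duty_cycle(m_i, T_ack, T_data, T_bcn, k, T_app, c=1):
    """Duty cycle of device i (Eq. (6)): m_i forwarded ACKs, (1 + m_i) data
    transmissions and k beacons per application period, spread over c channels."""
    return (m_i * T_ack + (1 + m_i) * T_data + k * T_bcn) / (T_app * c)

def min_app_period(m_i, T_ack, T_data, T_bcn, k, D_c=0.01, c=1):
    """Smallest T_app that keeps device i within the regulatory duty cycle D_c."""
    return (m_i * T_ack + (1 + m_i) * T_data + k * T_bcn) / (D_c * c)
```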
## IV Experimental setup
Fig. 4: Schemes \(A\) to \(E\) exemplify the various potential states of a child node while receiving a beacon transmitted by its parent node. In cases \(A\), \(B\), and \(D\), the devices achieve complete synchronization, whereas synchronization does not occur in cases \(C\) and \(E\).
The system measurements were conducted using a Logic Analyzer (Digilent Digital Discovery) jointly with a computer and an indoor setup of LoRa-based IoT devices, as illustrated in Fig. 5, where attenuators of \(20\) dB were included on each device, _i.e._, \(40\) dB attenuation for each link, to simulate longer distances, given the challenges of accessing underground assets for real-world experimentation. Apart from the radio link between the IoT devices, a wired connection between the nodes and the Logic Analyzer was employed for the measurement of General Purpose Input/Output (GPIO) pin signals. The hardware components utilized in the LoRa devices consisted of an SX1276 Radio chip and a 32-bit ARM architecture micro-controller running the FreeRTOS operating system. For the TDMA system, a 32 kHz crystal was utilized as the time source, which leads to a tick duration of \(30.5\)\(\mu\)s. For the implementation, each frame comprised 90 slots of 21281 ticks, which results in \(T_{F}=58.5\) s when using a constant spreading factor \(SF=9\).
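The quoted timing figures follow from the crystal frequency; assuming the nominal 32 kHz crystal is the standard 32.768 kHz watch crystal (consistent with the quoted 30.5 µs tick), the slot and frame durations are:

```python
tick_s = 1 / 32_768                  # ~30.5 us tick from a 32.768 kHz crystal
slot_ticks = 21_281
slots_per_frame = 90

T_slot = slot_ticks * tick_s         # ~0.649 s per slot
T_frame = slots_per_frame * T_slot   # ~58.5 s frame period
print(tick_s * 1e6, T_slot, T_frame) # -> ~30.5 us, ~0.649 s, ~58.5 s
```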
## V Results
### _Synchronization error_
The synchronization error is a fundamental property of the system that quantifies the time discrepancy between two devices. To measure this parameter, both the father and children generate a signal through the GPIO pin at the end of the synchronization slot. Let \(t_{i}^{syn}\) represent the time at which the synchronization event slot ends for the \(i\)-th device. Thus, the synchronization error, \(\varepsilon_{synch}\), can be expressed as
\[\varepsilon_{synch}=t_{fath}^{syn}-t_{child}^{syn}.\]
Due to quantization effects, the signal generated at \(t_{i}^{syn}\) can only occur at multiples of the \(i\)-th device's tick duration. Consequently, the maximum resolution achievable is limited to one tick duration. Fig. 6 - (top) presents the measurements of \(\varepsilon_{synch}\) obtained for a **star topology** with Node 0 acting as a relay and the rest of Nodes as children, as exemplified in Fig. 5. It is evident from the plot that the synchronization error is bounded to approximately \(30.5\)\(\mu\)s, which corresponds to the tick duration of the clock used. As a result, this implementation represents the first LoRa multi-hop system with the maximum resolution achievable for a \(32\) KHz clock. Contrary to the star topology, Fig. 6 - (bottom) depicts the \(\varepsilon_{synch}\) measurements obtained with devices arranged in a **line topology** as shown on Fig. 1. It must be noticed that the error is measured with respect to the relay, causing an error increment towards the end of the chain. Nevertheless, as it can be recognized, the father-child error of the node pairs \(0-1\), \(1-2\) and \(2-3\) is always bounded to approximately \(30.5\)\(\mu\)s, proving that every node in the network will be able to communicate accurately the relevant data generated by diverse monitoring applications.
### _Duty Cycle_
For duty cycle measurements, the network was configured with a star topology as displayed in Fig. 5, with Node 0 acting as the relay. Fig. 7 illustrates the results obtained with a sampling period of \(T_{app}=4T_{F}\) and considering only one channel, _i.e._, \(c=1\). The beginning of each frame is easily identifiable due to the four transmissions of beacons, whose Time-on-Air is \(ToA=103.4\) ms. The uplink data size used in the multi-hop network was \(24\) bytes, and so the \(ToA\) becomes \(226.3\) ms, which can be identified in the figure together with the acknowledgment messages sent. Additionally, each frame includes a LoRaWAN transmission by the relay, which can be observed as the longest transmission. Although the useful data remains the same, _i.e._, \(24\) bytes, the \(ToA\) increases to \(267.26\) ms due to the overhead introduced by the LoRaWAN stack, which typically adds about \(12\) bytes of additional data, at best.
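The quoted Time-on-Air values are consistent with the standard LoRa ToA formula at SF9, assuming the typical 125 kHz bandwidth, 4/5 coding rate, 8-symbol preamble, explicit header, and CRC. The payload sizes used below (2 bytes for a beacon, 29 bytes for a multi-hop uplink, i.e., 24 bytes of data plus protocol header, and 36 bytes for the LoRaWAN frame) are our assumptions chosen to match the measured numbers; they are not stated explicitly in the text.

```python
import math

def lora_toa_ms(payload_bytes, sf=9, bw_hz=125_000, cr=1, preamble=8,
                explicit_header=True, crc=True, low_dr_opt=False):
    """Standard LoRa Time-on-Air formula (SX127x-style), result in milliseconds."""
    t_sym = (2 ** sf) / bw_hz
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return ((preamble + 4.25) + n_payload) * t_sym * 1e3

print(lora_toa_ms(2))    # assumed beacon payload      -> ~103.4 ms
print(lora_toa_ms(29))   # assumed multi-hop uplink    -> ~226.3 ms
print(lora_toa_ms(36))   # assumed LoRaWAN frame size  -> ~267.3 ms
```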
Following the experiment shown in Fig. 7, we calculated the total \(T_{x}\) time of the relay during the \(T_{app}\) period for various scenarios with different values \(m_{i}\), in order to measure the relay's duty cycle. The findings are depicted in Fig. 8, together with the \(D_{C_{0}}\) estimation obtained with the model (6) and the relative error between them. The data reflects a linear behaviour in \(m_{0}\), as expected from (6), and the maximum relative error obtained between the model and the measurements was \(1.63\%\), which is mainly due to the difference between the ideal \(ToA\) of the LoRa frames and the time actually taken by the radio chip to perform the transmission, since the radio frequency (RF) hardware and software always introduce small delays that can slightly affect the on-air time.
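Since model (6) is not reproduced in this excerpt, the following sketch only illustrates the linear-in-\(m_{0}\) structure of the relay duty cycle described above; the acknowledgment ToA and the assumption that each child message triggers one LoRaWAN forward are illustrative placeholders, not values taken from the paper.

```python
def relay_duty_cycle(m0, t_app_s, frames_per_app=4,
                     toa_beacon=0.1034, n_beacons=4,
                     toa_ack=0.02, toa_lorawan=0.26726,
                     forward_each_msg=True):
    """Rough relay duty-cycle estimate: relay Tx time over the sampling period.

    m0: messages received (and acknowledged) by the relay per frame.
    toa_ack and forward_each_msg are assumptions; they do not come from model (6).
    """
    n_lorawan = m0 if forward_each_msg else 1
    tx_per_frame = (n_beacons * toa_beacon      # beacon burst at the frame start
                    + m0 * toa_ack              # one acknowledgment per child message
                    + n_lorawan * toa_lorawan)  # LoRaWAN report(s) to the gateway
    return frames_per_app * tx_per_frame / t_app_s

T_F = 58.5
print(f"{relay_duty_cycle(m0=4, t_app_s=4 * T_F) * 100:.2f} % estimated duty cycle")
```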
## VI Concluding Remarks and Future Work
In the context of the abundance of IoT applications, certain domains remain underexplored in terms of the potential benefits that IoT could offer them. In this research, our focus was centered on addressing the need for automated wireless
Fig. 5: Setup of one relay and four children communicated via LoRa multi-hop, while a logic analyzer is wired connected to every device to facilitate GPIO signal measurements.
Fig. 6: Synchronization error with respect to the relay (Node 0) in a star (top) and line topology (bottom).
monitoring in the underground mining sector. To this end, we have developed, implemented, and evaluated a multi-hop TDMA protocol specifically tailored for inexpensive LoRa devices. With the use of this low overhead protocol, a proven sub \(40\)\(\mu\)s synchronization accuracy between parent-child pairs was achieved for various topologies. This, combined with future efforts aimed at conducting field tests in underground scenarios, could eventually establish its complete suitability for use in such extreme assets. Furthermore, we presented a theoretical analysis which deals with the relationships between the frame size, the network's throughput, and the sampling period of applications executed on the devices. Moreover, we introduced a model to estimate the duty cycle of the different members of the multi-hop network, together with empirical measurements for its validation. Our future work also includes the reduction of the baseline cost to keep the network synchronized, together with the implementation of a simulation framework which will allow us to explore how scalable the proposed protocol implementation and model are when the number of devices increases.
|
2309.05806 | Simulated multi-component CuZr(Al) metallic glasses akin to experiments | We study a three-component CuZrAl metallic glass system by means of a
combined Monte Carlo and Molecular Dynamics simulations scheme. This hybrid
method allows us to generate equilibrated samples at temperatures below the
conventional glass transition for the first time, achieving a more stable
glassy regime. By using a realistic potential for the interactions of metallic
species, we explore the kinetics, thermodynamics, and rheology of a CuZrAl
glass, and then compare these findings with those of the ubiquitous CuZr.
Remarkably, the resulting sheared glassy configurations show an abrupt stress
drop corresponding to the shear band, akin to experimental observations. Our
results pave the way for theoretical studies of complex metallic glasses and
offer comparisons with experiments. | Rene Alvarez-Donado, Silvia Bonfanti, Mikko Alava | 2023-09-11T20:17:44Z | http://arxiv.org/abs/2309.05806v2 | # Simulated multi-component metallic glasses akin to experiments
###### Abstract
We study a three-component metallic glass system by means of a hybrid Monte Carlo and Molecular Dynamics algorithm that allows the generation of equilibrated samples at temperatures below the conventional glass transition, beyond the reach of standard methods. Using a realistic potential for the atomic interactions, we explore the kinetics, thermodynamics, and rheology of a Cu-Zr-Al metallic glass composition in the ultrastable glass regime, showing in particular how the configurational entropy depends on temperature, and compare it to the ubiquitous Cu-Zr one. Our results pave the way for theoretical studies of complex metallic glasses and comparisons with experiments.
Metallic glasses (MGs) are an intriguing class of materials characterized by an amorphous atomic structure, formed by rapidly cooling high-temperature metallic liquids to room temperature [1; 2; 3]. This unique disordered structure grants them several outstanding properties, such as unprecedented mechanical features, including high strength and a high elastic limit. Metallic glasses therefore have the potential to surpass the capabilities of traditional materials in various technological and industrial applications [2; 4; 5]. Nevertheless, the major limitation that prevents their use on a large scale is the difficulty of avoiding crystallization of the samples during the cooling process [6]. Recent advances in synthesis techniques have come from vapor deposition methods that allow the production of MGs with enhanced stability, known as ultrastable metallic glasses [7; 8; 9; 10; 11]. These are thought to be equivalent to conventionally liquid-quenched MGs that have undergone aging for thousands of years [12]. The resultant shift in the glass transition temperature has been shown, e.g., for the ultrastable Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) MG in Ref. [9]. This trend was first discovered by Ediger and co-workers for the class of organic glasses [13]. Ultrastable glasses are characterized by extraordinary thermodynamic and kinetic stability and exceptional mechanical properties [13; 14; 15; 16].
The molecular mechanisms underlying the glass-forming ability (GFA) in MGs and the development of the ultrastable state are far from being understood, a challenge characterizing the broad spectrum of glasses [17; 18]. Molecular simulations have been fundamental for investigating the properties of glasses from the microscopic point of view [19; 20]. However, simulations suffer from unrealistically high cooling rates (\(\sim\![1-100]\) K/ns), resulting in computer-generated metallic glasses with properties that differ significantly from those observed in experiments [21; 22; 23; 24; 25]. Here, we close this gap by obtaining _in silico_ samples of realistic CuZr-based ultrastable MGs similar to experiments, using an efficient hybrid Monte Carlo and Molecular Dynamics (MC+MD) simulation approach. Our study focuses on two-component Cu\({}_{50}\)Zr\({}_{50}\) MGs as a starting point, and on ternary Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) MGs, both known for their exceptional GFA, outstanding mechanical properties, and significant potential in engineering applications [9; 26; 27; 28; 29].
Recently, the timescale limitation of standard algorithms has been overcome through the development of a very efficient method called Swap Monte Carlo (SMC) [30; 31; 32]. Here the equilibration time increases less rapidly than the relaxation time \(\tau_{\alpha}\) as the temperature decreases [33; 34], allowing the computation of the equilibrium configurational entropy below the experimental \(T_{g}\) [35; 36; 37]. The SMC algorithm has so far seemed to be the sole means of investigating the properties of ultrastable glasses _in silico_. Despite the accelerated dynamics facilitated by swap moves, the crystallization effects commonly observed in conventional simulations continue to occur in SMC; they can, however, be resolved by enforcing a degree of polydispersity, which effectively prevents crystallization and enables the generation of deeply cooled equilibrium liquid configurations, even at temperatures below the glass transition \(T_{g}\), comparable with experiments. So far SMC has been applied only to a simplified model of metallic glass, a ternary Lennard-Jones mixture that does not consider the specific nature of the constituents [38], therefore limiting our knowledge of the chemical composition landscape for MGs.
An intriguing solution to accelerate the sampling of the chemical configuration space [39] comes from the field of multicomponent alloys and combines MC methods, MD, and realistic short-range potentials, e.g., the embedded-atom method (EAM) [40]. Only very recently has this technique been applied to glasses to obtain realistic samples of binary CuZr MG in computer simulations [41], where it was shown that shear transformation zones are limited to small clusters of particles. Here we demonstrate that this approach yields an algorithm capable of handling various concentrations of different species and that it can be used to produce multi-component metallic glasses _in silico_. We introduce here for the first time equilibrium configurations of ternary MGs, specifically Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\), similar to experiments. We examine their thermodynamic and ultrastable states and compare
the mechanical properties. Our work presents an example for generating computationally realistic models for any type of metallic glass consisting of several atomic species.
Simulation setup.--The interatomic interactions are simulated through an Embedded Atom Method (EAM) as developed by [27]. For thermodynamics and kinetics, we performed simulations consisting of \(N\)=100000 atoms in a cubic box with periodic boundary conditions in three dimensions. Our simulations are performed with LAMMPS [42], using a time step \(\Delta t\)=1 fs. The glass state is obtained through quenching in the isobaric-isothermal ensemble (\(NpT\)) from the liquid at 2000 K to 300 K using a cooling rate \(\kappa\) = 10\({}^{12}\) K/s. We save configurations every 50 K during the cooling process for further analysis. The cooling process is performed by integrating the Nose-Hoover equations with damping parameters \(\tau_{T}\)=2 fs and \(\tau_{p}\)=5 ps for the thermostat and barostat. All results are obtained keeping the external pressure \(p\)=0.
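The bookkeeping implied by this protocol (quench length, number of MD steps, dump interval) can be summarized as follows; this is simple arithmetic on the values stated above, not part of the actual LAMMPS input.

```python
# Quench bookkeeping from the stated protocol: 2000 K -> 300 K at 1e12 K/s, 1 fs timestep.
T_HI, T_LO = 2000.0, 300.0     # K
RATE = 1.0e12                  # K/s cooling rate
DT = 1.0e-15                   # s (1 fs MD timestep)

quench_time = (T_HI - T_LO) / RATE       # 1.7e-9 s of simulated time
n_steps = round(quench_time / DT)        # 1,700,000 MD steps
dump_every = round(50.0 / RATE / DT)     # one saved configuration every 50 K -> 50,000 steps

print(n_steps, dump_every)
```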
Hybrid Molecular Dynamics-Monte Carlo (MD+MC) algorithm--In order to generate glasses that emulate the behavior observed in experiments, we employ a hybrid Molecular Dynamics-Monte Carlo (MD+MC) scheme under the variance-constrained semi-grand canonical ensemble (VC-SGC, see the Supplementary Information) [39]. This hybrid scheme allows exploring the configurational degrees of freedom by randomly selecting an atom and attempting to change its type, while also calculating the corresponding energy and concentration changes. Acceptance of these transmutations follows the Metropolis criterion, ensuring the preservation of detailed balance. On the other hand, the relaxation processes are accounted for by the MD integration steps. To maintain the desired composition within the system, we set the variance parameter \(\kappa\)=10\({}^{3}\). Furthermore, we evaluate the differences in chemical potential relative to Zr using hybrid MD+MC simulations under the semi-grand canonical ensemble (SGC) at a temperature of 2000 K. The specific set of parameters that minimize the composition errors in relation to the desired concentration can be found in the Supplementary Information.
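As a rough illustration of the MC part of the scheme, the sketch below implements a plain semi-grand-canonical transmutation sweep with Metropolis acceptance; the variance constraint of the actual VC-SGC ensemble [39] (the \(\kappa\) term) is deliberately omitted, `energy_fn` stands in for the EAM energy evaluation, and the sign convention adopted for \(\Delta\mu\) is an assumption of this sketch.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def sgc_transmutation_sweep(types, positions, energy_fn, dmu, T, n_attempts,
                            species=("Cu", "Zr", "Al"), rng=None):
    """Simplified SGC MC sweep: random type swaps accepted with Metropolis.

    energy_fn(types, positions) -> total energy in eV (stand-in for the EAM call);
    dmu[s] is the chemical-potential difference of species s relative to Zr.
    The VC-SGC variance constraint on the composition is not included here.
    """
    rng = rng or np.random.default_rng()
    beta = 1.0 / (K_B * T)
    e_old = energy_fn(types, positions)
    for _ in range(n_attempts):
        i = rng.integers(len(types))
        old_s = types[i]
        types[i] = rng.choice([s for s in species if s != old_s])
        e_new = energy_fn(types, positions)
        d_arg = (e_new - e_old) - (dmu[types[i]] - dmu[old_s])
        if rng.random() < np.exp(-beta * d_arg):
            e_old = e_new          # accept the transmutation
        else:
            types[i] = old_s       # reject and restore the old species
    return types

# In the hybrid scheme, such sweeps alternate with short NpT MD runs
# that take care of the positional relaxation (here left to LAMMPS).
```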
Firstly, we compare the thermal evolution of the potential energy per particle between a pure MD simulation and the MD+MC algorithm for the two alloys. As shown in Figure 1, the hybrid algorithm allows us to quench the liquid with properties similar to those reported in experiments. It is worth noting that both methods produce the same results at high temperatures. Therefore, we saved a configuration from the pure MD simulation at 1100K, where the liquid is still in equilibrium. This configuration was used as the starting point for the MD+MC simulation. Finally, the thermal evolution of the potential energy presented in Figure 1 was obtained by averaging over 5 independent simulations in each case.
It is well known that the glass-forming ability (GFA) of CuZr alloys improves when a small percentage of Al is included [7]. In a previous study of the same alloys [43], we estimated \(T_{g}^{MD}\) to be 623 and 713 K for Cu\({}_{50}\)Zr\({}_{50}\) and Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\), respectively. By following a similar protocol, we observe the same trend in the MD+MC algorithm, indicating that the method mimics the melt-quench procedure but provides equilibrium configurations for temperatures below the conventional \(T_{g}\), meaning that it creates ultrastable glasses. Throughout the quenching process, in both pure MD and MD+MC, we carefully analyzed the structure and found no evidence of crystallization in the alloys.
Thermodynamics.--Once we successfully create glasses under experimental conditions, our focus shifts
Figure 1: Thermal evolution of the potential energy per atom of the binary Cu\({}_{50}\)Zr\({}_{50}\) and the ternary Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) MGFLs with a cooling rate of 10\({}^{12}\) K/s using both a pure MD and a hybrid algorithm (red lines). At high temperatures, both methods exhibit similar results. However, as the temperature decreases, the behavior diverges significantly. The MD+MC method remains in equilibrium for lower temperatures compared to the pure MD approach.
Figure 2: Configurational entropy as a function of temperature obtained through RS method for Cu\({}_{50}\)Zr\({}_{50}\) (blue) and Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) (red)
to exploring their thermodynamic properties. To begin with, we determine the configurational entropy of the glass formed by MD+MC via the formula \(S_{conf}=S_{tot}-S_{vib}\), where \(S_{tot}\) represents the total entropy and \(S_{vib}\) stands for the vibrational contribution. We obtain both entropies using the reversible scaling (RS) method [44], with the Uhlenbeck-Ford potential and the Einstein crystal serving as reference systems, respectively [45; 46; 47] (see the SI). Figure 2 displays the configurational entropy (\(S_{conf}\)) plotted against temperature for both the binary Cu\({}_{50}\)Zr\({}_{50}\) and ternary Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) alloys. As anticipated, \(S_{conf}\) decreases with decreasing temperature until it freezes in at lower temperatures due to the glass transition. The measured residual entropy values, representing the constant \(S_{conf}\) value at the occurrence of the glass transition, were 0.049 and 0.035 J/mol-K for Cu\({}_{50}\)Zr\({}_{50}\) and Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\), respectively. Note in particular that the \(S_{conf}\) of Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) appears lower than that of the binary alloy; the reason is that the ternary alloy possesses a higher GFA, allowing it to reach deeper values before the freeze-in at \(T_{g}\) (which is higher). The reported value of \(S_{conf}\) in the Cu\({}_{50}\)Zr\({}_{50}\) alloy differs by only about 20% from the one reported _in situ_ by Smith _et al._ [48]. We are dealing with glass configurations whose glass transition temperatures are lower than those obtained in conventional glasses, so a lower value of the residual entropy is expected. A comparison with experiments for the ternary system would be welcome.
Following experiments on UMG using the vapor-deposition method, a common approach to studying their stability is by performing calorimetric measurements during a heating process. In Figure 3, we compare the behavior of glasses prepared using conventional MD with those created using the MD+MC algorithm. We heat the glass at the same rate of \(10^{12}\) K/s and save the configuration every 20 K. Then, we compute the heat capacity using its statistical definition: \((\langle E^{2}\rangle-\langle E\rangle^{2})/k_{B}T^{2}\), where \(E\) is the energy of the system, and \(k_{B}\) is the Boltzmann constant. The insets of Figures 3\(a)\) and 3\(b)\), which show the potential energy of the binary and ternary alloys, respectively, also illustrate the difference between the two kinds of glasses as a function of temperature. Clearly, the devitrification process of the glass created through the hybrid MD+MC happens at a higher temperature, approximately 18% above \(T_{g}\)[50; 49; 15]. In vapor-deposited experiments [7], this temperature is called the "onset" temperature \(T_{o}\), and it is generally between 6 and 10% above the conventional \(T_{g}\), reflecting the much larger kinetic stability reached by this method. Since our MD+MC glass produces similar results, we can confirm that the Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) and Cu\({}_{50}\)Zr\({}_{50}\) alloys produced by this method are ultrastable configurations.
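For reference, the fluctuation estimator used here is straightforward to apply to the stored energy samples; a minimal sketch (with synthetic data standing in for the MD output) is given below.

```python
import numpy as np

K_B = 8.617333262e-5  # eV/K

def heat_capacity(energy_samples_ev, T):
    """C = (<E^2> - <E>^2) / (k_B T^2), from total-energy samples at fixed T (K).

    Divide by the number of atoms for a per-atom value; units are eV/K here.
    """
    e = np.asarray(energy_samples_ev, dtype=float)
    return e.var() / (K_B * T ** 2)

# Illustration only: Gaussian pseudo-data in place of the real energy time series.
rng = np.random.default_rng(0)
print(heat_capacity(rng.normal(loc=-5.0e5, scale=30.0, size=20_000), T=700.0))
```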
_Rheology.--_ For mechanical tests we perform athermal quasistatic simulations [51]: starting from the configurations at \(T\)=300 K, we shear the simulation box along the \(x\) direction by an amount \(\delta\gamma\). We perform energy minimization after each strain step with the fast inertial relaxation engine (FIRE) [52] until the system reaches mechanical equilibrium, i.e., until the maximum force falls below \(10^{-10}\) eV/Å.
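Only the control flow of this protocol is sketched below; `apply_shear_step`, `minimize_fire` and `measure_shear_stress` are placeholders for the corresponding simulation-back-end calls (e.g., LAMMPS), so this is a scheme of the strain-step/minimize/record loop rather than a runnable driver.

```python
import numpy as np

def athermal_quasistatic_shear(config, d_gamma=1e-4, gamma_max=0.2, fmax=1e-10):
    """AQS loop: affine strain increment, FIRE minimization, stress measurement.

    The three helper calls are placeholders for the simulation back end.
    """
    gammas, stresses = [], []
    gamma = 0.0
    while gamma < gamma_max:
        config = apply_shear_step(config, d_gamma)     # tilt the box by d_gamma in xy
        config = minimize_fire(config, fmax=fmax)      # relax until max force < fmax (eV/A)
        gamma += d_gamma
        gammas.append(gamma)
        stresses.append(measure_shear_stress(config))  # sigma_xy for the stress-strain curve
    return np.array(gammas), np.array(stresses)
```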
Figure 4 (top) shows how the composition and the preparation history influence the mechanical behavior. The addition of aluminum increases the shear modulus by a few percent [53], and to compare the glasses at peak stress we scale away the dependence on the modulus. These are typical stress-strain curves, where after an initial elastic part we see the typical glass response: strain-softening close to the maximum, fluctuations, and a drop in stress before plastic flow. In particular, as in model glasses [41], we observe that the ultrastable glasses exhibit drastic stress drops which correspond to the appearance of a system-spanning shear band (Inset). The well-prepared glasses do so at slightly larger strains than the basic ones, and in scaled units they are noticeably stronger, here by about 20%. The final state of flow may be summarized by the examples of the strain fields in Fig. 4 (bottom), with the features associated with the post-yield shear bands depending on the nature of the stress peak: for the ultrastable glasses we find a well
Figure 3: Heat capacity as a function of temperature obtained through RS method for \(a)\) Cu\({}_{46}\)Zr\({}_{46}\)Al\({}_{8}\) and \(b)\) Cu\({}_{50}\)Zr\({}_{50}\). The insets show the potential energy of the binary and ternary alloys, respectively, as a function of temperature during the heating process using MD (thin lines) and MD+MC (dashed lines). |
2304.00982 | Eternal black holes and quantum temporal correlations | It was recently suggested that quantum theory may support a unification of
the notions of space and time, as such, treating the spatial and temporal
correlations equally. To be more precise, the partial transposition of the
maximally entangled state of two quantum systems at one time exactly matches
the temporal correlations of one quantum system that unitary evolved between
two distinct moments of time. In this essay we consider this equivalence of
spatial and temporal correlations in the context of AdS/CFT correspondence. We
argue that in the high temperature limit the thermofield double state is the
equivalent of temporal correlation of a quantum theory unitary evolving at two
times. Thus, on the gravity side, we imagine that the temporal correlations
correspond to a black hole at one time connected behind the horizon to the same
black hole at another time by an Einstein-Rosen bridge. We show that the
correspondent spacetime of this temporal wormhole is the interior solution of
AdS-Schwarzschild black hole. Implications of this correspondence are briefly
considered. | Ovidiu Racorean | 2023-03-31T16:53:45Z | http://arxiv.org/abs/2304.00982v2 | # Eternal black holes and quantum temporal correlations
###### Abstract
It was recently suggested that quantum theory may support a unification of the notions of space and time, as such, treating the spatial and temporal correlations equally. To be more precise, the partial transposition of the maximally entangled state of two quantum systems at one time exactly matches the temporal correlations of one quantum system that unitary evolved between two distinct moments of time. In this essay we consider this equivalence of spatial and temporal correlations in the context of AdS/CFT correspondence. We argue that in the high temperature limit the thermofield double state is the equivalent of temporal correlation of a quantum theory unitary evolving at two times. Thus, on the gravity side, we imagine that the temporal correlations correspond to a black hole at one time connected behind the horizon to the same black hole at another time by an Einstein-Rosen bridge. We show that the correspondent spacetime of this temporal wormhole is the interior solution of AdS-Schwarzschild black hole. Implications of this correspondence are briefly considered.
Essay written for the Gravity Research Foundation
2023 Awards for Essays on Gravitation
March 31, 2023
## 1 Introduction
It was recently shown that quantum theory may support a unification of the notions of space and time, as such treating spatial and temporal correlations equally. Whether we talk about using a pseudo-density matrix formalism [1], [2] or a quantum generalization of Bayes' theorem [3], all these attempts can be viewed as an extension of the Jamiolkowski isomorphism [4], [5], [6] that maps the spatial correlations of two distinct quantum systems at one time to temporal correlations that involve a single quantum system at two different times. Accordingly, the spatial correlations were found to correspond exactly to the temporal correlations by a partial transposition. To be more precise, the partial transposition of the maximally entangled state of two quantum systems exactly matches the correlations of one quantum system that unitarily evolved between two distinct moments of time.
We should emphasize here that this equivalence implies one slight modification of standard quantum theory. As is explicitly argued in [3], the quantum system measured at two times is defined on two distinct Hilbert spaces, i.e., as two different quantum systems, such that the temporal correlations are defined on a tensor product of two Hilbert spaces.
In this essay we extend this line of thinking to the AdS/CFT correspondence. Thus, we argue that in the high-temperature limit the thermofield double state [9], [10], [11], i.e. the spatial correlations of two spatially separated CFT's at one time, is equivalent to the temporal correlations of one CFT evolving unitarily between two times or, equally, of two different CFT's temporally separated. Consequently, on the gravity side of the AdS/CFT duality the temporal correlations should correspond to one black hole at two different times or, equally, two distinct temporally separated black holes. Further, we assume that the duality holds for the temporal correlations as well, such that the two temporally separated black holes are connected behind the horizon by an Einstein-Rosen bridge. We may see the spatial wormhole as an Einstein-Rosen bridge connecting two spatially separated black holes on the same spacelike hypersurface, and the temporal wormhole as an Einstein-Rosen bridge connecting two temporally separated black holes on two different spacelike hypersurfaces. To this end we can consider the two types of wormholes (spatial and temporal) as being equivalent when the roles of the space and time coordinates are interchanged.
At this point we should ask what the corresponding spacetime of these unusual temporal wormholes might be. The construction of the spacetime of temporal wormholes is related to the remark that the radial space coordinate and the temporal coordinate exchange their character in the metric of the corresponding AdS-Schwarzschild black holes. Since such an interchange of the spatial and temporal roles occurs behind the event horizon, we assume that the corresponding spacetime of temporal wormholes should have its origin in the eternal AdS-Schwarzschild solution in Kruskal coordinates as seen from the black hole's interior [12], [13], [14], [15].
We further assume the interior solution of the AdS-Schwarzschild black holes and
argue that in the Penrose diagram the two temporally separated black holes are smoothly glued at the hypersurface, \(t=0\). An intriguing implication when we consider identifying the interior solution to the temporal wormhole is that the event horizon is a temporal constant such that all inward or outward test particles cross the horizon simultaneously. It seems like from the perspective of an internal observer there is no sense of past or future.
Although numerous aspects remain unclear for now, the temporal wormholes may prove to be important at the conceptual level.
## 2 Equivalence of space correlations and time correlations
In quantum theory, space and time are not treated on an equal footing. In this sense, the maximally entangled state of two quantum systems at one time is treated differently with respect to a single unitarily evolving system at two times. The maximally entangled state is described by a tensor product of two Hilbert spaces, while the single state at two times is described by a single Hilbert space and a dynamical map between input and output states.
However, in recent work [1], [2], [3], [4], [5], [6] we observe a tendency to bring these two separate quantum descriptions under the same conceptual umbrella. These attempts consider a slightly modified Hilbert space formalism, in the sense that a pair of spacelike-separated systems and a single system at two different times (temporally separated) are both described on a tensor product of two Hilbert spaces. In addition, although spatial and temporal correlations are both described by operators on a tensor product, they differ from one another by a partial transpose. Another way to express this equivalence is to consider that a maximally entangled state is equal to a maximally mixed state that unitarily evolves between two times.
Let us now explore this concept in more depth. To do this we begin by considering the case of a pure state \(\left|\Psi\right\rangle\), with \(\rho=\left|\Psi\right\rangle\left\langle\Psi\right|=\frac{1}{d}\sum_{n,j}\left|n\right\rangle\left|n\right\rangle\left\langle j\right|\left\langle j\right|\), of two quantum systems, \(L\) and \(R\), defined on the Hilbert space \(\mathcal{H}=\mathcal{H}_{R}\otimes\mathcal{H}_{L}\) such that:
\[\left|\Psi\right\rangle=\frac{1}{d}\sum_{n}\left|n\right\rangle\left|n\right\rangle. \tag{1}\]
Now, the first step in finding the analog of the maximally entangled state in Eq.(1) in the temporal context we should write down the partial transpose \(\rho^{T_{L}}\) that would represent the quantum state of temporal correlations. Thus, using the pure state \(\rho\) we can express the partial transpose as:
\[\rho^{T_{L}}=\left|\Psi\right\rangle\left\langle\Psi\right|^{T_{L}}=\frac{1}{d }\sum_{n,j}\left|n\right\rangle\left|j\right\rangle\left\langle j\right|\left \langle n\right|. \tag{2}\]
It has been argued [1], [2], [3], [4], [5], [6] that precisely this state equals the
temporal correlated state, noted \(\rho^{t}\). As a result, we can emphasize that:
\[\rho^{t}=\rho^{T_{L}}. \tag{3}\]
Now, since the density matrix \(\rho^{T_{L}}\) is defined on the Hilbert space \(\mathcal{H}_{R}\otimes\mathcal{H}_{L}\), the formalism of quantum theory is slightly modified in the sense that the state of temporal correlations should also be defined on the same tensor product of two Hilbert spaces. Thus, the single quantum system evolving unitarily is mapped at time \(t_{1}\) to a Hilbert space \(\mathcal{H}_{R}\), while at time \(t_{2}\) it is mapped to another Hilbert space \(\mathcal{H}_{L}\), such that the temporal correlated state \(\rho^{t}\) is defined on the tensor product \(\mathcal{H}=\mathcal{H}_{R}\otimes\mathcal{H}_{L}\).
Having set up the equivalence between spatial and temporal correlations under a partial transpose, we should stress the meaning of the state \(\rho^{t}\). Consequently, the state \(\rho^{t}\) should be seen as the temporal correlations of a single quantum system \(R\) (or \(L\)) that evolves unitarily between two measured instances of time, \(t_{1}\) and \(t_{2}\), with \(t_{1}<t_{2}\). As a result, the measurement of the initial state by the operator \(\mathcal{O}_{i}\) equals an operator acting on the right side, \(\mathcal{O}_{R}\), while the operator acting on the final state, \(\mathcal{O}_{f}\), is equivalent to the transpose of the operator acting on the left side of the maximally entangled state:
\[\mathcal{O}_{i}=\mathcal{O}_{R},\mathcal{O}_{f}=\mathcal{O_{L}}^{T}. \tag{4}\]
In other words, the temporal equivalent of the maximally entangled state \(\rho\) (taken as a partial transpose) is the maximally mixed state \(\rho_{R}\)(or \(\rho_{L}\)) measured at two different times under the identity (unitary) evolution. To find the state of the system \(R\) we take the partial trace of \(\rho^{T_{L}}\), such that we have:
\[\rho_{R}=\frac{1}{d}\sum_{n}\left|n\right\rangle\left\langle n\right|, \tag{5}\]
which is the maximally mixed state,\(\rho_{R}=\frac{1}{d}\mathbb{1}\). We should consider this state as the initial state of the temporal evolution while the final state would be the transpose of \(\rho_{L}\):
\[\rho_{i}=\rho_{R},\rho_{f}=\rho_{L}^{T}. \tag{6}\]
We can consider that temporal correlation relates the two maximally mixed states \(\rho_{R}\) and \(\rho_{L}^{T}\) which are temporally separated by an interval, \([t_{1},t_{2}]\).
As was shown in [7], the only physical interpretation of the transpose is time reversal. In this respect, the partial transposition can be regarded as a partial time reversal in one of the two maximally entangled quantum systems. We would like to perform a reversal of time on the subsystem \(L\) to evaluate the quantum state that results. Starting from the initial quantum state \(\rho\), such a partial time reversal in the system \(L\) would lead us to the state \(\rho^{*}\), that is:
\[\rho^{*}=\frac{1}{d}\sum_{n,j}\left|n\right\rangle\left|n^{*}\right\rangle \left\langle j\right|\left\langle j^{*}\right|, \tag{7}\]
which can be arranged further, as:
\[\rho^{*}=\frac{1}{d}\sum_{n,j}\left|n\right\rangle\left|j\right\rangle\left\langle j \right|\left\langle n\right|. \tag{8}\]
Comparing with the state in Eq.(2), it can easily be noted that \(\rho^{*}=\rho^{T_{L}}=\rho^{t}\). The consequences of this identification are important in what follows and can be synthesised in the statement that the partially transposed density matrix \(\rho^{T_{L}}\) corresponds to the maximally entangled state:
\[\left|EPR\right\rangle=\frac{1}{d}\sum_{n,j}\left|n\right\rangle\left|n^{*} \right\rangle. \tag{9}\]
That is to say, we can consider the vector state \(\left|EPR\right\rangle\) as being equivalent to the vector state of temporal correlations. We can verify this statement once more, since we can infer from the \(\left|EPR\right\rangle\) vector state that an operator \(\mathcal{O}_{L}\) acting on the left side is equivalent to its transpose acting on the right side:
\[\mathcal{O}_{L}\left|EPR\right\rangle=\mathcal{O_{R}}^{T}\left|EPR\right\rangle, \tag{10}\]
which is the same result as in the case of temporal correlation in Eq.(4).
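This operator identity is easy to check numerically; the sketch below uses the computational basis (where \(\left|n^{*}\right\rangle=\left|n\right\rangle\)) and the standard \(1/\sqrt{d}\) normalization, although the identity itself does not depend on the normalization.

```python
import numpy as np

d = 4
rng = np.random.default_rng(1)

# |EPR> = sum_n |n>|n> / sqrt(d) in the computational basis.
epr = np.eye(d).reshape(d * d) / np.sqrt(d)

O = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
lhs = np.kron(O, np.eye(d)) @ epr          # O acting on the left factor
rhs = np.kron(np.eye(d), O.T) @ epr        # O^T acting on the right factor

print(np.allclose(lhs, rhs))               # True
```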
The importance of the EPR state will become clear as we try to extend the equivalence of spatial and temporal correlations to thermal states.
## 3 The thermofield double state and spatial wormholes
We would like to extend the equivalence of spatial and temporal correlations to a key point of intersection between quantum theory and General Relativity, namely the AdS/CFT correspondence. The motivation that we have in mind when considering this extension is twofold. First, we were motivated by the close link between space and time provided by the AdS/CFT duality.
The other important reason we chose to probe the equivalence of space and time correlations in the AdS/CFT realm is the form of the thermofield double state. The meaning of this statement is that the EPR state is closely related to the thermofield double state,
\[\left|TFD\right\rangle=\frac{1}{\sqrt{Z}}\sum_{n}e^{\frac{-\beta E_{n}}{2}} \left|n\right\rangle\left|n^{*}\right\rangle, \tag{11}\]
with the known notations of \(Z\) as the partition function and \(\beta\) as the inverse temperature. Here we consider two non-interacting copies of conformal field theory as two quantum systems, \(L\) and \(R\), such that we can decompose Hilbert space \(\mathcal{H}\) of the composite system as \(\mathcal{H}=\mathcal{H}_{R}\otimes\mathcal{H}_{L}\).
As can easily be noted, the quantum system \(L\) suffers a reversal of time, such that both CFT's evolve in the same direction of time. The motivation for this choice of time evolution in the quantum system \(L\) should be sought on the gravity side
of the AdS/CFT duality. It has been argued [9], [10], [11] that precisely the TFD state is dual to two black holes having their interiors connected by an Einstein-Rosen bridge. Accordingly, an observer in either asymptotic region sees the AdS-Schwarzschild black hole spacetime, which is understood [8] to correspond to the thermal state of the conformal field theory. Thus, the choice of taking a reversal of time in the left system, so that both CFT's evolve in the same direction of time, is translated on the gravity side into both black holes dual to the CFT's evolving in the same direction of time, as sketched in fig. 1.
We can consider this construction as a spatial wormhole that connects two spatially distant black holes across the same space hypersurface at a constant time.
Let us return to the TFD state which as we have seen is precisely of the form of the EPR state with one system experiencing a time reversal. Comparing Eq.(9) and Eq.(11) it is easy to see that at finite temperature we have:
\[\left|TFD\right>=\sqrt{d\rho_{R}}\left|EPR\right>=\sqrt{d\rho_{L}}^{T}\left|EPR \right>, \tag{12}\]
such that in the high-temperature limit (\(\beta\longrightarrow 0\)) the two states are equal.
We would like to define the temporal correlations in the limit of AdS/CFT duality. That is, we have to search for the temporal correlated state and for that we consider the density matrix of the TFD state vector:
\[\rho_{TFD}=\frac{1}{Z}\sum_{n,j}e^{\frac{-\beta(E_{n}+E_{j})}{2}}\left|n\right>\left|n^{\star}\right>\left<j\right|\left<j^{\star}\right|. \tag{13}\]
At high temperature this state equals the partially transposed state \(\rho^{T_{L}}\). As we recall, precisely this state is equivalent to the temporal correlated state \(\rho^{t}\), such that we have \(\rho_{TFD}=\rho^{t}\).
In our scenario, the TFD state in Eq.(11) is one part of the equivalence and it defines the spatial correlations between two spatially distant CFT systems, \(R\) and \(L\). Thus, on the other side of the equivalence should reside the correlations of the thermal state of one CFT system that evolves unitarily between measurements at two different instances of
Figure 1: Spatial Einstein-Rosen bridge that connects two distant black holes on the same spacelike hypersurface.
time. To find the thermal state of the right CFT we trace over the degrees of freedom of the left CFT to obtain the reduced density matrix:
\[\rho_{R}=Tr_{L}\left|TFD\right>\left<TFD\right|=\frac{1}{Z}\sum_{n}e^{-\beta E_{n }}\left|n\right>\left<n\right|, \tag{14}\]
which is exactly the thermal state of the right CFT,
\[\rho_{R}=\frac{1}{Z}e^{-\beta H_{R}}. \tag{15}\]
In the high-temperature limit, the reduced density matrix \(\rho_{R}\) describes a maximally mixed state, as required. That is to say, the thermofield double state, i.e. the maximally entangled state between two copies of the quantum theory at one time, is equivalent in the high-temperature limit to the thermal state (maximally mixed state) of one CFT evolving unitarily between two times. Now, we can take \(\rho_{R}\) as the initial state at time \(t_{1}\), described by the Hilbert space \(\mathcal{H}_{R}\), and \(\rho_{L}\) as the final state at time \(t_{2}\), described by the Hilbert space \(\mathcal{H}_{L}\), as two temporally separated CFT's on the tensor product \(\mathcal{H}=\mathcal{H}_{R}\otimes\mathcal{H}_{L}\). In this case, the temporal correlated state relates the two temporally distant thermal states.
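The reduction from the TFD state to the thermal state of one side is also easy to verify numerically; the sketch below works in the computational basis (so \(\left|n^{*}\right\rangle=\left|n\right\rangle\)) with an arbitrary illustrative spectrum, and checks both a finite temperature and the \(\beta\rightarrow 0\) (maximally mixed) limit.

```python
import numpy as np

def tfd_vector(energies, beta):
    """sum_n e^{-beta E_n / 2} |n>_R |n>_L / sqrt(Z), computational basis."""
    w = np.exp(-beta * np.asarray(energies) / 2.0)
    w /= np.linalg.norm(w)
    d = len(energies)
    psi = np.zeros(d * d)
    psi[np.arange(d) * d + np.arange(d)] = w        # amplitudes on |n>|n>
    return psi

def reduced_right(psi, d):
    """Partial trace over the left factor of a pure state on H_R (x) H_L."""
    m = psi.reshape(d, d)                           # rows: right index, columns: left index
    return m @ m.conj().T

energies = np.array([0.0, 0.3, 0.7, 1.2])           # arbitrary illustrative spectrum
for beta in (2.0, 0.0):
    rho_r = reduced_right(tfd_vector(energies, beta), len(energies))
    gibbs = np.diag(np.exp(-beta * energies) / np.exp(-beta * energies).sum())
    print(beta, np.allclose(rho_r, gibbs))           # True; beta = 0 gives the maximally mixed state
```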
On the gravity side, the reduced density matrix \(\rho_{R}\) describes a maximally mixed state that in the high-temperature limit corresponds to an AdS-Schwarzschild black hole [8]. The gravity dual should be an AdS-Schwarzschild black hole that evolves unitarily between two times. We consider \(\rho_{R}\) and \(\rho_{L}\), which describe the same black hole at two distinct times, as two different temporally separated black holes.
If we assume that the AdS/CFT duality holds for the temporal correlations, since they are equivalent to the TFD state, we should conclude that the two temporally separated black holes are connected behind the horizon. We depict this result in fig. 2.
Figure 2:. Temporal Einstein-Rosen bridge connecting a black hole on two different spacelike hypersurfaces.
Thus, the black hole at one time is connected behind the horizon to the same black hole at another time by an Einstein-Rosen bridge. We can imagine the temporal wormholes as connecting two black holes across different spacelike hypersurfaces at constant space coordinates.
## 4 Temporal wormholes
We have seen that we can distinguish spatial wormholes from temporal wormholes. The spatial wormholes describe an Einstein-Rosen bridge connecting two spatially separated black holes while the temporal wormholes describe the connection under horizon of two temporally separated black holes.
On the quantum theory side of the AdS/CFT correspondence we have seen that the TFD state is equivalent to temporal correlations. The question to ask at this point is whether there is also an equivalence between the two wormholes. If the answer is positive, then we should be able to consider the temporal wormhole as an eternal black hole in Kruskal coordinates. In other words, can the temporal Einstein-Rosen bridge have a counterpart in the Kruskal diagram of the AdS-Schwarzschild black hole solution?
One clue to view the temporal wormhole as a solution of AdS-Schwarzschild black hole is to observe that the space and time roles are interchanged in our construction of the temporal wormholes. The two spatially separated black holes are connected by an Einstein-Rosen bridge on the same spacelike sheet in the case of the spatial wormholes. In the case of the temporal wormholes the exact opposite is in place. The two temporally separated black holes are connected on two different spacelike hypersurfaces. As such, time and space coordinates suffer an inversion of roles.
This situation is similar to considering the inversion of the space and time roles behind the horizon, in the interior of the black hole. The interior solution of the black hole, although it has a long history [12], has only recently found some support [13], [14], [15]. The change that occurs in the nature of spacetime for the interior solution is exactly the interchange of the character of the space and time coordinates.
Let us now elaborate further on this scenario and consider the interior solution of the eternal AdS-Schwarzschild black hole metric in geometrized units with \(G=c=1\):
\[ds^{2}=-f(t)^{-1}dt^{2}+f(t)dz^{2}+t^{2}d\Omega^{2}. \tag{16}\]
We should note here that for the black hole interior the metric is time dependent \(f(t)\) in contrast to the traditional static metric of the exterior AdS-Schwarzschild solution with space dependent metric \(f(r)\).
In order to keep our construction as simple as possible we choose here \(f(t)=\frac{2\xi}{t}-1\) as in [13] and taking into account the exterior Schwarzschild solution we can make the identification \(\xi=M\).
In the maximally extended eternal black hole scenario for Kruskal coordinates we
have:
\[UV=(1-{t\over 2\xi})exp({t\over 2\xi}).\]
In [13], new transformations that can be written in the Kruskal coordinates as \(t_{*}=(U+V)\) and \(r_{*}=(U-V)\) were introduced. With these new coordinate transformations in mind we can find the equation of a hyperbola:
\[t_{*}^{2}-r_{*}^{2}=(1-{t\over 2\xi})exp({t\over 2\xi}),\]
and also,
\[{r_{*}\over t_{*}}=tanh({z\over 4\xi}),\]
which are straight lines with \(z=const.\)
In this scenario we have the singularity at \(t=0\), which in the new coordinates is
\[t_{*}=\pm\sqrt{r_{*}^{2}+1},\]
such that the horizons are now at \(t=2\xi\) and \(z=\pm\infty\) as in the sketch in figure 3, where we also added the flow of time in the interior regions (the lines with arrows).
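Taking the relations above at face value (and ignoring any overall numerical factor between \(t_{*}^{2}-r_{*}^{2}\) and \(UV\)), the interior region \(0\leq t\leq 2\xi\) can be parametrized as \(t_{*}=\sqrt{w(t)}\cosh(z/4\xi)\), \(r_{*}=\sqrt{w(t)}\sinh(z/4\xi)\) with \(w(t)=(1-t/2\xi)e^{t/2\xi}\); the short check below verifies the hyperbola relation and that constant-\(z\) worldlines are straight lines through the origin.

```python
import numpy as np

xi = 1.0  # mass parameter (xi = M), geometrized units

def w(t):
    return (1.0 - t / (2 * xi)) * np.exp(t / (2 * xi))

def kruskal_interior(t, z):
    """Map interior (t, z) to (t_*, r_*) using the relations quoted in the text."""
    s = np.sqrt(w(t))                         # real for 0 <= t <= 2*xi (behind the horizon)
    return s * np.cosh(z / (4 * xi)), s * np.sinh(z / (4 * xi))

t, z = 0.7, 1.3
ts, rs = kruskal_interior(t, z)
print(np.isclose(ts**2 - rs**2, w(t)))               # hyperbola relation t_*^2 - r_*^2 = w(t)
print(np.isclose(rs / ts, np.tanh(z / (4 * xi))))    # r_*/t_* = tanh(z / 4 xi)
# The singularity t = 0 lies on t_*^2 - r_*^2 = 1; the horizon t = 2*xi on t_* = +/- r_*.
```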
That is, the temporal Einstein-Rosen bridge, in the maximally extended solution, corresponds to two black holes smoothly glued at the spacelike hypersurface \(t=0\).
Figure 3:. The Penrose diagram for the interior solution of AdS- Schwarzschild black hole. The flow of time on the interior, represented by the oriented arrows is also depicted.
In this scenario we encounter an intriguing behavior: as seen by an interior observer, a test particle starts at \(t=0\) and exits the interior by crossing the horizon at \(t=2\xi\) in the negative \(z\) direction, to instantly reappear at \(t=2\xi\) in the positive \(z\) direction. In other words, all inward- and outward-oriented test particles cross the event horizon simultaneously at the same time \(t=2\xi\), such that an interior observer has no sense of future and past.
## 5 Conclusions
The recent attempts to unify the quantum-mechanical notions of space and time have led to the conclusion that spatial and temporal correlations should be treated equally. As such, the EPR state of two quantum systems at one time exactly equals the temporal correlations of one quantum system that unitarily evolved between two distinct moments of time.
In this essay we took advantage of this quantum equivalence and extended it to the AdS/CFT duality framework. Accordingly, we have argued that in the high-temperature limit the TFD state is equivalent to the temporal correlations of one CFT, measured at two times, that evolves unitarily.
We have shown that this equivalence has some important implications on the gravity side of the AdS/CFT duality. In this sense, we considered that the temporal correlations of one CFT at two times are dual to two black holes (temporally separated) connected behind the horizon by an Einstein-Rosen bridge.
Further, we have argued that these atypical temporal wormholes are related to the interior solution of the eternal AdS-Schwarzschild black hole. Some intriguing implications of the identification between temporal wormholes and the interior solution are briefly discussed.
|
2305.00473 | Time series clustering based on prediction accuracy of global
forecasting models | In this paper, a novel method to perform model-based clustering of time
series is proposed. The procedure relies on two iterative steps: (i) K global
forecasting models are fitted via pooling by considering the series pertaining
to each cluster and (ii) each series is assigned to the group associated with
the model producing the best forecasts according to a particular criterion.
Unlike most techniques proposed in the literature, the method considers the
predictive accuracy as the main element for constructing the clustering
partition, which contains groups jointly minimizing the overall forecasting
error. Thus, the approach leads to a new clustering paradigm where the quality
of the clustering solution is measured in terms of its predictive capability.
In addition, the procedure gives rise to an effective mechanism for selecting
the number of clusters in a time series database and can be used in combination
with any class of regression model. An extensive simulation study shows that
our method outperforms several alternative techniques concerning both
clustering effectiveness and predictive accuracy. The approach is also applied
to perform clustering in several datasets used as standard benchmarks in the
time series literature, obtaining great results. | Ángel López Oriona, Pablo Montero Manso, José Antonio Vilar Fernández | 2023-04-30T13:12:19Z | http://arxiv.org/abs/2305.00473v1 | # Time series clustering based on prediction accuracy of global forecasting models
###### Abstract
In this paper, a novel method to perform model-based clustering of time series is proposed. The procedure relies on two iterative steps: (i) \(K\) global forecasting models are fitted via pooling by considering the series pertaining to each cluster and (ii) each series is assigned to the group associated with the model producing the best forecasts according to a particular criterion. Unlike most techniques proposed in the literature, the method considers the predictive accuracy as the main element for constructing the clustering partition, which contains groups jointly minimizing the overall forecasting error. Thus, the approach leads to a new clustering paradigm where the quality of the clustering solution is measured in terms of its predictive capability. In addition, the procedure gives rise to an effective mechanism for selecting the number of clusters in a time series database and can be used in combination with any class of regression model. An extensive simulation study shows that our method outperforms several alternative techniques concerning both clustering effectiveness and predictive accuracy. The approach is also applied to perform clustering in several datasets used as standard benchmarks in the time series literature, obtaining great results.
keywords: time series clustering, forecasting, global models, prediction accuracy
## 1 Introduction
Time series clustering (TSC) is a fundamental problem in machine learning with applications in many fields, including biology, economics, computer science or psychology, among others. The task consists of splitting a large collection of unlabelled time series realizations into homogeneous groups so that similar series are located together in the same group and dissimilar series are placed in different clusters. As result, each group can be characterized by a specific temporal pattern, which allows to address key issues as discovering hidden dynamic structures, identifying anomalies or forecasting future behaviours. Comprehensive overviews on the topic are provided in [1; 2; 3; 4; 5].
A crucial point in cluster analysis is to establish the dissimilarity notion since it determines the nature of the resulting clustering partition. Several distance measures have been proposed in the literature, each one of them associated with a different objective. If the goal is to discriminate between geometric profiles of the time series, then a shape-based dissimilarity is suitable. For instance, the well-known dynamic time warping (DTW) distance has been used in several works to perform TSC [6; 7; 8; 9]. On the contrary, a structure-based dissimilarity is desirable if the target is to compare underlying dependence models. Examples of this type of distances are metrics comparing the autocorrelations [10], the quantile autocovariances [11], the quantile cross-spectral densities [12; 13; 14; 15], the wavelet representations [16] or the wavelet coefficients [17] of two time series. Additional types of dissimilarities are based on dimensionality reduction techniques [18; 19] or levels of shared information [20; 21].
Among TSC, the so-called model-based clustering is a popular approach which gave rise to several works. These techniques rely on two main elements: (i) the assumption of the existence of a fixed number of models characterizing the different groups in the time series dataset and (ii) a practical procedure to partition the series in a suitable way according to the underlying models. In an early work, [22] proposed to perform TSC by employing a distance measure which is based on the ARIMA representation of the time series. A similar method was introduced by [23] for clustering financial time series. Specifically, the technique assumes the existence of different GARCH models and uses a metric based on estimated GARCH parameters. The assumption of underlying GARCH models is also used by [24] to construct different robust methods based on unconditional volatility and time-varying volatility of the GARCH representation of the time series. A novel mixture model for clustering series which are subject to regime changes was proposed by [25]. Particularly, the approach consists of modeling each cluster by a regression model in which the polynomial coefficients vary according to a discrete hidden process. It is worth remarking that this method belongs to a paradigm called clusterwise regression, which is
based on considering that the elements within each cluster are generated according to a specific linear regression scheme [26]. In the multivariate setting, [27] introduced a clustering method based on the \(p\)-value of a test of hypothesis assuming linear models, and [28] proposed to pool multiple time series into several groups using finite-mixture models, documenting the efficiency gains in estimation and forecasting realized relative to the overall pooling of the time series. In the categorical context, [29] constructed two clustering approaches based on time-homogeneous first-order Markov chains.
Note that, although the previous techniques for model-based clustering of time series attempt to identify the underlying models existing in a given dataset, they ignore the performance of these models in terms of predictive accuracy. In this context, the aim of this manuscript is to propose a model-based clustering approach producing a clustering solution with a high predictive accuracy. Our idea is motivated by the fact that, given two different model-based clustering solutions, the one generating the best predictions is preferred. In short, our approach is able to detect the underlying models while trying to optimize the predictive accuracy. To that aim, we assess the dissimilarity between a time series and a given model as the average prediction error produced when iteratively obtaining the point forecasts of the time series with respect to the corresponding model. It is worth highlighting that, although there are a few TSC methods based on forecast densities [30; 31], to the best of our knowledge, nobody has employed the concept of similarity mentioned above to perform clustering in time series databases. Specifically, our clustering approach makes use of the so-called global models (see Section 2) to minimize the average prediction error. In fact, the use of global models circumvents some limitations that one often faces when fitting a different model to each time series in the set, i.e., when considering the so-called local approach. For instance, the predictive accuracy of these independent models is often poor when dealing with short time series, but this is not the case for global models.
Based on previous comments, we propose a novel clustering method which is based on traditional iterative clustering algorithms. The technique relies on the following iterative process: (i) \(K\) global models (prototypes) are fitted by taking into account the series pertaining to each cluster independently and (ii) each time series is assigned to the group associated with the prototype producing the lowest forecasting error according to a specific metric.
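A compact sketch of this two-step loop is given below; `fit_global_model` and `forecast_error` are placeholders for the pooled global fit and the prediction-error criterion detailed in Section 3, and empty clusters (which a full implementation must handle) are ignored here.

```python
import numpy as np

def cluster_by_prediction(series_list, K, fit_global_model, forecast_error,
                          max_iter=50, seed=0):
    """Iterative clustering driven by forecasting accuracy (sketch).

    fit_global_model(list_of_series) -> model and forecast_error(model, series) -> float
    are placeholders for the pooled fit and the error criterion of the paper.
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(K, size=len(series_list))        # random initial partition
    models = []
    for _ in range(max_iter):
        # Step (i): fit one global model per cluster by pooling its series.
        models = [fit_global_model([s for s, g in zip(series_list, labels) if g == k])
                  for k in range(K)]
        # Step (ii): reassign each series to the prototype that forecasts it best.
        new_labels = np.array([np.argmin([forecast_error(m, s) for m in models])
                               for s in series_list])
        if np.array_equal(new_labels, labels):              # the partition is stable
            break
        labels = new_labels
    return labels, models
```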
It is worth emphasizing that, by construction, the proposed algorithm produces a partition which is optimal in terms of overall prediction accuracy. In fact, the objective function of the method can be seen as a sum of forecasting errors (see Remark 1 in Section 3), which is expected to decrease with each iteration of the two-step procedure described above. Therefore, the clustering algorithm is specifically designed to allocate the different time series in such a
way that the corresponding global models represent in the best possible manner the existing prediction patterns. There are only a few works in the literature combining clustering and global methods in a single technique. For instance, [32] proposed an approach particularly devised to improve the predictive accuracy of global models. First, the set of series is partitioned into different groups by using a specific clustering method. Then, global models are fitted by considering the series within each cluster. Although successful, the method of [32] splits the set of series by using a feature-based TSC clustering approach and, therefore, there is no guarantee that the resulting partition is optimal in terms of total prediction accuracy. Note that our approach circumvents this limitation by adapting the objective function to the specific purpose of forecasting error reduction. It is important to highlight that, although an improvement in the overall predictive effectiveness is usually achieved through the proposed method, the main output of the procedure is the resulting clustering partition, which produces a meaningful decomposition of the set of time series in terms of forecasting structures and can be very useful as an exploratory tool.
Some simulation experiments are carried out in the paper to assess the performance of the proposed algorithm in terms of both clustering effectiveness and predictive accuracy. In all cases, synthetic partitions where the groups are characterized by different generating processes are considered. The approach is compared with several alternative methods, such as a procedure based on local models and the technique of [32]. Several elements are analysed, including the type of global models, the way in which the series are assigned to the clusters, and the numerical behaviour of the algorithm. The method is also applied to perform clustering in some well-known datasets which are used as classical benchmarks in the time series literature. Overall, the algorithm exhibits great behaviour when dealing with both synthetic and real data.
An overview of the contributions provided in this manuscript is given below:
* The proposed approach exhibits a great ability to detect the underlying structures in several simulation experiments including different types of generating processes. In particular, we consider linear models with short and long memory and specific types of nonlinear processes. In most cases, the method outperforms the local approach and other alternatives in terms of clustering effectiveness, thus taking advantage of the underlying ability of global models to identify the different prediction patterns.
* Our method provides an effective and natural way of automatically determining the number of clusters, which is an important topic in the TSC literature. Specifically, as the objective function of the algorithm can be seen as a sum of prediction errors, one can select the number of groups by
choosing the value which minimizes a proper generalization of this objective function. Several experiments demonstrate that the true number of clusters is frequently selected by means of this procedure.
* Generally, the proposed technique improves the overall predictive performance of a collection of time series in comparison with both the local approach and the consideration of one single global model. Specifically, each one of the groups has an associated model exploiting all the information about the corresponding forecasting structure, which makes that model the best choice to predict future values of the time series in the group. This improvement in predictive accuracy is corroborated by means of some simulations and experiments with well-known real time series datasets which are often used for forecasting purposes.
It is worth highlighting that the proposed approach also has some limitations. First, the class of global models can have a great impact on the identification of the true clustering structure. In fact, for a proper identification, it is necessary that the complexity of the global models matches the underlying forecasting structures. In this regard, more complexity generally leads to fewer clusters, while the opposite happens with less complexity. Second, when the generating processes are not too complex (e.g., linear models with short memory), the local approach reaches results similar to those of our method when moderate values of the series length are considered, since such lengths are enough for the coefficients of the local models to be estimated with high accuracy. Third, the performance of the proposed algorithm decreases when some amount of uncertainty (noise) exists in the underlying structures, that is, when the time series dataset does not contain totally well-defined clusters. Fourth, as the proposed iterative method considers the future parts of the series to calculate the distance between each element and each global model, some numerical issues arise in the behaviour of the objective function. However, these negative effects can be easily neutralized by means of a simple heuristic rule (see Remark 3 in Section 3).
The remainder of this paper is organized as follows. Section 2 gives a brief background on global forecasting models, while Section 3 describes the clustering algorithm based on prediction accuracy of these models, which is motivated through an interesting example in Section 4. The approach is analysed in Section 5 by means of a simulation study where different scenarios are taken into account. In Section 6, we apply the proposed method to real datasets of time series belonging to different fields. Section 7 contains some concluding remarks and future work.
## 2 Background on global models
Global models are learning algorithms that fit the same forecasting function to all the time series in a set, in contrast to local models, which adjust a different function to each time series in the database [33]. Formally, let \(\mathbb{X}\) be the collection of all sets of univariate time series of finite size, i.e.,
\[\mathbb{X}=\bigg{\{}\mathcal{X}:\mathcal{X}=\Big{\{}\mathbf{X}_{t}^{(1)},\ldots, \mathbf{X}_{t}^{(r)}\Big{\}},\text{with }r\in\mathbb{N}\text{ and }\mathbf{X}_{t}^{(i)}\in\mathbb{R}^{T},i=1,\ldots,r \bigg{\}}, \tag{1}\]
where we assume without loss of generality that all series have the same length \(T\) (they are vectors in the space \(\mathbb{R}^{T}\)). Usually, we are interested in the future part of each series up to \(h\) time steps, which can be seen as a vector of \(\mathbb{R}^{h}\). To compute the corresponding predictions, we employ a forecasting function \(f\), which maps the observed time series to its future part, i.e., \(f:\mathbb{R}^{T}\longrightarrow\mathbb{R}^{h}\), often defined recursively when \(h>1\). A global method, \(\mathcal{A}_{G}\), is a learning algorithm taking the form
\[\mathcal{A}_{G}:\mathbb{X}\longrightarrow\mathbb{F}_{T}^{h}, \tag{2}\]
where \(\mathbb{F}_{T}^{h}\) is the set of all functions with domain \(\mathbb{R}^{T}\) and range \(\mathbb{R}^{h}\). Note that, for each set of series \(\mathcal{X}\in\mathbb{X}\), \(\mathcal{A}_{G}(\mathcal{X})\) defines a forecasting function created by using all the series in \(\mathcal{X}\). In this paper we consider global models constructed in the following way [33]: (i) each series in \(\mathcal{X}\) is lag-embedded into a matrix at a given autoregressive (AR) order, \(l\), fixed beforehand, (ii) these matrices are stacked together to form one big matrix, achieving data pooling, and (iii) a classical regression model (e.g., linear regression, random forest, etc.) is fitted to the resulting matrix.
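To make this construction concrete, a minimal Python sketch is shown below (assuming `numpy` and `scikit-learn`; the function names and the use of a linear regression as the base learner are our own choices for illustration, not a reference implementation). It lag-embeds each series at order \(l\), pools the resulting matrices and fits a single regression model to the pooled data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def lag_embed(series, l):
    """Turn one series into a matrix of l lagged predictors and a vector of targets."""
    s = np.asarray(series, dtype=float)
    X = np.array([s[t - l:t] for t in range(l, len(s))])
    y = s[l:]
    return X, y

def fit_global_model(collection, l, regressor=None):
    """Pool the lag-embedded matrices of all series and fit a single regression model."""
    regressor = LinearRegression() if regressor is None else regressor
    Xs, ys = zip(*(lag_embed(s, l) for s in collection))
    return regressor.fit(np.vstack(Xs), np.concatenate(ys))
```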
Global models have been shown to outperform local models in terms of predictive accuracy in several datasets [33]. In other words, when a single model is fitted to all the time series in the database, and used to obtain the corresponding predictions, a lower overall forecasting error is produced than in the case where each time series is predicted by considering a different local model. Moreover, global models do not need any assumption about similarity of the time series in the collection, and usually require far fewer parameters than the simplest of local methods.
Although the global approach produces outstanding results, it has one important drawback: it ignores the possible existence of homogeneous groups of series in terms of prediction patterns. For instance, a database could contain two groups of series in such a way that the series within each group are helpful to each other for obtaining accurate predictions (e.g., think of several countries
whose behaviour concerning monthly economic growth is very similar), but totally useless for the series in the remaining group. In the previous situation, it would be desirable to fit a global method for each distinct set of time series. Then the predictions would be computed for a given series by using its associated global model. This is the main idea behind our clustering method based on prediction accuracy of global models, which is introduced in the next section.
## 3 A clustering algorithm based on prediction accuracy of global forecasting models
Consider a set of \(n\) time series, \(\mathcal{S}=\left\{\mathbf{X}_{t}^{(1)},\ldots,\mathbf{X}_{t}^{(n)}\right\}\), where each \(\mathbf{X}_{t}^{(i)}=\left(X_{1}^{(i)},\ldots,X_{L_{i}}^{(i)}\right)\) is a series of length \(L_{i}\), \(i=1,\ldots,n\). We assume that each series \(\mathbf{X}_{t}^{(i)}\) contains training and validation periods of lengths \(r(i)\) and \(s(i)\), denoted by \(\mathbf{\mathcal{T}}^{(i)}=(t_{1}^{i},\ldots,t_{r(i)}^{i})\) and \(\mathbf{\mathcal{V}}^{(i)}=(v_{1}^{i},\ldots,v_{s(i)}^{i})\), respectively, such that:
* Both \(\mathbf{\mathcal{T}}^{(i)}\) and \(\mathbf{\mathcal{V}}^{(i)}\) are formed by consecutive observations and \(t_{1}^{i}\) has a position equal to or less than the position of \(v_{1}^{i}\), considering both \(t_{1}^{i}\) and \(v_{1}^{i}\) as elements of the vector \(\mathbf{X}_{t}^{(i)}\).
* Both periods are included in the original series.
* Both periods form a cover of the original series.
The sets \(\mathcal{T}=\left\{\mathbf{\mathcal{T}}^{(1)},\ldots,\mathbf{\mathcal{T}}^{(n)}\right\}\) and \(\mathcal{V}=\left\{\mathbf{\mathcal{V}}^{(1)},\ldots,\mathbf{\mathcal{V}}^{(n)}\right\}\) are called the training and the validation sets, respectively. We wish to perform clustering on the elements of \(\mathcal{S}\) in such a way that the groups are associated with global models minimizing the overall forecasting error with respect to the validation set.
The method we propose is an iterative algorithm having the classical two stages: (i) constructing a prototype for each cluster, usually referred to as centroid and (ii) assigning each series to a specific group. The assignment step often relies on the distance from the series to the prototypes. In this work, we propose to consider global models as prototypes for each group. Specifically, the prototype of the \(k\)th cluster is a global model which is fitted to the series pertaining to that group.
Assume there are \(n_{k}\) series in the \(k\)th group \(C_{k}\), i.e., \(C_{k}=\left\{\mathbf{X}_{t,k}^{(1)},\ldots,\mathbf{X}_{t,k}^{(n_{k})}\right\}\), \(k=1,\ldots,K\), where the subscript \(k\) is used to indicate that the corresponding series belong to cluster \(k\). A global model \(\mathcal{M}_{k}\) is fitted in cluster \(C_{k}\) by considering the training periods associated to \(\mathbf{X}_{t,k}^{(j)}\), \(j=1,\ldots,n_{k}\). It is expected that the predictive ability of model \(\mathcal{M}_{k}\) with respect to the series in cluster \(C_{k}\) is
better the more related the series in the group are. Note that the set of clusters \(\mathbf{C}=\{C_{1},\ldots,C_{K}\}\) produce the set of prototypes \(\mathbf{\mathcal{M}}=\{\mathcal{M}_{1},\ldots,\mathcal{M}_{K}\}\).
Once the global models \(\mathcal{M}_{1},\ldots,\mathcal{M}_{K}\) have been constructed, each series is assigned to the cluster whose prototype gives rise to the minimal value for a certain error metric by considering its validation period. Specifically, series \(\mathbf{X}_{t}^{(i)}\), \(i=1,\ldots,n\), is assigned to cluster \(k^{\prime}\) such that
\[k^{\prime}=\operatorname*{arg\,min}_{k=1,\ldots,K}d\big{(}\mathbf{X}_{t}^{(i)}, \mathcal{M}_{k}\big{)}, \tag{3}\]
where \(d(\cdot,\cdot)\) is any function measuring discrepancy between the actual values of \(\mathbf{X}_{t}^{(i)}\) and their predictions according to model \(\mathcal{M}_{k}\). For instance, if the mean absolute error (MAE) is considered, then (3) becomes
\[k^{\prime}=\operatorname*{arg\,min}_{k=1,\ldots,K}d_{\text{MAE}}\big{(}\mathbf{X}_ {t}^{(i)},\mathcal{M}_{k}\big{)}, \tag{4}\]
where \(d_{\text{MAE}}\big{(}\mathbf{X}_{t}^{(i)},\mathcal{M}_{k}\big{)}=\frac{1}{s(i)}\sum_{j=1}^{s(i)}\big{|}v_{j}^{i}-F_{j,k}^{(i)}\big{|}\) and \(F_{j,k}^{(i)}\) is the prediction of \(v_{j}^{i}\) obtained by using the global model \(\mathcal{M}_{k}\). Note that considering the MAE is appropriate in this context, since we are evaluating the forecasting effectiveness of \(K\) global models with respect to a single series. Therefore, each assignment is only influenced by the units of the corresponding series, so that no scaling issues arise. In fact, the simplicity of the MAE makes it a recommended error metric for assessing accuracy on a single series [34]. Based on the previous comments, and unless otherwise stated, we assume that the reassignment rule employed throughout the manuscript is given by (3).
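As an illustration of the reassignment rule (3)-(4), the following sketch (Python; the helper names and the simplifying assumption of a common validation start index `val_start` for all series are ours) computes the one-step-ahead MAE of each prototype over the validation observations of a series and assigns the series to the prototype yielding the smallest error.

```python
import numpy as np

def mae_to_model(series, model, l, val_start):
    """One-step-ahead MAE of a global model over the validation observations of a series."""
    s = np.asarray(series, dtype=float)
    X_val = np.array([s[t - l:t] for t in range(val_start, len(s))])
    preds = model.predict(X_val)
    return float(np.mean(np.abs(s[val_start:] - preds)))

def assign_series(collection, models, l, val_start):
    """Rule (3): assign each series to the cluster whose prototype gives the lowest MAE."""
    return [int(np.argmin([mae_to_model(s, m, l, val_start) for m in models]))
            for s in collection]
```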
Both steps, the computation of the prototypes and the reassignment of the series, are iterated until convergence or until a maximum number of iterations is reached. The corresponding clustering algorithm is described in Algorithm 1. Below we provide some remarks concerning the proposed method.
**Remark 1** (_Interpretation of the objective function_).: Note that the objective function in Algorithm 1 can be written as
\[J(\mathbf{C})=\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i=1:\\ \mathbf{X}_{t}^{(i)}\in C_{k}\end{subarray}}^{n}d(\mathbf{X}_{t}^{(i)},\mathcal{M}_{k }), \tag{5}\]
which is a sum of prediction errors with respect to the validation periods. In particular, each series is forecasted by using the global model associated with the cluster it pertains to. In this regard, the value of the objective function returned when Algorithm 1 stops, say \(J_{\text{OPT}}\), can be regarded as the total optimal (minimal) prediction error when \(K\) groups are assumed to exist in the dataset.
**Algorithm 1** The proposed clustering algorithm based on prediction accuracy of global forecasting models
```
1: Fix \(K\), \(l\) and \(max.iter\)
2: Set \(iter\) = 2
3: Randomly divide the \(n\) series into \(K\) clusters
4: Compute the initial set of \(l\)-lagged global models \(\boldsymbol{\mathcal{M}}=\{\mathcal{M}_{1},\ldots,\mathcal{M}_{K}\}=\boldsymbol{ \mathcal{M}}^{(1)}\)
5:repeat
6: Set \(\boldsymbol{\mathcal{M}}_{\mathrm{OLD}}=\boldsymbol{\mathcal{M}}^{(iter-1)}\) {Store the current prototypes}
7: Assign each series to the cluster associated with its nearest prototype according to the rule in (3)
8: Compute the new collection of prototypes, \(\boldsymbol{\mathcal{M}}^{(iter)}\), by fitting an \(l\)-lagged global model to the training periods of the series in the \(k\)th cluster, \(k=1,\ldots,K\). {Update the set of prototypes}
9:\(iter\)\(\leftarrow\)\(iter\) + 1
10:until\(\boldsymbol{\mathcal{M}}=\boldsymbol{\mathcal{M}}_{\mathrm{OLD}}\) or \(iter\) = \(max.iter\)
11: Considering the final set of \(K\) clusters, construct the final collection of prototypes by fitting an \(l\)-lagged global model to the training and validation periods of the series in the \(k\)th cluster, \(k=1,\ldots,K\).
```
In the same way, the quantity \(J_{\mathrm{OPT}}/n\) can be interpreted as the average optimal prediction error. In sum, the objective function of the proposed clustering algorithm is very interpretable from a forecasting perspective.
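Putting the pieces together, the iterative procedure of Algorithm 1 can be sketched as follows (Python, reusing the `fit_global_model` and `assign_series` helpers sketched above; empty clusters and the separate handling of training and validation periods are deliberately ignored, so this is only an illustration of the alternating structure rather than a faithful implementation).

```python
import numpy as np

def cpagm(collection, K, l, val_start, max_iter=50, seed=0):
    """Alternate between fitting one global model per cluster and reassigning each
    series to its best-predicting prototype, until the partition stops changing."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=len(collection))   # random initial partition
    models = []
    for _ in range(max_iter):
        models = [fit_global_model([s for s, g in zip(collection, labels) if g == k], l)
                  for k in range(K)]                     # prototype update step
        new_labels = np.asarray(assign_series(collection, models, l, val_start))
        if np.array_equal(new_labels, labels):           # convergence: partition unchanged
            break
        labels = new_labels                              # reassignment step
    return labels, models
```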
**Remark 2** (_Assessment of the predictive accuracy_).: Although the quantity \(J_{\mathrm{OPT}}/n\) can be seen as the average optimal prediction error (see Remark 1), this value is not an appropriate metric to assess the predictive ability of the resulting global models. In fact, note that the two-step procedure described in Algorithm 1 attempts to find the partition minimizing the average prediction error with respect to the validation periods. Therefore, \(J_{\mathrm{OPT}}/n\) is likely to underestimate the prediction error computed over future periods of the series which are not involved in the optimization process. In this regard, a proper error metric could be obtained through the following steps:
1. Given a prediction horizon \(h\in\mathbb{N}\), divide each series into two periods. The first period contains all but the last \(h\) observations of the series. The second period, referred to as test period, contains the last \(h\) observations. For the sake of simplicity, the first periods can be identified with the set \(\mathcal{S}=\left\{\mathbf{X}_{t}^{(1)},\ldots,\mathbf{X}_{t}^{(n)}\right\}\) introduced above, whereas the second periods constitute a new set \(\mathcal{S}^{*}=\left\{\mathbf{X}_{t}^{(1)*},\ldots,\mathbf{X}_{t}^{(n)*}\right\}\), where each \(\mathbf{X}_{t}^{(i)*}=(X_{1}^{(i)*},\ldots,X_{h}^{(i)*})\) is a series of length \(h\). The set \(\mathcal{S}^{*}\) is called the test set.
2. Run Algorithm 1 using the set \(\mathcal{S}\) as input, obtaining the clustering solution.
3. Given the clustering solution computed in Step 2, and for \(k=1,\ldots,K\), fit a \(l\)-lagged global model to the set of series in the \(k\)th cluster by considering both training and validation periods. This produces the set of global models \(\overline{\mathbf{\mathcal{M}}}=\{\overline{\mathcal{M}}_{1},\ldots,\overline{ \mathcal{M}}_{K}\}\).
4. Compute the average prediction error with respect to the test set as \[\frac{1}{n}\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i=1:\\ \mathbf{X}_{t}^{(i)}\in C_{k}\end{subarray}}^{n}d^{*}\big{(}\mathbf{X}_{t}^{(i)*}, \overline{\mathcal{M}}_{k}\big{)},\] (6) where \(d^{*}(\cdot,\cdot)\) is any function measuring discrepancy between the actual values of \(\mathbf{X}_{t}^{(i)*}\) and their predictions according to model \(\overline{\mathcal{M}}_{k}\). Note that these predictions are computed starting from the series \(\mathbf{X}_{t}^{(i)}\) and in a recursive manner. As an example, if the MAE is chosen as the error metric, then (6) becomes \(\frac{1}{n}\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i=1:\\ \mathbf{X}_{t}^{(i)}\in C_{k}\end{subarray}}^{n}d^{*}_{\mathrm{MAE}}\big{(}\mathbf{X} _{t}^{(i)*},\overline{\mathcal{M}}_{k}\big{)}\), with \[d^{*}_{\mathrm{MAE}}\big{(}\mathbf{X}_{t}^{(i)*},\overline{\mathcal{M}}_{k}\big{)} =\frac{1}{h}\sum_{j=1}^{h}\big{|}X_{j}^{(i)*}-\overline{F}_{j,k}^{(i)*}\big{|},\] (7)
where \(\overline{F}_{j,k}^{(i)*}\) is the prediction of \(X_{j}^{(i)*}\) according to the global model \(\overline{\mathcal{M}}_{k}\). It is worth highlighting that, if all the time series in the set are recorded on the same scale, then employing the MAE leads to meaningful conclusions. However, if that is not the case, (7) is likely to be a misleading performance measure, since series taking higher values are expected to have a larger impact on the computation of the average prediction error. This issue can be avoided by considering alternative error metrics (see Section 6).
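A sketch of this evaluation procedure is given below (Python, reusing the helpers above; refitting the prototypes on the training plus validation periods, i.e., Step 3, is omitted for brevity, and the helper names are ours). Each test period is forecast recursively with the global model of the cluster its series belongs to, and the resulting MAEs are averaged.

```python
import numpy as np

def forecast_recursive(model, history, l, h):
    """Iterate one-step predictions of a global model to obtain an h-step-ahead forecast."""
    window = list(np.asarray(history, dtype=float)[-l:])
    preds = []
    for _ in range(h):
        nxt = float(model.predict(np.asarray(window).reshape(1, -1))[0])
        preds.append(nxt)
        window = window[1:] + [nxt]   # slide the window with the new prediction
    return np.asarray(preds)

def average_test_mae(collection, test_parts, labels, models, l):
    """Average MAE over the test periods, forecasting each series with its cluster's model."""
    errs = [np.mean(np.abs(np.asarray(test, dtype=float) -
                           forecast_recursive(models[g], series, l, len(test))))
            for series, test, g in zip(collection, test_parts, labels)]
    return float(np.mean(errs))
```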
**Remark 3** (_Numerical behaviour of the algorithm_).: The optimisation procedure presented in Algorithm 1 does not guarantee a decrease in the value of the objective function \(J(\mathbf{C})\) from one iteration to the next, as is the case with other standard clustering methods (e.g., \(K\)-means). This is due to the fact that new global models are fitted at each step. Although it is reasonable to expect that the rule in (3) improves the predictive ability of the global models, this is not always ensured. As a result, undesirable situations can arise in some settings, such as the algorithm entering an infinite loop with \(J(\mathbf{C})\) showing a continuous increasing-decreasing pattern. These drawbacks can be mitigated by introducing an additional stopping criterion in Algorithm 1 as follows. For a fixed \(L\in\mathbb{N}\), the algorithm stops if no improvement in the value of \(J(\mathbf{C})\) has taken place during the last \(L\) iterations. In case the algorithm stops due to this rule, the returned clustering solution is the one associated with the minimum value of \(J(\mathbf{C})\).
Note that two important input parameters have to be set before executing Algorithm 1, namely the number of lags used to fit the global models (\(l\)) and the number of clusters (\(K\)). These two parameters can be easily selected by: (i) running the clustering algorithm over a grid of values for the pair \((l,K)\), and (ii) choosing the combination giving rise to the minimum value of the average error computed with respect to the test set (see Remark 2). In this way, the optimal pair in terms of predictive effectiveness is selected. The previous procedure is summarized in Algorithm 2. Note that, for a fixed \(l\), the case \(K=1\) corresponds to a global model fitted to all the series, whereas the case \(K=n\) corresponds to a local model fitted to each series (local approach).
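The grid search summarized in Algorithm 2 can be sketched as follows (Python, reusing the `cpagm` and `average_test_mae` functions sketched above; the function name and return format are our own choices).

```python
def select_K_and_l(collection, test_parts, val_start, K_grid, l_grid):
    """Pick the (K, l) pair whose clustering solution yields the lowest average
    forecast error on the held-out test periods."""
    best = None
    for K in K_grid:
        for l in l_grid:
            labels, models = cpagm(collection, K, l, val_start)
            err = average_test_mae(collection, test_parts, labels, models, l)
            if best is None or err < best[0]:
                best = (err, K, l, labels)
    return best  # (error, K, l, labels) of the selected configuration
```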
Note that the procedure for choosing \(K\) and \(l\) described in Algorithm 2 is mainly based on maximizing the predictive accuracy, which is a reasonable and natural rule. In this regard, it is worth remarking that several heuristic criteria are available for the selection of the parameter \(K\) (e.g., rules based on internal indices such as the Silhouette index), which constitutes an important problem in the clustering literature. According to the previous considerations, such criteria are not necessary when carrying out clustering by means of the proposed approach, which constitutes an advantage of CPAGM with respect to alternative techniques for TSC. It is worth highlighting that the class of global models constitutes another important parameter to be selected before running the clustering procedure.
## 4 Motivating example
In order to illustrate the usefulness of the clustering procedure presented in the previous section, we considered a real time series dataset called Chinatown, which pertains to the well-known UCR time series archive [35]. This archive consists of a collection of heterogeneous time series databases which are frequently used to evaluate the accuracy of different machine learning algorithms for temporal data [36; 37], including TSC [38; 39]. Dataset Chinatown includes data recorded by an automated pedestrian counting system located in a specific street of Melbourne, Australia, during the year 2017. Specifically, Chinatown includes 363 time series of length 24, with each series being associated with a particular day of the year and each time observation with a particular hour (e.g., the fifth observation corresponds to 5:00 am). Thus, each series measures the hourly number of pedestrians in the corresponding street. Originally, two classes of series are assumed to exist in dataset Chinatown according to whether the data come from normal business days or weekend days (true partition). The top and bottom panels of Figure 1 contain three series associated with normal and weekend days, respectively. As expected, there are some differences between both groups of series, indicating that the temporal evolution of the number of pedestrians on any given day is clearly influenced by whether or not that day
is a normal business day. For instance, it seems that the number of pedestrians reaches its peak during the afternoon on weekends, but during the evening on weekdays. In addition, the number of people just after midnight (e.g., 1 am) is higher on weekends, which is expected.
The clustering procedure defined in Algorithm 1 was applied to the time series in dataset Chinatown by using linear global models fitted by least squares. For the sake of illustration, we considered \(K=2\) (as there are two different classes in the original dataset) and ran the algorithm for different values of \(l\), namely \(l\in\{2,4,\ldots,16\}\). We constructed a test set by considering the last \(h=5\) observations of each series. The training period was set to the first 19 observations of each series, while the validation period was set to observations from \(l+1\) to 19. Therefore, the reassignment step in Algorithm 1 is performed by using the in-sample error (see (3)). For each value of \(l\), the corresponding experimental partition was compared with the true one by considering the adjusted Rand index (ARI) [40], which is bounded between \(-1\) and \(1\). Values of ARI close to \(0\) indicate a noninformative clustering solution, while the closer
Figure 1: Three series representing business days (top panels) and weekend days (bottom panels) in dataset Chinatown.
to 1 the index, the better is the agreement between both partitions. Figure 2 contains a curve representing the corresponding ARI values as a function of the number of lags used to fit the global models (\(l\)). Note that high ARI values are achieved for \(l=8\) and \(l=10\), while the degree of similarity between both partitions decreases when more lags are considered. Specifically, the highest ARI value (namely 0.764) is reached when \(l=10\).
To gain greater insights into the behavior of the proposed algorithm in dataset Chinatown, we decided to analyze the clustering solution associated with \(l=10\). Specifically, we chose to examine the resulting prototypes, i.e., the final global models. In fact, these models characterize the forecasting structures of the different groups, and their analysis can provide a meaningful description of the time series belonging to each cluster. Note that, as linear global models were considered, simple descriptions can be given by providing the corresponding estimated coefficients. Figure 3 displays the estimated coefficients for the prototypes of both groups, which were labeled as Clusters 1 and 2. In particular, Cluster 1 contains mostly series associated with normal days, while Cluster 2 includes mainly series associated with weekend days. While the estimated coefficients for lag 8 and beyond are very similar for both prototypes, there are clear differences at earlier lags. For instance, the estimated coefficients for lags 3, 5 and 7 are close to zero for one of the groups but significantly different from zero for the remaining one, which indicates that both prototypes show a different behavior. Additionally, the estimates for the intercepts of global models associated with Clusters 1 and 2 are 447.82 and 499.06, respectively, thus
Figure 2: ARI as a function of the number of lags in dataset Chinatown.
suggesting that the number of pedestrians is higher during the weekends.
For illustrative purposes, we constructed the average time series associated with the clustering solution defined by the prototypes in Figure 3 (\(l=10\)). That is, for each one of the groups, we calculated the average value of all the time series belonging to that group at each time point. The resulting series for Clusters 1 and 2 are displayed in the left and right panels of Figure 4, respectively. Note that these plots are coherent with the series in Figure 1 and with previous comments. In fact, the average series for the weekend group (right panel) takes higher values (in particular at early hours) than the average series for the weekday group (left panel). Moreover, the former series indicates a peak in the number of pedestrians just after noon, while this peak does not happen until the evening according to the latter series. Previous analyses suggest that the proposed algorithm is able to clearly identify the weekday-weekend pattern of dataset Chinatown when a suitable number of lags is considered.
Although the main goal of the proposed method is to detect the different forecasting patterns existing in a given dataset, an interesting side effect of Algorithm 1 is that the resulting prototypes are expected to improve upon the prediction accuracy of a single global model when \(K\) groups of series exist. In fact, by considering the clustering partition associated with \(l=10\), the average
Figure 3: Estimated coefficients for lags 1 to 10 for the global linear models concerning the 2-cluster solution produced by the proposed algorithm (\(l=10\)) in dataset Chinatown.
MAE with respect to the test set (see Remark 2) is 370.42, while this quantity takes the value of 535.08 when only one global model is fitted to all the series (\(K=1\)). Hence, splitting the dataset into two clusters results in substantially better predictions. Similar results to the ones provided above are obtained when different values of \(h\) are considered.
In short, this section showed an example where the forecasting patterns detected by the proposed algorithm are associated with highly interpretable classes (namely weekday and weekend days) of a dataset which is frequently used in the TSC literature.
## 5 Simulation study
In this section we perform several simulations with the aim of assessing the performance of the proposed approach in different scenarios. First we describe the simulation mechanism, then we explain how the evaluation of the method was carried out and, afterwards, we show the results of the simulation study. Finally, we carry out a set of additional experiments related to the numerical behaviour of the algorithm, the selection of some hyperparameters and the consideration of complex models, among others.
### Experimental design
Two unsupervised classification setups involving linear processes were considered, namely clustering of (i) short memory processes and (ii) long memory processes. In this way, the proposed method was analysed under very dissimilar serial dependence structures. Both settings contain three different generating processes. The specific scenarios and the generating models are given below.
Figure 4: Average time series for the 2-cluster solution defined by the prototypes in Figure 3.
**Scenario 1**. Let \(\{X_{t}\}_{t\in\mathbb{Z}}\) be a stochastic process following the AR(\(p\))-type recursion given by
\[X_{t}=\sum_{i=1}^{p}\varphi_{i}X_{t-i}+\epsilon_{t}, \tag{8}\]
where \(\varphi_{1},\ldots,\varphi_{p}\) are real numbers verifying the corresponding stationarity condition and \(\{\epsilon_{t}\}_{t\in\mathbb{Z}}\) is a process formed by independent variables following the standard normal distribution. We fix \(p=4\). The vector of coefficients \(\boldsymbol{\varphi}_{4}=(\varphi_{1},\varphi_{2},\varphi_{3},\varphi_{4})\) is set as indicated below.
Process 1: \(\boldsymbol{\varphi}_{4}=(0.1,0.2,-0.4,0.3)\).
Process 2: \(\boldsymbol{\varphi}_{4}=(0.2,-0.5,0.3,-0.3)\).
Process 3: \(\boldsymbol{\varphi}_{4}=(-0.3,0.4,0.6,-0.2)\).
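For concreteness, a minimal sketch of how one series can be simulated from the AR(\(p\)) recursion (8) is shown below (Python with `numpy`; the burn-in length is an arbitrary choice of ours).

```python
import numpy as np

def simulate_ar(phi, T, burn_in=100, rng=None):
    """Simulate T observations from the AR(p) recursion (8) with N(0, 1) innovations."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    x = np.zeros(T + burn_in)
    eps = rng.standard_normal(T + burn_in)
    for t in range(p, T + burn_in):
        x[t] = float(phi @ x[t - p:t][::-1]) + eps[t]   # sum_i phi_i * X_{t-i} + eps_t
    return x[burn_in:]

# e.g., Process 1 of Scenario 1:
series = simulate_ar([0.1, 0.2, -0.4, 0.3], T=100)
```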
**Scenario 2**. Consider the AR(\(p\)) process given in (8). We fix \(p=12\). The vector of coefficients \(\boldsymbol{\varphi}_{12}=(\varphi_{1},\varphi_{2},\ldots,\varphi_{12})\) is set as
\[(0.9,-0.5,-0.3,0.3,0.1,-0.3,0.2,-0.3,0.5,-0.5,0.3,-0.3),\] \[(0.2,0.3,-0.2,-0.2,0.4,0.2,-0.1,0.2,0.1,-0.2,-0.3,0.5),\] \[(-0.3,-0.1,0.3,-0.1,-0.2,-0.1,-0.4,-0.2,-0.3,0.4,0.1,0.2),\]
for Processes 1, 2 and 3, respectively.
The simulation study was carried out as follows. For each scenario, \(N\) time series of length \(T\) were generated from each process. Several values of \(N\) and \(T\) were taken into account to analyse the effect of those parameters (see Section 5.3). The test set was constructed by considering the last \(h=2l_{\mathrm{SIG}}\) observations of each series, where \(l_{\mathrm{SIG}}\) is the number of significant lags existing in each scenario (e.g., \(l_{\mathrm{SIG}}=4\) in Scenario 1). The training period was set to the first \((T-h)\) observations of each series. The validation period was set to observations from \((l+1)\) to \((T-h)\). Note that this choice implies that the reassignment step in Algorithm 1 is carried out by considering the in-sample error (see (3)). The simulation procedure was repeated 200 times for each pair \((T,N)\).
### Alternative approaches and assessment criteria
To shed light on the behaviour of the proposed algorithm, which we will refer to as _Clustering based on Prediction Accuracy of Global Models_ (CPAGM), we compare it with the alternative approaches described below.
* _Local Models_ (LM). Specifically, a local model (e.g., an AR model) is fitted to each series in the collection (by jointly considering training and
validation periods) and used to obtain the predictions with respect to the test period. In this way, each local model gives rise to an error metric measuring its predictive accuracy. The average of these quantities can be seen as the overall error associated with the LM approach. Note that the LM method was already used by [33] to show the benefits of global models for forecasting purposes.
* _Global Models by considering an Arbitrary Partition_ (GMAP). This procedure is based on 2 steps: (i) the original set of series \(\mathcal{S}\) is randomly partitioned into \(K\) groups and (ii) for each group, a global model is fitted by considering the series pertaining to that cluster. The assessment task is carried out as indicated in Step 4 of Remark 2. It is worth highlighting that global models fitted to random groups of series have been shown to improve the predictive accuracy of one global model fitted to all the series in some datasets (see, e.g., Figure 4 in [33]). The approach GMAP can be seen as a meaningful benchmark for the proposed method, since it is expected that the groups produced by Algorithm 1 improve the forecasting effectiveness of the corresponding global models in comparison with a random partition.
* _Global models by considering Feature-Based Clustering_ (GMFBC). Particularly, the technique proposed by [32], which relies on two steps: (i) the original collection of series is split into \(K\) groups by using a clustering algorithm based on the feature extraction procedure described in [41] and (ii) \(K\) global models are constructed according to the resulting partition. This approach is evaluated in a similar way to GMAP. Note that, like CPAGM, GMFBC also tries to exploit the notion of similarity between time series in order to minimize the overall prediction error. However, GMFBC considers a specific clustering algorithm before fitting the global models, while CPAGM iterates until achieving the optimal clustering partition in terms of forecasting effectiveness.
In the simulations, the number of clusters was set to \(K=3\), since both scenarios contain 3 different generating processes. For approaches CPAGM, GMAP and GMFBC, the number of lags \(l\) to fit the global models was set to \(l=l_{\text{SIG}}\). The considered global models were standard linear regression models adjusted by least squares. As for the method LM, a linear local model was fitted to each series by using the function _auto.arima()_ in the **forecast** R package [42]. Model selection was performed by means of the AICc criterion. Note that classical linear models are important as a benchmark because they do not include any advanced machine learning technique and their model class overlaps with that of ARIMA models (a common local approach). Therefore, they are ideal to isolate the
effect of globality [33] and to analyze the advantages of splitting the dataset into different groups according to Algorithm 1.
The quality of the procedures was evaluated by comparing the clustering solution given by the algorithms with the true partition, usually referred to as ground truth, which is defined in each scenario by the corresponding underlying processes. Approaches CPAGM and GMFBC automatically provide a clustering partition. For method LM, each series was first described by means of the vector of estimated model coefficients returned by the _auto.arima()_ function (when necessary, the vectors were padded with zeros until reaching the length of the longest vector). Next, a standard \(K\)-means algorithm was executed by using these feature vectors as input. Similar clustering methods were already employed by [22] and [27], the latter in the multivariate setting. Experimental and true partitions were compared by considering the ARI.
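For reference, the \(K\)-means step of this coefficient-based variant of LM can be sketched as follows (Python with `scikit-learn`; in the paper the coefficient vectors are estimated with _auto.arima()_ in R, so the sketch assumes they are already available, and the function name is ours).

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_by_coefficients(coef_vectors, K, seed=0):
    """Zero-pad the estimated coefficient vectors to a common length and run
    K-means on the resulting feature matrix."""
    width = max(len(v) for v in coef_vectors)
    feats = np.array([np.pad(np.asarray(v, dtype=float), (0, width - len(v)))
                      for v in coef_vectors])
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(feats)
```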
The predictive accuracy of methods CPAGM, GMAP and GMFBC was assessed by recording the average MAE as indicated in (7). The MAE associated with each local model computed with respect to the test set was stored for LM and the average of those quantities was calculated as the error metric. Note that, since all series within a given scenario have the same numerical scale, the MAE is a proper measure to evaluate the overall prediction error.
In each simulation trial and given a pair \((T,N)\), the proposed technique CPAGM was executed 5 times and the partition associated with the minimum value of \(J_{\mathrm{OPT}}\) (see Remark 1) was stored. This way, we tried to avoid the well-known issue of local optima related to iterative clustering procedures. A similar strategy was employed for the remaining approaches. The overall MAE produced by GMAP was approximated via Monte Carlo (i.e., by considering several random partitions).
### Results and discussion
Average values of ARI attained by the different techniques in Scenario 1 are provided in Table 1. In order to perform rigorous comparisons, pairwise paired \(t\)-tests were carried out by taking into account the 200 simulation trials. In all cases, the alternative hypotheses stated that the mean ARI value of a given method is greater than the mean ARI value of its counterpart. Bonferroni corrections were applied to the set of \(p\)-values associated with each value of \(T\). An asterisk was incorporated in Table 1 if the corresponding method turned out to be significantly more effective than the remaining ones at the 0.01 significance level.
According to Table 1, the proposed method CPAGM achieved significantly greater ARI values than the alternative approaches in most cases. The only exceptions were \((T,N)=(200,5)\), \((T,N)=(400,5)\) and \((T,N)=(400,20)\), where CPAGM and LM showed a similar performance. What happens here is
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((T,N)\) & LM & CPAGM & GMFBC \\ \hline (20, 5) & 0.027 & \(\mathbf{0.352^{*}}\) & 0.094 \\ (20, 10) & 0.032 & \(\mathbf{0.459^{*}}\) & 0.090 \\ (20, 20) & 0.029 & \(\mathbf{0.556^{*}}\) & 0.092 \\ (20, 50) & 0.026 & \(\mathbf{0.612^{*}}\) & 0.076 \\ \hline (50, 5) & 0.305 & \(\mathbf{0.914^{*}}\) & 0.243 \\ (50, 10) & 0.336 & \(\mathbf{0.956^{*}}\) & 0.222 \\ (50, 20) & 0.331 & \(\mathbf{0.988^{*}}\) & 0.216 \\ (50, 50) & 0.331 & \(\mathbf{0.981^{*}}\) & 0.195 \\ \hline (100, 5) & 0.747 & \(\mathbf{0.946^{*}}\) & 0.379 \\ (100, 10) & 0.740 & \(\mathbf{0.954^{*}}\) & 0.380 \\ (100, 20) & 0.743 & \(\mathbf{0.961^{*}}\) & 0.334 \\ (100, 50) & 0.740 & \(\mathbf{0.956^{*}}\) & 0.311 \\ \hline (200, 5) & 0.876 & \(\mathbf{0.906}\) & 0.581 \\ (200, 10) & 0.854 & \(\mathbf{0.919^{*}}\) & 0.561 \\ (200, 20) & 0.820 & \(\mathbf{0.921^{*}}\) & 0.516 \\ (200, 50) & 0.800 & \(\mathbf{0.926^{*}}\) & 0.488 \\ \hline (400, 5) & 0.897 & \(\mathbf{0.908}\) & 0.719 \\ (400, 10) & 0.848 & \(\mathbf{0.900^{*}}\) & 0.725 \\ (400, 20) & 0.877 & \(\mathbf{0.881}\) & 0.732 \\ (400, 50) & 0.803 & \(\mathbf{0.872^{*}}\) & 0.726 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average ARI in Scenario 1. For each pair \((T,N)\), the best result is shown in bold. An asterisk indicates that a given method is significantly better than the rest at level \(\alpha=0.01\).
that, when long series are considered, the model coefficients are very accurately estimated via the local approach and thus the clustering partition returned by LM is quite similar to the ground truth. An increase in the number of series per cluster was clearly beneficial for the proposed method when short series were considered (\(T\in\{20,50\}\)), but it had little impact when \(T>50\). In some way, considering more series per cluster has a similar effect on CPAGM to increasing the series length, since both phenomena result in a better estimation of the global models. The approach GMFBC showed a steady improvement when increasing the series length, but it was still far from a perfect partition for \(T=400\).
Average results for Scenario 2 concerning ARI are displayed in Table 2. The proposed approach showed a similar behaviour to that in Scenario 1 in terms of clustering effectiveness, but the differences with respect to the remaining techniques were more marked in Scenario 2. The long memory patterns exhibited by the processes of this scenario negatively affected both methods LM and GMFBC. In fact, the local approach was not able to show the same performance as CPAGM even when very long series (\(T=1000\)) were considered. In short, the iterative procedure of Algorithm 1 takes advantage of the excellent accuracy of global models to properly estimate the complex forecasting patterns existing in the long memory processes of Scenario 2.
Average results in terms of MAE for Scenarios 1 and 2 are given in Tables 11 and 12 in the Appendix, respectively, where a discussion of the performance of the different approaches is also provided. In short, the proposed method significantly outperforms the alternative techniques in most cases, and the differences are particularly pronounced in Scenario 2.
### Additional analyses
This section shows some additional analyses which complement the simulations presented above.
#### 5.4.1 Noisy scenarios
The previous simulations considered scenarios with well-defined clusters given by three types of autoregressive processes. Specifically, the time series belonging to a given group were generated by the same stochastic process. Although this is a reasonable simulation mechanism, it is also interesting to study the behavior of the different methods when some degree of uncertainty exists in the underlying processes. To this aim, we considered a slightly modified version of Scenario 2 by incorporating some amount of noise in the corresponding model coefficients. Particularly, series 1 to \(N\) within a given group were simulated from autoregressive processes with vectors of coefficients \(u_{1}\boldsymbol{\varphi}_{12},\ldots,u_{N}\boldsymbol{\varphi}_{12}\), where \(\boldsymbol{\varphi}_{12}\) is the
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((T,N)\) & LM & CPAGM & GMFBC \\ \hline \((50,5)\) & 0.243 & \(\mathbf{0.584}^{*}\) & 0.238 \\ \((50,10)\) & 0.259 & \(\mathbf{0.853}^{*}\) & 0.222 \\ \((50,20)\) & 0.250 & \(\mathbf{0.956}^{*}\) & 0.219 \\ \((50,50)\) & 0.256 & \(\mathbf{0.980}^{*}\) & 0.205 \\ \hline \((100,5)\) & 0.386 & \(\mathbf{0.933}^{*}\) & 0.278 \\ \((100,10)\) & 0.387 & \(\mathbf{0.937}^{*}\) & 0.274 \\ \((100,20)\) & 0.410 & \(\mathbf{0.979}^{*}\) & 0.277 \\ \((100,50)\) & 0.412 & \(\mathbf{0.986}^{*}\) & 0.286 \\ \hline \((200,5)\) & 0.453 & \(\mathbf{0.907}^{*}\) & 0.302 \\ \((200,10)\) & 0.478 & \(\mathbf{0.937}^{*}\) & 0.317 \\ \((200,20)\) & 0.468 & \(\mathbf{0.959}^{*}\) & 0.306 \\ \((200,50)\) & 0.477 & \(\mathbf{0.972}^{*}\) & 0.303 \\ \hline \((400,5)\) & 0.517 & \(\mathbf{0.898}^{*}\) & 0.383 \\ \((400,10)\) & 0.510 & \(\mathbf{0.918}^{*}\) & 0.382 \\ \((400,20)\) & 0.507 & \(\mathbf{0.926}^{*}\) & 0.368 \\ \((400,50)\) & 0.487 & \(\mathbf{0.921}^{*}\) & 0.365 \\ \hline \((1000,5)\) & 0.571 & \(\mathbf{0.846}^{*}\) & 0.497 \\ \((1000,10)\) & 0.556 & \(\mathbf{0.841}^{*}\) & 0.456 \\ \((1000,20)\) & 0.552 & \(\mathbf{0.867}^{*}\) & 0.453 \\ \((1000,50)\) & 0.532 & \(\mathbf{0.877}^{*}\) & 0.457 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average ARI in Scenario 2. For each pair \((T,N)\), the best result is shown in bold. An asterisk indicates that a given method is significantly better than the rest at level \(\alpha=0.01\).
vector of coefficients associated with the corresponding group (see Section 5.1) and \(u_{1},\ldots,u_{N}\) are independent random variables following a uniform distribution in the interval \((0.8,1)\). Note that, according to previous considerations, each group of series in this new scenario shows a moderate level of variability in terms of generating structures, thus making the clustering task more challenging. The proposed approach and the alternative methods were assessed in this additional setting by following the same steps as in Scenario 2.
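A sketch of how the \(N\) series of one noisy cluster can be generated is shown below (Python, reusing the `simulate_ar` helper sketched above; the function name is ours).

```python
import numpy as np

def simulate_noisy_group(phi, N, T, low=0.8, high=1.0, rng=None):
    """Generate the N series of one cluster, each from AR coefficients u_i * phi
    with u_i drawn independently from a uniform distribution on (low, high)."""
    rng = np.random.default_rng() if rng is None else rng
    return [simulate_ar(rng.uniform(low, high) * np.asarray(phi, dtype=float), T, rng=rng)
            for _ in range(N)]
```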
Results in terms of clustering effectiveness for the noisy scenario are given in Table 3. Scores in Table 3 are rather similar to the ones in Table 2, with method CPAGM significantly outperforming the alternative techniques in all settings. Note that the clustering accuracy of the former approach is not negatively affected by the noisy coefficients of the different generating processes. Thus, a moderate amount of uncertainty is not enough to prevent the iterative procedure in Algorithm 1 from grouping the time series according to the different forecasting structures. It is worth highlighting that the feature-based approach GMFBC substantially decreases its clustering effectiveness with respect to the original Scenario 2, thus indicating that the introduced noise considerably corrupts the estimation of the corresponding statistical quantities.
Results in terms of predictive accuracy are provided in Table 13 in the Appendix. In short, the corresponding values indicate that method CPAGM outperforms the remaining techniques in most settings, but the differences in terms of MAE are less marked than in the original Scenario 2.
In sum, the previous analysis corroborates the excellent performance of the proposed algorithm even when a moderate amount of noise exists in the generating processes defining the different clusters. Note that this is a great property of CPAGM, since the assumption of clear, well-separated clusters is often not fulfilled in real time series datasets.
#### 5.4.2 Selection of \(K\) and \(l\)
Note that, in the simulation study of Sections 5.1, 5.2 and 5.3, the true values of \(K\) and \(l\) were given as input to the proposed clustering algorithm. However, the optimal values of these parameters are usually unknown in practice. For this reason, an automatic criterion to perform parameter selection was provided in Algorithm 2. In order to study the behaviour of that procedure in practice, we considered Scenario 1 with \((T,N)=(100,5)\). Training and validation sets were the same as in the original Scenario 1. The former test periods were split into two parts formed by the first and the last \(l_{\text{SIG}}=4\) observations, respectively, giving rise to the corresponding test sets. The first test set was used purely for parameter selection (see the second step above), whereas the second test set was employed for evaluation purposes.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((T,N)\) & LM & CPAGM & GMFBC \\ \hline \((50,5)\) & 0.180 & \(\mathbf{0.374^{*}}\) & 0.094 \\ \((50,10)\) & 0.160 & \(\mathbf{0.732^{*}}\) & 0.082 \\ \((50,20)\) & 0.163 & \(\mathbf{0.893^{*}}\) & 0.069 \\ \((50,50)\) & 0.160 & \(\mathbf{0.941^{*}}\) & 0.051 \\ \hline \((100,5)\) & 0.343 & \(\mathbf{0.922^{*}}\) & 0.140 \\ \((100,10)\) & 0.321 & \(\mathbf{0.978^{*}}\) & 0.115 \\ \((100,20)\) & 0.369 & \(\mathbf{0.967^{*}}\) & 0.112 \\ \((100,50)\) & 0.364 & \(\mathbf{0.976^{*}}\) & 0.090 \\ \hline \((200,5)\) & 0.461 & \(\mathbf{0.952^{*}}\) & 0.180 \\ \((200,10)\) & 0.497 & \(\mathbf{0.952^{*}}\) & 0.149 \\ \((200,20)\) & 0.503 & \(\mathbf{0.957^{*}}\) & 0.146 \\ \((200,50)\) & 0.516 & \(\mathbf{0.973^{*}}\) & 0.138 \\ \hline \((400,5)\) & 0.572 & \(\mathbf{0.915^{*}}\) & 0.227 \\ \((400,10)\) & 0.593 & \(\mathbf{0.936^{*}}\) & 0.190 \\ \((400,20)\) & 0.572 & \(\mathbf{0.931^{*}}\) & 0.183 \\ \((400,50)\) & 0.575 & \(\mathbf{0.946^{*}}\) & 0.180 \\ \hline \((1000,5)\) & 0.681 & \(\mathbf{0.934^{*}}\) & 0.248 \\ \((1000,10)\) & 0.687 & \(\mathbf{0.897^{*}}\) & 0.227 \\ \((1000,20)\) & 0.687 & \(\mathbf{0.912^{*}}\) & 0.217 \\ \((1000,50)\) & 0.702 & \(\mathbf{0.920^{*}}\) & 0.214 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average ARI in Scenario 2 with noisy coefficients. For each pair \((T,N)\), the best result is shown in bold. An asterisk indicates that a given method is significantly better than the rest at level \(\alpha=0.01\).
The procedure described above was run by considering the grid \(\mathcal{G}=\{(K,l):K=1,2,\ldots,6,l=1,2,3,4\}\). The average MAE with respect to the first test set was calculated and the pair giving rise to the minimum value of this quantity was selected as the optimal one. The simulation mechanism was repeated 200 times.
Table 4 shows the percentage of times that each pair \((K,l)\) was chosen. The true combination \((K,l)=(3,4)\) was selected \(37\%\) of the time. The procedure properly detected the correct value of \(l\) in most of the trials, but identifying the real value of \(K\) was more challenging. Particularly, the combinations \((4,4)\), \((5,4)\) and \((6,4)\) were selected with high frequency. It is worth remarking that, although theoretically these pairs could be considered a wrong choice, they are often associated with situations in which: (i) the clustering solution ends up with 3 clusters even though a value \(K>3\) is given as input parameter or (ii) the clustering algorithm correctly identifies the three real clusters but in turn divides some of them into further subgroups.
To analyse to what extent the selection of pairs \((K,4)\) with \(K>3\) is appropriate, we computed the average MAE and ARI of such pairs (the former measure being calculated with respect to the second test set) and compared them with the average MAE and ARI associated with the optimal pair. Table 5 displays the corresponding quantities along with the average MAE and ARI corresponding to the LM approach. Pairs \((K,4)\) with \(K\in\{4,5,6\}\) exhibit a similar MAE value to the true pair, which corroborates that those pairs produce clustering partitions as good as the optimal one in terms of forecasting effectiveness. The average MAE associated with local models is significantly higher. All values of the ARI indicate a partition quite close to the ground truth.
In short, a proper combination of both parameters was selected more than \(90\%\) of the time via the proposed procedure. The numerical experiment was repeated by considering \(N>5\) and the optimal pair was selected in almost \(100\%\) of the trials. As a last remark, it is worth noting that we did not consider values
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(l\backslash K\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 1.5 & 0 & 0 \\
3 & 0.5 & 1 & 1 & 1 & 0.5 & 0.5 \\
4 & 0.5 & 1.5 & 37 & 13.5 & 23.5 & 18 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Percentage of times that each pair \((K,l)\) was selected as the optimal one. The values \(T=100\) and \(N=5\) were considered.
of \(l>4\) in the grid because this often results in global models with estimated coefficients above the 4th lag being close to zero, which implies that they are virtually equivalent to a 4-lagged global model.
#### 5.4.3 Analysing the iterative behaviour of the proposed method
The simulations of Sections 5.1, 5.2 and 5.3 evaluate the performance of the proposed method but without analysing the iterative process described in Algorithm 1. However, it is important to assess how the clustering and the predictive accuracy fluctuate from one iteration to the next. To this aim, we executed Algorithm 1 in a specific setting, namely Scenario 2 with \((T,N)=(100,20)\), and recorded, in each iteration: (i) the clustering partition, \(\mathbf{C}\), (ii) the average prediction error with respect to the validation set, \(J(\mathbf{C})/n\) and (iii) the average MAE with respect to the test set (see (7)). The numerical experiment described above was repeated 1000 times.
The average and maximum number of iterations in the simulation procedure were 3.602 and 8, respectively. Figure 5 contains two curves displaying the average prediction error (MAE) with respect to the validation (blue colour) and test (orange colour) sets as a function of the specific iteration. Given the \(j\)th iteration, only those trials in which the clustering algorithm stopped at or after the \(j\)th iteration were considered to construct the curves in Figure 5. Table 6 shows the specific values of the points depicted in Figure 5. The last column includes the average ARI computed by considering the clustering partition associated with each iteration.
It is clear from Figure 5 and Table 6 that the quantity \(J(\mathbf{C})/n\) decreases the most during the first three iterations and stabilizes afterwards. The curve indicating the average error with respect to the test set exhibits a similar pattern, implying that a drop in the validation error is accompanied by a similar decrease
\begin{table}
\begin{tabular}{c c c} \hline \hline Pair \((K,l)\) & Average MAE & Average ARI \\ \hline \((3,4)\) & 0.8551 & 0.9855 \\ \((4,4)\) & 0.8737 & 0.8887 \\ \((5,4)\) & 0.8632 & 0.8595 \\ \((6,4)\) & 0.8505 & 0.8092 \\ \hline LM & 0.9400 & 0.8427 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average MAE (with respect to the second test set) and ARI for several pairs \((K,l)\) and the LM approach. The values \(T=100\) and \(N=5\) were considered.
in the test error. However, the latter curve takes higher values, which corroborates that \(J_{\text{OPT}}/n\) underestimates the real forecasting error of the method (see Remark 2).
Average values of the ARI also improve substantially during the first three iterations. However, after the third iteration, the clustering solutions are always rather similar to the ground truth. Interestingly, the maximum value of 1 (associated with a perfect identification of the underlying partition) is reached at the last iteration. Note that, although the average number of iterations was rather small in this example, this is not usually the case with real datasets, where the underlying clustering structure is usually more complex (see Section 6).
In sum, the iterative process described in Algorithm 1 performs in a reasonable way, being able to discover the true clustering structure in barely 3 iterations in this example.
#### 5.4.4 Employing the out-of-sample error in Algorithm 1
In the numerical experiments carried out above, the in-sample error was employed to measure the distance from a series to a given cluster. This usually works well with global linear models, since the in-sample error is a reliable indicator of the predictive accuracy in this context. However, when more complex models are considered, the use of the in-sample error can lead to misleading results. For instance, a complex global model can reach zero in-sample error due to overfitting, and then generalize poorly over new observations. To avoid these undesirable situations, it is necessary to consider validation periods which are
Figure 5: Average MAE with respect to the validation set (blue curve) and test set (orange curve) as a function of the iteration. Scenario 2 with \((T,N)=(100,20)\) was considered.
not used to fit the global models.
Based on previous comments, we decided to analyse the behaviour of Algorithm 1 in the particular case of training and validation periods being disjoint. To this aim, we considered Scenario 2 but modified the training and validation sets. Specifically, for each series, training and validation periods were fixed to the first \((T-h-l_{\text{SIG}})\) observations and to observations from \((T-h-l_{\text{SIG}}+1)\) to \((T-h)\), respectively. In addition, the minimum series length was set to \(T=100\) in this new setting due to the fact that \(T=50\) would produce very short training periods.
Table 7 contains the results of this new analysis in terms of ARI for method CPAGM. The performance of the method moderately decreased when considering the out-of-sample error. In fact, by comparing Table 7 with Table 2, it is clear that the clustering effectiveness of the proposed approach is better when the in-sample error is employed, especially when only a few series per cluster are considered. The worse performance of CPAGM was expected, since fewer observations are used for both fitting the global models and performing the reassignment step. This decrease in sample size ends up causing greater instability. In any case, the proposed method still outperforms the alternative approaches by a large margin in this new scenario. Results in terms of MAE are provided in Table 14 in the Appendix. In brief, CPAGM exhibits a worse behaviour for the shortest values of \(T\), but the differences are minor with respect to the original Scenario 2.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Iteration & Validation error & Test error & ARI \\ \hline
1 & 1.354 & 1.976 & -0.001 \\
2 & 0.970 & 1.483 & 0.537 \\
3 & 0.844 & 1.259 & 0.848 \\
4 & 0.824 & 1.205 & 0.911 \\
5 & 0.831 & 1.203 & 0.902 \\
6 & 0.844 & 1.225 & 0.871 \\
7 & 0.819 & 1.214 & 0.907 \\
8 & 0.800 & 1.133 & 1.000 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Average MAE with respect to the validation and test sets and ARI for the corresponding partition, as a function of the iteration. Scenario 2 with \((T,N)=(100,20)\) was considered.
\begin{table}
\begin{tabular}{c c} \hline \hline \((T,N)\) & Average ARI \\ \hline \((100,5)\) & 0.744 \\ \((100,10)\) & 0.832 \\ \((100,20)\) & 0.831 \\ \((100,50)\) & 0.876 \\ \hline \((200,5)\) & 0.788 \\ \((200,10)\) & 0.841 \\ \((200,20)\) & 0.859 \\ \((200,50)\) & 0.871 \\ \hline \((400,5)\) & 0.777 \\ \((400,10)\) & 0.810 \\ \((400,20)\) & 0.840 \\ \((400,50)\) & 0.877 \\ \hline \((1000,5)\) & 0.785 \\ \((1000,10)\) & 0.802 \\ \((1000,20)\) & 0.845 \\ \((1000,50)\) & 0.864 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Average ARI Scenario 2 for method CPAGM. Out-of-sample error was used to assign the series to the clusters.
#### 5.4.5 Nonlinear global models
Throughout this section, the proposed clustering algorithm was assessed in scenarios where the different clusters are characterized by linear structures. The corresponding results indicated that, when the linearity assumption is met, the iterative procedure outlined in Algorithm 1 shows an outstanding performance when linear global models fitted by least squares are considered. However, as real time series in many domains exhibit a certain degree of nonlinearity, it is interesting to evaluate the clustering technique in situations where the underlying stochastic processes are highly nonlinear. To this aim, an additional setup including the so-called self-exciting threshold autoregressive (SETAR) processes introduced by [43] was considered. SETAR models can adequately describe many nonlinear features commonly observed in practice, such as limit cycles and jump phenomena, among others. The specific generating structures in this new setting are provided below.
**Scenario 3**. Let \(\{X_{t}\}_{t\in\mathbb{Z}}\) be a stochastic process following the SETAR(\(p\))-type recursion given by
\[X_{t}=\begin{cases}\beta_{0}^{(1)}+\sum_{i=1}^{p}\beta_{i}^{(1)}X_{t-i}+ \epsilon_{t}^{(1)}&\text{if }X_{t-d}\leq r,\\ \beta_{0}^{(2)}+\sum_{i=1}^{p}\beta_{i}^{(2)}X_{t-i}+\epsilon_{t}^{(2)}&\text{ if }X_{t-d}>r,\end{cases} \tag{9}\]
where \(\beta_{0}^{(j)},\beta_{1}^{(j)},\ldots,\beta_{p}^{(j)}\), \(j=1,2\), are real numbers verifying the corresponding stationarity condition and \(\{\epsilon_{t}^{(j)}\}_{t\in\mathbb{Z}}\), \(j=1,2\), is a process formed by independent variables following the standard normal distribution. We fix \(p=5\) and \(d=3\). The vector of coefficients \(\mathbf{\beta}_{5}=(\beta_{0}^{(1)},\beta_{1}^{(1)},\ldots,\beta_{5}^{(1)},\beta_ {0}^{(2)},\beta_{1}^{(2)},\ldots,\beta_{5}^{(2)})\) and the parameter \(r\) are set as indicated below.
Process 1: \(\mathbf{\beta}_{5}=(0,0.2,0.9,-0.7,0.3,-0.4,0,0.5,-0.6,0.5,-0.4,0.4)\), \(r=1.2\).
Process 2: \(\mathbf{\beta}_{5}=(0,-0.2,-0.9,0.7,-0.3,0.4,0,-0.5,0.6,-0.5,0.4,-0.4)\), \(r=0\).
Process 3: \(\mathbf{\beta}_{5}=(0,0.3,0.3,0.3,-0.4,-0.4,0,-0.1,-0.7,-0.3,0.5,0.5)\), \(r=0.6\).
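A minimal sketch of how one series can be simulated from the two-regime SETAR recursion (9) is given below (Python with `numpy`; the burn-in length is our own choice).

```python
import numpy as np

def simulate_setar(beta1, beta2, r, d, T, burn_in=100, rng=None):
    """Simulate T observations from the SETAR(p) recursion (9); beta1 and beta2
    hold the intercept followed by the p AR coefficients of each regime."""
    rng = np.random.default_rng() if rng is None else rng
    b1, b2 = np.asarray(beta1, dtype=float), np.asarray(beta2, dtype=float)
    p = len(b1) - 1
    x = np.zeros(T + burn_in)
    eps = rng.standard_normal(T + burn_in)
    for t in range(max(p, d), T + burn_in):
        b = b1 if x[t - d] <= r else b2                  # regime chosen by the threshold
        x[t] = b[0] + float(b[1:] @ x[t - p:t][::-1]) + eps[t]
    return x[burn_in:]

# e.g., Process 1 of Scenario 3 (p = 5, d = 3, r = 1.2):
series = simulate_setar([0, 0.2, 0.9, -0.7, 0.3, -0.4],
                        [0, 0.5, -0.6, 0.5, -0.4, 0.4], r=1.2, d=3, T=200)
```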
A new simulation experiment was designed to evaluate the proposed clustering algorithm in Scenario 3. As in previous analyses, several values of \(N\) and \(T\) were taken into account to generate the series. This time, due to the higher complexity of Scenario 3 in comparison with Scenarios 1 and 2, the test set was constructed by considering the last \(h=l_{\text{SIG}}=5\) observations of each series. The out-of-sample error was employed as the dissimilarity measure in the iterative procedure of Algorithm 1. Specifically, training and validation sets were defined as indicated in Section 5.4.4. The number of clusters was set to \(K=3\). As Scenario 3 contains nonlinear processes, the random forest was selected for the global models involved in the approaches CPAGM, GMFBC and
GMAP. The number of lags to fit the global models was set to \(l=l_{\text{SIG}}=5\). With regard to the local approach (LM), a random forest was independently fitted to each one of the series. These individual models were used to compute the average prediction error. Note that the construction of a feature-based clustering approach based on the random forest is not straightforward. Therefore, the clustering accuracy associated with the LM method was obtained by considering the estimated coefficients of standard linear models. The simulation procedure was repeated 200 times and the ARI and the MAE were employed again as performance measures.
The results for Scenario 3 in terms of clustering effectiveness are provided in Table 8. The value \(N=50\) was not considered in this new simulation experiment due to the high computational cost of CPAGM when nonlinear global models such as the random forest are fitted. The same statistical tests indicated in Section 5.3 were carried out along with the corresponding Bonferroni corrections. According to Table 8, the proposed algorithm attains significantly higher ARI values than the alternative ones in most cases, with a clustering accuracy which generally increases with the series length (\(T\)). This effect is not observed for the number of series per process (\(N\)), which is probably due to the complexity of the models in this new scenario. The approach GMFBC shows a rather poor performance, which suggests that the features employed by this method are not appropriate to detect the dependence structure of the SETAR processes in Scenario 3. The LM approach also exhibits low scores, which was expected, since it considers estimated features based on linear models.
Results in terms of predictive accuracy are given in Table 15 in the Appendix. The corresponding values indicate that CPAGM exhibits a significantly lower forecasting error than the remaining approaches in most of the settings.
In sum, the analyses carried out throughout this section illustrate the flexibility of the proposed clustering approach, which can deal with series generated from highly complex processes as long as the class of global models is chosen appropriately.
## 6 Application to real data
In this section, we apply the proposed algorithm to perform clustering in several well-known datasets. All of them have been used in many peer-reviewed publications as standard benchmarks, from literature on local models to recent works on global models. Specifically, [33] employed these databases to show the advantages of global methods over local methods in terms of predictive accuracy. After describing the data, the method is first applied to each one of the databases individually. Then, we show an application in which series
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((T,N)\) & LM & CPAGM & GMFBC \\ \hline \((50,5)\) & 0.250 & \(\mathbf{0.321}^{*}\) & 0.130 \\ \((50,10)\) & 0.231 & \(\mathbf{0.357}^{*}\) & 0.109 \\ \((50,20)\) & 0.149 & \(\mathbf{0.354}^{*}\) & 0.054 \\ \hline \((100,5)\) & 0.278 & \(\mathbf{0.511}^{*}\) & 0.198 \\ \((100,10)\) & 0.259 & \(\mathbf{0.531}^{*}\) & 0.128 \\ \((100,20)\) & 0.201 & \(\mathbf{0.540}^{*}\) & 0.080 \\ \hline \((200,5)\) & 0.267 & \(\mathbf{0.543}^{*}\) & 0.181 \\ \((200,10)\) & 0.225 & \(\mathbf{0.582}^{*}\) & 0.143 \\ \((200,20)\) & 0.220 & \(\mathbf{0.535}^{*}\) & 0.060 \\ \hline \((400,5)\) & 0.347 & \(\mathbf{0.708}^{*}\) & 0.297 \\ \((400,10)\) & 0.290 & \(\mathbf{0.675}^{*}\) & 0.207 \\ \((400,20)\) & 0.148 & \(\mathbf{0.638}^{*}\) & 0.059 \\ \hline \((1000,5)\) & 0.397 & \(\mathbf{0.741}^{*}\) & 0.312 \\ \((1000,10)\) & 0.245 & \(\mathbf{0.732}^{*}\) & 0.145 \\ \((1000,20)\) & 0.231 & \(\mathbf{0.738}^{*}\) & 0.135 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Average ARI in Scenario 3. For each pair \((T,N)\), the best result is shown in bold. An asterisk indicates that a given method is significantly better than the rest at level \(\alpha=0.01\).
from two databases are combined. It is worth highlighting that, for the sake of simplicity and computational efficiency, the analyses shown throughout this section are focused on global linear models.
### Applying the proposed algorithm to each one of the datasets independently
This section shows the application of the proposed approach to several real time series databases, which in turn pertain to the data collections described below.
* **M1**. Heterogeneous dataset from a forecasting competition [44]. It contains 1001 series subdivided into yearly (181), quarterly (203) and monthly (617) periodicity. The considered datasets are referred to as M1 Yearly, M1 Quarterly and M1 Monthly, respectively.
* **M3**. Heterogeneous database from a forecasting competition [45], containing 3003 time series subdivided into yearly (645), quarterly (756), monthly (1428) and an extra so-called "other" category of periodicity (174). The considered datasets are referred to as M3 Yearly, M3 Quarterly, M3 Monthly and M3 Other, respectively.
* **Tourism**. Homogeneous dataset from a tourism forecasting competition [46], including 1311 series divided into yearly (518), quarterly (427) and monthly (366) data. The considered datasets are referred to as Tourism Yearly, Tourism Quarterly and Tourism Monthly, respectively.
Method CPAGM and the alternative approaches examined in Section 5 were executed in each one of the previous datasets. No data preprocessing was performed, since there is not a clear agreement about the benefits of preprocessing when fitting global models [33]. It is worth noting that, unlike in the simulation study, there is no way of objectively assessing the quality of the clustering partition in these databases, since no information about the ground truth is available. Hence, our comparative analyses focus on the predictive effectiveness of the considered techniques. In all cases, the test sets were constructed by considering the last \(h=5\) observations of each time series. Procedures CPAGM, GMFBC and GMAP were run for several values of \(K\), namely \(K\in\{1,2,3,4,5,7,10\}\), and of \(l\). Note that the range of the latter parameter is limited by the minimum series length existing in a given database.
To measure the predictive accuracy, we considered two well-known error metrics, namely the mean absolute scaled error (MASE) and the symmetric mean absolute percentage error (sMAPE). Using a scale-free or a percentage error is desirable in our setting because, unlike in the numerical experiments of Section 5, some databases contain series which are recorded in very different
scales. Thus, employing the MAE could have resulted in the average forecasting error being corrupted by the higher influence of the series in the largest scales. Note that, by considering the MASE and sMAPE metrics, the average prediction error in (6) takes the form
\[\frac{1}{n}\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i=1:\\ \boldsymbol{X}_{t}^{(i)}\in C_{k}\end{subarray}}^{n}d_{\text{MASE}}^{*}\big{(} \boldsymbol{X}_{t}^{(i)*},\overline{\mathcal{M}}_{k}\big{)}\;\;\text{and}\; \;\frac{1}{n}\sum_{k=1}^{K}\sum_{\begin{subarray}{c}i=1:\\ \boldsymbol{X}_{t}^{(i)}\in C_{k}\end{subarray}}^{n}d_{\text{sMAPE}}^{*}\big{(} \boldsymbol{X}_{t}^{(i)*},\overline{\mathcal{M}}_{k}\big{)}, \tag{10}\]
respectively, where
\[\begin{split} d_{\text{MASE}}^{*}\big{(}\boldsymbol{X}_{t}^{(i)* },\overline{\mathcal{M}}_{k}\big{)}=\frac{\frac{1}{h}\sum_{j=1}^{h}\big{|}X_{j }^{(i)*}-\overline{F}_{j,k}^{(i)*}\big{|}}{\text{MAE}_{\text{Naive}}^{i}},\\ d_{\text{sMAPE}}^{*}\big{(}\boldsymbol{X}_{t}^{(i)*},\overline{ \mathcal{M}}_{k}\big{)}=\frac{200}{h}\sum_{j=1}^{h}\Bigg{(}\frac{\big{|}X_{j }^{(i)*}-\overline{F}_{j,k}^{(i)*}\big{|}}{\big{|}X_{j}^{(i)*}\big{|}+\big{|} \overline{F}_{j,k}^{(i)*}\big{|}}\Bigg{)},\end{split} \tag{11}\]
for \(i=1,\ldots,n\), \(k=1,\ldots,K\), with \(\text{MAE}_{\text{Naive}}^{i}=\frac{1}{L_{i}-m_{i}}\sum_{t=m_{i}+1}^{L_{i}} \big{|}X_{t}^{(i)}-X_{t-m_{i}}^{(i)}\big{|}\) and \(m_{i}\) denoting the seasonal period of the \(i\)th time series (\(m_{i}=1\) if the series is nonseasonal).
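A minimal NumPy implementation of the per-series errors in (11) is sketched below; `m` denotes the seasonal period (1 for nonseasonal series), and the function names are illustrative.

```python
import numpy as np

def mae_naive(series, m=1):
    """In-sample MAE of the seasonal naive forecast (denominator of the MASE)."""
    s = np.asarray(series, dtype=float)
    return np.mean(np.abs(s[m:] - s[:-m]))

def mase(test, forecasts, series, m=1):
    """Mean absolute scaled error of the h-step-ahead forecasts for one series."""
    test, forecasts = np.asarray(test, float), np.asarray(forecasts, float)
    return np.mean(np.abs(test - forecasts)) / mae_naive(series, m)

def smape(test, forecasts):
    """Symmetric mean absolute percentage error of the h-step-ahead forecasts."""
    test, forecasts = np.asarray(test, float), np.asarray(forecasts, float)
    return 200.0 * np.mean(np.abs(test - forecasts) /
                           (np.abs(test) + np.abs(forecasts)))
```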
Figures 6, 7 and 8 show the results in terms of MASE for datasets pertaining to M1, M3 and Tourism collections, respectively. For a given method, curves of average MASE were represented as a function of the number of lags. Several colors were used to indicate the different values of \(K\). The average forecasting error associated with the LM approach was incorporated into each graph by means of a horizontal dashed line. In the plots associated with the databases Tourism Quarterly and Tourism Monthly (middle and bottom panels in Figure 8), these horizontal lines are not visible because the local approach exhibits a rather high forecasting error.
The graphs in Figures 6 and 7 indicate that, in datasets belonging to M1 and M3 collections, splitting the series into different clusters by means of the proposed approach is advantageous. In fact, the red curve, which corresponds to \(K=1\), is usually above all the remaining curves (the only exception being dataset M3 Other). This indicates that a better prediction accuracy is achieved when the series are grouped according to the underlying forecasting structures. In addition, a general pattern of reduction in the forecasting error is usually observed when increasing the order of the global models. Note that the proposed algorithm substantially outperforms the approach based on local models for several values of the number of clusters and the number of lags, sometimes over the whole range (e.g., dataset M3 Quarterly), which is consistent with the conclusions of [33] concerning the effectiveness of global models in the considered
Figure 6: Average MASE as a function of the number of lags in datasets M1 Yearly (top panel), M1 Quarterly (middle panel) and M1 Monthly (bottom panel). Each color corresponds to a different value for the number of clusters, \(K\). The horizontal dashed line indicates the average MASE achieved by the approach based on local models (LM).
Figure 7: Average MASE as a function of the number of lags in datasets M3 Yearly (top panel), M3 Quarterly (upper middle panel), M3 Monthly (lower middle panel) and M3 Other (bottom panel). Each color corresponds to a different value for the number of clusters, \(K\). The horizontal dashed line indicates the average MASE achieved by the approach based on local models (LM).
Figure 8: Average MASE as a function of the number of lags in datasets Tourism Yearly (top panel), Tourism Quarterly (middle panel) and Tourism Monthly (bottom panel). Each color corresponds to a different value for the number of clusters, \(K\). The horizontal dashed line indicates the average MASE achieved by the approach based on local models (LM).
datasets. Splitting the series into different groups according to the approach of [32] is also generally advantageous, thus indicating that the features employed by this method contain useful information about the forecasting structures of the time series. However, the corresponding average MASE is usually greater than the one produced by CPAGM, since the latter method is specifically designed to generate an optimal partition in terms of predictive accuracy. As expected, the approach based on an arbitrary partition (GMAP) shows a worse behavior. As this method splits the series totally at random, its forecasting error is usually similar to the one produced by a single global model (\(K=1\)). In fact, in some datasets (e.g., M3 Yearly), the performance of this approach clearly worsens when increasing the number of groups, which is reasonable, since the consideration of a larger number of random clusters means a worse exploitation of the information contained in the database.
A different situation happens for datasets pertaining to the Tourism collection (Figure 8). In fact, in two of these databases, namely Tourism Yearly and Tourism Monthly, grouping the series into different clusters does not result in better predictive accuracy. Note that this is not an issue, since one cannot expect the proposed method to be advantageous in each and every database. It is worth remarking that, in these datasets, approaches CPAGM and GMFBC exhibit a higher prediction error than the method based on a random partition (GMAP) when \(K>1\). This means that, in such cases, partitioning the set according to a given criterion aimed at maximizing the predictive accuracy is counterproductive. This could be due to the fact that, in most of the corresponding time series, the observations constituting the test periods behave rather differently from the remaining ones. In any case, an in-depth analysis of the series in these data collections would be desirable. On the other hand, method CPAGM outperforms GMFBC and GMAP in dataset Tourism Quarterly. Specifically, \(l=10\) lags is the optimal choice for this approach regardless of the number of clusters. In fact, values of \(l\) above or below this value result in a higher forecasting error.
In order to better understand the results of Figures 6, 7 and 8, average values of MASE are provided in Table 9. Specifically, for approaches CPAGM, GMFBC and GMAP, we report the forecasting error associated with the optimal pair \((K,l)\). The corresponding quantities indicate that the proposed method exhibits a significant advantage over the remaining approaches in most datasets of M1 and M3 collections (with the only exception of M3 Other) but results only in small improvements in datasets pertaining to Tourism collection. Results in terms of sMAPE error were also incorporated into Table 9. The superiority of the proposed approach over the alternative methods is generally more pronounced according to this new error metric. In fact, in some datasets (e.g., M1 Monthly), the differences with respect to the feature-based approach GMFBC
are dramatic. Interestingly, CPAGM clearly achieves the lowest average sMAPE in two datasets pertaining to Tourism collection, namely Tourism Quarterly and Tourism Monthly. In addition, there are some datasets in which the proposed method does not show the best performance. For instance, in Tourism Yearly, the local approach exhibits by far the highest predictive accuracy. Finally, it is worth highlighting that similar conclusions to the ones stated above were obtained when considering longer test periods (\(h>5\)).
As shown in Section 3, the proposed method constructs a clustering partition in a way that the overall prediction error is minimized. This quantity is computed by obtaining the forecasts of each time series using the global model (prototype) associated with its group, which are then compared with the corresponding test periods (see (6)). Thus, each global model has a certain contribution to the overall prediction error in the form of individual error terms, and a
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Dataset & Measure & LM & CPAGM (\(K=1\)) & GMFBC & GMAP \\ \hline M1 Yearly & MASE & 2.310 & **2.131** (2.231) & 2.187 & 2.231 \\ & sMAPE & 138.550 & 75.574 (100.683) & **71.829** & 92.362 \\ \hline M1 Quarterly & MASE & 1.244 & **1.139** (1.233) & 1.181 & 1.233 \\ & sMAPE & 26.266 & **21.645** (75.518) & 44.467 & 60.842 \\ \hline M1 Monthly & MASE & 1.060 & **0.649** (0.677) & 0.677 & 0.677 \\ & sMAPE & 18.511 & **16.609** (64.778) & 42.946 & 52.703 \\ \hline M3 Yearly & MASE & 2.182 & **2.012** (2.065) & 2.061 & 2.065 \\ & sMAPE & 15.546 & 15.598 (15.874) & **15.384** & 15.874 \\ \hline M3 Quarterly & MASE & 0.992 & **0.761** (0.812) & 0.794 & 0.812 \\ & sMAPE & 8.460 & **7.069** (7.965) & 7.440 & 7.965 \\ \hline M3 Monthly & MASE & 0.732 & **0.591** (0.642) & 0.623 & 0.641 \\ & sMAPE & 13.141 & **11.918** (12.467) & 12.256 & 12.467 \\ \hline M3 Other & MASE & 1.639 & **1.339** (**1.339**) & **1.339** & **1.339** \\ & sMAPE & 4.046 & **3.215** (3.729) & 3.657 & 3.729 \\ \hline Tourism Yearly & MASE & 2.470 & **2.260** (2.269) & 2.269 & 2.268 \\ & sMAPE & **22.434** & 39.956 (39.956) & 39.956 & 39.956 \\ \hline Tourism Quarterly & MASE & 2.501 & **1.177** (1.190) & 1.183 & 1.190 \\ & sMAPE & 22.101 & **14.515** (19.571) & 19.571 & 19.571 \\ \hline Tourism Monthly & MASE & 2.327 & **1.043** (1.048) & 1.048 & 1.048 \\ & sMAPE & 30.349 & **16.305** (18.494) & 18.494 & 18.494 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Average MASE and sMAPE associated with the optimal pair \((K,l)\) for methods CPAGM, GMFBC and GMAP. The values in parentheses in the CPAGM column correspond to a single global model (\(K=1\)). The average errors obtained by the LM approach were also incorporated. For each dataset and error metric, the best result is shown in bold.
way of assessing the quality of the model consists of examining the distribution of these quantities. To this aim, a boxplot can be constructed for each one of the groups by using the final prediction errors produced by the clustering algorithm. The top, middle and bottom panels of Figure 9 provide the corresponding boxplots for datasets M1 Yearly, M1 Quarterly and M1 Monthly, respectively, which were constructed by considering the optimal values of \(K\) and \(l\), and the MASE as the error metric. In the three cases, the distribution of the forecasting error is clearly different among groups. For instance, in dataset M1 Yearly, the series in the second and third clusters usually yield better predictions than the series in the first group. Moreover, the corresponding prediction errors show more variability in the latter case. These properties suggest a higher degree of similarity for the series in the second and third clusters in terms of forecasting structures, which results in more accurate global models. Similar conclusions can be reached by analyzing the boxplots for datasets M1 Quarterly and M1 Monthly. Note that, in the latter database, there are several time series giving rise to extremely large values of the forecasting error, which could require an individual analysis.
A numerical summary of the boxplots in Figure 9 is given in Table 10. In particular, for each cluster, the relative size is provided along with the sample mean and variance of the corresponding MASE values. These quantities corroborate the existence of substantial differences among the groups in a given dataset. Note that, in datasets M1 Yearly and M1 Monthly, the cluster containing the largest number of series is associated with the lowest average forecasting error. This is reasonable, since more series imply more accurate estimates for the coefficients of the underlying global model, thus resulting in a better predictive accuracy.
Note that, besides the clustering partition, an essential element of the proposed clustering algorithm is the set of resulting prototypes, i.e., the final global models. In fact, these models characterize the forecasting structures of the different clusters, and their analysis can provide a meaningful description of the time series belonging to each group. Based on previous comments, we decided to examine the prototypes for the 3-cluster solution in dataset M1 Yearly (top panel of Figure 9). In this regard, Figure 10 represents the estimated coefficients for the prototypes of the three clusters. Note that 8 lags were considered to fit the global models. The values associated with \(l=0\) in Figure 10 indicate the estimates for the corresponding intercepts. Note that the three prototypes exhibit clearly dissimilar behaviors. In fact, the estimated coefficients for the global models of the first and second clusters take rather different values for several lags, usually showing opposite signs. On the contrary, the global model of the third group lies somewhere in the middle, with estimated coefficients close to zero for lags between 1 and 7. In fact, the only significant
Figure 9: Distribution of MASE for the different clusters concerning the partitions produced by the proposed method in datasets M1 Yearly (top panel), M1 Quarterly (middle panel) and M1 Monthly (bottom panel). The optimal values for \(K\) and \(l\) were considered for each dataset.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline M1 Yearly & & & & & & & & & & & \\ Cluster & 1 & 2 & 3 & - & - & - & - & - & - & - \\ \hline Relative size & 0.38 & 0.40 & 0.22 & - & - & - & - & - & - & - & - \\ Mean (MASE) & 1.14 & 0.83 & 1.16 & - & - & - & - & - & - & - \\ Variance (MASE) & 0.59 & 0.55 & 0.52 & - & - & - & - & - & - & - \\ \hline M1 Quarterly & & & & & & & & & & \\ Cluster & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline Relative size & 0.06 & 0.15 & 0.15 & 0.13 & 0.06 & 0.06 & 0.02 & 0.10 & 0.22 & 0.04 \\ Mean (MASE) & 1.52 & 0.91 & 1.33 & 1.32 & 1.03 & 0.97 & 0.97 & 1.31 & 1.06 & 1.13 \\ Variance (MASE) & 1.66 & 0.73 & 1.15 & 0.96 & 0.25 & 0.32 & 0.11 & 0.35 & 0.48 & 0.34 \\ \hline M1 Monthly & & & & & & & & & \\ Cluster & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline Relative size & 0.07 & 0.13 & 0.06 & 0.05 & 0.05 & 0.15 & 0.26 & 0.07 & 0.07 & 0.09 \\ Mean (MASE) & 0.85 & 0.74 & 0.92 & 0.89 & 0.79 & 0.50 & 0.41 & 0.73 & 0.87 & 0.69 \\ Variance (MASE) & 0.25 & 0.32 & 0.30 & 0.33 & 0.27 & 0.10 & 0.19 & 0.13 & 0.33 & 0.17 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Description of the different clusters concerning the partitions produced by the proposed method in datasets M1 Yearly (upper part), M1 Quarterly (middle part) and M1 Monthly (lower part). The optimal values for \(K\) and \(l\) were considered for each dataset.
lag for this prototype seems to be \(l=8\), since the corresponding estimate takes a large value. Hence, the third cluster is expected to contain mostly series exhibiting significant serial dependence only at lag 8. Note that this is an important insight that can lead to interesting conclusions about the series in this group. In sum, the graph in Figure 10 provides a useful decomposition of the linear forecasting structures existing in dataset M1 Yearly.
### Additional analysis. Combining series from two datasets
In the previous section, the proposed method and the alternative approaches were applied by considering each database independently. However, it is interesting to assess the performance of the different techniques with datasets containing series from different domains, since several real time series databases have this property. To this aim, we employed the data collections in Section 6.1 and created new databases by combining two of the former data collections. Algorithm CPAGM and the alternative approaches were executed with these two data collections. The results of this analysis are provided in the Appendix.
## 7 Conclusions and future work
In this work, a clustering algorithm based on prediction accuracy of global forecasting models was introduced. The procedure is based on an iterative mechanism and relies on the following two steps:
Figure 10: Estimated coefficients for lags 0 (intercepts) to 8 for the global linear models concerning the 3-cluster solution produced by CPAGM (\(l=8\)) in dataset M1 Yearly.
1. \(K\) global models (prototypes) are fitted by considering the series belonging to each cluster.
2. Each time series is assigned to the group associated with the prototype yielding the lowest forecasting error according to the MAE metric.
Since the algorithm is specifically designed to minimize the overall prediction error, the resulting partition distributes the time series in such a way that the corresponding global models represent in the best possible way the existing forecasting structures. The method is motivated by the fact that, given two different model-based clustering solutions, the one generating the best predictions is preferred. In short, our method produces a meaningful clustering partition while providing a powerful tool to predict future values of the series. It is important to emphasize that, to assess the predictive ability of the procedure, a test period must be considered for each of the series in the collection. Otherwise, the forecasting error is likely to be underestimated.
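The two-step loop can be sketched as follows with pooled linear AR(\(l\)) prototypes and MAE-based reassignment on a hold-out period. This is a deliberately simplified sketch: it starts from a random partition, caps the number of iterations, ignores empty clusters and does not separate training and validation periods as the full procedure does; all names are illustrative.

```python
import numpy as np

def design(series, l):
    """Intercept plus l lagged values -> next value, for one series."""
    s = np.asarray(series, dtype=float)
    X = np.array([np.r_[1.0, s[t - l:t][::-1]] for t in range(l, len(s))])
    return X, s[l:]

def fit_prototype(cluster_series, l):
    """Pooled least-squares AR(l) model over all series in one cluster."""
    parts = [design(s, l) for s in cluster_series]
    X = np.vstack([p[0] for p in parts])
    y = np.concatenate([p[1] for p in parts])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def forecast(coef, history, h, l):
    """Recursive h-step-ahead forecast from the last l observations."""
    buf = list(np.asarray(history, dtype=float)[-l:])
    preds = []
    for _ in range(h):
        preds.append(float(coef @ np.r_[1.0, np.array(buf[-l:])[::-1]]))
        buf.append(preds[-1])
    return np.array(preds)

def cluster_by_prediction(train, test, K, l, n_iter=20, seed=None):
    """Alternate prototype fitting and MAE-based reassignment of the series."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(K, size=len(train))
    for _ in range(n_iter):
        protos = [fit_prototype([s for s, c in zip(train, labels) if c == k], l)
                  for k in range(K)]
        errors = np.array([[np.mean(np.abs(np.asarray(te, float) -
                                           forecast(p, tr, len(te), l)))
                            for p in protos]
                           for tr, te in zip(train, test)])
        new_labels = errors.argmin(axis=1)
        if np.array_equal(new_labels, labels):    # stop when the partition is stable
            break
        labels = new_labels
    return labels, protos
```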
Although several distance measures have been proposed in the literature to perform TSC (metrics based on geometric characteristics, extracted features, estimated model coefficients, etc.), to the best of our knowledge, no previous work has proposed to measure dissimilarity as the forecasting error produced by a global model. Moreover, the concept of prototype introduced in this manuscript, namely a global model fitted to all the time series within a given cluster, is also novel. In short, our method takes advantage of the outstanding performance of global models to find groups of series sharing the same forecasting patterns, a situation that can easily happen in real databases. It is important to remark that, although an improvement in the overall predictive accuracy is frequently attained by means of the proposed approach, the main output of the procedure is the resulting clustering solution, which produces a meaningful decomposition of the collection of time series in terms of forecasting structures and can be very useful as an exploratory tool.
The proposed approach was evaluated by means of a broad simulation study where the groups were characterized by different underlying stochastic processes. Some alternative methods, such as a procedure based on local models, were considered for comparison purposes. The algorithm was also applied to perform clustering in classical time series datasets. Several important elements were analysed, including the number of clusters, the AR order of the global models, the class of models (linear regression, random forest, etc.) or the number of iterations the procedure needs to reach convergence. Overall, the proposed technique showed an excellent behaviour in terms of both clustering accuracy and forecasting effectiveness, outperforming the alternative approaches.
It is worth highlighting that the proposed procedure has some limitations. An important one is that, as the method considers the future periods of the
series to compute the distance between each element and each prototype, the value of the objective function does not necessarily decrease with each iteration. However, this issue can be easily solved by means of a simple heuristic rule. In addition, there are some situations in which our method is not advantageous. For instance, if the considered dataset contains long time series exhibiting simple dependence structures, then an approach based on local models can get similar or even better results. Moreover, although the proposed algorithm gets great results in the simulation experiments with well-separated groups, its clustering accuracy decreases when some amount of uncertainty exists in the generating processes. Another important issue is related to the class of global models, which must be appropriately chosen for a correct identification of the underlying groups.
There are two main ways through which this work can be extended. First, the numerical issues related to the two-step iterative process in Algorithm 1 could be addressed. In this regard, a new objective function could be proposed so that the forecasting error automatically decreases with each iteration, as in the traditional iterative clustering approaches. Second, our approach could be extended to the fuzzy setting. This way, each series would belong simultaneously to all the clusters and the corresponding forecasts would be computed as a weighted average by considering the individual predictions produced by each global model. This would probably result in increased stability of the overall forecasting error. Both topics will be properly addressed in further research.
|
2304.00996 | Deep Learning-based Diffusion Tensor Cardiac Magnetic Resonance
Reconstruction: A Comparison Study | In vivo cardiac diffusion tensor imaging (cDTI) is a promising Magnetic
Resonance Imaging (MRI) technique for evaluating the micro-structure of
myocardial tissue in the living heart, providing insights into cardiac function
and enabling the development of innovative therapeutic strategies. However, the
integration of cDTI into routine clinical practice is challenging due to the
technical obstacles involved in the acquisition, such as low signal-to-noise
ratio and long scanning times. In this paper, we investigate and implement
three different types of deep learning-based MRI reconstruction models for cDTI
reconstruction. We evaluate the performance of these models based on
reconstruction quality assessment and diffusion tensor parameter assessment.
Our results indicate that the models we discussed in this study can be applied
for clinical use at an acceleration factor (AF) of $\times 2$ and $\times 4$,
with the D5C5 model showing superior fidelity for reconstruction and the SwinMR
model providing higher perceptual scores. There is no statistical difference
with the reference for all diffusion tensor parameters at AF $\times 2$ or most
DT parameters at AF $\times 4$, and the quality of most diffusion tensor
parameter maps are visually acceptable. SwinMR is recommended as the optimal
approach for reconstruction at AF $\times 2$ and AF $\times 4$. However, we
believed the models discussed in this studies are not prepared for clinical use
at a higher AF. At AF $\times 8$, the performance of all models discussed
remains limited, with only half of the diffusion tensor parameters being
recovered to a level with no statistical difference from the reference. Some
diffusion tensor parameter maps even provide wrong and misleading information. | Jiahao Huang, Pedro F. Ferreira, Lichao Wang, Yinzhe Wu, Angelica I. Aviles-Rivero, Carola-Bibiane Schonlieb, Andrew D. Scott, Zohya Khalique, Maria Dwornik, Ramyah Rajakulasingam, Ranil De Silva, Dudley J. Pennell, Sonia Nielles-Vallespin, Guang Yang | 2023-03-31T16:30:31Z | http://arxiv.org/abs/2304.00996v2 | # Deep Learning-based Diffusion Tensor Cardiac Magnetic Resonance Reconstruction: A Comparison Study
###### Abstract
In vivo cardiac diffusion tensor imaging (cDTI) is a promising Magnetic Resonance Imaging (MRI) technique for evaluating the micro-structure of myocardial tissue in the living heart, providing insights into cardiac function and enabling the development of innovative therapeutic strategies. However, the integration of cDTI into routine clinical practice is challenging due to the technical obstacles involved in the acquisition, such as low signal-to-noise ratio and long scanning times. In this paper, we investigate and implement three different types of deep learning-based MRI reconstruction models for cDTI reconstruction. We evaluate the performance of these models based on reconstruction quality assessment and diffusion tensor parameter assessment. Our results indicate that the models we discussed in this study can be applied for clinical use at an acceleration factor (AF) of \(\times\)2 and \(\times\)4, with the D5C5 model showing superior fidelity for reconstruction and the SwinMR model providing higher perceptual scores. There is no statistical difference from the reference for all diffusion tensor parameters at AF \(\times\)2 or most DT parameters at AF \(\times\)4, and the quality of most diffusion tensor parameter maps is visually acceptable. SwinMR is recommended as the optimal approach for reconstruction at AF \(\times\)2 and AF \(\times\)4. However, we believe the models discussed in this study are not ready for clinical use at a higher AF. At AF \(\times\)8, the performance of all models discussed remains limited, with only half of the diffusion tensor parameters being recovered to a level with no statistical difference from the reference. Some diffusion tensor parameter maps even provide wrong and misleading information.
## Introduction
In vivo cardiac diffusion tensor (DT) imaging (cDTI) is an emerging Magnetic Resonance Imaging (MRI) technique that has the potential to describe the micro-structure of myocardial tissue in the living heart. The diffusion of water molecules occurs anisotropically due to the restrictions imposed by the micro-structure of the myocardium, which can be approximated by fitting three-dimensional (3D) tensors with a specific shape and orientation in cDTI. Various parameters can be derived from the DT, including mean diffusivity (MD) and fractional anisotropy (FA), which are crucial indices that can indicate the structural integrity of myocardial tissues. The helix angle (HA) signifies local cell orientations, while the second eigenvector (E2A) represents the average sheetlet orientation [1]. The development of cDTI provides insights into the myocardial micro-structure and offers new perspectives on the elusive connection between cellular contraction and macroscopic cardiac function [1, 2]. Furthermore, it presents opportunities for novel assessments of the myocardial micro-structure and cardiac function, as well as the development and evaluation of innovative therapeutic strategies [3].
Despite the numerous advantages, there are still significant technical obstacles that must be overcome to integrate cDTI into routine clinical practice. For the calculation of the DT, diffusion-weighted images (DWIs) with diffusion encoding in at least six distinct directions need to be collected. Due to motion from the heartbeat and respiration, in vivo cDTI exploits single-shot encoding acquisitions for repetitive fast scanning, e.g., single-shot echo planar imaging (SS-EPI) or spiral diffusion-weighted imaging [4]. The utilisation of these single-shot encoding acquisitions, which lead to low signal-to-noise ratio (SNR) images, usually requires multiple repetitions to enhance the accuracy of the DT estimation [5, 6]. Each repetition necessitates an
additional breath-hold for the patient when using breath-hold acquisitions, which significantly increases the total scanning time and leads to an uncomfortable patient experience.
Numerous studies have been proposed to accelerate cDTI technique, which can be mainly categorised as 1) reducing the total amount of DWIs used for the calculation of the DT; 2) general fast DWIs by _k_-space undersampling and reconstruction using compressed sensing (CS) or deep learning techniques. This study focuses on the second strategy.
Deep learning has emerged as a powerful technique for image analysis, capitalising on the non-linear and complex nature of networks through supervised or unsupervised learning, and has found widespread applications in medical image research [7]. Deep learning-based MRI reconstruction [8, 9] has gained significant attention, leveraging its capability of learning complex and hierarchical representations from large MRI datasets [10].
In this work, we investigate the application of deep learning-based methods for cDTI reconstruction. We explore and implement three different types of deep learning-based models on cDTI datasets with acceleration factor (AF) of \(\times\)2, \(\times\)4 and \(\times\)8. These models include a Convolutional Neural Network (CNN)-based unrolling method, i.e., D5C5 [11], a CNN-based and conditional Generative Adversarial Network (GAN)-based non-unrolling method, i.e., DAGAN [12], and a Transformer-based non-unrolling methods, i.e., SwinMR [13]. The performance of these three models are evaluated by the reconstruction quality assessment and the DT parameters assessment.
Our experiments demonstrate that the models discussed in this paper can be applied for clinical use at AF \(\times\)2 and AF \(\times\)4, since both the reconstruction of DWIs and the DT parameters reach satisfactory levels. Among these models, D5C5 shows superior fidelity for the reconstruction, while SwinMR provides results with higher perceptual scores. There is no statistical difference from the reference for all the DT parameters at AF \(\times\)2 or most of the DT parameters at AF \(\times\)4. The quality of most of the DT parameter maps we considered is visually acceptable. Considering various factors, SwinMR is recommended as the optimal approach for the reconstruction with AF \(\times\)2 and AF \(\times\)4.
However, at AF \(\times\)8, the performance of these three models, including the best-performing SwinMR, is still limited. The reconstruction quality is unsatisfactory due to remaining artefacts and the noisy (DAGAN) or 'fake' (SwinMR) estimation. Only half of the DT parameters can be recovered to a level showing no statistical difference from the reference. Some DT parameter maps even provide wrong and misleading information, which is unacceptable and dangerous for clinical use.
## Related Works
### Diffusion Tensor MRI Acceleration
A major drawback of DTI is its extended scanning time, as it requires multiple DWIs with varying b-value and diffusion gradient directions to calculate the DT. In theory, the estimation of the DT requires only six DWIs with different diffusion gradient directions and one reference image. Practically for cDTI, a considerable number of cardiac DWIs and multiple averages are typically required to enhance the accuracy of DT estimation, due to the inherently low SNR of single-shot acquisitions.
Strategies to accelerate the DTI technique have been explored. One technical route aims to reduce the number of DWIs required for the DT estimation [14, 15, 16, 17, 18, 19, 20], which can be further categorised into three sub-classes.
1) Learn a direct mapping from a reduced number of repetitions (or gradient directions) of DWIs to the DT or DT parameter maps. Ferreira _et al._[14] proposed a U-Net-based method for cDTI acceleration, which directly estimates the DT, using DWIs collected within one breath-hold, instead of solving a conventional linear-least-square (LLS) tensor fitting. Karimi _et al._[15] introduced a Transformer-based model with a coarse-and-fine strategy to provide accurate estimation of the DT, using only six diffusion-weighted measurements. Aliotta _et al._[21] proposed a neural network for brain DTI, namely DiffNet, which estimated MD and FA maps directly from diffusion-weighted acquisitions with as few as three diffusion-encoding directions. They further improved their method by combining a parallel U-Net for slice-to-slice mapping and a multi-layer perceptron for pixel-to-pixel mapping [16]. Li _et al._[17] developed a CNN-based model for brain DTI, i.e., SuperDTI, to generate FA, MD and directionally encoded color maps with as few as six diffusion-weighted acquisitions.
2) Enhance DWIs (denoising). These methods usually apply only a small number of enhanced images to achieve estimation results comparable to those obtained via the standard protocol. Tian _et al._[18] developed a novel DTI processing framework, entitled DeepDTI, that minimised the required data for DTI to six diffusion-weighted images. The core idea of this framework was to use a CNN, which takes a non-diffusion-weighted (b0) image, six DWIs as well as an anatomical (T1- or T2-weighted) image as input, to produce high-quality b0 images and six DWIs. Phipps _et al._[19] applied a denoising CNN to enhance the quality of b0 images and corresponding DWIs for cDTI.
3) Refine the DT quality. Tanzer _et al._[20] proposed a GAN-based Transformer to directly enhance, in an end-to-end manner, the quality of the DT calculated with a reduced number of DWIs.
Another technical route follows general DWI acceleration by _k_-space undersampling and reconstruction [22, 23, 24, 25]. Zhu _et al._[22] directly estimated the DT from highly undersampled _k_-space data. Chen _et al._[23] incorporated the joint sparsity prior of different DWIs with the L1-L2 norm and the DT's smoothness using the total variational (TV) semi-norm to efficiently expedite DWI reconstruction. Huang _et al._[24] utilised a local low-rank model and 3D TV constraints to reconstruct the DWIs from
undersampled \(k\)-space measurements. Teh _et al._[25] introduced a directed TV-based method for DWI image reconstruction, exploiting information on the position and orientation of edges in the reference image.
In addition to these major technical routes, Liu _et al._[26] explored deep learning-based image synthesis for inter-directional DWI generation. The true b0 and 6 DWIs were concatenated with the generated data and passed to the CNN-based tensor fitting network.
### Deep Learning-Based Reconstruction
The aim of MRI reconstruction is to recover the ground truth image \(x\) from the undersampled \(k\)-space measurement \(y\), which is mathematically described as an inverse problem:
\[x=\arg\min_{x}\frac{1}{2}||\mathcal{A}x-y||_{2}^{2}+\lambda\mathcal{R}(x), \tag{1}\]
in which the degradation matrix \(\mathcal{A}\) can be further expressed as the combination of the undersampling trajectory \(\mathcal{M}\), the Fourier transform \(\mathcal{F}\) and the coil sensitivity maps \(\mathcal{S}\). \(\lambda\) is the coefficient that balances the regularisation term \(\mathcal{R}(x)\).
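In the single-coil case (taking \(\mathcal{S}\) as the identity), the degradation reduces to masking the 2D Fourier transform of the image, and the zero-filled image is obtained by inverting the masked measurement. A minimal NumPy sketch, with centred-FFT conventions chosen purely for illustration:

```python
import numpy as np

def forward(x, mask):
    """y = A x = M F x: masked, centred 2D Fourier transform of the image x."""
    k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))
    return mask * k

def zero_filled(y):
    """x_u = F^{-1} y: zero-filled reconstruction of the masked measurement y."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(y), norm="ortho"))

# x: a 256 x 96 DWI, mask: a binary array of the same shape.
# zero_filled(forward(x, mask)) gives the aliased input to the networks below.
```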
Deep learning techniques have been widely used for MRI reconstruction. Based on the association with traditional iterative CS algorithms, deep learning-based MRI reconstruction methods can be categorised into 1) unrolling-based models [11, 27, 28] and 2) non-unrolling-based models [12, 13].
Unrolling-based models usually integrate neural networks with traditional CS algorithms, simulating the iterative reconstruction algorithms through learnable iterative blocks [9]. Yang _et al._[27] reformulated the Alternating Direction Method of Multipliers (ADMM) algorithm into a multi-stage deep architecture, namely Deep-ADMM-Net, for MRI reconstruction, each stage of which corresponds to an iteration of the traditional ADMM algorithm. Some unrolling-based models improved Eq. (1) with a deep learning-based regulariser [11, 28], which can be formulated as:
\[x=\arg\min_{x}\frac{1}{2}||\mathcal{A}x-y||_{2}^{2}+\lambda\,||x-f_{\theta}(x _{u})||_{2}^{2},\quad\text{s.t. }x_{u}=\mathcal{F}^{-1}y, \tag{2}\]
in which \(f_{\theta}(\cdot)\) is a deep neural network and \(x_{u}\) is the undersampled zero-filled image (ZF). Schlemper _et al._[11] designed a deep cascade of CNNs for cardiac cine reconstruction, in which spatio-temporal correlations can also be efficiently learned via the data sharing approach. Aggarwal _et al._[28] proposed a model-based deep learning method, namely MoDL, which exploited a CNN-based regularisation prior for MRI reconstruction.
Non-unrolling-based models usually train a deep learning-based function \(f_{\theta}(\cdot)\) that maps the undersampled \(k\)-space measurement \(y\) or zero-filled images \(x_{u}\) to estimate the fully-sampled images \(\hat{x}_{u}\) or their residual in an end-to-end manner, which can be formulated as \(\hat{x}_{u}=f_{\theta}(x_{u})\) or \(\hat{x}_{u}=f_{\theta}(x_{u})+x_{u}\). Yang _et al._[12] proposed a de-aliasing Generative Adversarial Network for MRI reconstruction, in which the U-Net-based generator produced the estimated fully-sampled MRI images in an end-to-end manner. Feng _et al._[29] exploited a task-specific novel cross-attention and designed an end-to-end Transformer-based model for joint MRI reconstruction and super-resolution. Huang _et al._[13] proposed a Swin Transformer-based model, namely SwinMR, for end-to-end MRI reconstruction, and they further explored the combination of Swin Transformer and GAN for edge and texture preservation in MRI reconstruction [30].
The deep learning community constantly provides a wide range of novel and powerful network structures for both kinds of MRI reconstruction methods, including CNNs [11, 12], Recurrent Neural Networks [31, 32], Graph Neural Networks [33], recently thriving Transformers [13, 29, 30, 34, 35], etc. These rapidly evolving deep learning-based networks enable advances in MRI reconstruction.
## Methodology
In this study, we implement three deep learning-based MRI reconstruction methods, namely DAGAN [12], D5C5 [11] and SwinMR [13], and assess their performance on the cDTI dataset. The overall data flow is depicted in Figure 1.
### Data Acquisition
All data used in this study were approved by the National Research Ethics Service. Written informed consent was obtained from all subjects.
cDTI data were retrospectively acquired using a Siemens Skyra 3T MRI scanner and a Siemens Vida 3T MRI scanner (Siemens AG, Erlangen, Germany). A diffusion-weighted stimulated echo acquisition mode (STEAM) SS-EPI sequence with reduced phase field-of-view and fat saturation was used. Some MR sequence parameters are listed: TR \(=2\) RR intervals; TE \(=23\) ms; SENSE or GRAPPA with AF \(=2\); echo train duration \(=13\) ms; spatial resolution \(=2.8\times 2.8\times 8.0\) mm\({}^{3}\). Diffusion-weighted images were encoded in six directions with diffusion weightings of b \(=150\) and \(600\) sec/mm\({}^{2}\) (namely b150 and b600) in a short-axis mid-ventricular slice. Reference images, namely b0, were also acquired with a minor diffusion weighting.
We used 481 cDTI cases including 2 cardiac phases, i.e., diastole (\(n=232\)) and systole (\(n=249\)), for the experiments section. The dataset contains 241 healthy cases, 31 amyloidosis (AMYLOID) cases, 47 dilated cardiomyopathy (DCM) cases, 35 in-recovery DCM (rDCM) cases, 39 hypertrophic cardiomyopathy (HCM) cases, 48 HCM genotype-positive-phenotype-negative (HCM G+P-) cases, and 40 acute myocardial infarction (MI) cases. The overall data distribution of our dataset is shown in Table 1. The detailed data distribution per cohort and cardiac phase can be found in Table S1 in Supplementary.
This work discusses the reconstruction of systole and diastole cases separately. For each deep learning-based method, two sets of network weights were trained, one for systole and one for diastole reconstruction. In the training stage, we applied a 5-fold cross-validation strategy, using 169 diastole cases (TrainVal-D) or 183 systole cases (TrainVal-S). In the testing stage, four testing sets were utilised, including mixed ordinary testing sets with diastole cases (Test-D) or systole cases (Test-S) and out-of-distribution MI testing sets with diastole cases (Test-MI-D) or systole cases (Test-MI-S). According to Table S1, Test-D and Test-S include data from the Health, AMYLOID, rDCM, DCM, HCM and HCM G+P- cohorts, which are also included in the TrainVal sets. To further examine the model robustness and the ability to handle out-of-distribution data, the Test-MI datasets include only MI cases, which are 'invisible' to the models during the training stage.
### Data Pre-Processing
In the data pre-processing stage, all DWIs (b0, b150 and b600) were processed following the same protocol.
The pixel intensity ranges of DWIs vary considerably across different b-values. To address this, we normalised all DWIs in the dataset to a pixel intensity range of \(0\sim 1\) using the max-min method, while the maximum and minimum pixel values of all
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Cardiac Phase} & \multicolumn{2}{c}{Total} & \multicolumn{2}{c}{TrainVal} & \multicolumn{2}{c}{Test} & \multicolumn{2}{c}{Test-MI} \\ \cline{2-9} & Case & Slice & Case & Slice & Case & Slice & Case & Slice \\ \hline Diastole & 272 & 22509 & 169 & 14182 & 43 & 3630 & 20 & 1470 \\ Systole & 290 & 23654 & 183 & 15054 & 46 & 3938 & 20 & 1470 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The overview of the dataset.
Figure 1: The data flow of our implementation for cardiac diffusion tensor imaging data. The whole procedure consists (A) data acquisition, (B) data pre-processing, (C) deep learning-based reconstruction and (D) data post-processing. It is noted that D5C5 does not require the cropping and pasting step and additionally takes the undersampled \(k\)-space data and the corresponding undersampling mask as input.
DWIs were recorded for the pixel intensity range recovery at the beginning of the data post-processing stage.
In our dataset, the majority of DWIs have a resolution of \(256\times 96\), while a small subset of 2D slices exhibits a resolution of \(256\times 88\). In order to standardise the resolution, we zero-padded the edges of the images with a resolution of \(256\times 88\) to achieve a resolution of \(256\times 96\).
In this study, GRAPPA-like Cartesian \(k\)-space undersampling masks with AF \(\times 2\), \(\times 4\) and \(\times 8\), generated by the official protocol of the fastMRI dataset [10], were applied to simulate the \(k\)-space undersampling process. Since all the 2D slices have been reconstructed with a zero-padding factor of two, the phase encoding (PE) of our undersampling masks was set to 48 instead of 96, for a more realistic simulation. The undersampling masks were then zero-padded from \(128\times 48\) to \(256\times 96\) as shown in Figure 1. More details regarding the undersampling masks can be found in Figure S1 in the Supplementary.
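A simplified stand-in for such a mask is sketched below: equispaced phase-encoding lines at the nominal AF plus a small fully-sampled central band over 48 PE lines, zero-padded from \(128\times 48\) to \(256\times 96\) as described above. The fraction of central lines and the absence of a random offset are assumptions made for illustration only; the actual masks follow the fastMRI protocol.

```python
import numpy as np

def grappa_like_mask(shape=(128, 48), af=4, center_frac=0.08, pad_to=(256, 96)):
    """Equispaced Cartesian mask along the phase-encoding (last) dimension."""
    n_ro, n_pe = shape
    pe = np.zeros(n_pe)
    pe[::af] = 1                                         # equispaced sampled lines
    n_center = max(1, int(round(center_frac * n_pe)))    # fully-sampled centre
    start = n_pe // 2 - n_center // 2
    pe[start:start + n_center] = 1
    mask = np.tile(pe, (n_ro, 1))                        # same pattern per readout line
    pad_ro = (pad_to[0] - n_ro) // 2
    pad_pe = (pad_to[1] - n_pe) // 2
    return np.pad(mask, ((pad_ro, pad_ro), (pad_pe, pad_pe)))   # 128x48 -> 256x96
```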
For DAGAN and SwinMR, DWIs were further cropped to \(96\times 96\), as both models only support square-shaped input images.
### Deep Learning-Based Cardiac Diffusion Tensor Imaging Reconstruction
In this stage, deep learning-based models were utilised to take the undersampled \(k\)-space data as the input and produce the reconstructed MR images. We implemented and evaluated three deep learning-based models, namely DAGAN [12], D5C5 [11] and SwinMR [13], in this stage.
#### Dagan
DAGAN [12] is a conditional GAN-based and CNN-based model designed for general MRI reconstruction, of which the model structure is presented in Figure 2. DAGAN comprises two components: a generator and a discriminator, which are trained in an adversarial manner as a two-player game.
The generator is a modified CNN-based U-Net [36] with a residual connection [37], which takes the \(k\)-space zero-filled MR images as input and aims to produce reconstructed MR images as close as possible to the ground truth images. The discriminator is a standard CNN-based classifier that attempts to distinguish the 'fake' reconstructed MR images generated by the generator, from the ground truth MR images.
During the inference stage, only the generator is applied, which takes the ZF MR images as input and outputs the reconstructed images.
DAGAN is trained with a hybrid loss function including an image space \(l2\) loss, a frequency space \(l2\) loss, a perceptual \(l2\) loss based on a pre-trained VGG [38], as well as an adversarial loss [39]. More implementation details can be found in the original paper [12].
#### D5c5
D5C5 [11] is a CNN-based model for MRI reconstruction, with its model structure presented in Figure 3. Originally proposed for cine MRI reconstruction, D5C5 also supports general MRI reconstruction.
D5C5 takes the undersampled \(k\)-space measurement as well as ZF MR images as the input and outputs the reconstructed MR images. It is composed of multiple stages, each comprising a CNN block and a data consistency (DC) layer. The CNN block contains a cascade of convolutional layers with Rectifier Linear Units (ReLU) for feature extraction, an optional data sharing (DS) layer for learning spatio-temporal features, as well as a residual connection [37]. The DC layer takes a linear combination
Figure 2: The model architecture of DAGAN. (A) the generator of DAGAN is a modified Convolutional Neural Network (CNN)-based U-Net with a residual connection; (B) the discriminator of DAGAN is a standard CNN-based classifier. Conv2D: 2D convolution layer; Recon: reconstructed MR images; GT: ground truth MR images.
between the output of the CNN block and the undersampled \(k\)-space data, enforcing the consistency between the prediction of CNNs and the original \(k\)-space measurements. D5C5 has five stages, with five convolution layers in each CNN block, and no DS layer is applied for our 2D MRI reconstruction task.
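The DC operation in panel (C) of Figure 3 can be summarised by the following sketch, which replaces (or, for a finite \(\lambda\), blends) the predicted \(k\)-space with the measured data at the acquired locations; FFT-shift handling and the real/imaginary channel layout of the actual network are omitted here.

```python
import numpy as np

def data_consistency(x_cnn, y, mask, lam=None):
    """Enforce consistency with the measured k-space y at the sampled locations."""
    k = np.fft.fft2(x_cnn)                               # predicted k-space
    if lam is None:                                      # noiseless case: hard replacement
        k_dc = np.where(mask == 1, y, k)
    else:                                                # weighted combination via lambda
        k_dc = np.where(mask == 1, (k + lam * y) / (1 + lam), k)
    return np.fft.ifft2(k_dc)
```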
D5C5 is trained end-to-end using an image space \(l2\) loss function. Further implementation details can be found in the original paper [11].
#### SwinMR
SwinMR [13] is a Swin Transformer-based model for MRI reconstruction, with its model structure shown in Figure 4. SwinMR takes the ZF MR images as the input and directly outputs the reconstructed images.
SwinMR is composed of a CNN-based input module and output module for projecting between the image space and the latent space, a cascade of residual Swin Transformer blocks (RSTBs), and a convolution layer with a residual connection for feature extraction. A patch embedding and a patch unembedding layer are placed at the beginning and end of each RSTB, facilitating the inter-conversion of feature maps and sequences, since the computation of Transformers is based on sequences. Multiple standard Swin Transformer layers (STLs) [40] and a single convolutional layer are applied between the patch embedding and unembedding layer.
SwinMR is trained end-to-end with a hybrid loss function consisting of an image space \(l1\) loss, a frequency space \(l1\) loss, a perceptual \(l1\) loss based on a pre-trained VGG [38]. More implementation details can be found in the original paper [13].
### Data Post-Processing
We applied our in-house developed software (MATLAB 2021b, MathWorks, Natick, MA) for cDTI post-processing, following the protocol described in [14, 1]. The post-process procedure for reference data includes: 1) manual removal of low-quality DWIs; 2) DWI registration; 3) semi-manual segmentation for left ventricle (LV) myocardium; 4) DT calculation via the LLS fit; 5) DT parameter calculation including FA, MD, HA and E2A. The initial post-processing of reference data was performed by either Z.K. (7 years of experience), R.R. (3 years of experience) or M.D. (2 years of experience), and subsequently reviewed by P.F. (10 years of experience).
For the post-processing of deep learning-based reconstruction results, the outputs (\(96\times 96\)) of DAGAN and SwinMR were 'pasted' back to the corresponding zero-filled images (\(256\times 96\)) at their original position. (This process does not affect the final post-processing results since the ROI region is set in the central \(96\times 96\) area.)
All the DWIs were 'anti-normalised' (pixel value range recovery) to their original pixel intensity range using the maximum and minimum values recorded in the pre-processing stage.
The reconstruction results were arranged to construct a new reconstruction dataset with the same structure as the reference dataset. The reconstructed dataset was then automatically post-processed following the configuration of the reference data (e.g.,
Figure 3: (A) The model architecture of D5C5. D5C5 has five stages, each comprising a Convolutional Neural Network block (CNN Block) and a data consistency layer (DC). (B) The structure of the CNN Block. One optional data sharing module (DS) and five convolutional layers (Conv Layers) are included in the CNN Block. (C) The structure of the DC. \(\mathcal{M}\) denotes the undersampling mask, and \(\overline{\mathcal{M}}=\mathcal{I}-\mathcal{M}\). \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Fourier and inverse Fourier transform. \(\lambda\) is an adjustable coefficient controlling the level of DC.
low-quality removal information, registration shifting, segmentation masks) for a fair comparison.
## Experiments and Results
In this section, the experimental results are presented from two perspectives: 1) the quality of the DWI reconstruction and 2) the quality of the DT parameter estimation.
### Reconstruction Quality Assessment
In this study, four metrics were considered to assess the reconstruction quality. Peak Signal-to-Noise Ratio (PSNR) is a simple and commonly used metric for measuring the reconstruction quality, which measures the ratio of the maximum possible power of the signal to the power of the corrupting noise. Higher PSNR value indicates a better reconstruction quality. Structural Similarity Index (SSIM) is a perceptual-based metric that measures the similarity between two images by comparing their structural information. Higher SSIM value indicates a better reconstruction quality. Learned Perceptual Image Patch Similarity (LPIPS) [41] is a learned metric that measures the perceptual similarity between two images by computing the distance in the latent space using a pre-trained deep neural network. LPIPS has shown a high correlation with human perceptual judgements of the image similarity. Lower LPIPS value indicates a better generated image quality. Frechet Inception Distance (FID) [42] is a learned metric that measures the similarity between two sets of images by comparing their feature statistics, using a pre-trained deep neural network. FID has also been shown to have a high correlation with human perceptual experience. Lower FID value indicates a better generated image quality.
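For reference, PSNR and SSIM can be computed on magnitude images normalised to \([0,1]\) roughly as follows; the SSIM call assumes scikit-image is available, while LPIPS and FID require pre-trained networks and are not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity  # assuming scikit-image is installed

def psnr(reference, reconstruction, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two magnitude images."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 20 * np.log10(data_range) - 10 * np.log10(mse)

def ssim(reference, reconstruction, data_range=1.0):
    """Structural similarity index on a single 2D magnitude image."""
    return structural_similarity(reference, reconstruction, data_range=data_range)
```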
Quantitative reconstruction results on the Test-S and Test-D datasets are presented in Table 2. The two-sample t-test was applied for the statistical analysis, and \({}^{\star}\) in Table 2 indicates the specific result distribution is significantly different (\(p<0.05\)) from the **best result** distribution. Among the evaluated models, D5C5 demonstrates superior fidelity in the reconstruction, while SwinMR provides results with higher perceptual scores.
Visualised samples of the reconstruction results on Test-S and Test-D datasets are shown in Figure 5.
### Diffusion Tensor Parameter Quality Assessment
We further evaluated the quality of DT parameters, including FA, MD, E2A and HA, after post-processing.
Figure 4: (A) The model architecture of SwinMR. (B) The structure of the residual Swin Transformer block (RSTB). (C) The structure of the Swin Transformer layer (STL). Conv2D: 2D convolutional layer. MLP: multi-layer perceptron; LN: layer normalisation; Q: query; K: key; V: value. MSA: multi-head self-attention.
\begin{table}
\begin{tabular}{l|c c c c|c c c c c} \hline \hline
**Metrics** & \multicolumn{6}{c}{**Test-S**} & \multicolumn{6}{c}{**Test-D**} \\ \hline
**AF x2** & \multicolumn{1}{c}{ZF} & DAGAN & D5C5 & SwinMR & ZF & DAGAN & D5C5 & SwinMR \\ \hline SSIM \(\uparrow\) & 0.819 (0.042) \({}^{*}\) & 0.857 (0.026) \({}^{*}\) & **0.931 (0.030)** & 0.919 (0.031) \({}^{*}\) & 0.819 (0.044) \({}^{*}\) & 0.851 (0.025) \({}^{*}\) & **0.932 (0.031)** & 0.921 (0.034) \({}^{*}\) \\ PSNR \(\uparrow\) & 25.76 (2.27) \({}^{*}\) & 27.86 (1.59) \({}^{*}\) & **31.80 (2.45)** & 30.83 (2.50) \({}^{*}\) & 26.41 (2.38) \({}^{*}\) & 28.25 (1.52) \({}^{*}\) & **32.43 (2.70)** & 31.61 (2.80) \({}^{*}\) \\ LPIPS \(\downarrow\) & 0.149 (0.037) \({}^{*}\) & 0.060 (0.024) \({}^{*}\) & **0.050 (0.026)** & **0.050 (0.023)** & 0.148 (0.034) \({}^{*}\) & 0.066 (0.028) \({}^{*}\) & 0.053 (0.032) & **0.052 (0.029)** \\ FID \(\downarrow\) & 82.29 & 33.1 & 18.53 & **17.2** & 101.55 & 38.84 & 25.83 & **22.7** \\ \hline
**AF x4** & \multicolumn{1}{c}{ZF} & DAGAN & D5C5 & SwinMR & ZF & DAGAN & D5C5 & SwinMR \\ \hline SSIM \(\uparrow\) & 0.663 (0.068) \({}^{*}\) & 0.751 (0.039) \({}^{*}\) & **0.849 (0.044)** & 0.842 (0.048) \({}^{*}\) & 0.668 (0.065) \({}^{*}\) & 0.783 (0.039) \({}^{*}\) & **0.860 (0.047)** & 0.851 (0.051) \({}^{*}\) \\ PSNR \(\uparrow\) & 20.92 (2.34) \({}^{*}\) & 24.20 (1.62) \({}^{*}\) & **26.85 (2.14)** & 26.56 (2.19) \({}^{*}\) & 21.86 (2.40) \({}^{*}\) & 25.66 (1.76) \({}^{*}\) & **28.27 (2.37)** & 27.75 (2.51) \({}^{*}\) \\ LPIPS \(\downarrow\) & 0.321 (0.040) \({}^{*}\) & 0.117 (0.041) \({}^{*}\) & 0.092 (0.030) \({}^{*}\) & **0.090 (0.030)** & 0.313 (0.041) \({}^{*}\) & 0.090 (0.032) & 0.091 (0.039) \({}^{*}\) & **0.089 (0.039)** \\ FID \(\downarrow\) & 218.18 & 61.9 & 38.42 & **29.48** & 212.61 & 51.41 & 44.64 & **36.17** \\ \hline
**AF x8** & \multicolumn{1}{c}{ZF} & DAGAN & D5C5 & SwinMR & ZF & DAGAN & D5C5 & SwinMR \\ \hline SSIM \(\uparrow\) & 0.529 (0.084) \({}^{*}\) & 0.579 (0.054) \({}^{*}\) & 0.680 (0.067) \({}^{*}\) & **0.719 (0.074)** & 0.544 (0.080) \({}^{*}\) & 0.595 (0.054) \({}^{*}\) & 0.689 (0.064) \({}^{*}\) & **0.720 (0.072)** \\ PSNR \(\uparrow\) & 18.26 (2.37) \({}^{*}\) & 20.15 (1.73) \({}^{*}\) & 21.52 (2.09) \({}^{*}\) & **22.32 (2.19)** & 19.33 (2.40) \({}^{*}\) & 20.99 (1.80) \({}^{*}\) & 22.63 (2.20) \({}^{*}\) & **23.17 (2.30)** \\ LPIPS \(\downarrow\) & 0.491 (0.040) \({}^{*}\) & 0.212 (0.059) \({}^{*}\) & 0.197 (0.049) \({}^{*}\) & **0.165 (0.046)** & 0.473 (0.037) \({}^{*}\) & 0.199 (0.061) \({}^{*}\) & 0.196 (0.055) \({}^{*}\) & **0.166 (0.055)** \\ FID \(\downarrow\) & 375.91 & 100.14 & 127.37 & **62.28** & 368.58 & 85.79 & 137.83 & **72.96** \\ \hline \hline \end{tabular}
\end{table}
Table 2: The quantitative reconstruction results on the testing sets Test-S and Test-D with undersampling masks of the acceleration factor (AF) \(\times 2\), \(\times 4\) and \(\times 8\). SSIM, PSNR and LPIPS results are quoted as ‘mean (standard deviation)’. \({}^{*}\) indicates the specific distribution is significantly different (\(p<0.05\)) from the **best results** distribution by the two-sample t-test.
\begin{table}
\begin{tabular}{l|c c c|c c|c c c} \hline
**DT Para.** & \multicolumn{6}{c|}{**Test-S**} & \multicolumn{6}{c}{**Test-MI-S**} \\ \hline
**AF**\(\times 2\) & \multicolumn{1}{c}{ZF} & DAGAN & D5C5 & SwinMR & ZF & DAGAN & D5C5 & SwinMR \\ \hline FA & 0.057 [0.021] \({}^{*}\) & **0.006 (0.008)** & **0.005 (0.008)** & 0.008 (0.007) & 0.075 [0.016] \({}^{*}\) & **0.011 (0.014)** & 0.012 (0.009) & **0.012 (0.007)** \\ \cline{2-10} MD & **0.011 [0.012] \({}^{*}\)** & 0.008 (0.007) \({}^{*}\) & **0.003 (0.006)** & **0.003 (0.009)** & **0.027 (0.014) \({}^{*}\)** & 0.005 (0.008) & **0.002 (0.006)** & 0.004 (0.004) \\ \cline{2-10} HAl Slope & 1.264 [0.849] \({}^{*}\) & 0.244 [0.279] \({}^{*}\) & 0.227 [0.228] & **0.165 [0.244]** & 1.560 [0.630] \({}^{*}\) & 0.220 [0.503] & 0.208 [0.166] & **0.164 [0.161]** \\ E2A & **2.658 [0.213] \({}^{*}\)** & 0.850 [1.248] \({}^{*}\) & **0.570 [0.858]** & 0.608 [1.197] & **3.093 [0.405] \({}^{*}\)** & 0.904 [1.284] & 0.959 (0.993) & **0.699 (0.805)** \\ \hline
**AF**\(\times 4\) & \multicolumn{1}{c}{ZF} & DAGAN & D5C5 & SwinMR & ZF & DAGAN & D5C5 & SwinMR \\ \hline FA & 0.140 [0.043] \({}^{*}\) & **0.014 (0.020)** & 0.031 [0.028] \({}^{*}\) & **0.020 [0.019]** & 0.167 [0.043] \({}^{*}\) & **0.029 [0.022]** & 0.056 [0.020] \({}^{*}\) & 0.033 (
Differences in DT parameter global mean values between the reference and the reconstruction, on the systole testing sets (Test-S and Test-MI-S) and the diastole testing sets (Test-D and Test-MI-D), are presented in Table 3 and Table S2, respectively. The mean absolute error for FA and MD and the mean absolute angular error for the HA gradient (HA Slope) and E2A were employed to quantify the difference. The Mann-Whitney test was utilised for the statistical analysis, and \({}^{\star}\) in Table 3 and Table S2 indicates that the specific error distribution is significantly different (\(p<0.05\)) from the **best results** distribution. A data point with a green background indicates that the corresponding distribution of DT parameter global mean values is NOT significantly different (\(p>0.05\)) from the reference distribution according to the Mann-Whitney test.
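As an illustration (not the analysis script used here), the per-parameter comparison can be sketched as follows; the angular wrapping period assumed for the HA slope and E2A errors is our assumption for this example.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def absolute_angular_error(a_deg, b_deg, period: float = 180.0):
    """Absolute angular difference, wrapped to [0, period/2] degrees."""
    d = np.abs(np.asarray(a_deg) - np.asarray(b_deg)) % period
    return np.minimum(d, period - d)

# Hypothetical global mean values per test case (arrays of length n_cases).
fa_ref = np.array([0.42, 0.40, 0.45, 0.43])
fa_rec = np.array([0.41, 0.43, 0.44, 0.42])
mae_fa = np.mean(np.abs(fa_ref - fa_rec))
mae_e2a = np.mean(absolute_angular_error([30.0, -20.0], [28.0, -25.0]))

# Is the reconstructed distribution significantly different from the reference?
stat, p_value = mannwhitneyu(fa_ref, fa_rec, alternative="two-sided")
print(f"MAE(FA) = {mae_fa:.3f}, MAE(E2A) = {mae_e2a:.1f} deg, p = {p_value:.3f}")
```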
Overall, SwinMR can achieve better or comparable (not significantly different) MD, HA slope and E2A results on all testing sets. DAGAN can achieve better or comparable (not significantly different) FA results on all testing sets. D5C5 has provided better results only on Test-S at AF \(\times 2\), but it is not significantly better than SwinMR (on MD, HA Slope and E2A) and DAGAN (on FA).
Some cases of visualised DT parameter maps are presented in this study, including FA, MD, HA and the absolute value of E2A (|E2A|). The DT parameter maps of a systole healthy case from Test-S with different AFs are shown in Figure 6 (AF\(\times 2\)), Figure 7 (AF\(\times 4\)), and Figure 8 (AF\(\times 8\)). The DT parameter maps of a diastole healthy case from Test-D with different AFs are shown in Figure S2 (AF\(\times 2\)), Figure S3 (AF\(\times 4\)), and Figure S4 (AF\(\times 8\)). The DT parameter maps of a systole MI case from Test-MI-S with different AFs are shown in Figure S5 (AF\(\times 2\)), Figure S6 (AF\(\times 4\)), and Figure S7 (AF\(\times 8\)). The DT parameter maps of a diastole MI case from Test-MI-D with different AFs are shown in Figure S8 (AF\(\times 2\)), Figure S9 (AF\(\times 4\)), and Figure S10 (AF\(\times 8\)).
According to Table 2, for the reconstruction tasks at AF \(\times\)2 and AF \(\times\)4, D5C5 has achieved superior PSNR and SSIM, while SwinMR has achieved better deep learning-based perceptual scores, i.e., LPIPS and FID. For the reconstruction tasks at AF \(\times\)8, SwinMR has outperformed the other methods across all the metrics applied.
According to Figure 5, for the reconstruction task at AF \(\times\)2, all three methods have produced fairly good visual reconstruction results. For the reconstruction task at AF \(\times\)4, all three methods have successfully recovered the overall structural information, whereas they have behaved differently in the recovery of the high-uncertainty area. For example, in the experiment on Test-S at AF \(\times\)4 (Row 3-4, Col 1-5, Figure 5), the red arrows indicate the high-uncertainty area on the LV myocardium due to the signal loss. DAGAN has provided a noisy estimation, while SwinMR has clearly preserved this part of the information. However, the results of D5C5 have missed the information in this area. For the reconstruction task at AF \(\times\)8, none of the three methods is able to produce visually satisfactory reconstruction results. For example, in the experiment on Test-S with AF \(\times\)8 (Row 5-6, Col 1-5, Figure 5), a large number of visible aliasing artefacts along the PE direction remain in the reconstruction results of both D5C5 and SwinMR, with D5C5 performing relatively worse than SwinMR. DAGAN, to some extent, has eliminated the aliasing artefacts at the expense of increased noise, leading to a low-SNR reconstruction. Regarding the recovery of the high-uncertainty area, both DAGAN and D5C5 have failed to preserve the information in this area. SwinMR can retain most of the information in this area, but meanwhile it has produced a 'fake' estimation (green arrow).
For the fidelity of the reconstruction, D5C5 has shown superiority at relatively lower AFs, whereas this superiority disappears at a relatively higher AF. This phenomenon is caused by the utilisation of the DC module in D5C5 (Figure 3), which combines the \(k\)-space measurement information with the CNN estimation to enforce data consistency. According to Figure S1, in the reconstruction task at a relatively lower AF, a large proportion of the information in the final output of D5C5 is provided by the DC module, whereas this proportion is significantly decreased in the reconstruction task at a relatively higher AF (AF \(\times\)8). Therefore, this kind of unrolling-based method with a DC module is more suitable for reconstruction at relatively lower AFs.
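For reference, a noiseless hard data-consistency step of the kind used in D5C5 can be sketched as follows (variable names are ours; the original DC layer also supports a noise-weighted combination).

```python
import numpy as np

def data_consistency(cnn_image: np.ndarray, k_measured: np.ndarray, mask: np.ndarray):
    """Replace k-space entries at acquired locations (mask == 1) by the measurements."""
    k_cnn = np.fft.fft2(cnn_image)
    k_dc = mask * k_measured + (1 - mask) * k_cnn
    return np.fft.ifft2(k_dc)
```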
For the perceptual score of the reconstructions, the experiments have shown that SwinMR outperforms D5C5 and DAGAN on the LPIPS and FID metrics. However, even though the perceptual score correlates strongly with human observation, a better perceptual score is not always equivalent to better reconstruction quality [30]. According to Figure 5 (green arrow), SwinMR has learnt to estimate a 'fake' reconstruction detail for a higher perceptual score, which is totally unacceptable and dangerous for clinical use. We believe this phenomenon is caused by the nature of the Transformer applied in SwinMR, which is powerful enough
Figure 6: Diffusion parameter maps of the reconstruction results (AF \(\times\)2) and the reference of a healthy systole case from testing set Test-S. Row 1: fractional anisotropy (FA); Row 2: mean diffusivity (MD); Row 3: helix angle (HA); Row 4: absolute value of the second eigenvector (|E2A|).
to estimate and generate details that do not exist originally. In addition, the utilisation of the perceptual VGG-based loss encourages SwinMR to produce more _perceptually similar_ reconstructions rather than _pixel-wise similar_ reconstructions.
In general, the differences in tensor parameter global mean values between the reference and the reconstruction results tend to increase as the AF rises. Concerning the global mean values of FA, DAGAN has demonstrated superiority on Test-S and Test-MI-S, with its superiority growing as the AF increases. On Test-D and Test-MI-D, the three methods have yielded similar results, with no statistically significant difference observed. Regarding the global mean values of MD, D5C5 and SwinMR have outperformed DAGAN across all the testing sets. Specifically, D5C5 has delivered better results on Test-S, while SwinMR has excelled on Test-MI-S. On Test-D and Test-MI-D, SwinMR and D5C5 have achieved similar results with no statistically significant difference at AF \(\times\)2 and AF \(\times\)4, while SwinMR has surpassed D5C5 at a higher AF (AF \(\times\)8). For the global mean values of HA Slope, it is clear that SwinMR has outperformed DAGAN and D5C5 on all testing sets, with its superiority being statistically significant on Test-S and Test-D. In terms of the global mean values of E2A, SwinMR has generally achieved better or comparable results among the three methods, but the differences are typically not statistically significant.
Generally, the quality of the DT parameter maps has decreased as the AF increases. We believe that at AF \(\times\)2 and AF \(\times\)4, the DT parameter maps calculated from these three methods can achieve a level similar to the reference. For the MI cases from the out-of-distribution testing set Test-MI, these three methods can successfully preserve the information in the lesion area for clinical use. For example, at AF \(\times\)2, all three methods have provided DT parameter maps visually similar to the reference (Figure 6 and Figure S5). At AF \(\times\)4, all three methods can recover most of the information in the DT parameter maps. DAGAN tends to produce noisier DT parameter maps, while SwinMR and D5C5 tend to produce smoother DT parameter maps, which matches the results from the reconstruction quality assessment. We can observe from the MD map and its corresponding error map that the vertical aliasing (along the PE direction) has affected the DT parameter maps (Figure 7, red arrows). The intensity of the MI area in the MD map of DAGAN tends to decrease, while D5C5 and SwinMR have clearly preserved it (Figure S6, red arrows).
However, at AF \(\times\)8, the quality of the DT parameter maps has significantly worsened, which also matches the results from the reconstruction quality assessment. For the FA map, a band of higher FA is expected to be observed in the mesocardium for a healthy heart [43]. However, DAGAN and D5C5 have failed to recover the band of higher FA: DAGAN has produced a very noisy FA map, and D5C5 has over-smoothed the FA map and wrongly estimated a highlighted area (Figure S7, blue arrows).
Figure 7: Diffusion parameter maps of the reconstruction results (AF \(\times\)4) and the reference of a healthy systole case from testing set Test-S. Row 1: fractional anisotropy (FA); Row 2: mean diffusivity (MD); Row 3: helix angle (HA); Row 4: absolute value of the second eigenvector (|E2A|).
For the MD map, the effect of the aliasing observed at AF \(\times\)4 has become more severe. In the healthy case, a highlighted area has wrongly appeared in the MD maps from all three methods, which is unacceptable for clinical use and may lead to misdiagnosis (Figure 8, red arrows). In the MI case, the MI lesion area tends to shrink for all the methods, especially in the results of DAGAN, where the lesion area has nearly disappeared (Figure S7, red arrows). For the HA map, it has been observed that SwinMR can produce a relatively smooth HA map, while DAGAN can only reconstruct a very noisy one. However, the direction of HA has been wrongly estimated in the epicardium of the healthy case (Figure 8, green arrows). This is not acceptable for clinical use and is more likely to lead to misdiagnosis such as MI. For the |E2A| map, DAGAN tends to reconstruct a noisy map, while SwinMR tends to produce a smooth map. All three methods can reconstruct results similar to the reference even at AF \(\times\)8.
Through our experiments, we have demonstrated that the models discussed in this paper can be effectively applied for clinical use at AF \(\times\)2 and AF \(\times\)4. However, at AF \(\times\)8, the performance of these three models, including the best-performing SwinMR, has still remained limited.
We hope that this study will serve as a baseline for the future cDTI reconstruction model development. Our findings have indicated that there are still limitations when directly applying these general MRI reconstruction methods for cDTI reconstruction.
_There is an absence of restrictions on diffusion._ The loss functions utilised in the three models discussed in this study all rely on an image-domain loss, with D5C5 and DAGAN additionally incorporating a frequency-domain loss and a perceptual loss. In other words, no constraint on the diffusion information is imposed during the model training stage. For further work, the diffusion tensor or the parameter maps can be jointly incorporated into the loss function. Moreover, physical constraints on diffusion can also be incorporated into the training stage.
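A hypothetical sketch of such a diffusion-aware training objective is given below; `compute_dt_maps` stands for a differentiable DT-parameter estimator (e.g., of MD and FA) and is an assumption of this illustration, not a component of any of the three models discussed here.

```python
import torch

def combined_loss(rec_dwis, ref_dwis, compute_dt_maps, w_img: float = 1.0, w_dt: float = 0.1):
    """Image-domain loss plus a penalty on the mismatch of derived DT parameters."""
    image_term = torch.nn.functional.l1_loss(rec_dwis, ref_dwis)
    dt_term = torch.nn.functional.l1_loss(compute_dt_maps(rec_dwis),
                                          compute_dt_maps(ref_dwis))
    return w_img * image_term + w_dt * dt_term
```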
_There is a trade-off between perceptual performance and quantitative performance._ Cardiac diffusion tensor MRI is a quantitative technique, which places greater emphasis on accurate contrast, pixel intensity range, and pixel-wise fidelity, also referred to as the pixel-wise distance. However, the models discussed in this study were originally designed for structural MRI, and they tend to pay more attention to the 'perceptual similarity', which can be regarded as a latent space distance. A trade-off exists between pixel-wise fidelity and perceptual similarity [44]. For example, blurred images generally exhibit better pixel-wise fidelity, while images with clear but 'fake' details tend to have better perceptual similarity [30]. Such 'fake' details can sometimes be harmful for clinical use. Consequently, for further work, more effort should be made to consider
Figure 8: Diffusion parameter maps of the reconstruction results (AF \(\times\)8) and the reference of a healthy systole case from testing set Test-S. Row 1: fractional anisotropy (FA); Row 2: mean diffusivity (MD); Row 3: helix angle (HA); Row 4: absolute value of the second eigenvector (|E2A|).
how to improve the pixel-wise fidelity rather than the perceptual similarity, or how to prevent the appearance of 'fake' information.
_There is a gap between current DT evaluation methods and the true quality of cDTI reconstruction._ This study has revealed that the global mean value of diffusion parameters is not always accurate or sensitive enough to evaluate the diffusion tensor quality. For example, Table 3 indicates no statistically significant difference in MD between the reconstruction results (even including ZF) and the reference on Test-S, whereas Figure 8 shows that the MD maps are entirely unacceptable. This discrepancy arises because the MD value increases and decreases in different parts of the MD map, while the global mean value remains relatively consistent, rendering the global mean MD ineffective in reflecting the quality of the final DT estimation. For future work, apart from the visualised assessment, we will apply downstream-task assessment, e.g., utilising a pre-trained pathology classification or detection model to evaluate the reconstruction quality. Theoretically, better classification or detection accuracy corresponds to improved reconstruction results.
There are still limitations to this study. 1) The size of the testing sets is not sufficiently large. The relatively small testing sets increase the randomness of the experimental results and reduce the reliability of the statistical tests. In future studies, we will expand our dataset and provide more accurate results. 2) Our simulation experiment is based on retrospective \(k\)-space undersampling of single-channel DWIs that have been reconstructed by the MR scanner. The retrospective undersampling step itself has removed a large amount of noise, leading to unrealistic post-processing results. In future studies, we will conduct our experiments on prospectively acquired \(k\)-space raw data.
## Conclusion
In conclusion, we have investigated the application of deep learning-based methods for accelerating cDTI reconstruction, which has significant potential for improving the integration of cDTI into routine clinical practice. Our study focuses on three different models, namely D5C5, DAGAN, and SwinMR, which have been evaluated on cDTI datasets at AFs of \(\times\)2, \(\times\)4, and \(\times\)8. The results have demonstrated that the examined models can be effectively utilised for clinical use at AF \(\times\)2 and AF \(\times\)4, with SwinMR being the recommended optimal approach. However, at AF \(\times\)8, the performance of all models has remained limited, and further research is required to improve their performance at relatively higher AFs.
|
2309.11867 | Tuning brittleness in multi-component metallic glasses through chemical
disorder aging | Shear localization in slowly-driven bulk metallic glasses (BMGs) is typically
accompanied by a sharp drop in the bulk stress response as a signature of the
plastic yielding transition. It is also observed that the sharpness of this
elastic-plastic dynamical transition depends on the extent of local chemical
and microstructural orders, as well as the glass preparation protocol (i.e.,
thermal annealing). Here, we investigate sheared multi-element BMGs in
molecular dynamics (MD) simulations, and demonstrate that glass aging,
implemented through a hybrid Monte-Carlo(MC)-MD process, sharpens the
elastic-plastic transition through a distinct crossover, seen in strain
patterns that gradually shift from diffuse features in as-quenched samples to
localized (yet system-spanning) patterns in well-annealed glasses. This effect
of glass aging on the elastic-plastic transition is found to be correlated to
the inherent interplay between aging-induced icosahedra ordering and
co-operative formation of shear transformation zones. The observed crossover is
quantified through a measure of the age-dependent susceptibility to plastic
rearrangements, exhibiting strong (anti-)correlations to local ordering
features, and the corresponding spatial correlation length grows with the aging
timescale. | Kamran Karimi, Stefanos Papanikolaou | 2023-09-21T08:10:54Z | http://arxiv.org/abs/2309.11867v1 | # Tuning brittleness in multi-component metallic glasses through chemical disorder aging
###### Abstract
Shear localization in slowly-driven bulk metallic glasses (BMGs) is typically accompanied by a sharp drop in the bulk stress response as a signature of the plastic yielding transition. It is also observed that the sharpness of this elastic-plastic dynamical transition depends on the extent of local chemical and microstructural orders, as well as the glass preparation protocol ( _ie_. thermal annealing). Here, we investigate sheared multi-element BMGs in molecular dynamics (MD) simulations, and demonstrate that glass aging, implemented through a hybrid Monte-Carlo(MC)-MD process, sharpens the elastic-plastic transition through a distinct crossover, seen in strain patterns that gradually shift from diffuse features in as-quenched samples to localized (yet system-spanning) patterns in well-annealed glasses. This effect of glass aging on the elastic-plastic transition is found to be correlated to the inherent interplay between aging-induced icosahedra ordering and co-operative formation of shear transformation zones. The observed crossover is quantified through a measure of the age-dependent susceptibility to plastic rearrangements, exhibiting strong (anti-)correlations to local ordering features, and the corresponding spatial correlation length grows with the aging timescale.
Plastic yielding in slowly sheared metallic glasses (below the glass transition temperature \(T_{g}\)) typically occurs via localization of intense, irrecoverable deformation, yet without crushing or crumbling within the bulk. Microstructurally, this flow is attributed to emergent shear transformation zones (STZs) [1; 2; 3; 4], commonly known as carriers of amorphous plasticity, somewhat analogously to dislocations in crystals. STZs mutually interact, as they are soft mesoscale defects that relax stress locally but can further induce far-field elastic-type triggering elsewhere within the glassy medium. This phenomenon leads to _collective_ dynamics upon failure and _universal_ features, including scale-free statistics and diverging length, time, and/or energy scales that can be understood within the broad context of far-from-equilibrium critical phenomena [5; 6; 7; 8]. Despite the observed universality, the _sharpness_ of the elastic-plastic transition (seen, for example, as a discontinuous stress drop in displacement-controlled uniaxial loading) may exhibit significant variations across multi-element BMGs owing to modifications in thermal treatments (i.e. aging/annealing) and chemical compositions [9; 10; 11; 12]. In this paper, we concentrate on the equiatomic CoNiCrFeMn alloy, commonly regarded in the literature as a high-entropy "Cantor" alloy [13; 14], but we consider only its mechanical properties in the glassy state achieved by fast cooling [15]. We investigate the inherent correlations between microstructure and shear localization as the amorphous state gradually ages.
Amorphous metals have the capability to undergo plastic flow mediated by STZs, contributing significantly to their ductility [16; 17; 18; 19]. However, in certain aged glasses, which do not possess inherent heterogeneities [10; 20; 21], or when the associated length scales do not significantly surpass the average interatomic distance, plastic distortion displays localization within a dominant band and eventually brittle-type fracture. This is akin to the ductile-to-brittle transition present in a broad range of amorphous solids [5]. In addition, aging-mediated, structural relaxation, leads to "annealed" metallic glasses that nucleate certain quasi-ordered phases characterized by short range order (SRO) [22; 23; 24; 25], and there exists a strong tendency to tune the extent of shear localization through structural ordering [20; 26; 27]. SROs are commonly identified by the formation of ordered icosahedral clusters, representing the most (energetically) favored atomic arrangement within the chemically diverse, amorphous matrix. Such clusters play a pivotal role in developing a number of glassy properties, such as the dynamic slowing-down in the super-cooled regime [28]. It is worth noting that SROs serve as "infertile" sites for the nucleation of STZs in that the latter are typically loosely-packed, soft, and disordered arrangements that weaken the local strength and, thus, enhance the nucleation probability of shear banding instabilities.
In this paper, we investigate the precise characteristics of the inherent SRO-STZ interplay and the relevant atomistic mechanisms that influence the nucleation dynamics of shear bands in concentrated and chemically complex BMGs. Previously [19; 29], in concentrated BMGs, we showed how the elastic-plastic transition sharpness and fine-scale structural ordering features in shear bands might depend on the chemical composition. Here, we show how room-temperature thermal aging affects four chemically complex glasses, and then we focus on the structural aspects of the transition in the CoNiCrFeMn metallic glass. We center our approach on a classification pertinent to the plastic yielding transition in metallic glasses: _i_) the "good" or annealed glass forms localized deformation patterns and displays a discontinuous transition in the macroscopic average stress response; _ii_) the "bad" or quenched glass may delocalize strain and display ductility characteristics. We demonstrate that annealing effectively controls the crossover from bad to good glass by tuning the level of icosahedra-based structural ordering within the glass, thereby influencing its propensity to form shear bands. We perform a cluster analysis of a measure of non-affine plastic rearrangements, and we show that the cluster correlation length evolves across the yielding transition and exhibits significant variations with sample age.
_Simulations & Protocols_ - The details of the hybrid Monte Carlo (MC)-molecular dynamics (MD) simulations in this work are given in the Supplementary Material (SM) [30] (see also references [31; 32; 33; 34; 35] therein), including the description of relevant units, the preparation protocol, the utilized interatomic potential functions, and also the deformation parameters of the investigated model metallic glasses [19]. Prior to shearing, the as-quenched samples were subject to MC-MD annealing up to the aging duration of \(t_{\rm age}=100\) ps (with standard heuristic assumptions on the MC-related timescale). Simple shear tests were subsequently performed on the aged glasses at a fixed strain rate \(\dot{\gamma}_{xy}=10^{-4}\) ps\({}^{-1}\) and temperature \(T=300\) K, up to shear strain \(\gamma_{xy}=0.2\). To probe the dynamics of individual atoms, we track the mean squared non-affine displacements \(D^{2}_{\rm min}\) as a measure of atoms' non-affinity with respect to the imposed shear deformation [2]. We further perform a Voronoi analysis using OVITO [36] to locate atoms displaying full icosahedral order, namely Voronoi cells with exactly twelve pentagonal faces. To obtain the associated (number) density, \(\rho_{\rm ico}=1/V_{\rm ico}\), we repeat the Voronoi analysis by including _exclusively_ atoms with icosahedral symmetries within the periodic box (and excluding other atoms). This gives another set of Voronoi cells with volume \(V_{\rm ico}\). The mean number density of icosahedral clusters is further derived as \(\langle\rho_{\rm ico}\rangle\).
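As an illustration of the non-affinity measure, the following minimal sketch (our simplified version; cutoff-based neighbor lists, minimum-image conventions and normalization follow standard implementations such as OVITO and are omitted) evaluates the Falk-Langer measure for a single atom from the relative positions of its neighbors in the reference and deformed configurations.

```python
import numpy as np

def d2_min(ref_neighbors: np.ndarray, cur_neighbors: np.ndarray) -> float:
    """Mean squared non-affine displacement of one atom (Falk-Langer measure).

    ref_neighbors, cur_neighbors: (n_neigh, 3) neighbor positions relative to the
    central atom in the reference and deformed configurations, respectively.
    """
    X = cur_neighbors.T @ ref_neighbors       # sum_j d_j d0_j^T
    Y = ref_neighbors.T @ ref_neighbors       # sum_j d0_j d0_j^T
    J = X @ np.linalg.inv(Y)                  # best-fit local affine deformation
    residual = cur_neighbors - ref_neighbors @ J.T
    return float(np.mean(np.sum(residual**2, axis=1)))
```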
Further, we measure the softening modulus \(h_{\rm min}\), defined as the maximum _rate_ of the macroscopic average stress drop at every age \(t_{\rm age}\) as in Fig. S3. This stress drop is typically defined as the difference between the overshoot stress and the subsequent flow stress, and is associated with the initiation of a spanning shear band, thus it has been used as an order parameter in model glass studies [37; 5] showing meaningful variations with glass compositions and processing parameters [10]. Nevertheless, in metallic glass simulations and/or experiments, a robust measurement of the macroscopic drop is not always feasible due to the lack of a well-defined steady flow regime beyond the apparent stress overshoot. In a recent work [19], we established \(h_{\rm min}\) as a more robust experimentally-relevant indicator of the elastic-plastic transition in BMGs, shear banding and associated structural features.
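For reference, \(h_{\rm min}\) can be extracted from the macroscopic stress-strain curve as in the sketch below (our illustration; reporting the magnitude of the steepest negative slope is an assumed sign convention).

```python
import numpy as np

def softening_modulus(strain: np.ndarray, stress: np.ndarray) -> float:
    """Steepest stress-drop rate along the stress-strain curve."""
    slope = np.gradient(stress, strain)    # d(sigma_xy)/d(gamma_xy)
    return float(np.abs(slope.min()))      # maximum rate of the stress drop
```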
_Results_ - Figure 1 displays the results of the shear tests performed on the aged CoNiCrFeMn glass. The resulting stress-strain curves, \(\sigma_{xy}\) against \(\gamma_{xy}\) in Fig. 1(a), indicate a pronounced stress overshoot, namely a monotonic increase of stress towards a peak value at \(\gamma_{\rm max}\simeq 0.1\), followed by a sharp reduction in stress prior to a well-defined plastic flow regime, with a marked dependence on glass aging. This is quantified in Fig. 1(b), where the rate of stress drop \(h_{\rm min}\) tends to grow (and eventually saturate) with increasing \(t_{\rm age}\) for CoNiCrFeMn as well as Co\({}_{5}\)Cr\({}_{2}\)Fe\({}_{40}\)Mn\({}_{27}\)Ni\({}_{26}\), CoNiCrFe, and CoNiFe. This feature appears to be quite robust with respect to variations in the chemical composition and/or molar concentrations. The observed enhancement in the sharpness of the yielding transition aligns closely with the \(D^{2}_{\rm min}\) maps visualized in Fig. 1(c) and (d), corresponding to the as-quenched (\(t_{\rm age}=0\)) and annealed (\(t_{\rm age}=90\) ps) glasses at \(\gamma_{xy}=0.2\). Notably, the abrupt stress drop in the aged metallic glass is accompanied by localized (and system-spanning) features as illustrated in Fig. 1(d), whereas the as-quenched sample displays scattered deformation patterns across the medium, as in Fig. 1(c).
Figure 1: **a**) Macroscopic stress \(\sigma_{xy}\) plotted against applied (shear) strain \(\gamma_{xy}\) in deforming CoNiCrFeMn annealed for different durations \(t_{\rm age}\). **b**) Softening modulus \(h_{\rm min}\) versus annealing duration \(t_{\rm age}\) corresponding to several BMGs. Local mean-squared nonaffine displacements \(D^{2}_{\rm min}\) associated with **c**) as-quenched (\(t_{\rm age}=0\)) and **d**) annealed (\(t_{\rm age}=90\) ps) CoNiCrFeMn at \(\gamma_{xy}=0.2\). Here \(x\), \(y\), and \(z\) denote flow, gradient, and vorticity directions, respectively. The blue and red colors in **c**) and **d**) indicate low and high \(D^{2}_{\rm min}\) values, respectively. The size of the cubic box is \(L\simeq 80\) Å.
Here, the blue and red colors in the deformation maps indicate regions with low and high squared nonaffine displacements.
We now turn to the characteristics of microstructural ordering and the interplay with shear bands in the specific, for clarity purposes, deforming metallic glass CoNiCrFeMn. The color maps in the insets of Fig. 2(a) and (b) overlay atoms with the full icosahedral order (black disks) on the two-dimensional (interpolated) \(D^{2}_{\rm min}\) field associated with the as-quenched (\(t_{\rm age}=0\) ps) and annealed (\(t_{\rm age}=90\) ps) metallic glass at \(\gamma_{xy}=0.2\). It is evident from both maps that (red) rearranging zones notably lack local structural ordering in contrast to the (blue) rigid matrix. This is further illustrated in Fig. 2(c) and (d) displaying \(D^{2}_{\rm min}\) probability distribution functions corresponding to atoms with icosahedral symmetry in the as-quenched and annealed sample, respectively. The latter exhibits a clear bimodal behavior in Fig. 2(d) with the first (higher) and second (lower) peaks denoting the population of atoms outside and within shear zones. The conditional distribution (red squares) indicates a relatively higher contribution of the ordered icosahedral phase to the higher peak in very close agreement with the observation of rare ordering occurrences within plastically deforming zones. Such features are also present in Fig. 2(c) corresponding to the as-quenched glass but with less pronounced bimodality.
Next, we consider the atoms with icosahedral ordering and we carry out a cross-correlation analysis between the associated squared non-affine displacements and the number density \(\rho_{\rm ico}\). The scatter data of \(D^{2}_{\rm min}\) and \(\rho_{\rm ico}\) in Fig. 2(a) and (b) indicate significant (anti-)correlations between the two observables \(X=\log_{10}D^{2}_{\rm min}\) and \(Y=\log_{10}\rho_{\rm ico}\). The (linear) correlation coefficient \(c_{XY}=\langle\hat{X}\hat{Y}\rangle_{i}\) and its evolution with strain is shown in Fig. 3(a), where \(\langle.\rangle_{i}\) denotes averaging over the atom index \(i\) and \(\hat{X}\) indicates the deviation from the mean \(\langle X\rangle_{i}\), normalized by the standard deviation associated with each variable. Overall, the (anti-)correlation monotonically grows with loading, and it saturates at the onset of the plastic flow regime. The aging process leads to a sharp elastic-plastic transition on approach to failure (see Fig. 1(a) and (b)), with a very infrequent occurrence of structural icosahedral ordering within the shear bands (regions with large \(D^{2}_{\rm min}\)) (_cf._ Fig. 2(d)).
The aging-induced crossover is also manifested in the evolution of \(\langle D^{2}_{\rm min}\rangle\) (averaged over atoms) with \(t_{\rm age}\) as in Fig. 3(c). Non-affine displacements are low at small strains but exhibit a clear crossover with increasing age at the onset of yielding (\(\gamma_{xy}\simeq 0.1\)) above which \(\langle D^{2}_{\rm min}\rangle\) grows almost linearly with strain, irrespective of \(t_{\rm age}\). Figure 3(d) displays the scaled standard deviation associated with atoms' \(D^{2}_{\rm min}\) as a measure of susceptibility. The overall trend we observe is akin to the behavior illustrated in Fig. 3(c). In this context, fluctuations tend
Figure 3: Evolution of **a**) correlation coefficient \(c_{XY}\), **b**) mean number density of atoms with icosahedral symmetries \(\langle\rho^{\rm ico}\rangle\), **c**) mean squared nonaffine displacements \(\langle D^{2}_{\rm min}\rangle\), and **d**) scaled standard deviation \(\mathrm{std}(D^{2}_{\rm min})/\langle D^{2}_{\rm min}\rangle\) associated with atoms' \(D^{2}_{\rm min}\), plotted against applied strain \(\gamma_{xy}\) in CoNiCrFeMn corresponding to different aging durations \(t_{\rm age}\). The error bars denote standard errors.
Figure 2: Scatter plots of \(D^{2}_{\rm min}\) and icosahedra density \(\rho_{\rm ico}\) in the **a**) as-quenched (\(t_{\rm age}=0\)) and **b**) annealed (\(t_{\rm age}=90\) ps) CoNiCrFeMn at \(\gamma_{xy}=0.2\). \(D^{2}_{\rm min}\) probability distribution function of atoms with the full icosahedral order corresponding to the **c**) as-quenched and **d**) annealed glasses at \(\gamma_{xy}=0.2\). The corresponding \(D^{2}_{\rm min}\) spatial maps are shown in the insets with the blue and red colors indicating spatial regions with low and high nonaffine displacements. The (black) dots denote atoms with full icosahedral order. The scale of the color map is \(L\simeq 80\) Å.
to reveal an age-dependent shear-induced transition that displays characteristics akin to critical phenomena and a diverging correlation length (_cf._ Fig. 3(d)). The character/order of this shear-induced transition and the exponents associated with this correlation-length divergence may be unveiled through finite-size scaling studies in the strong aging regime, a study that goes beyond the purpose of the current work. We also probed variations in the (mean) number density of ordered clusters \(\langle\rho_{\rm{ico}}\rangle\) with strain, as in Fig. 3(b). Our data show that the degree of ordering tends to increase with aging, and the applied shear appears to further _amorphize/rejuvenate_ the deforming glass (_e.g._ through a reduction in the icosahedral cluster density by \(>10\%\)). Nevertheless, this observable lacks a clear signature of the yielding transition and associated variations with age.
The \(D^{2}_{\rm{min}}\) maps depicted in Fig. 2 give the visual impression that the elastic-plastic transition in BMGs (at \(\gamma_{xy}\simeq 0.1\)) might indeed coincide with a _percolation_ transition of rearranging atoms upon shear loading (see also [29] and references therein). To validate this picture, we adopt ideas from classical percolation theory [38], including investigations of cluster sizes and their dynamical evolution. To perform the cluster analysis, the atom-wise \(D^{2}_{\rm{min}}\) was interpolated on a regular (cubic) grid and the top \(5\%\) of grid points with the highest \(D^{2}_{\rm{min}}\) were labeled as rearranging sites and colored in black in Fig. 4(a). As a basic statistical property, \(n_{s}\) denotes the probability distribution function associated with the number of clusters containing \(s\) rearranging sites. The radius of gyration associated with a cluster of size \(s\) is defined as \(r_{s}^{2}=\sum_{i=1}^{s}|\vec{r}_{i}-\vec{r}_{0}|^{2}/s\) with the center of mass \(\vec{r}_{0}=\sum_{i=1}^{s}\vec{r}_{i}/s\). We obtain \(s\propto r_{s}^{d_{f}}\) with fractal dimension \(d_{f}\). The mean cluster size is defined as \(S=\sum_{s}n_{s}s^{2}/\sum_{s}n_{s}s\). We also define the (squared) correlation length \(\xi^{2}=2\sum_{s}r_{s}^{2}s^{2}n_{s}/\sum_{s}s^{2}n_{s}\), a weighted average based on the radii of gyration of clusters of size \(s\).
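As an illustration of these definitions, the sketch below (our simplified version; periodic boundaries and the interpolation step are omitted) labels connected rearranging sites on the thresholded grid and evaluates the mean cluster size \(S\) and the correlation length \(\xi\).

```python
import numpy as np
from scipy.ndimage import label

def cluster_statistics(binary_grid: np.ndarray):
    """Mean cluster size S and correlation length xi of a binary 3D map."""
    labels, n_clusters = label(binary_grid)
    sizes, rg2 = [], []
    for k in range(1, n_clusters + 1):
        coords = np.argwhere(labels == k).astype(float)
        com = coords.mean(axis=0)                       # cluster center of mass
        sizes.append(len(coords))
        rg2.append(np.mean(np.sum((coords - com) ** 2, axis=1)))
    s, r2 = np.asarray(sizes, dtype=float), np.asarray(rg2)
    S = np.sum(s**2) / np.sum(s)                        # mean cluster size
    xi = np.sqrt(2.0 * np.sum(r2 * s**2) / np.sum(s**2))
    return S, xi
```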
The evolution of the fractal dimension \(d_{f}\) with strain is illustrated in Fig. 4(b), corresponding to three different sample ages. The overall reduction towards \(d_{f}=2\) may imply that the soft spots tend to form fairly compact clusters at low strains but favor a more planar topology on approach to yielding. As displayed in Fig. 4(c) and (d), both the mean cluster size \(S\) and the correlation length \(\xi\) indicate a fairly smooth evolution with strain at \(t_{\rm{age}}=0\) but develop quite sharp features as the age is increased towards \(t_{\rm{age}}=90\) ps. The correlation length tends to saturate at \(\xi\simeq 45\) Å due to the physical size limit, in visual agreement with the cluster maps illustrated in Fig. 4(a) and (b). The overall reduction in size with increasing \(t_{\rm{age}}\) at the initial stages of deformation indicates that annealing and the associated structural relaxation lead to the annihilation of STZs, in contrast to the observed enhancement in the density of SROs (cf. Fig. 3(b)).
Conclusions - The present study of sheared BMGs has brought new insights into the underlying correlations between aging, structural ordering, and strain localization. We have presented direct evidence that the annealing process plays a pivotal role in controlling the sharpness of the shear-induced elastic-plastic transition, leading to a crossover from diffuse deformation features in as-quenched samples to localized shear-band-like patterns in well-annealed deforming glasses. Our findings suggest that the observed crossover is rooted in the interplay between aging-induced icosahedral ordering and the collective formation of STZs. This observation has been quantified by probing several order parameters coupled with measurable dynamical and structural metrics, including fluctuations in the atoms' propensity to plastic rearrangements as well as spatial variations in the local density of icosahedral clusters. By analyzing connected networks of soft rearranging regions, we extracted relevant length scales that evolve across the yielding transition and exhibit significant variations with the sample's age. Our findings contribute to a better understanding of the complex interplay between structural order, aging, and plasticity in metallic glasses. This knowledge has implications for the design and optimization of metallic glasses for various engineering applications, where control over strain localization and ductility via preparation protocols may be crucial.
Acknowledgments -This research was funded by the European Union Horizon 2020 research and innovation program under grant agreement no. 857470 and from the
Figure 4: **a**) Binary \(D^{2}_{\rm{min}}\) map at \(\gamma_{xy}=0.1\) corresponding to the annealed CoNiCrFeMn at \(t_{\rm{age}}=90\) ps. **b**) Fractal dimension \(d_{f}\), **c**) mean cluster size \(S\), and **d**) correlation length \(\xi\) plotted against strain \(\gamma_{xy}\) at multiple ages \(t_{\rm{age}}\). Here \(x\) and \(y\) denote flow and gradient directions, respectively. The binary maps show the top \(5\%\) of sites with the largest \(D^{2}_{\rm{min}}\) in black.
European Regional Development Fund via Foundation for Polish Science International Research Agenda PLUS program grant no. MAB PLUS/2018/8.
|
2310.20267 | A non-overlapping optimization-based domain decomposition approach to
component-based model reduction of incompressible flows | We present a component-based model order reduction procedure to efficiently
and accurately solve parameterized incompressible flows governed by the
Navier-Stokes equations. Our approach leverages a non-overlapping
optimization-based domain decomposition technique to determine the control
variable that minimizes jumps across the interfaces between sub-domains. To
solve the resulting constrained optimization problem, we propose both
Gauss-Newton and sequential quadratic programming methods, which effectively
transform the constrained problem into an unconstrained formulation.
Furthermore, we integrate model order reduction techniques into the
optimization framework, to speed up computations. In particular, we incorporate
localized training and adaptive enrichment to reduce the burden associated with
the training of the local reduced-order models. Numerical results are presented
to demonstrate the validity and effectiveness of the overall methodology. | Tommaso Taddei, Xuejun Xu, Lei Zhang | 2023-10-31T08:38:01Z | http://arxiv.org/abs/2310.20267v1 | A non-overlapping optimization-based domain decomposition approach to component-based model reduction of incompressible flows
###### Abstract
We present a component-based model order reduction procedure to efficiently and accurately solve parameterized incompressible flows governed by the Navier-Stokes equations. Our approach leverages a non-overlapping optimization-based domain decomposition technique to determine the control variable that minimizes jumps across the interfaces between sub-domains. To solve the resulting constrained optimization problem, we propose both Gauss-Newton and sequential quadratic programming methods, which effectively transform the constrained problem into an unconstrained formulation. Furthermore, we integrate model order reduction techniques into the optimization framework, to speed up computations. In particular, we incorporate localized training and adaptive enrichment to reduce the burden associated with the training of the local reduced-order models. Numerical results are presented to demonstrate the validity and effectiveness of the overall methodology.
_Keywords:_ component-based model order reduction; optimization-based domain decomposition; non-overlapping methods; Navier-Stokes equations.
## 1 Introduction
Parameterized model order reduction (pMOR) techniques [1, 2, 3, 4] have gained widespread popularity in science and engineering to reduce the computational cost in scenarios that involve repetitive computational tasks, such as many-query and real-time applications. Given the parameter domain \(\mathcal{P}\) and a parameterized partial differential equation (PDE) of interest, pMOR strategies rely on an offline/online computational decomposition: in the offline stage, which is computationally expensive and performed only once, a reduced basis (RB) approximation space is generated by exploiting several high-fidelity (HF) solutions (e.g., finite element, finite volume) to the parameterized PDE for properly chosen parameter values, and a reduced order model (ROM) is then devised; in the online stage, for any new parameter value, the ROM can be solved with computational cost independent of the HF discretization size \(N_{\mathrm{hf}}\), to ensure significant computational savings. Efficient training algorithms, such as proper orthogonal decomposition (POD, [5, 6]) and the weak-Greedy algorithm [3] are available to construct the reduced order basis (ROB). Additionally, effective projection-based techniques [7, 8] can be employed to devise ROMs that are suitable for online calculations.
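As a minimal illustration of the POD step mentioned above (assuming, for simplicity, a Euclidean inner product; in practice the inner product induced by the HF discretization is used), the reduced basis can be extracted from a snapshot matrix via a truncated singular value decomposition.

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, tol: float = 1e-4) -> np.ndarray:
    """Columns of `snapshots` are HF solutions for sampled parameter values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n = int(np.argmax(energy >= 1.0 - tol**2)) + 1   # smallest basis with enough energy
    return U[:, :n]
```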
The combination of RB methods and domain decomposition (DD) methods offers further advantages [9, 10, 11]. First, localized pMOR techniques do not require global HF solutions over the whole domain: this feature has the potential to dramatically reduce the offline computational burden for large-scale systems. Second, localization simplifies the task of defining a parameterization of the problem and enables model reduction of systems with parameter-induced _topology changes_ (cf. section 2.2). Third, the DD framework offers the flexibility to seamlessly integrate ROMs with full order models (FOMs, generated by the HF discretization) or to accommodate multi-physics applications based on independent software.
Various approaches have been proposed to combine RB methods and DD methods which differ in the way local ROMs are coupled at components' interfaces. In the reduced basis element (RBE) method [12, 13, 14], local ROMs are glued together using Lagrange multipliers. This method has been introduced in the context of the Laplace equation [12, 13] and subsequently applied to the Stokes equations [14]. A more recent application of the RBE method to the unsteady 3D Navier-Stokes equations can be found in [15], where a spectral Lagrange multiplier on the 2D interfaces is employed to couple local solutions. Another approach is the static condensation
RBE (scRBE) method [9; 16; 17], which ensures the component coupling through a static condensation procedure [18]. Additionally, approximation spaces for the interfaces (ports) between the components are also constructed [16; 17] to further reduce the computational complexity associated with the static condensation system. Another advantage of the scRBE method is the interchangeability of the components, which enables the study of different systems from a single library of parameterized archetype components. The RB-DD-finite-element (RDF) method [19] uses parametric boundary conditions in the local problems to define versatile local RB spaces for handling of networks composed of repetitive geometries characterized by different parameters. A detailed review of these methods can be found in [19].
Iterative techniques based on substructuring and the Schwarz alternating methods [20; 21] have been adapted to the pMOR framework [22; 10; 11; 23; 24]. In [22], both a non-overlapping Dirichlet-Neumann iterative scheme and a Schwarz method for overlapping sub-domains are proposed to ensure coupling between the FOM and the ROM. The coupling is achieved by ensuring the solution compatibility between the FOM solution trace and ROM solution trace at the interface. Specifically, only Galerkin-free ROMs are considered in the work of [22]. Galerkin-based ROMs are explored in the context of DD in [11], the authors develop a versatile coupling framework for both FOM-ROM coupling and ROM-ROM coupling, which can be applied to both overlapping and non-overlapping domains. Similarly, in [10] Galerkin-based ROMs are employed to speed up Dirichlet-Neumann DD iterations. A Dirichlet-Neumann DD-ROM is developed in [23] to handle non-conforming interfaces. Here, the Dirichlet and Neumann interface data are transferred using the INTERNODES method [25]. In [24], the authors present a DD-ROM technique which is designed for heterogeneous systems: in this approach, components are treated separately, and a parametrization of the interface data is used to generate HF snapshots.
Moreover, several authors have proposed to formulate the coupling problem as a minimization statement [26; 27]. In [26], the optimization problem is framed as the minimization of the difference between the ROM reconstruction and the corresponding FOM solution within the overlapping region between the ROM and the FOM domain. This approach adopts Galerkin-free ROMs and is applied to approximating incompressible flows, such as the interaction between an airfoil and a vortex, and the incompressible turbulent flow past a vehicle with varying geometry. The one-shot overlapping Schwarz method [27] consists in a constrained optimization statement that penalizes the jump at the interfaces of the components, while adhering to the approximate fulfillment of the PDE within each sub-domain. This approach has been validated for a steady nonlinear mechanics problem and also applied to an unsteady nonlinear mechanics problem with internal variables [28], in combination with overlapping partitions. The results of [27] showed that the minimization framework, which enables the application of effective optimization solvers for nonlinear least-square problems, ensures rapid convergence to the solution and is also robust with respect to the overlapping size.
In the present work, we aim to extend the method of [27] to incompressible flows in non-overlapping domains: our point of departure is the variational formulation proposed in [29] and further developed in [30; 31; 32]. As in [29], we formulate the DD problem as an optimal control problem where the control is given by the flux on the components' interfaces and the dependent variables are velocity and pressure in each subdomain; our formulation reads as a constrained minimization problem where the objective functional measures the jump in the dependent variables across the common boundaries between subdomains, while the constraints are the partial differential equations in each subdomain. We modify the formulation of [29] to incorporate an auxiliary control variable for the continuity equation which weakly ensures continuous finite-dimensional pressure across the interface; furthermore, we propose a specialized sequential quadratic programming (SQP) method to efficiently solve the optimization problem without resorting to Lagrange multipliers. We remark that non-overlapping techniques are of particular interest for _heterogeneous_ DD [33] tasks that necessitate the combination of different discretization methods in each subdomain. Non-overlapping methods are also of interest for interface problems with high-contrast coefficients [34] and for fluid flows in repetitive networks [15; 19; 17] such as the vascular system.
We here consider two-dimensional steady-state simulations at moderate Reynolds number; however, our ultimate goal is to devise a flexible computational tool to simulate vascular flows in real, patient-specific geometries. We interpret complex networks as the union of a small number of parameterized components. In order to avoid expensive global solves at training stage, we propose a combined localized training and global enrichment strategy that exclusively uses local HF solves to create local approximations for the archetype components, thus avoiding the need for computationally demanding global HF solves during the training phase.
Our work is related to several previous contributions to component-based (CB) pMOR. First, the variational formulation is strongly related to the recent work by Prusak et al. [35]. The authors of [35] consider separate spaces for velocity and pressure and rely on pressure supremizer enrichment in combination with Galerkin projection to ensure stability of the local problems; furthermore, they resort to a Lagrangian multiplier and gradient-based methods as in [29] to solve the global optimization problem. Instead, we consider a single reduced space for velocity and pressure; we rely on both the Galerkin projection and a Petrov-Galerkin formulation for the local problems; and we rely on the Gauss-Newton and SQP methods for optimization without resorting to Lagrange multipliers. Finally, the authors of [35] do not discuss the problem of localized training, which is of paramount importance for the success of CB techniques. Second, we emphasize that several authors have previously developed CB-pMOR methods for incompressible flows in repetitive geometries [19; 17]; in particular,
the work by Pegolotti and coauthors [15] first considered a CB-pMOR for the unsteady incompressible Navier-Stokes equations in realistic three-dimensional geometries. Third, the localized training and global enrichment strategies are an extension of the method proposed in [36]: localized training strategies have been previously proposed in [16, 17, 24]; similarly, enrichment techniques have been considered in several efforts for linear elliptic PDEs (see, e.g., [37]).
This paper is organized as follows. In section 2, we introduce the optimization-based domain decomposition method and the model problem considered in this work. In section 3, we review the variational formulation introduced in [29]; we present our new formulation; and we discuss the solution method based on Gauss-Newton and sequential quadratic programming. Then in section 4 we discuss the integration of projection-based ROMs into the proposed optimization framework and the hybrid solver that combines both the FOM solver and the ROM solver. In sections 3 and 4 we illustrate the method for a simplified geometric configuration with two components. Section 5 is dedicated to the presentation of the localized training and the adaptive enrichment techniques. Finally, in section 6, we present numerical results that validate the effectiveness of our methodology.
## 2 Optimization-based domain decomposition method for the Navier-Stokes equations
In this work, we consider the incompressible Navier-Stokes equations:
\[\left\{\begin{array}{ll}-\nu\Delta\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{ u}+\nabla p=\mathbf{f}&\text{in}\;\Omega,\\ \nabla\cdot\mathbf{u}=0&\text{in}\;\Omega,\\ \mathbf{u}|_{\Gamma_{\text{dir}}}=\mathbf{u}_{\text{in}},\;\;\mathbf{u}|_{ \Gamma_{\text{dir}}^{0}}=0,\;\;(\nu\nabla\mathbf{u}-p\mathbf{I})\,\mathbf{n}| _{\Gamma_{\text{neu}}}=0,\end{array}\right. \tag{1}\]
where \(\nu>0\) denotes the kinematic viscosity of the fluid, \(\Omega\) is a bounded Lipschitz domain; the open sets \(\Gamma_{\text{dir}},\Gamma_{\text{dir}}^{0},\Gamma_{\text{neu}}\) constitute a partition of \(\partial\Omega\), which are associated to non-homogeneous Dirichlet boundary conditions, homogeneous Dirichlet boundary conditions and Neumann boundary conditions, respectively. We consider two-dimensional problems; the extension to the three-dimensional case and to unsteady problems is beyond the scope of this paper.
### Optimization-based domain decomposition
For the purpose of clarity, we introduce the optimization-based domain decomposition method in the case of two sub-domains. Note that this approach can be readily extended to accommodate many sub-domains, as discussed in the subsequent sections. Consider a non-overlapping partition of \(\Omega\) into two open sub-domains \(\Omega_{1}\) and \(\Omega_{2}\) such that \(\overline{\Omega}=\overline{\Omega}_{1}\cup\overline{\Omega}_{2}\), as illustrated in Figure 1. The interface that separates the two sub-domains is denoted by \(\Gamma_{0}\) so that \(\Gamma_{0}=\overline{\Omega}_{1}\cap\overline{\Omega}_{2}\). The vectors \(\mathbf{n}_{i}\), \(i=1,2\), are the unit outward normals of \(\Omega_{i}\) on \(\Gamma_{0}\) (we thus have \(\mathbf{n}_{1}=-\mathbf{n}_{2}\)). We define the local Dirichlet and Neumann conditions for each component \(\Omega_{i}\), \(i=1,2\) as
\[\Gamma_{i,\text{dir}}=\Gamma_{\text{dir}}\cap\partial\Omega_{i},\quad\Gamma_{ i,\text{dir}}^{0}=\Gamma_{\text{dir}}^{0}\cap\partial\Omega_{i},\quad\Gamma_{i, \text{neu}}=\Gamma_{\text{neu}}\cap\partial\Omega_{i},\] (2a) and the spaces \[\mathcal{X}_{i}:=\left\{(\mathbf{v},q)\in[H^{1}(\Omega_{i})]^{2}\times L^{2}( \Omega_{i})\;:\;\mathbf{v}|_{\Gamma_{\text{dir}}^{0}}=0\right\},\quad\mathcal{ X}_{i,0}:=\left\{(\mathbf{v},q)\in\mathcal{X}_{i}\;:\;\mathbf{v}|_{ \Gamma_{i,\text{dir}}}=0\right\},\quad\mathcal{G}:=[L^{2}(\Gamma_{0})]^{2}. \tag{2b}\]
The local solution \((\mathbf{u}_{i},p_{i})\in\mathcal{X}_{i}\) is fully determined by the flux \(\mathbf{g}\) at the interface \(\Gamma_{0}\): as in [29], we thus refer to \(\mathbf{g}\) as the control. Given the control \(\mathbf{g}\in\mathcal{G}\), the velocity-pressure pair \((\mathbf{u}_{i},\,p_{i})\) satisfies \(\mathbf{u}_{i}|_{\Gamma_{i,\text{dir}}}=\mathbf{u}_{\text{in}}|_{\Gamma_{i,\text{dir}}}\) and
\[\mathcal{R}_{i}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)+\mathcal{E}_{i}( \mathbf{g},\mathbf{v})=0\quad\forall\,(\mathbf{v},\,q)\in\mathcal{X}_{i,0}, \tag{2c}\]
where
\[\mathcal{R}_{i}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)=\int_{\Omega_{i}}\left( \nu\nabla\mathbf{u}_{i}:\nabla\mathbf{v}\,-\,p_{i}(\nabla\cdot\mathbf{v})\,- \,q(\nabla\cdot\mathbf{u}_{i})\,-\,\mathbf{f}_{i}\cdot\mathbf{v}\right)dx, \quad\mathcal{E}_{i}(\mathbf{g},\mathbf{v})=\,(-1)^{i}\,\int_{\Gamma_{0}} \mathbf{g}\cdot\mathbf{v}\,dx, \tag{2d}\]
Figure 1: The domain \(\Omega\) and a partition into two non-overlapping sub-domains.
for \(i=1,2\). Here, the orientation of the flux \(\mathbf{g}\) is chosen to be the same as \(\mathbf{n}_{1}\), i.e., from \(\Omega_{1}\) to \(\Omega_{2}\); the choice of the orientation is completely arbitrary. Note that an arbitrary choice of the control \(\mathbf{g}\) does not guarantee that the local solutions \((\mathbf{u}_{i},p_{i})\) are solutions to (1); however, if \((\mathbf{u}_{1}-\mathbf{u}_{2})|_{\Gamma_{0}}=0\), we find that the field \((\mathbf{u},p)\) such that \((\mathbf{u}|_{\Omega_{1}},p|_{\Omega_{1}})\) and \((\mathbf{u}|_{\Omega_{2}},p|_{\Omega_{2}})\) satisfy (2c) is a weak solution to the global problem (1). The _optimal control_\(\mathbf{g}\) should hence guarantee velocity equality at the interface \(\Gamma_{0}\).
Gunzburger and coauthors [29, 32] proposed the following optimization-based domain-decomposition formulation to compute the desired control and the local solutions:
\[\min_{\begin{subarray}{c}(\mathbf{u}_{1},p_{1})\in\mathcal{X}_{1},\\ (\mathbf{u}_{2},p_{2})\in\mathcal{X}_{2},\\ \mathbf{g}\in\mathcal{G}\end{subarray}}\frac{1}{2}\int_{\Gamma_{0}}\left|\mathbf{u}_{1}-\mathbf{u}_{2}\right|^{2}dx+\frac{\delta}{2}\int_{\Gamma_{0}}\left|\mathbf{g}\right|^{2}dx\quad\text{s.t.}\quad\left\{\begin{array}{ll}\mathcal{R}_{i}(\mathbf{u}_{i},\,p_{i},\,\mathbf{v},\,q)+\mathcal{E}_{i}(\mathbf{g},\mathbf{v})=0&\forall\,(\mathbf{v},\,q)\in\mathcal{X}_{i,0}\\ \mathbf{u}_{i}|_{\Gamma_{i,\text{dir}}}=\mathbf{u}_{\text{in}}|_{\Gamma_{i,\text{dir}}},\end{array}\right.\quad i=1,2. \tag{3}\]
The second term in the objective function of (3) is a regularizer that is designed to penalize controls of excessive size; the positive constant \(\delta\) is chosen to control the relative importance of the two terms in the objective. The proofs of the well-posedness of the optimization formulation, as well as the convergence of the optimal solution to the solution to (1) as the regularization parameter \(\delta\) approaches \(0\), can be found in [32].
### Model problem
As in [15], we assume that the geometry of interest can be effectively approximated through instantiations of the elements of a library of archetype components; the instantiated components are obtained by low-rank geometric transformations of the archetype components. As in [15], we consider a library with two archetype components: "junction" and "channel"; the two archetype components are depicted in Figure 2, where a number is assigned to each component edge. These edge numbers indicate boundary face groups that are associated with the ports and the different types of boundary conditions. Specifically, for the junction, edge numbers \(\{1,4,7\}\) denote the ports and edge numbers \(\{2,3,5,6,8,9\}\) indicate homogeneous Dirichlet boundaries; for the channel, edge numbers \(\{1,2\}\) represent the ports and edge numbers \(\{3,4\}\) correspond to homogeneous Dirichlet boundaries.
A system can then be constructed by instantiating the two archetype components as follows:
\[\overline{\Omega}=\bigcup_{i=1}^{N_{\text{dd}}}\overline{\Omega}_{i},\quad\text{where}\quad\Omega_{i}=\Phi^{L_{i}}(\widetilde{\Omega}^{L_{i}},\mu_{i}),\quad i=1,\dots,N_{\text{dd}},\]
where \(L_{i}\in\{1,2\}\) denotes the label of the \(i\)-th component of the system, \(\widetilde{\Omega}^{1}\), \(\widetilde{\Omega}^{2}\) represent the two archetype components, and \(\Phi^{L_{i}}\) encompasses geometric transformations such as rotation, translation and non-rigid deformation that are applied to the archetype component to obtain the corresponding instantiated component that appears in the target system. The deformation of the \(i\)-th component is governed by the geometric parameter \(\mu_{i}\); the vector \(\mu_{i}\) includes a scaling factor \(\gamma\), the angle \(\theta\) and a shift \(\mathbf{x}_{\text{shift}}\) that characterize the linear map that ensures the exact fitting of consecutive elements at ports. For the junction component, the vector \(\mu_{i}\) also includes the angle \(\alpha\), which represents the angle between the main vessel and the branch vessel, as shown in Figure 2(a); for the channel, the vector \(\mu_{i}\) includes the constant \(h_{c}\), which is used in the parameterization of the bottom boundary of the channel as \(y=-h_{c}\left(4t\left(1-t\right)\right)^{\alpha_{c}}\), with \(t\in[0,1]\) and \(\alpha_{c}=4\).
We prescribe a parabolic (Poiseuille) profile \(\mathbf{u}_{\text{in}}\) at the left boundary and homogeneous Neumann conditions at the other boundary ports. In conclusion, the complete system configuration is uniquely prescribed by (i) the component labels \(\{L_{i}\}_{i=1}^{N_{\text{dd}}}\) and the geometric parameters \(\mu=\text{vec}(\mu_{1},\dots,\mu_{N_{\text{dd}}})\), and (ii) the Reynolds number \(\mathrm{Re}\) at the inlet. We define the Reynolds number as \(\mathrm{Re}=\frac{Hu_{0}}{\nu}\), where \(H=1\) denotes the diameter of the vessel at the inlet, \(u_{0}\) represents the centerline velocity imposed at the inlet, and \(\nu\) is the kinematic viscosity. In the numerical implementation, we set \(\nu=\frac{1}{\mathrm{Re}_{\text{ref}}}\) in all the components of the network, and we consider the parametric inflow condition \(u_{0}(\mathrm{Re})=\frac{\mathrm{Re}}{\mathrm{Re}_{\text{ref}}}\).
Figure 3 illustrates two examples of target systems, which consist of \(3\) and \(4\) components, respectively: the red numbers indicate the indices of the components, while the blue numbers indicate the internal ports. Note that the two systems are not isomorphic to each other: parameter variations hence induce _topology changes_ that prevent the application of standard monolithic pMOR techniques.
**Remark 1**.: _We here observe that each component includes mixed Dirichlet-Neumann boundary conditions: the presence of Neumann conditions prevents the problem of pressure indeterminacy (up to an additive constant), and the existence of Dirichlet conditions eliminates the need for any additional compatibility condition [30] concerning the control variable \(\mathbf{g}\)._
**Remark 2**.: _We observe that the boundary face group 1 for the two archetype components either corresponds to an internal interface or to the inlet Dirichlet condition (for the first component of the network). In order to handle this scenario, we can either modify the objective function to include the extra term \(\int_{\Gamma_{\text{dir}}}|\mathbf{u}-\mathbf{u}_{\text{in}}|^{2}\,dx\) or distinguish between inflow and internal channel and junction components. The latter option leads to a library with \(N_{\text{c}}=4\) archetype components. We here opt for the second strategy._
## 3 High-fidelity discretization
### Finite element spaces
We proceed to discretize the optimization statement (3). Towards this end, we introduce the HF spaces \(V_{i}^{\text{hf}}\subset[H_{0,\Gamma_{i,\text{dir}}^{0}}^{1}(\Omega_{i})]^{2}\), \(Q_{i}^{\text{hf}}\subset L^{2}(\Omega_{i})\). We further define the tensor product spaces \(\mathcal{X}_{i}^{\text{hf}}=V_{i}^{\text{hf}}\times Q_{i}^{\text{hf}}\) and the lifted spaces \(\mathcal{X}_{i,0}^{\text{hf}}=V_{i,0}^{\text{hf}}\times Q_{i}^{\text{hf}}\) with \(V_{i,0}^{\text{hf}}=\{\mathbf{v}\in V_{i}^{\text{hf}}:\mathbf{v}|_{\Gamma_{i,\text{dir}}}=0\}\) for \(i=1,2\). We denote by \(\{\boldsymbol{\varphi}_{i,j}\}_{j=1}^{N_{i}^{\mathbf{u}}}\) a basis of \(V_{i}^{\text{hf}}\) and by \(\{\psi_{i,j}\}_{j=1}^{N_{i}^{\mathbf{p}}}\) a basis of \(Q_{i}^{\text{hf}}\); we use notation \(\underline{\bullet}\) to indicate the FE vector associated with the FE field \(\bullet\). We further define the trace spaces \(\Lambda_{i}^{\text{hf}}=\{\tau_{\Gamma_{0}}\mathbf{v}:\mathbf{v}\in V_{i}^{\text{hf}}\}\) and \(\Xi_{i}^{\text{hf}}=\{\tau_{\Gamma_{0}}q:q\in Q_{i}^{\text{hf}}\}\), where \(\tau_{\Gamma_{0}}\bullet:=\bullet|_{\Gamma_{0}}\) indicates the trace of the field \(\bullet\) on \(\Gamma_{0}\). We here consider conforming meshes such that the nodes at the interface shared by the two sub-domains coincide, that is, \(\Lambda_{1}^{\text{hf}}=\Lambda_{2}^{\text{hf}}=\Lambda^{\text{hf}}\) and \(\Xi_{1}^{\text{hf}}=\Xi_{2}^{\text{hf}}=\Xi^{\text{hf}}\); this assumption is exploited in the technical result of Appendix B; nevertheless, the formulation can be trivially extended to non-conforming grids. We further define the global spaces \(\mathcal{X}^{\text{hf}}=V^{\text{hf}}\times Q^{\text{hf}}\) and \(\mathcal{X}_{0}^{\text{hf}}=V_{0}^{\text{hf}}\times Q^{\text{hf}}\) with \(V_{0}^{\text{hf}}=\{\mathbf{v}\in V^{\text{hf}}:\mathbf{v}|_{\Gamma_{\text{dir}}}=0\}\).
In this work, we adopt a stabilized FE formulation that incorporates the Streamline Upwind/Petrov-Galerkin (SUPG) [38, 39] and the Pressure-Stabilized Petrov-Galerkin (PSPG) [40] stabilizations. The PSPG technique allows the use of the same polynomial degree for both pressure and velocity discretizations; the SUPG technique enhances robustness for high Reynolds numbers. The detailed description of these stabilization formulas is given in Appendix A. In conclusion, we consider the following local problems, which are the counterpart of (2c):
\[\left\{\begin{array}{l}\mathcal{R}_{i}^{\text{hf}}(\mathbf{u}_{i},\,p_{i}, \mathbf{v},\,q)+\mathcal{E}_{i}(\mathbf{g},\mathbf{v})=0\quad\forall\,( \mathbf{v},\,q)\in\mathcal{X}_{i,0}^{\text{hf}}\quad i=1,2.\\ \\ \mathbf{u}_{i}|_{\Gamma_{i,\text{div}}}=\boldsymbol{\Phi}_{i,\mathbf{u}_{\text{ in}}}\end{array}\right.\] (4a) where \[\boldsymbol{\Phi}_{i,\mathbf{u}_{\text{in}}}\in V_{i}^{\text{hf}}\] is the interpolant of the nodal values of \[\mathbf{u}_{\text{in}}\] on \[\Gamma_{i,\text{dir}}\] [20, p. 174]. In view of the discussion
Figure 3: Two examples of target systems.
Figure 2: Archetype components.
below, we rewrite the HF residual as
\[\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)=\mathcal{R}_{i,\mathrm{u}}^{\mathrm{hf}}(\mathbf{u}_{i},\,p_{i},\mathbf{v})+\mathcal{R}_{i, \mathrm{p}}^{\mathrm{hf}}(\mathbf{u}_{i},\,p_{i},q); \tag{4b}\]
the first term corresponds to the residual of the momentum equation (1)\({}_{1}\), while the second term corresponds to the residual of the continuity equation (1)\({}_{2}\).
### Variational formulation
Exploiting the previous notation, we can introduce the HF counterpart of the optimization formulation (3):
\[\min_{\begin{subarray}{c}(\mathbf{u}_{1},p_{1})\in\mathcal{X}_{1}^{\mathrm{hf}},\\ (\mathbf{u}_{2},p_{2})\in\mathcal{X}_{2}^{\mathrm{hf}},\\ \mathbf{g}\in\Lambda^{\mathrm{hf}}\end{subarray}}\frac{1}{2}\int_{\Gamma_{0}}\left|\mathbf{u}_{1}-\mathbf{u}_{2}\right|^{2}dx+\frac{\delta}{2}\int_{\Gamma_{0}}\left|\mathbf{g}\right|^{2}dx\quad\mathrm{s.t.}\ \left\{\begin{array}{l}\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)+\mathcal{E}_{i}(\mathbf{g},\mathbf{v})=0\quad\forall\,(\mathbf{v},\,q)\in\mathcal{X}_{i,0}^{\mathrm{hf}}\\ \mathbf{u}_{i}|_{\Gamma_{i,\mathrm{dir}}}=\mathbf{\Phi}_{i,\mathbf{u}_{\mathrm{in}}},\quad i=1,2.\end{array}\right. \tag{5}\]
This formulation coincides with the statement considered in [32] and also [35] -- with the minor difference that we here rely on a stabilized FE formulation for the local problems. In the remainder of this section, we discuss an alternative HF formulation that will be used to define the reduced-order model.
Formulation (5) does not ensure the continuity of pressure across the internal interfaces: we prove this result rigorously in Appendix B; here, we provide a sketch of the proof that justifies our new DD statement. If we denote by \((\mathbf{u}^{\mathrm{hf}},p^{\mathrm{hf}})\in\mathcal{X}^{\mathrm{hf}}\) the solution to the global problem such that \(\mathcal{R}^{\mathrm{hf}}(\mathbf{u}^{\mathrm{hf}},p^{\mathrm{hf}},\mathbf{v},q)=0\) and we neglect for simplicity the stabilization term, we obtain
\[\mathcal{R}_{\mathrm{p}}^{\mathrm{hf}}(\mathbf{u}^{\mathrm{hf}},q)=\int_{ \Omega_{1}}\nabla\cdot\mathbf{u}^{\mathrm{hf}}q\,dx+\int_{\Omega_{2}}\nabla \cdot\mathbf{u}^{\mathrm{hf}}q\,dx=0\quad\forall\,q\in Q^{\mathrm{hf}}.\]
Since \(Q^{\mathrm{hf}}\) is a space of continuous functions, it is in general false that \(\mathcal{R}_{i,\mathrm{p}}^{\mathrm{hf}}(\mathbf{u}^{\mathrm{hf}}|_{\Omega_{i} },q)=\int_{\Omega_{i}}\nabla\cdot\mathbf{u}^{\mathrm{hf}}q\,dx=0\) for all \(q\in Q_{i}^{\mathrm{hf}}\), \(i=1,2\); nevertheless, it is possible to show that there exists \(h^{\star}\in\Xi^{\mathrm{hf}}\) such that
\[\mathcal{R}_{i,\mathrm{p}}^{\mathrm{hf}}(\mathbf{u}^{\mathrm{hf}}|_{\Omega_{i} },q)+(-1)^{i}\int_{\Gamma_{0}}h^{\star}q\,dx=0\quad\forall\,q\in Q_{i}^{ \mathrm{hf}}.\]
Similarly, there exists \(\mathbf{g}^{\star}\in\Lambda^{\mathrm{hf}}\) such that
\[\mathcal{R}_{i,\mathrm{u}}^{\mathrm{hf}}(\mathbf{u}^{\mathrm{hf}}|_{\Omega_{i} },p^{\mathrm{hf}}|_{\Omega_{i}},\mathbf{v})+(-1)^{i}\int_{\Gamma_{0}}\mathbf{ g}^{\star}\cdot\mathbf{v}\,dx=0\quad\forall\,\mathbf{v}\in V_{i}^{ \mathrm{hf}},\ \ i=1,2.\]
We conclude that the tuple \((\mathbf{u}^{\mathrm{hf}}|_{\Omega_{1}},p^{\mathrm{hf}}|_{\Omega_{1}},\mathbf{ u}^{\mathrm{hf}}|_{\Omega_{2}},p^{\mathrm{hf}}|_{\Omega_{2}},\mathbf{g}^{ \star},h^{\star})\) is a solution to the minimization problem
\[\min_{\begin{subarray}{c}(\mathbf{u}_{1},p_{1})\in\mathcal{X}_{1}^{\mathrm{hf}},\\ (\mathbf{u}_{2},p_{2})\in\mathcal{X}_{2}^{\mathrm{hf}},\\ \mathbf{g}\in\Lambda^{\mathrm{hf}},h\in\Xi^{\mathrm{hf}}\end{subarray}}\frac{1}{2}\int_{\Gamma_{0}}\left|\mathbf{u}_{1}-\mathbf{u}_{2}\right|^{2}dx+\frac{1}{2}\int_{\Gamma_{0}}\left(p_{1}-p_{2}\right)^{2}dx\] \[\mathrm{s.t.}\quad\left\{\begin{array}{l}\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)+\mathcal{E}_{i}(\mathbf{g},\mathbf{v})+(-1)^{i}\int_{\Gamma_{0}}hq\,dx=0\quad\forall\,(\mathbf{v},\,q)\in\mathcal{X}_{i,0}^{\mathrm{hf}},\\ \mathbf{u}_{i}|_{\Gamma_{i,\mathrm{dir}}}=\mathbf{\Phi}_{i,\mathbf{u}_{\mathrm{in}}},\end{array}\right.\quad i=1,2.\]
This discussion suggests to consider a modified formulation that explicitly penalizes the jump of the pressure field. We introduce the state \(\mathbf{w}_{i}:=\mathrm{vec}(\mathbf{u}_{i},p_{i})\), \(i=1,2\) and the control \(\mathbf{s}:=\mathrm{vec}(\mathbf{g},h)\); we introduce the control space \(\mathcal{S}^{\mathrm{hf}}=\Lambda^{\mathrm{hf}}\times\Xi^{\mathrm{hf}}\) equipped with the norm
\[\left|\!\left|\!\left|\mathbf{s}=\mathrm{vec}\left(\mathbf{g},h\right)\right|\!\right|\!\right|^{2}=\int_{\Gamma_{0}}\left|\nabla_{\Gamma_{0}}\mathbf{g}\right|^{2}+|\mathbf{g}|^{2}+h^{2}\,dx, \tag{6a}\]
where \(\nabla_{\Gamma_{0}}\mathbf{g}\) denotes the gradient of \(\mathbf{g}\) in the tangential direction; we use notation \(\mathbf{w}_{i}(1:2)\) to indicate the first two components of the vector-valued function \(\mathbf{w}_{i}\). Then, we introduce the variational formulation:
\[\min_{\begin{subarray}{c}\mathbf{w}_{1}\in\mathcal{X}_{1}^{\mathrm{hf}},\,\mathbf{w}_{2}\in\mathcal{X}_{2}^{\mathrm{hf}},\\ \mathbf{s}\in\mathcal{S}^{\mathrm{hf}}\end{subarray}}\mathcal{F}_{\delta}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right)\quad\mathrm{s.t.}\ \left\{\begin{array}{l}\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{w}_{i},\mathbf{z})+\mathcal{E}_{i}^{\mathrm{hf}}(\mathbf{s},\mathbf{z})=0\quad\forall\,\mathbf{z}\in\mathcal{X}_{i,0}^{\mathrm{hf}},\\ \mathbf{w}_{i}(1:2)|_{\Gamma_{i,\mathrm{dir}}}=\mathbf{\Phi}_{i,\mathbf{u}_{\mathrm{in}}},\end{array}\right.\quad i=1,2; \tag{6b}\]
where
\[\mathcal{F}_{\delta}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right):=\frac{1}{2}\|\mathbf{w}_{1}-\mathbf{w}_{2}\|_{L^{2}(\Gamma_{0})}^{2}+\frac{\delta}{2}\left|\!\left|\!\left|\mathbf{s}\right|\!\right|\!\right|^{2}, \tag{6c}\]
and \(\mathcal{E}_{i}^{\rm hf}(\mathbf{s},\mathbf{z})=\mathcal{E}_{i}(\mathbf{g},\mathbf{v})+(-1)^{i}\int_{\Gamma_{0}}h\,q\,dx\) for \(\mathbf{z}=(\mathbf{v},q)\). Note that we replaced the \(L^{2}\) norm for the control \(\mathbf{g}\) with the \(H^{1}\) norm: as discussed in section 6.1, we empirically observe that the use of the \(H^{1}\) norm significantly reduces the oscillations in the profile of \(\mathbf{g}\).
Some comments are in order. First, the addition of the pressure jump and of the control \(h\) ensures that the optimal pressure is continuous in the limit \(\delta\to 0\). Note that at the continuous level the test space \(Q=L^{2}(\Omega)\) is discontinuous; therefore, the control \(h\) is unnecessary. Similarly, if we rely on a P0 discretization for the pressure field [41], the pressure jump is also unnecessary. Second, since velocity and pressure have different units and might also have very different magnitudes, it might be necessary to rescale the objective function to avoid stability issues (see, e.g., [42]). In our numerical experiments, we solve the equations in non-dimensional form, and we do not include any scaling factor.
### Solution methods for (6)
As in [27] and also [32], we resort to a gradient-based optimization method to find local minima of (6). In more detail, we consider the Gauss-Newton method (GNM) and sequential quadratic programming (SQP) [43]. As discussed below, both methods rely on static condensation to devise a reduced system for the control \(\mathbf{s}\).
#### 3.3.1 Gauss-Newton method
We define the local solution map \(\mathcal{H}_{i}:\mathcal{S}^{\rm hf}\to\mathcal{X}_{i}^{\rm hf}\) such that \(\mathcal{H}_{i}(\mathbf{s})(1:2)|_{\Gamma_{i,{\rm dir}}}=\boldsymbol{\Phi}_{i,\mathbf{u}_{\rm in}}\) and
\[\mathcal{R}_{i}^{\rm hf}(\mathcal{H}_{i}(\mathbf{s}),\mathbf{z})+\mathcal{E}_ {i}^{\rm hf}(\mathbf{s},\mathbf{z})=0\quad\forall\,\mathbf{z}\in\mathcal{X}_ {i,0}^{\rm hf},\quad i=1,\,2. \tag{7}\]
Then, we rewrite (6) as an unconstrained optimization problem:
\[\min_{\mathbf{s}\in\mathcal{S}^{\rm hf}}\mathcal{F}_{\delta}^{\rm gn}( \mathbf{s})=\mathcal{F}_{\delta}(\mathcal{H}_{1}(\mathbf{s}),\mathcal{H}_{2}( \mathbf{s}),\mathbf{s}). \tag{8}\]
If we define the space \(\mathfrak{X}^{\rm hf}=\Lambda^{\rm hf}\times\mathcal{S}^{\rm hf}\) equipped with the norm \(\|\mathbf{r}=\operatorname{vec}(\mathbf{w},\mathbf{g},h)\|_{\mathfrak{X}^{ \rm hf}}^{2}=\|\mathbf{w}\|_{L^{2}(\Gamma_{0})}^{2}+\|\mathbf{g}\|_{H^{1}( \Gamma_{0})}^{2}+\|h\|_{L^{2}(\Gamma_{0})}^{2}\) and the operator \(F_{\delta}:\mathcal{S}^{\rm hf}\to\mathfrak{X}^{\rm hf}\) such that \(F_{\delta}(\mathbf{s})=\operatorname{vec}(\tau_{\Gamma_{0}}\left(\mathcal{H}_ {1}(\mathbf{s})-\mathcal{H}_{2}(\mathbf{s})\right),\sqrt{\delta}\mathbf{s})\), we can rewrite (8) as a nonlinear least-square problem, that is
\[\min_{\mathbf{s}\in\mathcal{S}^{\rm hf}}\mathcal{F}_{\delta}^{\rm gn}(\mathbf{s})=\frac{1}{2}\big{\|}F_{\delta}(\mathbf{s})\big{\|}_{\mathfrak{X}^{\rm hf}}^{2}. \tag{9a}\]
The unconstrained problem (9a) can be solved efficiently using GNM: given the initial condition \(\mathbf{s}^{it=0}\), we repeatedly solve, for \(it=0,1,\ldots\),
\[\mathbf{s}^{it+1}=\arg\min_{\mathbf{s}\in\mathcal{S}^{\rm hf}}\frac{1}{2}\big{\|}F_{\delta}(\mathbf{s}^{it})+\frac{\partial F_{\delta}(\mathbf{s}^{it})}{\partial\mathbf{s}}\left(\mathbf{s}-\mathbf{s}^{it}\right)\big{\|}_{\mathfrak{X}^{\rm hf}}^{2}, \tag{9b}\]
with the termination condition
\[\frac{\left|\!\left|\!\left|\mathbf{s}^{it+1}-\mathbf{s}^{it}\right|\!\right|\!\right|}{\left|\!\left|\!\left|\mathbf{s}^{it}\right|\!\right|\!\right|}\leq tol, \tag{9c}\]
where \(tol>0\) is a predefined tolerance.
We observe that GNM requires the explicit calculation of \(F_{\delta}\) and the gradient of \(F_{\delta}\) with respect to the control at \(\mathbf{s}^{it}\): the former involves the solution to the local problems (7) for all components, while the latter is given by
\[\frac{\partial F_{\delta}(\mathbf{s}^{it})}{\partial\mathbf{s}}=\left[\begin{array} []{c}\tau_{\Gamma_{0}}\left(\frac{\partial\mathcal{H}_{1}(\mathbf{s}^{it})}{ \partial\mathbf{s}}-\frac{\partial\mathcal{H}_{2}(\mathbf{s}^{it})}{\partial \mathbf{s}}\right)\\ \sqrt{\delta}\texttt{id}\end{array}\right]\quad\text{with}\ \ \frac{\partial\mathcal{H}_{i}(\mathbf{s})}{ \partial\mathbf{s}}=-\left(\frac{\partial\mathcal{R}_{i}^{\rm hf}(\mathcal{H}_{i}( \mathbf{s}))}{\partial\mathbf{w}_{i}}\right)^{-1}\mathcal{E}_{i}^{\rm hf}, \tag{10}\]
and \(\mathtt{id}\) is the identity map. We notice that the evaluation of \(\frac{\partial F_{\delta}(\mathbf{s}^{it})}{\partial\mathbf{s}}\) involves the solution to \(N^{\mathbf{s}}\) linear systems, where \(N^{\mathbf{s}}\) is the cardinality of the space \(\mathcal{S}^{\rm hf}\); it is hence computationally feasible only if the dimension of the control is moderate: this observation highlights the importance of port reduction [16] for optimization-based methods. Conversely, we remark that the computation of \(\mathcal{H}_{1}(\mathbf{s}^{it}),\mathcal{H}_{2}(\mathbf{s}^{it})\) and their derivatives is embarrassingly parallel with respect to the number of components: as discussed in [27], GNM enables effective parallelization of the solution procedure if compared to standard multiplicative Schwarz iterative methods, provided that the computational cost is dominated by the solution to the local problems (7). Finally, we remark that the least-square problem in (9b) can be solved by explicitly assembling the normal equations; alternatively, we might employ the QR factorization [7]. We omit the details.
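To fix ideas, the following Python sketch illustrates the condensed Gauss-Newton loop (9): the two local solvers are treated as black boxes that return the interface trace of the local state and its sensitivity with respect to the control. The function names, the toy affine solvers in the usage example, and the use of Euclidean norms in place of the \(\mathfrak{X}^{\rm hf}\)-norm are illustrative assumptions, not part of the actual implementation.

```python
import numpy as np

def gauss_newton_dd(local_solve, local_sens, m, delta=1e-4, tol=1e-8, maxit=50):
    """Condensed Gauss-Newton loop, cf. (9): the control s is the only unknown.
    local_solve[i](s) -> interface trace of the i-th local state (1D array);
    local_sens[i](s)  -> its Jacobian with respect to s (2D array)."""
    s = np.zeros(m)
    for it in range(maxit):
        t1, t2 = local_solve[0](s), local_solve[1](s)
        S1, S2 = local_sens[0](s), local_sens[1](s)
        F = np.concatenate([t1 - t2, np.sqrt(delta) * s])      # F_delta(s^it)
        J = np.vstack([S1 - S2, np.sqrt(delta) * np.eye(m)])   # dF_delta/ds
        ds = np.linalg.lstsq(J, -F, rcond=None)[0]             # step (9b)
        s = s + ds
        if np.linalg.norm(ds) <= tol * max(np.linalg.norm(s), 1.0):  # stop (9c)
            break
    return s, it + 1

# toy usage with affine local solvers t_i(s) = A_i s + b_i (converges in one step)
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((12, 5)), rng.standard_normal((12, 5))
b1, b2 = rng.standard_normal(12), rng.standard_normal(12)
s_opt, iters = gauss_newton_dd([lambda s: A1 @ s + b1, lambda s: A2 @ s + b2],
                               [lambda s: A1, lambda s: A2], m=5)
```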
#### 3.3.2 Sequential quadratic programming (SQP)
The SQP method solves a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. Since the objective (cf. (6c)) is quadratic, we hence find the iterative method
\[\left(\mathbf{w}_{1}^{it+1},\mathbf{w}_{2}^{it+1},\mathbf{s}^{it+1}\right)=\arg\min_{\begin{subarray}{c}\mathbf{w}_{1}\in\mathcal{X}_{1}^{\text{hf}},\\ \mathbf{w}_{2}\in\mathcal{X}_{2}^{\text{hf}},\\ \mathbf{s}\in\mathcal{S}^{\text{hf}}\end{subarray}}\mathcal{F}_{\delta}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right)\quad\text{s.t.}\ \ \left\{\begin{array}{l}\mathcal{R}_{i,it}^{\text{hf}}(\mathbf{z})+\mathcal{J}_{i,it}^{\text{hf}}(\mathbf{w}_{i}-\mathbf{w}_{i}^{it},\mathbf{z})+\mathcal{E}_{i}^{\text{hf}}(\mathbf{s},\mathbf{z})=0\\ \mathbf{w}_{i}(1:2)|_{\Gamma_{i,\text{dir}}}=\mathbf{\Phi}_{i,\mathbf{u}_{\text{in}}},\quad\forall\,\mathbf{z}\in\mathcal{X}_{i,0}^{\text{hf}},\ i=1,2;\end{array}\right. \tag{11a}\]
where the linear forms \(\{\mathcal{R}_{i,it}^{\text{hf}}\}_{i}\) and the bilinear forms \(\{\mathcal{J}_{i,it}^{\text{hf}}\}_{i}\) are given by
\[\mathcal{R}_{i,it}^{\text{hf}}(\mathbf{z})=\mathcal{R}_{i}^{\text{hf}}(\mathbf{w}_{i}^{it},\mathbf{z}),\quad\mathcal{J}_{i,it}^{\text{hf}}(\mathbf{w},\mathbf{z})=\frac{\partial\mathcal{R}_{i}^{\text{hf}}}{\partial\mathbf{w}_{i}}\left[\mathbf{w}_{i}^{it}\right](\mathbf{w},\mathbf{z})\,,\quad\forall\,\mathbf{w}\in\mathcal{X}_{i}^{\text{hf}},\ \mathbf{z}\in\mathcal{X}_{i,0}^{\text{hf}},\quad it=0,1,\ldots. \tag{11b}\]
In the numerical experiments, we consider the same termination condition (9c) used for GNM.
The optimization problem (11a) is quadratic with linear constraints. The solution to (11a) hence satisfies
\[\left\{\begin{array}{l}\mathbf{s}^{it+1}=\arg\min_{\mathbf{s}\in\mathcal{S}^{\text{hf}}}\left\|\widetilde{F}_{\delta}^{it}+\widetilde{\mathcal{J}}_{\delta}^{it}\left(\mathbf{s}-\mathbf{s}^{it}\right)\right\|_{\mathfrak{X}^{\text{hf}}}^{2};\\ \mathbf{w}_{i}^{it+1}=\mathbf{w}_{i}^{it}-\left(\mathcal{J}_{i,it}^{\text{hf}}\right)^{-1}\left(\mathcal{R}_{i,it}^{\text{hf}}+\mathcal{E}_{i}^{\text{hf}}\mathbf{s}^{it+1}\right),\quad i=1,2;\end{array}\right. \tag{12a}\]
where
\[\widetilde{F}_{\delta}^{it}=\left[\begin{array}{l}\tau_{\Gamma_{0}}\left(\mathbf{w}_{1}^{it}-\mathbf{w}_{2}^{it}\right)\\ \sqrt{\delta}\,\mathbf{s}^{it}\end{array}\right],\quad\widetilde{\mathcal{J}}_{\delta}^{it}=\left[\begin{array}{c}\tau_{\Gamma_{0}}\left(\left(\mathcal{J}_{1,it}^{\text{hf}}\right)^{-1}\mathcal{E}_{1}^{\text{hf}}-\left(\mathcal{J}_{2,it}^{\text{hf}}\right)^{-1}\mathcal{E}_{2}^{\text{hf}}\right)\\ \sqrt{\delta}\,\mathtt{id}\end{array}\right]. \tag{12b}\]
In our implementation, we rely on (12) to solve (11a).
As for GNM, we obtain a least-square problem for the control by applying static condensation: while in the previous section we first derived the unconstrained statement (cf. (8)) and then we applied the optimization method, here we first optimize using SQP and then we apply static condensation at each iteration of the optimization algorithm.
Since the underlying PDE model is nonlinear, GNM requires performing Newton subiterations to solve the local problems (7) (see also the definition of \(F_{\delta}(\mathbf{s}^{it})\) in (9b)); conversely, SQP does not involve subiterations. The cost per iteration of SQP is hence significantly lower than that of GNM. We empirically observe that the SQP approach mitigates the potential convergence issues of the subiterations for the local problems, particularly at the very early stages of the optimization loop.
We observe that (9b) and (12a)\({}_{1}\) are formally equivalent, while (10) and (12b) share the same structure. We conclude that the SQP and GNM approaches can be implemented using the same data structures and can be parallelized in the same way. We omit the details.
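For concreteness, the sketch below assembles one SQP iteration in dense algebra: the linearized local solution maps are condensed into an affine dependence on the control, which yields a small least-squares problem for the control followed by local back-substitutions. The argument names (residuals, Jacobians, control matrices, trace operators) and the use of Euclidean norms are assumptions made for illustration; in practice the local solves rely on sparse factorizations and are performed in parallel.

```python
import numpy as np

def sqp_step(w, s, R, J, E, T, delta):
    """One SQP iteration for the two-subdomain problem, cf. (11)-(12).
    w = [w1, w2]: current local FE state vectors;   s: current control vector;
    R[i]: local residual vector at w[i];  J[i]: local Jacobian matrix at w[i];
    E[i]: control-to-residual matrices;   T[i]: trace operators onto Gamma_0."""
    m = s.size
    # linearized local solution maps: w_i(s) = a_i + B_i s (static condensation)
    a = [w[i] - np.linalg.solve(J[i], R[i]) for i in range(2)]
    B = [-np.linalg.solve(J[i], E[i]) for i in range(2)]
    # least-squares problem for the new control:
    #   min_s | (T0 a0 - T1 a1) + (T0 B0 - T1 B1) s |^2 + delta |s|^2
    r0 = T[0] @ a[0] - T[1] @ a[1]
    M = T[0] @ B[0] - T[1] @ B[1]
    lhs = np.vstack([M, np.sqrt(delta) * np.eye(m)])
    rhs = -np.concatenate([r0, np.zeros(m)])
    s_new = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
    # recover the new local states from the linearized constraints
    w_new = [a[i] + B[i] @ s_new for i in range(2)]
    return w_new, s_new
```

Since each local solve only involves its own Jacobian, the two condensed maps can be computed independently, which is the property exploited in the parallel implementation discussed above.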
**Remark 3**.: _For high-Reynolds number flows, it is important to enhance the robustness of our approach by resorting to pseudo transient continuation (PTC) [44]. PTC introduces an additional pseudo-temporal integration with adaptive time step, that is performed until convergence to a steady-state solution. If we resort to the backward Euler scheme for the discretization of the time derivative, at each PTC step we solve the relaxed problem:_
\[\min_{\begin{subarray}{c}\mathbf{w}_{1}\in\mathcal{X}_{1}^{\text{hf}},\\ \mathbf{w}_{2}\in\mathcal{X}_{2}^{\text{hf}},\\ \mathbf{s}\in\mathcal{S}^{\text{hf}}\end{subarray}}\mathcal{F}_{\delta}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right)\quad\text{s.t.}\ \ \left\{\begin{array}{l}\frac{1}{\Delta t_{k}}\int_{\Omega_{i}}\left(\mathbf{w}_{i}(1:2)-\mathbf{w}_{i}^{k}(1:2)\right)\cdot\mathbf{v}\,dx\,+\,\mathcal{R}_{i}^{\text{hf}}(\mathbf{w}_{i},\mathbf{z})+\mathcal{E}_{i}^{\text{hf}}(\mathbf{s},\mathbf{z})=0\\ \mathbf{w}_{i}(1:2)|_{\Gamma_{i,\text{dir}}}=\mathbf{\Phi}_{i,\mathbf{u}_{\text{in}}},\quad\forall\,\mathbf{z}=(\mathbf{v},q)\in\mathcal{X}_{i,0}^{\text{hf}},\ \ i=1,2, \end{array}\right. \tag{13}\]
_where the index \(k\) refers to the temporal loop and \(\Delta t_{k}\) is chosen adaptively based on the residual of the steady-state equations. We refer to [44] and to the references therein for further details. Note that (13) is formally equivalent to (6b): it can hence be solved using the same procedure outlined above. As discussed in Appendix A, the time derivative should also be included in the SUPG and PSPG stabilization terms._
## 4 Projection-based reduced order formulation
We rely on the formulation (6b) to define the CB-ROM. Towards this end, first, we identify a low-rank approximation of the control \(\mathbf{s}^{\text{hf}}\) and the local states \(\mathbf{w}_{1}^{\text{hf}}\), \(\mathbf{w}_{2}^{\text{hf}}\); second, we devise local ROMs for the approximation of the solution maps (7); third, we devise specialized GNM and SQP methods for the formulation (6b) based on
approximate solution maps. We conclude the section by discussing the implementation of hybrid formulations that combine full-order and reduced-order local solution maps. We remark that in order to further enhance online performance we should also reduce the online costs associated with the computation of the \(\|\cdot\|_{\mathfrak{X}^{\mathrm{hf}}}\) norm in (9b) and (12a)\({}_{1}\) (cf. [27]): we do not address this issue in the present work.
### Construction of the local basis
We denote by \(\{\mu^{(k)}=\mathrm{vec}(\mu_{1}^{(k)},\mu_{2}^{(k)})\}_{k=1}^{n_{\mathrm{train}}}\) a set of \(n_{\mathrm{train}}\) global configurations; we further denote by \(\{\mathbf{w}_{i,k}^{\mathrm{hf}}:i=1,2,k=1,\ldots,n_{\mathrm{train}}\}\) and \(\{\mathbf{s}_{k}^{\mathrm{hf}}:k=1,\ldots,n_{\mathrm{train}}\}\) the corresponding HF state and control estimates based on (6b). We resort to POD to devise a low-dimensional approximation space for the local solution manifolds and for the control
\[\left\{\begin{array}{l}\left[\mathcal{Z}_{i}=\mathrm{span}\{\boldsymbol{\zeta}_{i,j}\}_{j=1}^{n}\right]=\mathsf{POD}\left(\{\mathbf{w}_{i,k}^{\mathrm{hf}}-\boldsymbol{\Psi}_{i,\mathbf{u}_{\mathrm{in}}}^{(k)}\}_{k=1}^{n_{\mathrm{train}}},\|\cdot\|_{\mathcal{X}_{i}},n\right);\\ \left[\mathcal{W}=\mathrm{span}\{\boldsymbol{\eta}_{j}\}_{j=1}^{m}\right]=\mathsf{POD}\left(\{\mathbf{s}_{k}^{\mathrm{hf}}\}_{k=1}^{n_{\mathrm{train}}},\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|,m\right).\end{array}\right. \tag{14}\]
Here, the function \(\mathsf{POD}\left(\mathcal{D},\|\cdot\|,n\right)\) returns the POD space of dimension \(n\) associated with the snapshot dataset \(\mathcal{D}\) and the norm \(\|\cdot\|\) using the method of snapshots [45]. To ease the presentation, the integers \(n\) and \(m\) are here chosen _a priori_: in practice, we should choose \(n,m\) based on the energy criterion. The fields \(\mathbf{\Psi}_{1,\mathbf{u}_{\mathrm{in}}}^{(k)},\mathbf{\Psi}_{2,\mathbf{u}_{\mathrm{in}}}^{(k)}\) satisfy the boundary conditions in (6b); we refer to section 5 for the explicit expression; this implies that the local space \(\mathcal{Z}_{i}\) is contained in \(\mathcal{X}_{i,0}\), for \(i=1,2\). In the remainder, we further use notation \(\mathcal{Z}_{i}^{\mathrm{dir}}=\{\mathbf{\Psi}_{i,\mathbf{u}_{\mathrm{in}}}(\mu)+\mathbf{\zeta}_{i}:\mathbf{\zeta}_{i}\in\mathcal{Z}_{i}\}\) to identify the affine approximation spaces that incorporate Dirichlet boundary conditions. Furthermore, given \(\mathbf{w}_{i}\in\mathcal{Z}_{i}^{\mathrm{dir}}\) and \(\mathbf{s}\in\mathcal{W}\), we define the generalized coordinates \(\mathbf{\alpha}_{1},\mathbf{\alpha}_{2}\in\mathbb{R}^{n}\) and \(\mathbf{\beta}\in\mathbb{R}^{m}\) such that
\[\mathbf{w}_{i}(\mathbf{\alpha}_{i};\mu)=\mathbf{\Psi}_{i,\mathbf{u}_{\mathrm{in}}}(\mu )+\sum_{j=1}^{n}\left(\mathbf{\alpha}_{i}\right)_{j}\mathbf{\zeta}_{i,j},\;\;i=1,2; \qquad\mathbf{s}(\mathbf{\beta})=\sum_{j=1}^{m}\left(\mathbf{\beta}\right)_{j}\mathbf{ \eta}_{j}. \tag{15}\]
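A possible realization of the \(\mathsf{POD}\) routine used in (14) is sketched below: it implements the method of snapshots for an arbitrary inner product encoded by a symmetric positive-definite Gram matrix. The matrix-based interface (columns of a NumPy array as snapshots) is an assumption for illustration; in the actual code the snapshots are FE vectors and the Gram matrix is sparse.

```python
import numpy as np

def pod(snapshots, X, n):
    """Method of snapshots.
    snapshots : (N_hf x K) array whose columns are the (lifted) snapshots;
    X         : SPD Gram matrix of the chosen inner product;
    n         : number of retained modes.
    Returns an X-orthonormal basis (N_hf x n) and the POD eigenvalues."""
    C = snapshots.T @ (X @ snapshots)          # K x K correlation matrix
    evals, evecs = np.linalg.eigh(C)
    idx = np.argsort(evals)[::-1][:n]          # keep the n largest eigenvalues
    evals, evecs = evals[idx], evecs[:, idx]
    modes = snapshots @ (evecs / np.sqrt(np.maximum(evals, 1e-14)))
    return modes, evals
```

The energy criterion mentioned above amounts to choosing \(n\) as the smallest integer such that \(\sum_{j\leq n}\lambda_{j}\geq(1-tol)\sum_{j}\lambda_{j}\) for a user-defined tolerance \(tol\).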
### Construction of the local reduced-order models
We rely on (Petrov-)Galerkin projection to define the local ROMs.
Galerkin projection.We consider the local solution maps \(\widehat{\mathcal{H}}_{i}^{\mathrm{g}}:\mathcal{W}\to\mathcal{Z}_{i}^{ \mathrm{dir}}\) such that
\[\mathcal{R}_{i}^{\mathrm{hf}}(\widehat{\mathcal{H}}_{i}^{\mathrm{g}}(\mathbf{s }),\mathbf{z})+\mathcal{E}_{i}^{\mathrm{hf}}(\mathbf{s},\mathbf{z})=0\quad \forall\,\mathbf{z}\in\mathcal{Z}_{i},\quad i=1,\,2. \tag{16}\]
It is useful to rewrite (16) in fully-algebraic form. Towards this end, we define the discrete residuals \(\widehat{\mathbf{R}}_{i}^{\mathrm{g}}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) and the matrices \(\widehat{\mathbf{E}}_{i}^{\mathrm{g}}\in\mathbb{R}^{n\times m}\) such that
\[\left(\widehat{\mathbf{R}}_{i}^{\mathrm{g}}(\mathbf{\alpha})\right)_{j}=\mathcal{R }_{i}^{\mathrm{hf}}\left(\mathbf{w}_{i}(\mathbf{\alpha}_{i}),\mathbf{\zeta}_{i,j} \right),\quad\left(\widehat{\mathbf{E}}_{i}^{\mathrm{g}}\right)_{j,k}= \mathcal{E}_{i}^{\mathrm{hf}}\left(\mathbf{\eta}_{k},\mathbf{\zeta}_{i,j}\right),\quad i =1,2,j=1,\ldots,n,k=1,\ldots,m;\] (17a) and the local algebraic solution maps \[\widehat{\mathcal{H}}_{i}^{\mathrm{g}}:\mathbb{R}^{m}\to\mathbb{R}^{n}\] such that \[\widehat{\mathbf{R}}_{i}^{\mathrm{g}}\left(\widehat{\mathcal{H}}_{i}^{\mathrm{ g}}(\mathbf{\beta})\right)+\widehat{\mathbf{E}}_{i}^{\mathrm{g}}\mathbf{\beta}=0,\quad i=1,2. \tag{17b}\]
Least-square Petrov-Galerkin (LSPG, [7]) projection.Given the reduced space \(\mathcal{Y}_{i}\subset\mathcal{X}_{i,0}\), we introduce the local solution maps \(\widehat{\mathcal{H}}_{i}^{\mathrm{pg}}:\mathcal{W}\to\mathcal{Z}_{i}^{ \mathrm{dir}}\) such that
\[\widehat{\mathcal{H}}_{i}^{\mathrm{pg}}(\mathbf{s})=\arg\min_{\mathbf{\zeta}\in\mathcal{Z}_{i}^{\mathrm{dir}}}\sup_{\mathbf{z}\in\mathcal{Y}_{i}}\frac{\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{\zeta},\mathbf{z})+\mathcal{E}_{i}^{\mathrm{hf}}(\mathbf{s},\mathbf{z})}{\|\mathbf{z}\|_{\mathcal{X}_{i}}}. \tag{18}\]
For \(\mathcal{Y}_{i}=\mathcal{X}_{i,0}\), (18) is referred to as _minimum residual_ projection. In view of the derivation of the algebraic counterpart of (18), we denote by \(\{\mathbf{\upsilon}_{i,k}\}_{k=1}^{j_{\mathrm{es}}}\) an orthonormal basis of \(\mathcal{Y}_{i}\); then, we define the algebraic residuals
\[\left(\widehat{\mathbf{R}}_{i}^{\mathrm{pg}}(\mathbf{\alpha}_{i})\right)_{\ell}=\mathcal{R}_{i}^{\mathrm{hf}}\left(\mathbf{w}_{i}(\mathbf{\alpha}_{i}),\mathbf{\upsilon}_{i,\ell}\right),\quad\left(\widehat{\mathbf{E}}_{i}^{\mathrm{pg}}\right)_{\ell,k}=\mathcal{E}_{i}^{\mathrm{hf}}\left(\mathbf{\eta}_{k},\mathbf{\upsilon}_{i,\ell}\right), \tag{19a}\]
with \(i=1,2\), \(\ell=1,\ldots,j_{\mathrm{es}}\), \(k=1,\ldots,m\); and the local algebraic solution maps \(\widehat{\mathcal{H}}_{i}^{\mathrm{pg}}:\mathbb{R}^{m}\to\mathbb{R}^{n}\) such that
\[\widehat{\mathcal{H}}_{i}^{\mathrm{pg}}(\mathbf{\beta})=\arg\min_{\mathbf{\alpha}\in\mathbb{R}^{n}}\left|\widehat{\mathbf{R}}_{i}^{\mathrm{pg}}\left(\mathbf{\alpha}\right)+\widehat{\mathbf{E}}_{i}^{\mathrm{pg}}\mathbf{\beta}\right|,\quad i=1,2. \tag{19b}\]
We observe that (19b) reads as a nonlinear least-square problem that can be solved efficiently using GNM; the combination of LSPG ROMs within the DD formulation (6b) is challenging: we address this issue in the next section.
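Since the reduced unknown has only \(n\) entries and the projected residual has \(j_{\rm es}\geq n\) entries, (19b) can be handed to any dense nonlinear least-squares solver; a minimal sketch based on `scipy.optimize.least_squares` is reported below. The callbacks `res_pg` and `jac_pg`, which are assumed to wrap the assembly of the projected residual and its Jacobian, are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

def lspg_solve(res_pg, jac_pg, E_pg, beta, alpha0):
    """Solve the LSPG problem (19b):  min_alpha | R_i^pg(alpha) + E_i^pg beta |.
    res_pg(alpha) -> projected residual (size j_es);
    jac_pg(alpha) -> its Jacobian (j_es x n);
    E_pg          -> (j_es x m) control matrix;  beta -> control coordinates."""
    fun = lambda a: res_pg(a) + E_pg @ beta
    sol = least_squares(fun, alpha0, jac=lambda a: jac_pg(a))
    return sol.x
```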
The ROM (18) depends on the choice of the test space \(\mathcal{Y}_{i}\). Following [46, 47], we propose to construct the test space \(\mathcal{Y}_{i}\) using POD. Given the snapshots \(\{\mathbf{w}_{i,k}^{\mathrm{hf}}\}_{k}\) and the ROB \(\{\boldsymbol{\zeta}_{i,j}\}_{j=1}^{n}\), we compute the Riesz elements \(\boldsymbol{\psi}_{i,j,k}\in\mathcal{X}_{i,0}^{\mathrm{hf}}\) such that
\[\left(\boldsymbol{\psi}_{i,j,k},\mathbf{z}\right)_{\mathcal{X}_{i}}\;=\;\frac{\partial\mathcal{R}_{i}^{\mathrm{hf}}}{\partial\mathbf{w}_{i}}\left[\mathbf{w}_{i,k}^{\mathrm{hf}}\right]\left(\boldsymbol{\zeta}_{i,j},\mathbf{z}\right),\quad\forall\;\mathbf{z}\in\mathcal{X}_{i,0}^{\mathrm{hf}}, \tag{20a}\]
for \(i=1,2\), \(j=1,\ldots,n\), \(k=1,\ldots,n_{\mathrm{train}}\). Then, we apply POD to find the low-dimensional bases \(\mathcal{Y}_{1}\) and \(\mathcal{Y}_{2}\),
\[\left[\mathcal{Y}_{i}=\mathrm{span}\{\boldsymbol{\upsilon}_{i,j}\}_{j=1}^{j_{\mathrm{es}}}\right]=\texttt{POD}\left(\{\boldsymbol{\psi}_{i,j,k}:\,j=1,\ldots,n,k=1,\ldots,n_{\mathrm{train}}\},\|\cdot\|_{\mathcal{X}_{i}},j_{\mathrm{es}}\right),\quad i=1,2. \tag{20b}\]
As in [46, 47], we choose \(j_{\mathrm{es}}=2n\); we refer to [46, Appendix C] for a rigorous justification of the choice of the test space for linear inf-sup stable problems.
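In algebraic form, each Riesz element in (20a) is obtained by solving a linear system with the Gram matrix of the \(\mathcal{X}_{i}\) inner product, and (20b) is a further POD compression. A sketch is given below (reusing the `pod` routine above); the convention that the Jacobian rows are indexed by test functions is an assumption of this illustration.

```python
import numpy as np

def build_test_space(X, J_snapshots, Z, j_es, pod):
    """LSPG test space, cf. (20), for one component.
    X           : Gram matrix of the X_i inner product (N_hf x N_hf);
    J_snapshots : list of HF Jacobians evaluated at the training snapshots,
                  rows indexed by test functions, columns by trial functions;
    Z           : trial basis (N_hf x n);   j_es : test-space dimension;
    pod         : POD routine (see the sketch above)."""
    riesz = []
    for Jk in J_snapshots:
        # columns of Psi are the Riesz elements of (20a) for all trial modes at once
        Psi = np.linalg.solve(X, Jk @ Z)
        riesz.append(Psi)
    Y, _ = pod(np.hstack(riesz), X, j_es)   # compression step (20b)
    return Y
```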
**Remark 4**.: _The solution to (16) and (18) is expensive due to the need to evaluate the HF residual and its Jacobian at each iteration. To reduce the computational burden, several authors have proposed to resort to hyper-reduction strategies [48] to speed up assembly costs at prediction stage. We refer to the recent review [49] for a detailed presentation of the subject. Since the local problems (16) and (18) fit in the framework of monolithic pMOR, standard hyper-reduction techniques can be employed. We refer to a future work for the development and the assessment of hyper-reduction techniques for the DD formulation of this work._
### Global formulation
We first introduce the algebraic counterpart of the objective (6c). We denote by \(\{(\mathbf{x}_{\eta}^{\Gamma},\omega_{q}^{\Gamma})\}_{q=1}^{N_{\Gamma}}\) the FE quadrature rule of \(\int_{\Gamma_{0}}[\bullet]\,dx\) and we define the matrices \(\mathbf{A}_{1},\mathbf{A}_{2}\in\mathbb{R}^{3N_{\Gamma}\times n}\) and the vector \(\mathbf{b}\in\mathbb{R}^{3N_{\Gamma}}\) such that
\[(\mathbf{A}_{i})_{q+(\ell-1)N_{\Gamma},j}=\sqrt{\omega_{q}^{\Gamma}}\left(\boldsymbol{\zeta}_{i,j}(\mathbf{x}_{q}^{\Gamma})\right)_{\ell},\quad(\mathbf{b})_{q+(\ell-1)N_{\Gamma}}=\sqrt{\omega_{q}^{\Gamma}}\left(\boldsymbol{\Psi}_{1,\mathbf{u}_{\mathrm{in}}}(\mathbf{x}_{q}^{\Gamma})-\boldsymbol{\Psi}_{2,\mathbf{u}_{\mathrm{in}}}(\mathbf{x}_{q}^{\Gamma})\right)_{\ell}, \tag{21a}\]
with \(q=1,\ldots,N_{\Gamma}\), \(\ell=1,2,3\), \(j=1,\ldots,n\); then, we rewrite the objective function as
\[\boldsymbol{\mathcal{F}}_{\delta}\left(\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},\boldsymbol{\beta}\right)=\mathcal{F}_{\delta}\left(\mathbf{w}_{1}(\boldsymbol{\alpha}_{1}),\mathbf{w}_{2}(\boldsymbol{\alpha}_{2}),\mathbf{s}(\boldsymbol{\beta})\right)=\frac{1}{2}\big{|}\mathbf{A}_{1}\boldsymbol{\alpha}_{1}-\mathbf{A}_{2}\boldsymbol{\alpha}_{2}+\mathbf{b}\big{|}^{2}+\frac{\delta}{2}\big{|}\boldsymbol{\beta}\big{|}^{2}. \tag{21b}\]
For the Galerkin local ROMs, the DD ROM can be obtained by simply projecting (6b) onto the reduced spaces, that is
\[\min_{\begin{subarray}{c}\mathbf{w}_{1}\in\mathcal{Z}_{1}^{\mathrm{dir}},\\ \mathbf{w}_{2}\in\mathcal{Z}_{2}^{\mathrm{dir}},\\ \mathbf{s}\in\mathcal{W}\end{subarray}}\mathcal{F}_{\delta}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right)\quad\text{s.t.}\quad\mathcal{R}_{i}^{\mathrm{hf}}(\mathbf{w}_{i},\mathbf{z})+\mathcal{E}_{i}^{\mathrm{hf}}(\mathbf{s},\mathbf{z})=0\quad\forall\;\mathbf{z}\in\mathcal{Z}_{i},\quad i=1,2. \tag{22a}\]
Note that non-homogeneous Dirichlet conditions are encoded in the choice of the ansatz. Exploiting the previous notation, we obtain the algebraic counterpart of (22a).
\[\min_{\begin{subarray}{c}\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2}\in\mathbb{R}^{n};\\ \boldsymbol{\beta}\in\mathbb{R}^{m}\end{subarray}}\boldsymbol{\mathcal{F}}_{\delta}\left(\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},\boldsymbol{\beta}\right)\quad\text{s.t.}\quad\widehat{\mathbf{R}}_{i}^{\mathrm{g}}\left(\boldsymbol{\alpha}_{i}\right)+\widehat{\mathbf{E}}_{i}^{\mathrm{g}}\boldsymbol{\beta}=0,\quad i=1,2. \tag{22b}\]
Problem (22b) can be solved using either GNM or SQP; as for the HF model, the methods require the computation of the derivatives of the local solution maps (17b), which satisfy
\[\frac{\partial\widehat{\mathcal{H}}_{i}^{\mathrm{g}}}{\partial\boldsymbol{\beta }}(\boldsymbol{\beta})=-\left(\frac{\partial\widehat{\mathbf{R}}_{i}^{\mathrm{ g}}}{\partial\boldsymbol{\alpha}_{i}}\left[\widehat{\mathcal{H}}_{i}^{\mathrm{g}}( \boldsymbol{\beta})\right]\right)^{-1}\widehat{\mathbf{E}}_{i}^{\mathrm{g}}. \tag{22c}\]
Note that (22c) can be computed using standard FE routines that are readily available for the full-order model.
The combination of (6b) with the LSPG ROM (19b) is more involved since the resulting component-based ROM cannot be interpreted as the projection of (6b) onto suitable low-dimensional spaces. We here rely on an approximate SQP procedure. At each iteration, given the triplet \((\boldsymbol{\alpha}_{1}^{it},\boldsymbol{\alpha}_{2}^{it},\boldsymbol{\beta}^{it})\), we compute
\[\widehat{\mathbf{R}}_{i}^{\mathrm{pg},it}=\widehat{\mathbf{R}}_{i}^{\mathrm{pg}}(\boldsymbol{\alpha}_{i}^{it})\in\mathbb{R}^{j_{\mathrm{es}}},\quad\widehat{\mathbf{J}}_{i}^{\mathrm{pg},it}=\frac{\partial\widehat{\mathbf{R}}_{i}^{\mathrm{pg}}}{\partial\boldsymbol{\alpha}_{i}}(\boldsymbol{\alpha}_{i}^{it})\in\mathbb{R}^{j_{\mathrm{es}}\times n}; \tag{23a}\]
then, we solve the minimization problem
\[\min_{\begin{subarray}{c}\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2}\in\mathbb{R}^{n};\\ \boldsymbol{\beta}\in\mathbb{R}^{m}\end{subarray}}\boldsymbol{\mathcal{F}}_{\delta}\left(\boldsymbol{\alpha}_{1},\boldsymbol{\alpha}_{2},\boldsymbol{\beta}\right)\quad\text{s.t.}\ \left(\widehat{\mathbf{J}}_{i}^{\mathrm{pg},it}\right)^{\top}\left(\widehat{\mathbf{R}}_{i}^{\mathrm{pg},it}+\widehat{\mathbf{J}}_{i}^{\mathrm{pg},it}\left(\boldsymbol{\alpha}_{i}-\boldsymbol{\alpha}_{i}^{it}\right)+\widehat{\mathbf{E}}_{i}^{\mathrm{pg}}\boldsymbol{\beta}\right)=0,\quad i=1,2. \tag{23b}\]
We observe that for \(n=j_{\text{es}}\) the constraints imply that \(\widehat{\mathbf{R}}_{i}^{\text{pg},it}+\widehat{\mathbf{J}}_{i}^{\text{pg},it}\left(\mathbf{\alpha}_{i}-\mathbf{\alpha}_{i}^{it}\right)+\widehat{\mathbf{E}}_{i}^{\text{pg}}\mathbf{\beta}=0\) for \(i=1,2\). We hence recover the standard SQP procedure.
A thorough convergence analysis of the SQP procedure (23) is beyond the scope of the present work. Here, we observe that if \(\mathbf{\alpha}_{i}^{it}\to\mathbf{\alpha}_{i}^{\star}\) for \(i=1,2\) and \(\mathbf{\beta}^{it}\to\mathbf{\beta}^{\star}\), the constraints in (23b) reduce to
\[\left(\frac{\partial\widehat{\mathbf{R}}_{i}^{\text{pg}}}{\partial\mathbf{\alpha}_{i}}(\mathbf{\alpha}_{i}^{\star})\right)^{\top}\left(\widehat{\mathbf{R}}_{i}^{\text{pg}}(\mathbf{\alpha}_{i}^{\star})+\widehat{\mathbf{E}}_{i}^{\text{pg}}\mathbf{\beta}^{\star}\right)=0,\quad i=1,2.\]
Given \(i\in\{1,2\}\), the latter implies that \(\mathbf{\alpha}_{i}^{\star}\) is a stationary point of the function \(\mathbf{\alpha}_{i}\mapsto\left|\widehat{\mathbf{R}}_{i}^{\text{pg}}(\mathbf{\alpha}_{i})+\widehat{\mathbf{E}}_{i}^{\text{pg}}\mathbf{\beta}^{\star}\right|^{2}\); provided that (19b) admits a unique solution, we hence find that \(\mathbf{\alpha}_{i}^{\star}=\widehat{\mathcal{H}}_{i}^{\text{pg}}(\mathbf{\beta}^{\star})\).
### Enrichment of the trial space
In (14), we construct the state and control spaces independently. We might hence obtain that the sensitivity matrices \(\frac{\partial\widehat{\mathcal{H}}_{i}^{\mathrm{g}}}{\partial\boldsymbol{\beta}}(\boldsymbol{\beta})\) and \(\frac{\partial\widehat{\mathcal{H}}_{i}^{\mathrm{pg}}}{\partial\boldsymbol{\beta}}(\boldsymbol{\beta})\) are rank-deficient: as empirically shown in the numerical examples, rank deficiency of the sensitivity matrices leads to instabilities of the ROM and to poor approximations of the control \(\mathbf{s}\). To address this issue, we propose to enrich the trial spaces \(\mathcal{Z}_{1},\mathcal{Z}_{2}\) with the perturbed snapshots \(\{\widetilde{\mathbf{w}}_{i,j,k}\}_{i,j,k}\) that satisfy
\[\mathcal{R}_{i}^{\text{hf}}(\mathbf{w}_{i,k}^{\text{hf}},\mathbf{z})+\frac{\partial\mathcal{R}_{i}^{\text{hf}}}{\partial\mathbf{w}_{i}}\left[\mathbf{w}_{i,k}^{\text{hf}}\right]\left(\widetilde{\mathbf{w}}_{i,j,k}+\boldsymbol{\Psi}_{i,\mathbf{u}_{\text{in}}}-\mathbf{w}_{i,k}^{\text{hf}}\;,\;\mathbf{z}\right)+\mathcal{E}_{i}^{\text{hf}}(\boldsymbol{\eta}_{j}\;,\;\mathbf{z})=0\quad\forall\,\mathbf{z}\in\mathcal{X}_{i,0}^{\text{hf}}. \tag{24}\]
In more detail, given the snapshots \(\{\mathbf{w}_{i,k}^{\text{hf}}:i=1,2,k=1,\ldots,n_{\text{train}}\}\) and the reduced spaces \(\mathcal{Z}_{1},\mathcal{Z}_{2},\mathcal{W}\), we compute the perturbations \(\{\widetilde{\mathbf{w}}_{i,j,k}\}_{j,k}\subset\mathcal{X}_{i,0}^{\text{hf}}\) for \(i=1,2\), and then we update the reduced spaces \(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\) as follows:
\[\mathcal{Z}_{i}^{\text{new}}=\mathcal{Z}_{i}\oplus\mathcal{Z}_{i}^{\prime}, \quad\text{with}\left[\mathcal{Z}_{i}^{\prime}\right]=\mathsf{POD}\left(\{ \Pi_{\mathcal{Z}_{i}^{\perp}}\widetilde{\mathbf{w}}_{i,j,k}:j=1,\ldots,m,k=1,\ldots,n_{\text{train}}\},\|\cdot\|_{\mathcal{X}_{i}},n^{\prime}\right), \tag{25}\]
where \(\Pi_{\mathcal{Z}_{i}^{\perp}}\bullet\) denotes the projection of \(\bullet\) onto the orthogonal complement of the space \(\mathcal{Z}_{i}\) and \(n^{\prime}\) is a given integer.
Some comments are in order. The hierarchical construction of the state approximation space (25) has been proposed in a similar context in [50]. The integer \(n^{\prime}\) should be sufficiently large to ensure stability of the DD formulation; we further comment on the selection of \(n^{\prime}\) in the numerical experiments. Finally, in Appendix C, we provide a formal justification of the enrichment strategy for a linear problem.
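In algebraic form, each perturbed snapshot in (24) is obtained from one linear solve with the Jacobian at the corresponding training snapshot, and the update (25) is a POD of the perturbations projected onto the orthogonal complement of the current space. The sketch below assumes dense matrices, an \(\mathcal{X}_{i}\)-orthonormal basis `Z`, and a control matrix/basis pair (`E`, `W`) expressed in FE coordinates; these are illustrative assumptions.

```python
import numpy as np

def enrich_trial_space(Z, X, snaps, lifts, R_list, J_list, E, W, n_prime, pod):
    """Enrichment of the trial space, cf. (24)-(25), for one component.
    Z: current trial basis (N_hf x n), assumed X-orthonormal;  X: Gram matrix;
    snaps, lifts: HF snapshots and lifting fields (one column per training point);
    R_list, J_list: residual vectors and Jacobian matrices at the snapshots;
    E: control matrix (N_hf x N_port);  W: control basis (N_port x m);
    n_prime: number of added modes;  pod: POD routine (see the sketch above)."""
    perts = []
    for k in range(snaps.shape[1]):
        rhs = R_list[k][:, None] + E @ W                  # all controls eta_j at once
        wt = (snaps[:, [k]] - lifts[:, [k]]) - np.linalg.solve(J_list[k], rhs)  # (24)
        perts.append(wt)
    P = np.hstack(perts)
    P = P - Z @ (Z.T @ (X @ P))           # project onto the orthogonal complement of Z
    Z_prime, _ = pod(P, X, n_prime)       # compression step (25)
    return np.hstack([Z, Z_prime])
```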
### Hybrid solver
In the introduction, we anticipated the importance of developing a DD formulation that enables the seamless coupling of local, independently generated models. We here illustrate how to combine the HF model introduced in section 3 with the local ROM introduced in section 4. To provide a concrete reference, we assume that the HF model (7) is solved in \(\Omega_{1}\) and that the LSPG ROM (18) is solved in \(\Omega_{2}\).
We set \(N_{1}^{\mathbf{w}}=N_{1}^{\mathbf{u}}+N_{1}^{\mathbf{p}}\) and we define the basis (cf. section 3.1)
\[\{\boldsymbol{\xi}_{1,j}\}_{j=1}^{N_{1}^{\mathbf{w}}}=\left\{\text{vec}(\boldsymbol{\varphi}_{1,1},0),\ldots,\text{vec}(\boldsymbol{\varphi}_{1,N_{1}^{\mathbf{u}}},0),\text{vec}(0,0,\psi_{1,1}),\ldots,\text{vec}(0,0,\psi_{1,N_{1}^{\mathbf{p}}})\right\}.\]
We introduce the vector-valued representation of the lifted state field \(\underline{\hat{\mathbf{w}}}_{1}=\underline{\mathbf{w}}_{1}-\underline{\boldsymbol{\Psi}}_{1,\mathbf{u}_{\text{in}}}\in\mathbb{R}^{N_{1}^{\mathbf{w}}}\). Then, we introduce the matrices (see (19a) and (21a)) \(\mathbf{A}_{1}^{\text{hf}}\in\mathbb{R}^{3N_{\Gamma}\times N_{1}^{\mathbf{w}}}\) and \(\widehat{\mathbf{E}}_{1}^{\text{hf}}\in\mathbb{R}^{N_{1}^{\mathbf{w}}\times m}\) such that
\[\left(\mathbf{A}_{1}^{\text{hf}}\right)_{q+(\ell-1)N_{\Gamma},j}=\sqrt{\omega_{q}^{\Gamma}}\left(\boldsymbol{\xi}_{1,j}(\mathbf{x}_{q}^{\Gamma})\right)_{\ell},\quad\left(\widehat{\mathbf{E}}_{1}^{\text{hf}}\right)_{j,k}=\mathcal{E}_{1}^{\text{hf}}\left(\boldsymbol{\eta}_{k},\boldsymbol{\xi}_{1,j}\right).\]
Then, we can state the SQP method for the hybrid coupled problem:
\[\min_{\begin{subarray}{c}\underline{\hat{\mathbf{w}}}_{1}\in\mathbb{R}^{N_{1}^{\mathbf{w}}},\\ \boldsymbol{\alpha}_{2}\in\mathbb{R}^{n},\\ \boldsymbol{\beta}\in\mathbb{R}^{m}\end{subarray}}\frac{1}{2}\big{|}\mathbf{A}_{1}^{\text{hf}}\,\underline{\hat{\mathbf{w}}}_{1}-\mathbf{A}_{2}\boldsymbol{\alpha}_{2}+\mathbf{b}\big{|}^{2}+\frac{\delta}{2}|\boldsymbol{\beta}|^{2}\quad\text{s.t.}\left\{\begin{array}{l}\mathbf{R}_{1}^{\text{hf},it}+\mathbf{J}_{1}^{\text{hf},it}\left(\underline{\hat{\mathbf{w}}}_{1}-\underline{\hat{\mathbf{w}}}_{1}^{it}\right)+\widehat{\mathbf{E}}_{1}^{\text{hf}}\boldsymbol{\beta}=0;\\ \left(\widehat{\mathbf{J}}_{2}^{\text{pg},it}\right)^{\top}\left(\widehat{\mathbf{R}}_{2}^{\text{pg},it}+\widehat{\mathbf{J}}_{2}^{\text{pg},it}\left(\boldsymbol{\alpha}_{2}-\boldsymbol{\alpha}_{2}^{it}\right)+\widehat{\mathbf{E}}_{2}^{\text{pg}}\boldsymbol{\beta}\right)=0,\end{array}\right. \tag{26a}\]
where
\[\left(\mathbf{R}_{1}^{\text{hf},it}\right)_{j}=\mathcal{R}_{1}^{\text{hf}}(\mathbf{w}_{1}^{it},\boldsymbol{\xi}_{1,j}),\quad\left(\mathbf{J}_{1}^{\text{hf},it}\right)_{j,k}=\frac{\partial\mathcal{R}_{1}^{\text{hf}}}{\partial\mathbf{w}_{1}}\left[\mathbf{w}_{1}^{it}\right]\left(\boldsymbol{\xi}_{1,k},\boldsymbol{\xi}_{1,j}\right),\quad j,k=1,\ldots,N_{1}^{\mathbf{w}}, \tag{26b}\]
with \(\mathbf{w}_{1}^{it}=\hat{\mathbf{w}}_{1}^{it}+\boldsymbol{\Psi}_{1,\mathbf{u}_{\text{in}}}\).
Problem (26a) can be solved using the static condensation procedure outlined in (12a) and (12b). Note that for (26a) the least-square problem (12a)\({}_{1}\) is of size \(m\): the computational cost is hence independent of \(N_{1}^{\mathbf{w}}\). On the other hand, the cost to assemble the least-square problem in (12a)\({}_{1}\) is dominated by the cost of computing \((\mathbf{J}_{1}^{\mathrm{hf},it})^{-1}\mathbf{E}_{1}^{\mathrm{hf}}\), which requires the solution to \(m\) linear systems of size \(N_{1}^{\mathbf{w}}\). We emphasize that the local models in (26a) only communicate through the vehicle of the control \(\mathbf{s}\) (or equivalently through the generalized coordinates \(\boldsymbol{\beta}\)) and the matrices \(\mathbf{A}_{1}^{\mathrm{hf}},\mathbf{A}_{2}\) in the objective function: the implementation of the local models is hence agnostic to the discretization that is employed in the neighboring subdomain.
## 5 Localized training and adaptive enrichment
In section 4 we devised the CB-ROM based on the DD formulation (6b). The major limitation of the approach is the need for global HF solves to generate the reduced spaces (cf. (14)). In this section, we propose a general strategy to adaptively construct the reduced space for state and control, for the model problem of section 2.2. First, in section 5.1, we present the general multi-component DD formulation and relevant quantities that are employed in the adaptive procedure. Then, in sections 5.2 and 5.3, we present the localized training strategies for the control \(\mathbf{s}\) and for the local states. Finally, in section 5.4 we present the adaptive enrichment strategy that allows the correction of the local approximations based on global reduced-order solves.
### Multi-component formulation
Given the archetype components \(\{\widetilde{\Omega}^{k}\}_{k=1}^{N_{\mathrm{c}}}\) and the reference port \(\widetilde{\Gamma}\), we introduce the instantiated system \(\Omega\subset\mathbb{R}^{2}\) such that \(\overline{\Omega}=\bigcup_{i=1}^{N_{\mathrm{dd}}}\overline{\Omega}_{i}\) with \(\Omega_{i}=\Phi^{L_{i}}(\widetilde{\Omega}^{L_{i}},\mu_{i})\) for \(i=1,\ldots,N_{\mathrm{dd}}\) and ports \(\{\Gamma_{j}\}_{j=1}^{N_{\mathrm{f}}}\) such that \(\Gamma_{j}=\Psi_{j}(\widetilde{\Gamma})\), where \(\mu_{1},\ldots,\mu_{N_{\mathrm{dd}}}\) are geometric parameters associated with the elemental mapping and \(\Psi_{1},\ldots,\Psi_{N_{\mathrm{f}}}\) are the mappings associated with the ports; we further introduce the union of all ports \(\Gamma:=\bigcup_{j=1}^{N_{\mathrm{f}}}\Gamma_{j}\). For \(i=1,\ldots,N_{\mathrm{dd}}\), we denote by \(\mathcal{I}_{i}^{\Gamma}\subset\{1,\ldots,N_{\mathrm{f}}\}\) the set of the indices of the ports that belong to \(\partial\Omega_{i}\). We further denote by \(\mathbf{n}_{j}^{+}\) the positive normal to the port \(\Gamma_{j}\). We denote by \(\widetilde{\boldsymbol{\Psi}}_{k,\mathbf{u}_{\mathrm{in}}}\) the HF solution to the Navier-Stokes equations in \(\widetilde{\Omega}^{k}\) with inflow condition \(u_{0}(\mathrm{Re}_{\mathrm{ref}})\) for some \(\mathrm{Re}_{\mathrm{ref}}>0\) and Neumann boundary conditions on the remaining ports; then, we introduce the parametric field \(\widetilde{\boldsymbol{\Psi}}_{k,\mathbf{u}_{\mathrm{in}}}(\mathrm{Re})=\frac{\mathrm{Re}}{\mathrm{Re}_{\mathrm{ref}}}\widetilde{\boldsymbol{\Psi}}_{k,\mathbf{u}_{\mathrm{in}}}\).
We introduce the FE spaces \(\widetilde{\mathcal{X}}_{k}^{\mathrm{hf}}\) and \(\widetilde{\mathcal{X}}_{k,0}^{\mathrm{hf}}\) associated with the domain \(\widetilde{\Omega}^{k}\) (cf. section 3.1) for \(k=1,\ldots,N_{\mathrm{c}}\); furthermore, we introduce the reduced spaces \(\widetilde{\mathcal{Z}}_{k}\subset\widetilde{\mathcal{X}}_{k,0}^{\mathrm{hf}}\) and the affine spaces \(\widetilde{\mathcal{Z}}_{k}^{\mathrm{dir}}(\mathrm{Re}):=\widetilde{ \boldsymbol{\Psi}}_{k,\mathbf{u}_{\mathrm{in}}}(\mathrm{Re})+\widetilde{ \mathcal{Z}}_{k}\) --to shorten notation, we omit the dependence of \(\widetilde{\mathcal{Z}}_{k}^{\mathrm{dir}}\) on the Reynolds number. The choice \(\widetilde{\mathcal{Z}}_{k}=\widetilde{\mathcal{X}}_{k,0}^{\mathrm{hf}}\) corresponds to considering the HF discretization in all components of type \(k\). Then, we define the global discontinuous approximation space over \(\Omega\)
\[\mathcal{X}^{\mathrm{dd}}:=\left\{\mathbf{w}\in[L^{2}(\Omega)]^{3}\,:\, \mathbf{w}|_{\Omega_{i}}\circ\Phi^{L_{i}}(\cdot,\mu_{i})\in\widetilde{ \mathcal{Z}}_{L_{i}}^{\mathrm{dir}},\ \ i=1,\ldots,N_{\mathrm{dd}}\right\}. \tag{27}\]
We denote by \(\llbracket\mathbf{w}\rrbracket\in[L^{2}(\Gamma)]^{3}\) the jump of the field \(\mathbf{w}\) on the interfaces of the partition
\[\llbracket\mathbf{w}\rrbracket(\mathbf{x})=\mathbf{w}^{+}(\mathbf{x})- \mathbf{w}^{-}(\mathbf{x})\ \ \forall\,\mathbf{x}\in\Gamma_{j},\ \ \mathbf{w}^{\pm}(\mathbf{x}):=\lim_{\epsilon\to 0^{+}} \mathbf{w}(\mathbf{x}\mp\epsilon\mathbf{n}_{j}^{+}(\mathbf{x})),\quad j=1, \ldots,N_{\mathrm{f}}. \tag{28}\]
Given the port reduced space \(\widetilde{\mathcal{W}}\subset[L^{2}(\widetilde{\Gamma})]^{3}\), we also introduce the global port space over \(\Gamma\)
\[\mathcal{W}^{\mathrm{dd}}:=\left\{\mathbf{s}\in[L^{2}(\Gamma)]^{3}\,:\, \mathbf{s}|_{\Gamma_{j}}\circ\Psi_{j}\in\widetilde{\mathcal{W}},\ \ j=1,\ldots,N_{\mathrm{f}}\right\}. \tag{29}\]
We handle geometry deformations using the _discretize-then-map_ approach (cf. [47]). Given the FE field \(\mathbf{w}\in\mathcal{X}_{i}^{\mathrm{hf}}\), we denote by \(\widetilde{\mathbf{w}}\in\widetilde{\mathcal{X}}_{i}^{\mathrm{hf}}\) the corresponding field in the reference configuration; the two fields share the same FE vector. We introduce norms in the reference components
\[\|\mathbf{w}=\mathrm{vec}\left(\mathbf{u},p\right)\|_{\widetilde{\mathcal{X}} _{k}}^{2}=\int_{\widetilde{\Omega}^{k}}\nabla\mathbf{u}:\nabla\mathbf{u}+| \mathbf{u}|^{2}+p^{2}\,dx,\quad\|\mathbf{s}=\mathrm{vec}\left(\mathbf{g},h \right)\|_{\widetilde{\Gamma}}^{2}=\int_{\widetilde{\Gamma}}\left|\nabla_{ \widetilde{\Gamma}}\mathbf{g}\right|^{2}+|\mathbf{g}|^{2}+h^{2}\,dx, \tag{30}\]
for \(k=1,\ldots,N_{\mathrm{c}}\). Then, we define the corresponding norms for the instantiated components that are obtained by applying the prescribed deformation
\[\|\mathbf{w}\|_{\mathcal{X}_{i}}:=\|\mathbf{w}|_{\Omega_{i}}\circ\Phi^{L_{i}}(\cdot,\mu_{i})\|_{\widetilde{\mathcal{X}}_{L_{i}}},\ \ i=1,\ldots,N_{\mathrm{dd}},\quad\|\mathbf{s}\|^{2}=\sum_{j=1}^{N_{\mathrm{f}}}\big{\|}\mathbf{s}|_{\Gamma_{j}}\circ\Psi_{j}\big{\|}_{\widetilde{\Gamma}}^{2}. \tag{31}\]
Note that the algebraic norms associated with (31) are independent of the geometric parameters that enter in the mappings \(\{\Phi^{L_{i}}(\cdot,\mu_{i})\}_{i}\): there exist indeed \(N_{\mathrm{c}}\) matrices \(\mathbf{X}_{1},\ldots,\mathbf{X}_{N_{\mathrm{c}}}\) such that \(\|\mathbf{w}\|_{\mathcal{X}_{i}}=\sqrt{\underline{\mathbf{w}}^{\top}\mathbf{X}_{L_{i}}\,\underline{\mathbf{w}}}\) for
\(i=1,\ldots,N_{\rm dd}\). This observation simplifies the implementation of the dual residual norm used in the adaptive strategy (cf. (35)). Similarly, the variational forms associated with the PDE problem are defined for each archetype component and then mapped to obtain the variational forms for each instantiated component. We define the forms \(\widetilde{\mathcal{R}}^{\rm hf}_{k}:\widetilde{\mathcal{X}}^{\rm hf}_{k}\times\widetilde{\mathcal{X}}^{\rm hf}_{k,0}\times\mathcal{P}_{k}\times\mathbb{R}_{+}\to\mathbb{R}\) such that
\[\widetilde{\mathcal{R}}^{\rm hf}_{L_{i}}(\widetilde{\mathbf{w}},\widetilde{ \mathbf{z}};\mu_{i},{\rm Re})=\mathcal{R}^{\rm hf}_{i}(\mathbf{w},\mathbf{z}), \quad\forall\,\mathbf{w}\in\mathcal{X}^{\rm hf}_{i},\ \ \mathbf{z}\in\mathcal{X}^{\rm hf}_{i,0}. \tag{32}\]
We further define the boundary form
\[\widetilde{\mathcal{E}}^{\rm hf}_{L_{i},\ell}(\widetilde{\boldsymbol{\eta}},\widetilde{\mathbf{z}},\mu_{i})=\int_{\Gamma_{j_{i,\ell}}}\widetilde{\boldsymbol{\eta}}\circ\Psi_{j_{i,\ell}}^{-1}\cdot\widetilde{\mathbf{z}}\circ\Phi_{i}^{-1}\,dx\quad\text{where }\Phi_{i}:=\Phi^{L_{i}}(\cdot;\mu_{i}),\quad\forall\,\widetilde{\boldsymbol{\eta}}\in L^{2}(\widetilde{\Gamma};\mathbb{R}^{3}),\ \ \widetilde{\mathbf{z}}\in\widetilde{\mathcal{X}}^{\rm hf}_{L_{i},0}, \tag{33}\]
where \(j_{i,\ell}\in\{1,\ldots,N_{\rm f}\}\) is the index (in the global numbering) of the \(\ell\)-th port of the \(i\)-th component of the system.
We have now the elements to present the DD Galerkin formulation:
\[\min_{\mathbf{w}\in\mathcal{X}^{\rm dd},\,\mathbf{s}\in\mathcal{W}^{\rm dd}}\frac{1}{2}\int_{\Gamma}\left|\llbracket\mathbf{w}\rrbracket\right|^{2}dx+\frac{\delta}{2}\|\mathbf{s}\|^{2}\ \text{s.t.}\ \mathcal{R}^{\rm hf}_{i}(\mathbf{w},\mathbf{z})+\mathcal{E}^{\rm hf}_{i}(\mathbf{s},\mathbf{z})=0\ \ \forall\,\mathbf{z}\in\mathcal{Z}_{i},\quad i=1,\ldots,N_{\rm dd}; \tag{34a}\]
where \(\mathcal{Z}_{i}=\{\boldsymbol{\zeta}\in[H^{1}(\Omega_{i})]^{3}:\,\boldsymbol{\zeta}\circ\Phi^{L_{i}}(\cdot;\mu_{i})\in\widetilde{\mathcal{Z}}_{L_{i}}\}\) and
\[\mathcal{E}^{\rm hf}_{i}(\mathbf{s},\mathbf{z})=\sum_{j\in\mathcal{I}^{\Gamma}_{i}}\int_{\Gamma_{j}}\mathbf{s}\cdot\mathbf{z}\,dx, \tag{34b}\]
for \(i=1,\ldots,N_{\rm dd}\). Formulation (34a) can be adapted to cope with Petrov-Galerkin ROMs using the strategy outlined in section 4.5: we omit the details.
Given the estimate \((\mathbf{w}^{\star},\mathbf{s}^{\star})\) of the solution to (34a), we devise two error indicators to assess its accuracy; the indicators are employed in section 5.4 to drive the enrichment strategy. First, we define the local errors
\[e_{i}:=\sup_{\mathbf{z}\in\mathcal{X}^{\rm hf}_{i,0}}\frac{\mathcal{R}^{\rm hf}_{i}(\mathbf{w}^{\star},\mathbf{z})+\mathcal{E}^{\rm hf}_{i}(\mathbf{s}^{\star},\mathbf{z})}{\|\mathbf{z}\|_{\mathcal{X}_{i}}},\quad i=1,\ldots,N_{\rm dd}. \tag{35}\]
The quantity \(e_{i}\) measures the performance of the \(i\)-th ROM to approximate the solution to the Navier-Stokes equations for the control \(\mathbf{s}^{\star}\). We further introduce the jump errors:
\[e_{j}^{\rm jump}:=\sqrt{\int_{\Gamma_{j}}\,|\llbracket\mathbf{w}^{\star}\rrbracket|^{2}\,dx},\quad j=1,\ldots,N_{\rm f}. \tag{36}\]
The indicator (36) controls the jump of the state estimate at the interfaces: the value of \(e_{j}^{\rm jump}\) can thus be interpreted as the measure of the ability of the control to nullify the jump at the \(j\)-th interface of the domain.
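Algebraically, the dual norm in (35) reduces to an \(\mathbf{X}_{L_{i}}^{-1}\)-weighted norm of the assembled residual vector, while (36) is a quadrature-weighted Euclidean norm of the interface jump; a minimal sketch, assuming dense SPD Gram matrices and precomputed jump values at the interface quadrature points, is reported below.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def local_error_indicator(r, X):
    """Dual-norm indicator (35): e_i = sup_z r(z)/||z||_X = sqrt(r^T X^{-1} r).
    r : assembled residual vector R_i^hf(w*, .) + E_i^hf(s*, .);
    X : SPD Gram matrix of the X_i norm."""
    return float(np.sqrt(r @ cho_solve(cho_factor(X), r)))

def jump_indicator(jump_values, weights):
    """Jump indicator (36): jump_values has one row per interface quadrature point
    (three state components per row); weights are the quadrature weights."""
    return float(np.sqrt(np.sum(weights * np.sum(jump_values**2, axis=1))))
```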
**Remark 5**.: _In order to enhance the compressibility of the local state and control manifolds, following [51], in the numerical experiments, we consider the approximation spaces_
\[\begin{split}\mathcal{X}^{\rm dd}&:=\left\{ \mathbf{w}\in L^{2}(\Omega;\mathbb{R}^{3})\,:\,\mathbf{A}(\theta_{i})\mathbf{ w}|_{\Omega_{i}}\circ\Phi^{L_{i}}(\cdot,\mu_{i})\in\widetilde{\mathcal{Z}}^{ \rm dir}_{L_{i}},\ \ i=1,\ldots,N_{\rm dd}\right\};\\ \mathcal{W}^{\rm dd}&:=\left\{\mathbf{s}\in L^{2}( \Gamma;\mathbb{R}^{3})\,:\,\mathbf{A}(\omega_{j})\mathbf{s}|_{\Gamma_{j}} \circ\Psi_{j}\in\widetilde{\mathcal{W}},\ \ j=1,\ldots,N_{\rm f}\right\};\end{split}\] (37a) _where \[\theta_{i}\] (resp., \[\omega_{i}\] ) is the angle between the inlet port of the \[i\] -th deformed component \[\Omega_{i}\] (resp., the \[j\] -th port \[\Gamma_{j}\] ) and the \[x_{1}\] axis, and_ \[\mathbf{A}(\theta)=\left[\begin{array}{ccc}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{array}\right]. \tag{37b}\]
_We remark that several authors have considered more sophisticated (Piola) transformations to improve the compressibility of solution manifolds in internal flows (see, e.g., [14]): in this respect, our choice is a compromise between accuracy and simplicity of implementation._
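As an illustration of the preprocessing implied by (37), the snippet below applies the rotation (37b) to a snapshot stored as point values of \((u_{x},u_{y},p)\); the array layout is an assumption made for the example.

```python
import numpy as np

def rotation_matrix(theta):
    """The 3x3 matrix A(theta) of (37b): rotates the velocity, leaves the pressure unchanged."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rotate_snapshot(w, theta):
    """Apply A(theta) to a snapshot w of shape (n_points, 3) with columns [u_x, u_y, p]."""
    return w @ rotation_matrix(theta).T
```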
### Pairwise training for the control variables
Following [16, 17], we pursue a pairwise-training approach to generate the port space \(\widetilde{\mathcal{W}}\). We perform HF simulations for systems of two components that represent all possible connections (channel-channel, channel-junction, junction-junction, junction-channel) based on random Dirichlet boundary conditions at the inflow,
random Neumann conditions at the outflow, and a random selection of the Reynolds number and the geometric parameters in prescribed parameter ranges (cf. Figure 4). The HF data for the ports are retrieved and stored, and finally the port space \(\widetilde{\mathcal{W}}\) is constructed using POD. Recalling (37), the HF data are rotated using (37b) before applying the compression technique.
Similarly to [16, 52], we consider the inlet velocity
\[\mathbf{u}_{\mathrm{in}}(y)=-\frac{\mathrm{Re}}{\mathrm{Re}_{\mathrm{ref}}} \left(u_{0}(y)+\delta_{u}\sum_{k=1}^{R}\frac{c_{k}}{k^{2}}P_{k}(-1+2y)\right) \mathbf{n}, \tag{38}\]
where \(\mathrm{Re}\sim\mathrm{Uniform}(\mathrm{Re}_{\mathrm{min}},\mathrm{Re}_{ \mathrm{max}})\), \(\{P_{k}\}_{k}\) are zero-flowrate weighted polynomials (cf. [17, section 3.1.1])
\[P_{k}(y)=\left\{\begin{array}{ll}(1-y^{2})y,&\mbox{if $k=1$},\\ (1-y^{2})(5y^{2}-1),&\mbox{if $k=2$},\\ (1-y^{2})\mathcal{E}_{k}(y),&\mbox{if $3\leq k\leq R$},\end{array}\right.\]
and \(\{\mathcal{E}_{k}\}_{k}\) are the Legendre polynomials. The coefficients of the expansion are sampled independently from a standard Gaussian distribution, \(c_{1},\ldots,c_{R}\stackrel{{\mathrm{iid}}}{{\sim}}\mathcal{N}( 0,1)\); \(\mathbf{n}\) denotes the outward normal to \(\Omega\) on the inlet boundary, \(y\in(0,1)\) is the curvilinear coordinate, and \(u_{0}(y)=4y(1-y)\) is the Poiseuille velocity profile. The coefficient \(\delta_{u}\) is selected _a posteriori_ to ensure that the inflow is positive for all \(y\in(0,1)\). Similarly, we prescribe the outward flux as
\[\mathbf{g}_{\mathrm{out}}(y)=\left(g_{0}+\delta_{g}\sum_{k=1}^{R}c_{k}^{ \mathrm{out}}\mathcal{E}_{k}(-1+2y)\right)\mathbf{n},\quad c_{1}^{\mathrm{out }},\ldots,c_{R}^{\mathrm{out}}\stackrel{{\mathrm{iid}}}{{\sim}} \mathcal{N}(0,1), \tag{39}\]
where \(g_{0}\sim\mathrm{Uniform}(g_{0\mathrm{min}},g_{0\mathrm{max}})\), and we choose the coefficient \(\delta_{g}\) to prevent reverse flow.
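A minimal sketch of how the random inlet data (38) can be sampled is reported below; the Neumann datum (39) is handled analogously. The routine returns the scalar profile multiplying \(-\mathbf{n}\), and the value of \(\delta_{u}\) is fixed a priori here (rather than a posteriori, as in the text) purely for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def P(k, t):
    """Zero-flowrate weighted polynomials of (38), evaluated at t = -1 + 2y in [-1, 1]."""
    if k == 1:
        return (1.0 - t**2) * t
    if k == 2:
        return (1.0 - t**2) * (5.0 * t**2 - 1.0)
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                      # k-th Legendre polynomial
    return (1.0 - t**2) * legendre.legval(t, coeffs)

def sample_inlet_profile(y, Re, Re_ref=100.0, R=4, delta_u=0.1, seed=None):
    """Scalar magnitude of the inlet velocity (38) along -n at the curvilinear coordinate y."""
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(R)           # c_1, ..., c_R iid N(0, 1)
    u0 = 4.0 * y * (1.0 - y)             # Poiseuille profile
    t = -1.0 + 2.0 * y
    perturbation = sum(c[k - 1] / k**2 * P(k, t) for k in range(1, R + 1))
    return (Re / Re_ref) * (u0 + delta_u * perturbation)
```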
### Localized training for the state variables
After having built the reduced space for the control, we repeatedly solve (34a) for several random configurations and several parameter values to acquire datasets of simulations for each archetype component. Thanks to port reduction, the computational cost of the global problem is significantly reduced if compared with the full HF model; nevertheless, we choose to consider systems with a moderate number of components (up to four) to further reduce offline costs. The HF data for components of the same type are mapped in the reference configurations, rotated through (37b), and are then used to build the local reduced spaces \(\widetilde{\mathcal{Z}}_{1},\ldots,\widetilde{\mathcal{Z}}_{N_{c}}\).
We observe that the training strategy is not fully local, since it requires assembling systems with up to four components. In our experience, the practical implementation of a fully localized training strategy for incompressible flows is extremely challenging due to the need to ensure that the fluid flows from left to right and that the prescribed Neumann conditions lead to physical velocities. The choice of global training based on a reduced control space for systems of moderate dimension thus represents a trade-off between offline efficiency and accuracy. The adaptive strategy presented in the next section provides a systematic way to improve the quality of the local reduced spaces.
### Adaptive enrichment
In Algorithm 1, we present the full adaptive strategy for the construction of the reduced spaces. The procedure extends the method introduced in [36]; to clarify the presentation, we postpone two steps of the algorithm to sections 5.4.1 and 5.4.2.
Figure 4: channel-junction connection for the training of \(\mathbf{s}\).
```
1: Generate the reduced space \(\widetilde{\mathcal{W}}\) for the control through pairwise training (cf. section 5.2).
2: Generate the local spaces \(\{\widetilde{\mathcal{Z}}_{k}\}_{k=1}^{N_{c}}\) for the state through global training (cf. section 5.3).
3: Enrich the reduced spaces \(\{\widetilde{\mathcal{Z}}_{k}\}_{k=1}^{N_{c}}\) based on the port space \(\widetilde{\mathcal{W}}\) (cf. section 5.4.2).
4: Sample \(n_{\text{train}}^{\text{glo}}\) global configurations, \(\mathscr{P}_{\text{train}}:=\{\mu^{j}\}_{j=1}^{n_{\text{train}}^{\text{glo}}}\).
5: (if LSPG projection is employed) Build the empirical test space (cf. section 4.2)
6:for\(it=1,\ldots,\texttt{maxit}\)do
7: Initialize the datasets \(\mathscr{D}_{(1)}=\ldots=\mathscr{D}_{(N_{c})}=\emptyset\) and \(\mathscr{D}_{\mathbf{s}}=\emptyset\).
8:for\(\mu\in\mathscr{P}_{\text{train}}\)do
9: Compute the reduced solution using the CB-ROM solver (cf. (34a)).
10: Compute local residuals \(\{e_{i}\}_{i=1}^{N_{\text{dd}}}\) (cf. (35)) and the jumps \(\{e_{j}^{\text{jump}}\}_{j=1}^{N_{\text{f}}}\) (cf. (36))
11: Mark the \(m_{\mathbf{w}}\) instantiated components with the largest residuals of each type \(\{\mathbb{I}_{\text{mark}}^{n,(k)}\}_{k=1}^{N_{c}}\).
12: Mark the \(m_{\mathbf{s}}\) instantiated ports with the largest port jumps of each type \(\mathbb{I}_{\text{mark}}^{n,\text{p}}\).
13: Update the datasets \(\mathscr{D}_{(1)},\ldots,\mathscr{D}_{(N_{c})}\) and \(\mathscr{D}_{\mathbf{s}}\) (cf. section 5.4.1)
14:endfor
15: Update the port POD space \(\widetilde{\mathcal{W}}=\widetilde{\mathcal{W}}\oplus\text{POD}\left(\{ \Pi_{\widetilde{\mathcal{W}}^{\perp}}\widetilde{\mathbf{s}}:\widetilde{\mathbf{s }}\in\mathscr{D}_{\mathbf{s}}\},\|\cdot\|_{\widetilde{\Gamma}^{1}},m^{\text{ glo}}\right)\).
16: Update the reduced spaces \(\widetilde{\mathcal{Z}}_{k}=\widetilde{\mathcal{Z}}_{k}\oplus\text{POD}\left( \{\Pi_{\widetilde{\mathcal{Z}}_{k}^{\perp}}\widetilde{\mathbf{w}}:\widetilde{ \mathbf{w}}\in\mathscr{D}_{(k)}\},\|\cdot\|_{\widetilde{\mathcal{X}}_{k}^{1} },n^{\text{glo}}\right)\), \(k=1,\ldots,N_{c}\).
17: (Optional) Enrich the reduced spaces \(\{\widetilde{\mathcal{Z}}_{k}\}_{k=1}^{N_{c}}\) based on the port space (cf. section 5.4.2).
18: (if LSPG projection is employed) Update the empirical test space (cf. section 4.2)
```
**Algorithm 1** Adaptive enrichment procedure.
As in [36], we add \(m_{\mathbf{w}}\) (resp., \(m_{\mathbf{s}}\)) snapshots to the state (resp., control) datasets for each element \(\mu\in\mathscr{P}_{\text{train}}\), instead of selecting the marked elements after having computed the local indicators for all configurations: this choice avoids the storage of all reduced global solutions and ultimately simplifies the implementation. In our experience, the enrichment of the state spaces is only needed for localized training (Line 3 of the Algorithm) but not after each update of the control space \(\widetilde{\mathcal{W}}\) (Line 17 of the Algorithm): a possible explanation is that the enrichment step inherently couples the construction of the two spaces. Further numerical investigations are needed to clarify this aspect.
Algorithm 1 depends on several user-defined parameters. The localized training of the control space depends on (i) the sampling distributions for the Dirichlet inflow boundary condition (38) and for the Neumann outflow condition (39); (ii) the number \(n_{\text{loc}}^{\mathbf{s}}\) of samples; and (iii) the number \(m_{0}\) of retained POD modes. The localized training for the state variables depends on (i) the number \(N_{\text{dd}}\) of components of the networks considered; (ii) the number \(n_{\text{loc}}^{\mathbf{w}}\) of samples; and (iii) the number \(n_{0}\) of retained POD modes for each archetype component. The enrichment strategy depends on (i) the number \(n^{\prime}\) of added modes (cf. section 5.4.2). The adaptive loop depends on (i) the number \(\texttt{maxit}\) of outer loop iterations; (ii) the number \(n_{\text{train}}^{\text{glo}}\) of global configurations; (iii) the numbers \(m_{\mathbf{w}}\) and \(m_{\mathbf{s}}\) of marked components and ports; (iv) the numbers \(n^{\text{glo}},m^{\text{glo}}\) of modes added at each iteration for state and control variables. We envision that the selection of several parameters can be automated: to provide a concrete reference, the parameters \(n^{\text{glo}},m^{\text{glo}}\) can be updated based on an energy/projection criterion. Nevertheless, further investigations are necessary to provide actionable guidelines to select all the parameters.
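Since the procedure involves many user-defined quantities, it can be convenient to collect them in a single configuration object. The grouping and the default values below are purely illustrative and do not reflect recommended settings.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveEnrichmentConfig:
    # localized training of the control space (section 5.2)
    n_loc_s: int = 60        # pairwise HF samples per connection type
    m0: int = 10             # retained POD modes for the initial port space
    # localized training of the state spaces (section 5.3)
    N_dd_train: int = 4      # components per training network
    n_loc_w: int = 20        # sampled training networks
    n0: int = 10             # retained POD modes per archetype component
    # port-based enrichment (section 5.4.2)
    n_prime: int = 10        # modes added to each state space per enrichment
    # adaptive loop (Algorithm 1)
    maxit: int = 3           # outer-loop iterations
    n_train_glo: int = 50    # global configurations per iteration
    m_w: int = 1             # marked components per type
    m_s: int = 1             # marked ports
    n_glo: int = 10          # state modes added per iteration
    m_glo: int = 10          # control modes added per iteration
```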
#### 5.4.1 Computation of the local solutions
Given the sampled port \(\Gamma_{j}\), we solve the HF model with flux boundary conditions given by the control \(\mathbf{s}^{\star}\) on the remaining port (cf. Figure 5) in the domain \(\Omega^{\star}=\Omega_{j}^{+}\cup\Omega_{j}^{-}\) where \(\Omega_{j}^{+},\Omega_{j}^{-}\) are the elements of the network that share \(\Gamma_{j}\). Given the sampled component \(\Omega_{i}\), we consider two separate strategies: (i) we solve the global hybrid model in which we replace the local ROM with the local HF model in the sampled component, or (ii) we solve the HF model in the sampled component with boundary conditions prescribed by the control estimate \(\mathbf{s}^{\star}\). The second option is significantly less computationally expensive; however, we experienced some convergence issues for very inaccurate controls \(\mathbf{s}^{\star}\). For this reason, in the numerical experiments, we rely on global hybrid solves for the first iteration of the algorithm and on fully local solves for the subsequent iterations.
#### 5.4.2 Enrichment of the state spaces
It suffices to generalize the procedure of section 4.4. We denote by \(\{\widehat{\mathbf{w}}_{\ell}^{k}\}_{\ell=1}^{n_{\text{train}}^{k}}\) a dataset of snapshots associated with the \(k\)-th archetype component and the local parameters \(\{\mu_{\ell}^{k}\}_{\ell=1}^{n_{\text{train}}^{k}}\) and \(\{\text{Re}_{\ell}\}_{\ell=1}^{n_{\text{train}}^{k}}\). The dataset \(\{\widehat{\mathbf{w}}_{\ell}^{k}\}_{\ell=1}^{n_{\text{train}}^{k}}\) is extracted from the global simulations performed in the internal loop (cf. Lines \(8-14\)) of Algorithm 1 or from the simulations performed to generate the initial local space \(\widetilde{\mathcal{Z}}_{k}\) (cf. Line 2). We denote by \(\{\mathbf{\eta}_{j}^{\prime}\}_{j=1}^{m}\) the newly-added modes of the port space; we further recall the definitions of the local residuals (32) and boundary forms (33). Then, we define \(\widehat{\mathbf{w}}_{\ell,j,q}^{k}\) such that (compare with (24))
\[\widetilde{\mathcal{R}}_{k}^{\text{hf}}(\widehat{\mathbf{w}}_{\ell}^{k}, \mathbf{z};\mu_{\ell}^{k},\text{Re}_{\ell})+\frac{\partial\widetilde{\mathcal{ R}}_{k}^{\text{hf}}}{\partial\widehat{\mathbf{w}}_{k}}\left[\widetilde{\mathbf{w}}_{ \ell}^{k},\mu_{\ell}^{k},\text{Re}_{\ell}\right]\left(\widetilde{\mathbf{w}}_{ \ell,j,q}^{k}+\widetilde{\mathbf{w}}_{k,\text{u}_{\text{in}}}(\text{Re}_{\ell })-\widetilde{\mathbf{w}}_{\ell}^{k}\,\ \mathbf{z}\right)+\widetilde{\mathcal{E}}_{k,q}^{\text{hf}}(\mathbf{\eta}_{j}^{ \prime}\,\ \mathbf{z})=0\quad\forall\,\mathbf{z}\in\widetilde{\mathcal{A}}_{k,0}^{ \text{hf}},\]
for \(\ell=1,\ldots,n_{\text{train}}^{k}\), \(j=1,\ldots,m\), and \(q=1,\ldots,N_{\text{port}}^{k}\) (\(N_{\text{port}}^{k}=2\) for the channel component, and \(N_{\text{port}}^{k}=3\) for the junction component). After having computed the snapshots \(\{\widehat{\mathbf{w}}_{\ell,j,q}^{k}\}_{\ell,j,q}\), we update the reduced space \(\widetilde{\mathcal{Z}}_{k}\) with \(n^{\prime}\) modes using POD (cf. (25)).
## 6 Numerical results
We present numerical results of the proposed method for the parameterized incompressible flow of section 2.2. The parameters are the Reynolds number and the geometric parameters \(\alpha\) and \(h_{c}\) introduced for each instantiated component. We consider a P2 FE discretization with 1281 degrees of freedom for the channel, and 3329 degrees of freedom for the junction. The regularization constant \(\delta\) is set equal to \(10^{-8}\).
### HF solver
We present the HF results for the Reynolds number \(\text{Re}=100\) and the geometric configuration shown in Figure 3(b). In Figure 6(a)-(b)-(c), we show the solution to the global HF problem (i.e., without domain decomposition) for the x-direction velocity, y-direction velocity, and the pressure, respectively. Figures 6(d)-(e)-(f) illustrate the difference between the solution to the global problem and the solution to the (multi-component generalization of the) DD formulation (6b). Our new formulation exhibits high accuracy, with a pointwise error of the order of \(10^{-6}\) for the three variables. Here, we employ the SQP method introduced in section 3.3.2; GNM (cf. section 3.3.1) does not converge for this value of the Reynolds number. For the solution to the DD problem, the global prediction at the interfaces is obtained by averaging the solution in the two neighboring sub-domains.
Figure 5: computation of local solution. Port update: if \(\Gamma_{1}\) is the sampled port, we solve the HF model in the components \(\Omega_{1}\cup\Omega_{2}\) with Neumann boundary conditions on the port \(\Gamma_{2}\) given by the predicted control \(\mathbf{s}^{*}\) (cf. Line 9, Algorithm 1). State update: if \(\Omega_{2}\) is the sampled component, we either solve the global problem using the HF discretization in \(\Omega_{2}\) and the ROM discretization in \(\Omega_{3}\) (option 1), or we solve the HF model in \(\Omega_{2}\) with Neumann boundary conditions on the ports \(\Gamma_{1}\) and \(\Gamma_{2}\) given by \(\mathbf{s}^{*}\) (option 2).
In Figure 7, we present the comparison between the monolithic FE solution and the solution to the DD formulation (5). The results of Figure 7 show much larger pointwise errors for both velocity and pressure -- the error for the pressure is \(\mathcal{O}(10^{-2})\) as opposed to \(\mathcal{O}(10^{-6})\). This result justifies the addition of the control \(h\) for the continuity equation.
In Figure 8, we present the variable jump across the interfaces for the new formulation (6b) and the standard formulation (5). For (5), the jump of the velocity field is modest, but it is significant (\(\mathcal{O}(10^{-1})\)) for the pressure. In contrast, for (6b), the jump of both velocity and pressure is extremely modest. These results further corroborate the introduction of the control \(h\) for the continuity equation.
Figure 6: HF formulation. (a)-(b)-(c) Behavior of the solution to the monolithic FE problem. (d)-(e)-(f) difference between the monolithic FE solution and the DD solution based on (6b).
Figure 7: HF formulation. Difference between the monolithic FE solution and the DD solution based on (5).
Figure 9 investigates the effect of the choice of the penalization norm for the control. In more detail, we compare the behavior of the horizontal control \(g_{x}\) for the first port \(\Gamma_{1}\) in Figure 3(b) for both \(L^{2}\) regularization and \(H^{1}\) regularization. We observe that the use of the \(H^{1}\) regularization dramatically reduces the spurious oscillations in the proximity of the boundaries of the domain. We further observe that, since \(\mathbf{n}=\mathrm{vec}(1,\,0)\), the control \(g_{x}\) should equal the viscous flux \(-p+\nu\frac{\partial u_{x}}{\partial x}\); provided that \(p\gg\left|\nu\frac{\partial u_{x}}{\partial x}\right|\), we hence find that \(g_{x}\approx-p\).
### MOR procedure for networks of moderate size
We now evaluate the performance of the ROM introduced in section 4 for the system configuration shown in Figure 3. Since the total number of degrees of freedom is relatively modest, we can afford to solve the multi-component generalization of (6b) with HF local models and HF control. This enables a rigorous assessment of the results. For the test cases presented in this section and in section 6.3, we choose the dimension of the original ROB (i.e., without ROB enrichment) for the state \(n\) to be equal to the dimension of the ROB for the control \(m\).
Figure 8: HF formulation. (a)-(b)-(c) interface jump of the solution to (6b). (d)-(e)-(f) interface jump of the solution to (5).
Figure 9: Comparison of the \(H^{1}\) norm and the \(L^{2}\) norm for the regularization term.
#### 6.2.1 Performance for a fixed geometry
We freeze the value of the geometric parameters and we let the Reynolds number vary in the domain \(\mathcal{P}=[50,150]\). We train the local ROMs based on \(n_{\text{train}}=60\) snapshots with equi-spaced parameters in \(\mathcal{P}\), and we assess the performance of the resulting CB-ROM based on \(n_{\text{test}}=10\) randomly-selected out-of-sample parameters. We measure performance of the ROMs in terms of the average out-of-sample relative prediction error for the four components:
\[E_{\text{avg},\,i}:=\frac{1}{n_{\text{test}}}\sum_{\mu\in\mathcal{P}_{\text{ test}}}\frac{\|\mathbf{w}_{i}^{\text{hf}}(\mu)-\widehat{\mathbf{w}}_{i}(\mu)\|_{ \mathcal{X}_{i}}}{\|\mathbf{w}_{i}^{\text{hf}}(\mu)\|_{\mathcal{X}_{i}}},\quad i =1,\cdots,N_{\text{dd}}=4, \tag{40}\]
and the three ports:
\[E_{\text{avg},\,j}^{\text{port}}:=\frac{1}{n_{\text{test}}}\sum_{\mu\in \mathcal{P}_{\text{test}}}\frac{\|\mathbf{s}_{j}^{\text{hf}}(\mu)-\widehat{ \mathbf{s}}_{j}(\mu)\|_{\Gamma_{j}}}{\|\mathbf{s}_{j}^{\text{hf}}(\mu)\|_{ \Gamma_{j}}},\quad j=1,\ldots,N_{\text{f}}=3. \tag{41}\]
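The averages (40) and (41) can be computed with a few lines of code once HF and ROM solutions are available in algebraic form; the Gram matrix below encodes the relevant norm, and the container layout is an assumption of the example.

```python
import numpy as np

def average_relative_error(u_hf, u_rom, gram):
    """Average relative error over the test set, cf. (40)-(41).

    u_hf, u_rom : lists of coefficient vectors, one entry per test parameter,
    gram        : Gram matrix of the norm (||v||^2 = v^T gram v).
    """
    def norm(v):
        return np.sqrt(v @ (gram @ v))
    errors = [norm(a - b) / norm(a) for a, b in zip(u_hf, u_rom)]
    return float(np.mean(errors))
```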
In Figure 13, we illustrate the performance of the ROM when we employ the enrichment strategy discussed in section 4.4. To facilitate comparison, we include dashed lines representing the results obtained without employing ROB enrichment, which correspond to the data presented in Figure 11. Here, the number of additional modes \(n^{\prime}\) (cf. section 4.4) is chosen to be equal to the dimension of the ROB of the control, \(m\). The ROB enrichment strategy significantly reduces the prediction error for the control; the state prediction achieved with ROB enrichment is comparable to the case without ROB enrichment and is hence not reported. We further remark that the enrichment does not increase the number of SQP iterations: to provide a concrete reference, for \(m=10\), SQP converges in six iterations in both cases.
Figure 11: performance for a fixed geometry. Behavior of the error (41) for the three ports (no enrichment).
Figure 12: performance for a fixed geometry. Profile of the two components of the control \(\mathbf{g}\) for one representative parameter value along the port 3 (no enrichment).
Figure 13: performance for a fixed geometry. Behavior of the error (41) for the three ports (with enrichment).
#### 6.2.2 Performance for a parametric geometry
We incorporate the geometric parameters described in section 2.2, along with the Reynolds number. For each junction component in the network, we set \(\alpha\in[\frac{\pi}{8},\frac{\pi}{6}]\); for each channel component, we set \(h_{c}\in[0.1,\,0.3]\); finally, we consider \(\mathrm{Re}\in[50,150]\) with \(\mathrm{Re}_{\mathrm{ref}}=100\). We train the ROMs based on \(n_{\mathrm{train}}=120\) snapshots and assess performance based on \(n_{\mathrm{test}}=10\) randomly-selected out-of-sample parameters. As for the previous test, we analyze the prediction error for both \(\mathbf{w}\) and \(\mathbf{s}\) associated with the different ROMs. Figure 14 illustrates the prediction error \(E_{\mathrm{avg},\,i}\) for the four components, while Figure 15 shows the prediction error \(E_{\mathrm{avg},\,k}^{\mathrm{port}}\) for the three ports. Interestingly, the Galerkin method is as effective as the minimal residual and the Petrov-Galerkin methods. All three ROMs yield a state prediction relative error of approximately \(\mathcal{O}(10^{-4})\) for \(n=20\); on the other hand, the control prediction error is roughly \(\mathcal{O}(10^{-1})\) for the third port, and \(\mathcal{O}(10^{-2})\) for the other two ports, for \(n=20\). In Figure 16, we perform a comparison of ROM errors associated to the three ports, with and without the ROB enrichment strategy outlined in section 4.4. The dashed lines represent the results obtained in the absence of ROB enrichment, which correspond to the data shown in Figure 15. As for the previous test, the ROB enrichment strategy significantly improves the accuracy of the control prediction. Here, the number of additional modes \(n^{\prime}\) (cf. section 4.4) is chosen to be twice as large as the dimension of the ROB for the ports \(m\).
### Localized training and adaptive enrichment
In the previous test cases, a distinct reduced space is employed for each instantiated component: the same configuration is used for both training and assessment. This approach is computationally demanding when dealing with systems that comprise a large number of components; it is also unfeasible in the presence of topology changes. To address this issue, we apply the localized training and adaptive enrichment algorithms developed in section 5.
#### 6.3.1 Application to networks with four components
We apply the localized training strategy of sections 5.2 and 5.3, for the same test set of section 6.2.2. In order to build the reduced space for the control, we consider 60 randomly selected boundary conditions for each connection described in section 5.2; on the other hand, we generate the reduced space for the state using 20 randomly-sampled networks with four components and the reduced space for the control.
Figure 17 presents the prediction error \(E_{\mathrm{avg,\,i}}\) for the four components, while Figure 18 shows the prediction error \(E_{\mathrm{avg,\,k}}^{\mathrm{port}}\) for the three ports; we do not rely on the enrichment of the state space (cf. section 4.4). The results are comparable to those obtained in section 6.2.2 with slight deterioration in accuracy. Figure 19 displays the ROM errors for the three ports using ROB enrichment (\(n^{\prime}=2m\)), as represented by the solid line. The results exhibit significant improvement when compared to those obtained without the use of the state space enrichment, as illustrated by the dashed lines, which correspond to the data shown in Figure 18.
Figure 16: performance for a parametric geometry. Behavior of the error (41) for the three ports (with enrichment).
Figure 15: performance for a parametric geometry. Behavior of the error (41) for the three ports (no enrichment).
Figure 17: localized training for networks with four components. State prediction error for the four sub-domains.
Figure 18: localized training for networks with four components. Control prediction error for the three ports (without enrichment).
#### 6.3.2 Application to networks with ten components
We apply the full training procedure described in Algorithm 1 to \(n_{\text{test}}=10\) randomly selected configurations with ten components. As for the previous test case, we consider independent geometric variations for each instantiated component and we consider \(\text{Re}\in[50,150]\). We only present results for local Galerkin ROMs: the results obtained using minimum residual projection are comparable and are hence omitted.
Figure 20 shows the local relative error for the state and for the control, over the test set for the CB-ROM based on localized training: we use the same dataset considered in section 6.3.1 with port-based enrichment (cf. section 4.4). We observe that the error is roughly \(10\%\) for both state and control and does not decrease as we increase the number of modes.
Figure 21 shows the results for the full application of Algorithm 1. We initialize the algorithm with a ROB of size \(m_{0}=10\) for the control using localized training; we apply the strategy of section 5.3, together with port-based enrichment, to find reduced spaces for the state of size \(n_{0}=10+10\) for each component. Then, we apply adaptive enrichment: we consider \(n_{\text{train}}^{\text{glo}}=50\) global randomly-selected configurations with ten components; we mark \(m_{\mathbf{s}}=1\) port, and \(m_{\mathbf{w}}=1-3\) components of each type (specifically, we mark 1 component with the largest error of each type, along with the 2 adjacent components of the marked port). Then, we augment the bases for state and control with \(n^{\text{glo}}=m^{\text{glo}}=10\) modes. We do not apply the port-based enrichment strategy after each iteration of the adaptive loop (cf. Line 17). In Figure 21, iteration _it_ corresponds to local ROBs of size \(m=10(it+1)\) and \(n=10(it+2)\).
We observe that the enrichment strategy clearly enhances the performance of the CB-ROM. This result empirically demonstrates the importance of adaptive enrichment when dealing with nonlinear PDEs.
Figure 19: localized training for networks with four components. Control prediction error for the three ports (with enrichment).
Figure 20: application to networks with ten components. Boxplots of the out-of-sample error for reduced spaces of several sizes obtained using localized training (without adaptive enrichment).
## 7 Conclusions
We developed and numerically validated a component-based model order reduction procedure for incompressible flows governed by the Navier-Stokes equations. Our point of departure is the optimization-based formulation of [29]: we included an additional control variable \(h\) for the continuity equation that weakly enforces the continuity of pressure at interfaces; furthermore, we modified the regularization term to damp spurious oscillations of the control. We relied on sequential quadratic programming to solve the nonlinear optimization problem: at each iteration of the procedure, we relied on static condensation of the local degrees of freedom to enable trivial parallelism of the local solves and avoid the introduction of Lagrange multipliers. We relied on projection-based (Galerkin and Petrov-Galerkin) ROMs to speed up the solution to the local subproblems and we exploited port reduction to reduce the cost of the global problem. Finally, we adapted the localized training and adaptive enrichment strategy of [36] to build the local approximation spaces without the need for expensive global HF solves.
We illustrated the many pieces of our methodology for a parametric steady Navier-Stokes problem at moderate (\(\mathcal{O}(10^{2})\)) Reynolds number. The new DD formulation enables much tighter control of the discrepancy between the FE monolithic solver and the DD solution. LSPG projection is superior to Galerkin projection in the absence of geometric variability; interestingly, Galerkin and LSPG projection show comparable performance for all the test cases that involve varying geometries. The port-based enrichment of the state space (cf. section 4.4) is key to adequately approximate the control variables. The localized training strategy discussed in this paper leads to poor reconstructions of the state; adaptive enrichment driven by local error indicators is hence necessary to achieve accurate reconstructions.
In the future, we plan to extend our method to a broader class of problems including multi-physics (fluid-structure interaction) and unsteady problems, and to more challenging (higher-Reynolds, three-dimensional) test cases. Towards this end, it is of paramount importance to devise effective hyper-reduction techniques to speed up local solves and also the assembly of the objective function. We also plan to combine first-principles models with data-fitted models to enhance the flexibility of the method.
## Acknowledgements
The work of Lei Zhang is supported by the Fundamental Research Funds for the Central Universities of Tongji University.
## Appendix A Stabilized FE formulation
For completeness, we review the stabilized finite element formulation employed in the numerical results; we refer to [38, 39] for a thorough review of stabilized FE methods for incompressible flows. We denote by \(\{\mathtt{D}_{k}^{i}\}_{k}\) the elements of the mesh of \(\Omega_{i}\); we further denote by \(h_{k,i}\) the size of the \(k\)-th element of the mesh, and by \(r\) the degree of the polynomials.
Figure 21: application to networks with ten components. Boxplots of the out-of-sample error for several iterations of Algorithm 1.
We consider the residual:
\[\mathcal{R}_{i}^{\rm hf}(\mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)=\mathcal{R}_{i}( \mathbf{u}_{i},\,p_{i},\mathbf{v},\,q)+\mathcal{R}_{i}^{\rm sups}(\mathbf{u}_{i}, \,p_{i},\mathbf{v})+\mathcal{R}_{i}^{\rm psp}(\mathbf{u}_{i},\,p_{i},q)+\mathcal{R}_{i}^{\rm lisc}(\mathbf{u}_{i},\mathbf{v}),\quad\forall\,( \mathbf{v},q)\in\mathcal{X}_{i,0}.\] (42a) The form \[\mathcal{R}_{i}\] corresponds to the local residual introduced in (24), while the other three terms are designed to improve the stability of the discrete problem. The form \[\mathcal{R}_{i}^{\rm sups}\] corresponds to the Streamline upwind Petrov-Galerkin (SUPG, [53]) stabilization, which is designed to handle advection-dominated flows, \[\mathcal{R}_{i}^{\rm sups}(\mathbf{u},\,p,\mathbf{v})=\sum_{k}\int_{\mathtt{D}_{k}^{i}}\left(\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}+\nabla p-\nu \Delta\mathbf{u}-\mathbf{f}\right)\cdot\left(\tau_{\rm sups}\,(\mathbf{u}\cdot\nabla) \mathbf{v}\right)\,dx;\] (42b) the form \[\mathcal{R}_{i}^{\rm psp}\] is the Pressure-Stabilized Petrov-Galerkin (PSPG) term [40] that is added to the mass conservation equation to eliminate spurious modes in the pressure solution when considering the same polynomial order for pressure and velocity, \[\mathcal{R}_{i}^{\rm psp}(\mathbf{u},\,p,q)=\sum_{k}\int_{\mathtt{D}_{k}^{i}}\tau_{\rm psp}\left(\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}+\nabla p- \nu\Delta\mathbf{u}-\mathbf{f}\right)\cdot\nabla q\,dx;\] (42c) finally, \[\mathcal{R}_{i}^{\rm lisc}\] is the least-squares incompressibility constraint (LSIC) stabilization term that is added to the momentum equation to improve accuracy and conditioning of the discrete problem [54, 55, 56], \[\mathcal{R}_{i}^{\rm lisc}(\mathbf{u},\mathbf{v})=\sum_{k}\int_{\mathtt{D}_{k}^{i}}\left(\nabla\cdot\mathbf{u}\right)\tau_{\rm lisc}\,\nabla\cdot\mathbf{v }\,dx. \tag{42d}\]
In the numerical experiments, following [57], we select the parameters \(\tau_{\rm sups}\), \(\tau_{\rm psp}\), and \(\tau_{\rm lisc}\) as \(\tau_{\rm sups}=\tau_{\rm psp}=\alpha_{\rm sups}\left[\left(\frac{2|\mathbf{u} _{i}|}{h_{k,i}}\right)^{2}+9\left(\frac{4\nu}{h_{k,i}^{2}}\right)^{2}\right]^{ -\frac{1}{2}}\), \(\tau_{\rm lisc}=\frac{h_{k,i}^{2}}{\rho^{4}\tau_{\rm sups}}\), where \(0\leq\alpha_{\rm sups}\leq 1\) is a constant that enables the adjustment of \(\tau_{\rm sups}\) for higher-order elements. In the PTC formulation (cf. (13)), we modify the coefficients \(\tau_{\rm sups}\) and \(\tau_{\rm psp}\) to account for the time step \(\tau_{\rm sups}=\tau_{\rm psp}=\alpha_{\rm sups}\left[\left(\frac{2}{\Delta t }\right)^{2}+\left(\frac{2|\mathbf{u}_{i}|}{h_{k,i}}\right)^{2}+9\left(\frac{ 4\nu}{h_{k,i}^{2}}\right)^{2}\right]^{-\frac{1}{2}}\).
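For reference, a possible elementwise evaluation of the stabilization parameters is sketched below; the LSIC scaling uses the common choice \(h^{2}/(4\tau)\), which may differ from the prefactor adopted by the authors, and all names are illustrative.

```python
import numpy as np

def stabilization_parameters(u_mag, h, nu, alpha_sups=1.0, dt=None):
    """SUPG/PSPG and LSIC parameters evaluated per element.

    u_mag : local velocity magnitude |u|, h : element size, nu : viscosity.
    If dt is given, the pseudo-transient term (2/dt)^2 of the PTC variant is included.
    The LSIC scaling h^2 / (4 tau) is a common choice and is only illustrative here.
    """
    terms = (2.0 * u_mag / h) ** 2 + 9.0 * (4.0 * nu / h**2) ** 2
    if dt is not None:
        terms = terms + (2.0 / dt) ** 2
    tau_sups = alpha_sups / np.sqrt(terms)
    tau_psp = tau_sups
    tau_lisc = h**2 / (4.0 * tau_sups)
    return tau_sups, tau_psp, tau_lisc
```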
## Appendix B Justification of the pressure jump in the minimization formulation
We consider the configuration depicted in Figure 1 and we assume that the meshes of \(\Omega_{1}\) and \(\Omega_{2}\) are conforming on \(\Gamma_{0}\). We denote by \(\{\phi_{i}\}_{i=1}^{N_{\rm w}}\) the Lagrangian basis associated with the global space \(\mathcal{X}^{\rm hf}\); we denote by \(\mathcal{I}_{1},\mathcal{I}_{2}\) the degrees of freedom associated with the domains \(\Omega_{1}\) and \(\Omega_{2}\), respectively. We further denote by \(\mathcal{I}_{0}=\mathcal{I}_{1}\cap\mathcal{I}_{2}\) the nodes on the interface \(\Gamma_{0}\); we introduce the local and global Dirichlet nodes \(\mathcal{I}_{1,\rm dir},\mathcal{I}_{2,\rm dir}\subset\{1,\ldots,N_{\rm w}\}\) and \(\mathcal{I}_{\rm dir}=\mathcal{I}_{1,\rm dir}\cap\mathcal{I}_{2,\rm dir}\). By construction, \(\mathcal{I}_{\rm dir}\cap\mathcal{I}_{0}=\emptyset\) (cf. Figure 1). Finally, we recall the definition of the global problem
\[\mathbf{w}^{\rm hf}(1:2)|_{\Gamma_{\rm dir}}=\mathbf{\Phi}_{\mathbf{u}_{\rm in}},\quad\mathcal{R}^{\rm hf}(\mathbf{w}^{\rm hf},\mathbf{z})=0\quad\forall\, \mathbf{z}\in\mathcal{X}_{0}^{\rm hf}, \tag{43}\]
and the two local problems
\[\mathbf{w}_{i}^{\rm hf}(1:2)|_{\Gamma_{i,\rm dir}}=\mathbf{\Phi}_{i,\mathbf{u} _{\rm in}},\quad\mathcal{R}_{i}^{\rm hf}(\mathbf{w}_{i}^{\rm hf},\mathbf{z})+(- 1)^{i}\int_{\Gamma_{0}}\mathbf{s}\cdot\mathbf{z}\,dx=0\quad\forall\,\mathbf{z} \in\mathcal{X}_{i,0}^{\rm hf},\quad i=1,2; \tag{44}\]
which depend on the control \(\mathbf{s}\).
Since the meshes are conforming, it is possible to verify that
\[\mathcal{X}_{i}^{\rm hf}=\mathrm{span}\{\phi_{j}|_{\Omega_{i}}:j\in\mathcal{I}_ {i}\},\quad i=1,2.\]
Furthermore, the global residual can be expressed as1
Footnote 1: The proof of (45) exploits the expressions of the residuals (42) and (24). We omit the details. We further emphasize that at the right hand side of (45) we should use notation \(\mathcal{R}_{i}^{\rm hf}(\mathbf{w}^{\rm hf}|_{\Omega_{i}},\mathbf{z}|_{\Omega_{ i}})\) for \(i=1,2\).
\[\mathcal{R}^{\rm hf}(\mathbf{w}^{\rm hf},\mathbf{z})=\mathcal{R}_{1}^{\rm hf}( \mathbf{w}^{\rm hf}|_{\Omega_{1}},\mathbf{z}|_{\Omega_{1}})+\mathcal{R}_{2}^{\rm hf }(\mathbf{w}^{\rm hf}|_{\Omega_{2}},\mathbf{z}|_{\Omega_{2}})\quad\forall\, \mathbf{z}\in\mathcal{X}_{0}^{\rm hf}. \tag{45}\]
Identity (45) implies that
\[\mathcal{R}_{i}^{\rm hf}(\mathbf{w}^{\rm hf},\mathbf{\phi}_{j})=0\quad\forall\,j \in\mathcal{I}_{i}\setminus\mathcal{I}_{0},\quad i=1,2; \tag{46}\]
therefore, since the bilinear form \(a(\mathbf{w},\mathbf{z})=\int_{\Gamma_{0}}\mathbf{w}\cdot\mathbf{z}\,dx\) is coercive in \(\mathcal{Y}:=\mathrm{span}\{\mathbf{\phi}_{j}:j\in\mathcal{I}_{0}\}\), there exists a unique \(\mathbf{s}^{\star}=\mathrm{vec}(\mathbf{g}^{\star},h^{\star})\in\mathcal{Y}\) such that
\[\mathcal{R}_{i}^{\rm hf}(\mathbf{w}^{\rm hf},\mathbf{z})+(-1)^{i}\int_{\Gamma_{ 0}}\mathbf{s}^{\star}\cdot\mathbf{z}\,dx=0\quad\forall\,\mathbf{z}\in\mathcal{ X}_{i,0}^{\rm hf}, \tag{47}\]
for \(i=1,2\).
Exploiting the previous discussion, we can prove the following result.
**Lemma 1**.: _Let \(\mathbf{w}^{\mathrm{hf}}\) be a solution to (43). The following hold._
1. _The triplet_ \((\mathbf{w}^{\mathrm{hf}}|_{\Omega_{1}},\mathbf{w}^{\mathrm{hf}}|_{\Omega_{2}}, \mathbf{s}^{\star})\) _where_ \(\mathbf{s}^{\star}\) _satisfies (_47_) is a global minimum of (_6c_) for_ \(\delta=0\)_._
2. _Any global minimum of (_6c_) for_ \(\delta=0\) _solves (_43_); in particular, if the solution_ \(\mathbf{w}^{\mathrm{hf}}\) _to (_43_) is unique, the optimization problem (_6b_) admits a unique solution for_ \(\delta=0\)_._
Proof.: Equation (47) implies that the triplet \((\mathbf{w}^{\mathrm{hf}}|_{\Omega_{1}},\mathbf{w}^{\mathrm{hf}}|_{\Omega_{2}},\mathbf{s}^{\star})\) satisfies the constraints of (6b) (cf. (44)); since \(\mathbf{w}^{\mathrm{hf}}\) is continuous, the objective function of (6b) (cf. (6c)) is equal to zero for \(\delta=0\). Since the function (6c) is non-negative, we conclude that \((\mathbf{w}^{\mathrm{hf}}|_{\Omega_{1}},\mathbf{w}^{\mathrm{hf}}|_{\Omega_{2}},\mathbf{s}^{\star})\) is a global minimum of (6b).
Exploiting the first part of the proof, we find that any global minimum \((\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s})\) of (6b) satisfies \(\mathcal{F}_{\delta=0}\left(\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s}\right)= \mathcal{F}_{\delta=0}(\mathbf{w}^{\mathrm{hf}}|_{\Omega_{1}},\mathbf{w}^{ \mathrm{hf}}|_{\Omega_{2}},\mathbf{s}^{\star})=0\). This implies that the function \(\mathbf{w}:\Omega\rightarrow\mathbb{R}^{3}\) such that \(\mathbf{w}|_{\Omega_{1}}=\mathbf{w}_{1}\) and \(\mathbf{w}|_{\Omega_{2}}=\mathbf{w}_{2}\) is continuous and belongs to \(\mathcal{X}^{\mathrm{hf}}\). Recalling (44), we have that \(\mathbf{w}\) satisfies \(\mathbf{w}(1:2)|_{\Gamma_{\mathrm{dir}}}=\Phi_{\mathbf{u}_{\mathrm{in}}}\). Furthermore, since \(\mathbf{z}|_{\Omega_{i}}\in\mathcal{X}_{i,0}\) for any \(\mathbf{z}\in\mathcal{X}_{0}^{\mathrm{hf}}\), we have that
\[\mathcal{R}^{\mathrm{hf}}(\mathbf{w},\mathbf{z})\overset{(45)}{=} \mathcal{R}^{\mathrm{hf}}_{1}(\mathbf{w}_{1},\mathbf{z}|_{\Omega_{1}})+ \mathcal{R}^{\mathrm{hf}}_{2}(\mathbf{w}_{2},\mathbf{z}|_{\Omega_{2}}) \overset{(44)}{=}-\int_{\Gamma_{0}}\mathbf{s}\cdot\mathbf{z}\,dx+\int _{\Gamma_{0}}\mathbf{s}\cdot\mathbf{z}\,dx=0,\]
which is (43). We conclude that \(\mathbf{w}\) solves (43). If the solution to (43) is unique, exploiting the previous argument, any solution \((\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{s})\) should satisfy \(\mathbf{w}_{1}=\mathbf{w}^{\mathrm{hf}}|_{\Omega_{1}}\) and \(\mathbf{w}_{2}=\mathbf{w}^{\mathrm{hf}}|_{\Omega_{2}}\). Furthermore, since the solution to (47) is unique, we also find \(\mathbf{s}=\mathbf{s}^{\star}\). In conclusion, (6b) has a unique global minimum.
Lemma 1 illustrates the connection between the monolithic problem and the solution to the optimization problem (6b); the well-posedness analysis in [32] shows that in the continuous limit (i.e., \(N_{\mathbf{w}}\rightarrow\infty\)) \(h^{\star}=0\); nevertheless, in general \(h^{\star}\neq 0\) for finite-dimensional discretizations. To illustrate this fact, we consider the solution to the Stokes problem (see Figure 1 for the definitions of the boundary subdomains)
\[\left\{\begin{array}{ll}-\Delta\mathbf{u}+\nabla p=\mathrm{vec}(x_{1}, \cos(x_{2}^{2}))&\text{in}\;\Omega=(0,1)^{2},\\ \nabla\cdot\mathbf{u}=0&\text{in}\;\Omega,\\ \mathbf{u}|_{\Gamma_{\mathrm{dir}}}=\mathrm{vec}((1-x_{2})x_{2},0),\;\; \mathbf{u}|_{\Gamma_{\mathrm{dir}}^{0}}=0,\;\;(\nabla\mathbf{u}-p\mathbf{I}) \,\mathbf{n}|_{\Gamma_{\mathrm{neu}}}=0,\end{array}\right.\]
based on a P3-P2 Taylor-Hood discretization for three meshes of increasing size. Figure 22(a) shows the finest mesh used for computations, where the blue dots indicate the interface \(\Gamma_{0}\); Figure 22(b) shows the behavior of \(h^{\star}\) for three meshes with \(4^{2},8^{2},16^{2}\) global elements: as expected, the magnitude of \(h^{\star}\) decreases as the mesh is refined.
## Appendix C Justification of the enrichment strategy
We consider the algebraic problem:
\[\min_{\mathbf{w}\in\mathbb{R}^{N},\mathbf{s}\in\mathbb{R}^{M}}\left|\mathbf{ C}\mathbf{w}-\mathbf{b}\right|\;\;\;\mathrm{s.t.}\;\;\mathbf{A}\mathbf{w}+\mathbf{B} \mathbf{s}=\mathbf{f}, \tag{48}\]
Figure 22: justification of the pressure jump; Stokes model problem.
with \(\mathbf{A}\in\mathbb{R}^{N\times N}\), \(\mathbf{B}\in\mathbb{R}^{N\times M}\), \(\mathbf{C}\in\mathbb{R}^{M\times N}\), \(\mathbf{b}\in\mathbb{R}^{M}\), \(\mathbf{f}\in\mathbb{R}^{N}\) and \(N>M\). If \(\mathbf{A}\) is full rank, any solution \((\mathbf{w}^{\star},\mathbf{s}^{\star})\) to (48) satisfies \(\mathbf{w}^{\star}=\mathbf{A}^{-1}\left(\mathbf{f}-\mathbf{B}\mathbf{s}^{ \star}\right)\) and \(\mathbf{s}^{\star}=\arg\min_{\mathbf{s}\in\mathbb{R}^{M}}\left|\mathbf{D} \mathbf{s}-\mathbf{c}\right|\), with \(\mathbf{D}=\mathbf{C}\mathbf{A}^{-1}\mathbf{B}\) and \(\mathbf{c}=\mathbf{C}\mathbf{A}^{-1}\mathbf{f}-\mathbf{b}\). Therefore, provided that \(\mathbf{C}\) is full rank, (48) is well-posed if and only if \(\mathbf{A}^{-1}\mathbf{B}\) is full rank.
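The algebraic reduction just described can be reproduced with dense linear algebra; the sketch below is only meant to make the elimination of \(\mathbf{w}\) explicit and is not representative of an efficient implementation.

```python
import numpy as np

def solve_constrained_least_squares(A, B, C, b, f):
    """Solve (48): min_{w, s} |C w - b|  subject to  A w + B s = f.

    The state is eliminated, w = A^{-1}(f - B s), and the reduced problem
    min_s |D s - c| with D = C A^{-1} B and c = C A^{-1} f - b is solved in the
    least-squares sense.
    """
    Ainv_f = np.linalg.solve(A, f)
    Ainv_B = np.linalg.solve(A, B)
    D = C @ Ainv_B
    c = C @ Ainv_f - b
    s, *_ = np.linalg.lstsq(D, c, rcond=None)
    w = Ainv_f - Ainv_B @ s
    return w, s
```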
Let \(\mathbf{Z}=[\boldsymbol{\xi}_{1},\ldots,\boldsymbol{\xi}_{n}]\in\mathbb{R}^{N \times n}\), \(\mathbf{W}=[\boldsymbol{\eta}_{1},\ldots,\boldsymbol{\eta}_{m}]\in\mathbb{R} ^{M\times m}\) and \(\mathbf{Y}\in\mathbb{R}^{N\times n}\) be orthogonal matrices with \(n<N\) and \(m<M\); exploiting these definitions, we define the projected problem
\[\min_{\boldsymbol{\alpha}\in\mathbb{R}^{n},\boldsymbol{\beta}\in\mathbb{R}^{m }}\left|\overline{\mathbf{C}}\boldsymbol{\alpha}-\mathbf{b}\right|\quad\text{ s.t. }\overline{\mathbf{A}}\boldsymbol{\alpha}+\overline{\mathbf{B}} \boldsymbol{\beta}=\overline{\mathbf{f}}, \tag{49}\]
with \(\overline{\mathbf{C}}=\mathbf{C}\mathbf{Z}\), \(\overline{\mathbf{A}}=\mathbf{Y}^{\top}\mathbf{A}\mathbf{Z}\), \(\overline{\mathbf{B}}=\mathbf{Y}^{\top}\mathbf{B}\mathbf{W}\) and \(\overline{\mathbf{f}}=\mathbf{Y}^{\top}\mathbf{f}\). It is straightforward to prove the following result: here, \(\operatorname{col}(\mathbf{X})\) denotes the linear space spanned by the columns of the matrix \(\mathbf{X}\), while \(\operatorname{orth}(\mathbf{X})\) is the orthogonal matrix that is obtained by orthogonalizing the columns of \(\mathbf{X}\).
**Lemma 2**.: _Let \(\operatorname{col}(\mathbf{A}^{-1}\mathbf{B}\mathbf{W})\subset\operatorname{ col}(\mathbf{Z})\) and let \(\mathbf{Y}=\operatorname{orth}(\mathbf{A}\mathbf{Z})\). Then, \(\overline{\mathbf{A}}^{-1}\overline{\mathbf{B}}\) is full rank, and (49) is well-posed._
Proof.: We first prove that \(\overline{\mathbf{A}}\in\mathbb{R}^{n\times n}\) is invertible. By contradiction, assume that there exists a non-zero \(\boldsymbol{\alpha}\in\mathbb{R}^{n}\) such that \(\overline{\mathbf{A}}\boldsymbol{\alpha}=0\). Since \(\mathbf{Y}=\operatorname{orth}(\mathbf{A}\mathbf{Z})\), there exists \(\boldsymbol{\beta}\in\mathbb{R}^{n}\) such that \(\mathbf{A}\mathbf{Z}\boldsymbol{\alpha}=\mathbf{Y}\boldsymbol{\beta}\). We hence find
\[0=\boldsymbol{\beta}^{\top}\overline{\mathbf{A}}\boldsymbol{\alpha}=(\mathbf{Y }\boldsymbol{\beta})^{\top}\mathbf{A}\mathbf{Z}\boldsymbol{\alpha}=|\mathbf{ A}\mathbf{Z}\boldsymbol{\alpha}|^{2}.\]
The latter implies that \(\mathbf{Z}\boldsymbol{\alpha}\) is a non-trivial element of the kernel of \(\mathbf{A}\): this is in contradiction with the hypothesis that \(\mathbf{A}\) is invertible.
Exploiting the same argument, we prove that \(\overline{\mathbf{B}}\) is full rank. By contradiction, assume that there exists a non-zero \(\boldsymbol{\beta}\in\mathbb{R}^{m}\) such that \(\overline{\mathbf{B}}\boldsymbol{\beta}=0\). Since \(\operatorname{col}(\mathbf{Y})=\operatorname{col}(\mathbf{A}\mathbf{Z})\) and \(\operatorname{col}(\mathbf{A}^{-1}\mathbf{B}\mathbf{W})\subset\operatorname{ col}(\mathbf{Z})\), there exist \(\boldsymbol{\alpha},\boldsymbol{\alpha}^{\prime}\in\mathbb{R}^{n}\) such that \(\mathbf{B}\mathbf{W}\boldsymbol{\beta}=\mathbf{A}\mathbf{Z}\boldsymbol{\alpha}^ {\prime}=\mathbf{Y}\boldsymbol{\alpha}\). We hence find
\[0=\boldsymbol{\alpha}^{\top}\overline{\mathbf{B}}\boldsymbol{\beta}=(\mathbf{ Y}\boldsymbol{\alpha})^{\top}\mathbf{B}\mathbf{W}\boldsymbol{\beta}=|\mathbf{ B}\mathbf{W}\boldsymbol{\beta}|^{2}.\]
The latter implies that \(\mathbf{W}\boldsymbol{\beta}\) is a non-trivial element of the kernel of \(\mathbf{B}\): this is in contradiction with the hypothesis that \(\mathbf{B}\) is full-rank.
Lemma 2 provides a rigorous justification of the enrichment strategy in section 4.4. The matrix \(\widetilde{\mathbf{W}}=-\mathbf{A}^{-1}\mathbf{B}\mathbf{W}\) corresponds to the derivative of the state \(\mathbf{w}\) with respect to the control \(\mathbf{\tilde{s}}=\mathbf{W}\boldsymbol{\beta}\); the columns \(\widetilde{\mathbf{w}}_{1},\ldots,\widetilde{\mathbf{w}}_{m}\) of the matrix \(\widetilde{\mathbf{W}}\) satisfy
\[\mathbf{A}\widetilde{\mathbf{w}}_{k}+\mathbf{B}\boldsymbol{\eta}_{k}=0,\quad k =1,\ldots,m,\]
which corresponds to (24). Similarly, as discussed in [46, 47], the choice of the test space in Lemma 2 is consistent with (20a).
|
2304.00028 | Fracton Self-Statistics | Fracton order describes novel quantum phases of matter that host
quasiparticles with restricted mobility, and thus lies beyond the existing
paradigm of topological order. In particular, excitations that cannot move
without creating multiple excitations are called fractons. Here we address a
fundamental open question -- can the notion of self-exchange statistics be
naturally defined for fractons, given their complete immobility as isolated
excitations? Surprisingly, we demonstrate how fractons can be exchanged, and
show that their self-statistics is a key part of the characterization of
fracton orders. We derive general constraints satisfied by the fracton
self-statistics in a large class of Abelian fracton orders. Finally, we show
the existence of nontrivial fracton self-statistics in some twisted variants of
the checkerboard model and Haah's code, establishing that these models are in
distinct quantum phases as compared to their untwisted cousins. | Hao Song, Nathanan Tantivasadakarn, Wilbur Shirley, Michael Hermele | 2023-03-31T18:00:00Z | http://arxiv.org/abs/2304.00028v2 | # Fracton Self-Statistics
###### Abstract
Fracton order describes novel quantum phases of matter that host quasiparticles with restricted mobility, and thus lie beyond the existing paradigm of topological order. In particular, excitations that cannot move without creating other excitations are called fractons. Here we address a fundamental open question--can the notion of self exchange statistics be naturally defined for fractons, given their complete immobility as isolated excitations? Surprisingly, we demonstrate how fractons can be exchanged, and show their self-statistics is a key part of the characterization of fracton orders. We derive general constraints satisfied by the fracton self-statistics in a large class of abelian fracton orders. Finally, we show the existence of nontrivial fracton self-statistics in some twisted variants of the checkerboard model and Haah's code, establishing that these models are in distinct quantum phases as compared to their untwisted cousins.
Introduction.Particle statistics is a fundamental aspect of quantum mechanics. While elementary particles that compose our universe must be either bosons or fermions due to the topological triviality of double exchanges in 3D space, emergent quasiparticles in 2D quantum many-body systems can exhibit anyonic statistics [1; 2], which are crucial for characterizing conventional topological order. Recently, the theoretical discovery of fracton order in 3D [3; 4; 5; 6; 7; 8; 9] has revealed a new situation where quasiparticles lack their usual freedom to move in space, calling for a reexamination of the notion of statistics [10; 11; 12].
Fracton systems have emerged as an active frontier of quantum physics [13; 14], attracting great interest from condensed matter, quantum information and quantum field theory viewpoints. Fracton order is defined by the emergence of quasiparticles with restricted mobility, including _fractons_, which cannot move without splitting into more than one excitation. Single isolated fractons are thus immobile. Fracton models can also host excitations which are mobile only within planes or lines. Statistical processes involving or interpretable in terms of partially mobile excitations have been studied [15; 16; 17; 18; 19; 20; 21; 22]. Moreover, fractons can be non-Abelian in the sense of carrying protected topological degeneracy [16; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Nevertheless, a fundamental question remains open: does a notion of self exchange statistics make sense for fractons, given their complete immobility as isolated excitations?
In this Letter, we provide a resolution to this puzzle. By allowing the fracton quasiparticle to split into multiple coordinated pieces, it is possible to prepare two arbitrarily separated realizations of the same fracton superselection sector. Moreover, such a pair of excitation patterns can be physically exchanged, giving rise to a fracton self-statistics. Our findings apply to both fracton phases of foliated [35; 36] and fractal [5; 6] nature. Furthermore, we point out instances where the self-statistics of fractons is in fact the only known statistical invariant that distinguishes between two fracton phases. We provide explicit examples by distinguishing the twisted checkerboard models [11] and a twisted Haah's code [37] from their untwisted counterparts. Thus, we show that fracton self-statistics is a fundamental invariant needed to characterize fracton phases of matter.
Foliated fractons.To illustrate the principle, we start with the simplest relevant setting, in which all fractons \(a\) are Abelian [38] and satisfy the fusion constraint
\[a\times\overline{\mathbf{t}_{\mu}a}\times\mathbf{t}_{\mu}\mathbf{t}_{\nu}a\times\overline{\mathbf{t}_{\nu}a}=1 \tag{1}\]
for all \(\mu,\nu\in\{x,y,z\}\) such that \(\mu\neq\nu\), where \(\mathbf{t}_{\mu(\nu)}\) is the elementary lattice vector in the \(\mu\left(\nu\right)\) direction, \(\mathbf{t}a\) denotes the analogue of \(a\) at a \(\mathbf{t}\)-shifted position, and \(\overline{\mathbf{t}a}\) is the antiparticle to the fracton \(\mathbf{t}a\). This constraint guarantees the existence of rectangular [39] membrane operators of arbitrary size that generate quadrupolar configurations of a given species \(a\) at their corners. Fractons satisfying the fusion constraint will be referred to as (Abelian) _foliated_.
A large body of models hosting foliated fractons is known in the literature, including the X-cube, checkerboard, and their many variants [17; 40; 41; 42; 8]. Let us refer to the checkerboard model as a concrete example, for its twisted variants will clearly demonstrate the usage of fracton self-statistics.
The _checkerboard model_[8] is defined on a 3D checkerboard lattice (Fig. 1a) with one qubit per vertex \(\nu\). Its Hamiltonian
\[H_{\mathrm{cb}}=-\sum_{c}\left(A_{c}+B_{c}\right) \tag{2}\]
is a summation over gray cubes \(c\) in Fig. 1(a), where
\[A_{c}:=\prod_{\nu\in c}X_{\nu},\qquad B_{c}:=\prod_{\nu\in c}Z_{\nu}, \tag{3}\]
are products of Pauli \(X\) or \(Z\) operators at the eight vertices of \(c\). This is an exactly solvable gapped model with spectrum labeled by simultaneous eigenvalues \(\{A_{c},B_{c}=\pm 1\}\).
An isolated excitation \(A_{c}=-1\) is an example of a foliated fracton. It can be moved, however, at the expense of fractionalizing into more than one excitation, e.g., \(\eta=\{c\}\rightarrow\eta_{i}=\{c^{\prime}_{i},c^{\prime\prime}_{i},c^{\prime \prime\prime}_{i}\}\) by a rectangular membrane operator; see Fig. 1(b). Therefore, the excitation patterns \(\eta_{1}\) (red), \(\eta_{2}\) (green), \(\eta_{3}\) (blue), and \(\eta\) (orange) are all realizations of the same fracton superselection sector.
Self-statistics of foliated fractons.Generically, a foliated fracton \(a\) is characterized by a set of four self-statistical phases \(\theta^{[xyz]}_{a}\), \(\theta^{[x\overline{yz}]}_{a}\), \(\theta^{[\overline{x}y\overline{z}]}_{a}\), and \(\theta^{[\overline{xy}z]}_{a}\), each corresponding to a "windmill" self-exchange process. The process corresponding to \(\theta^{[xyz]}_{a}\) is depicted in Fig. 2. It begins with an excited state with \(a\) at the center of the windmill, in addition to a triplet of excitations denoted \(\widehat{a}\) that belongs to the same superselection sector as \(a\). The process proceeds with a sequence of six membrane operators (Fig. 3a) whose total action exchanges \(a\) with \(\widehat{a}\), returning to the starting state in such a way that all arbitrary phases cancel. It can be regarded as a fractonic generalization of the T-shaped anyon exchange process [43].
The processes corresponding to \(\theta^{[x\overline{yz}]}_{a}\), \(\theta^{[\overline{x}y\overline{z}]}_{a}\), and \(\theta^{[\overline{xy}z]}_{a}\) are defined analogously, but along windmills [44] related to that of \([xyz]\) by a \(180^{\circ}\)-rotation about the \(x\), \(y\), and \(z\) axis respectively. For instance, the \([x\overline{yz}]\) process consists of the membrane operators located as in Fig. 3(b). Actually, there are eight windmill processes in total; the \(yz\) plane has four quadrants for placing \(M_{2}\), and one has two choices of \(M_{1}\) and \(M_{3}\) (along the \(xy\) and \(zx\) planes respectively) after fixing \(M_{2}\) as depicted in Fig. 3(b) and (c). The four extra processes (denoted \([\overline{xyz}]\), \([\overline{x}yz]\), \([x\overline{y}z]\), and \([xy\overline{z}]\)) are defined on windmills which are the spatial inversions of those for \([xyz]\), \([x\overline{yz}]\), \([\overline{x}y\overline{z}]\), and \([\overline{xy}z]\). However, each process produces the same statistical phase as its spatial inverse. For instance, the equivalence between \([\overline{x}yz]\) and \([x\overline{yz}]\) is demonstrated in Fig. 3(d).
One might expect that \(\theta^{[xyz]}_{a}\), \(\theta^{[x\overline{yz}]}_{a}\), \(\theta^{[\overline{x}y\overline{z}]}_{a}\), and \(\theta^{[\overline{xy}z]}_{a}\) are four independent self-statistical phases. To the contrary, they are subject to a constraint
\[\theta^{[xyz]}_{a}\theta^{[x\overline{yz}]}_{a}\theta^{[\overline{x}y\overline{z}]}_{a}\theta^{[\overline{xy}z]}_{a}=1, \tag{4}\]
leaving only three of them independent in general.
This constraint is most naturally derived by utilizing a quantity \(S^{\mu}_{ab}\) for \(\mu=x,y,z\), defined as the mutual braiding statistics between the dipoles \(a\times\overline{\mathbf{t}_{\mu}^{l}a}\) and \(b\times\overline{\mathbf{t}_{\mu}^{l}b}\) in the large \(l\) limit. The dipoles are planons (i.e., quasiparticles mobile only within a plane). The braiding direction is fixed by \(\mu\) via the right hand rule. See Fig. 4(a).
A proof of Eq. (4) is as follows. If \(a\) is exchanged twice with \(\widehat{a}\), both sets of excitations return to their original position. Thus, the total process can be smoothly deformed into one where one excitation is stationary, and the other braids around it. For instance, we can deform the \([x\overline{yz}]\) windmill process into one along the cyclic "path" in Fig. 4(b). Similarly, the \([\overline{xyz}]\) process (which produces statistics \(\theta^{[\overline{xyz}]}_{a}\equiv\theta^{[xyz]}_{a}\)) can be deformed into the one depicted in Fig. 4(c). If, for convenience, the two deformed exchanges are started and ended with the intermediate state containing excitations \(a\) and \(\widehat{a}^{\prime}\), their composite gives the process in Fig. 4(d). It implies
\[\theta^{[xyz]}_{a}\theta^{[x\overline{yz}]}_{a}=S^{x}_{aa}. \tag{5}\]
Figure 2: The \([xyz]\) windmill process. The starting state has an excitation \(a\) at the center of the “windmill”, along with three other excitations collectively called \(\widehat{a}\) in the same superselection sector as \(a\) following Eq. (1). The process consists of three membrane operators \(M_{1}\), \(M_{2}\), \(M_{3}\), and their inverses, successively moving the four excitations from the corners of the \(yz\) square, to the corners of the \(xy\) square, to the corners of the \(zx\) square, and finally back to the original configuration. The process is designed such that the phase arbitrariness in the choice of \(M_{i}\) is precisely cancelled by the action of \(M_{i}^{\dagger}\). Therefore, the universal statistical phase is well-defined by \(\theta^{[xyz]}_{a}=M_{3}^{\dagger}M_{2}M_{1}^{\dagger}M_{3}M_{2}^{\dagger}M_{1}\).
Armed with this relation, we can now prove Eq. (4) by a \(180^{\circ}\)-rotation of Eq. (5) about the \(y\) axis to obtain \(\theta_{a}^{[\overline{x}y\overline{z}]}\theta_{a}^{[\overline{xy}z]}=(S_{aa}^{x})^ {*}\), and multiplying it with Eq. (5).
The mutual statistics also appear in the following formula for the self-statistics of a fusion product of two fractons, and analogous formulas due to cubic symmetry:
\[\theta_{a\times b}^{[xyz]}=\theta_{a}^{[xyz]}\theta_{b}^{[xyz]}S_ {ab}^{x}S_{ab}^{y}S_{ab}^{z}. \tag{6}\]
See SM [45] for a proof. Note that this relation implies
\[S_{ab}^{x}S_{ab}^{y}S_{ab}^{z}=S_{ba}^{x}S_{ba}^{y}S_{ba}^{z}. \tag{7}\]
It is interesting to note that Eqs. (5) and (6) generalize the constraints \(\theta_{a}^{2}=S_{aa}\) and \(\theta_{a\times b}=\theta_{a}\theta_{b}S_{ab}\) of 2D Abelian topological orders, where \(\theta_{a}\) is the topological spin and \(S\) the topological \(S\)-matrix. For an Abelian planon \(a\) that satisfies the foliation condition Eq. (1), analogous windmill processes are reducible into 2D braidings and the above discussions reduce to these familiar 2D equations.
Now assume a foliated fracton satisfies \(a^{N}=1\). Let us show that its self-statistics is constrained to _discrete_ values for use in distinguishing fracton orders. Note \(S_{aa}^{x}S_{aa}^{y}S_{aa}^{z}=(\theta_{a}^{[xyz]})^{2}\) by virtue of Eqs. (4) and (5). Thus, since \((S_{aa}^{\mu})^{N}=S_{a^{N}a}^{\mu}=1\), we have \((\theta_{a}^{[xyz]})^{2N}=1\). Moreover, applying Eq. (6) recursively gives \((\theta_{a}^{[xyz]})^{N^{2}}=\theta_{a^{N}}^{[xyz]}=1\). Together, these imply that the self-statistics of \(a\) must be integer powers of \(e^{2\pi i/(N\mathrm{gcd}(N,2))}\), in analogy to anyons in 2D.
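The final step of the argument is the elementary identity \(\gcd(2N,N^{2})=N\gcd(N,2)\), which fixes the allowed values of the self-statistics; a two-line numerical check (for illustration only) is:

```python
from math import gcd

# theta^(2N) = theta^(N^2) = 1 implies theta^gcd(2N, N^2) = 1, and gcd(2N, N^2) = N * gcd(N, 2)
assert all(gcd(2 * N, N * N) == N * gcd(N, 2) for N in range(1, 200))
```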
Semionic fractons in twisted checkerboard models.A major application of fracton self-statistics is to distinguish the quantum phases associated with the checkerboard model \(H_{\mathrm{cb}}\) from its twisted variants introduced in Ref. [11]. To illustrate this, we consider seven twisted models, denoted \(H_{\mathrm{cb}}^{x}\), \(H_{\mathrm{cb}}^{y}\), \(H_{\mathrm{cb}}^{z}\), \(H_{\mathrm{cb}}^{xy}\), \(H_{\mathrm{cb}}^{yz}\), \(H_{\mathrm{cb}}^{zx}\), and \(H_{\mathrm{cb}}^{xyz}\) below. Together with \(H_{\mathrm{cb}}\), we will show that the eight models fall into two classes of quantum phases, distinguishable by the presence or absence of semionic fracton self-statistics. Explicit construction of paths that connect models with identical fracton self-statistics is given in SM [45].
First, in \(H_{\mathrm{cb}}\) (Eq. (2)), all excitations (including fractons) exhibit either bosonic (\(+1\)) or fermionic (\(-1\)) statistics. This is because all statistical processes are realizable by tensor products of Pauli operators which only commute or anticommute with each other.
In contrast, \(H_{\mathrm{cb}}^{x}\) illustrates a new phase allowing _semionic_ (\(\pm i\)) fracton self-statistics. Instead of using the formalism in Ref. [11], we specify this model using a non-Pauli stabilizer Hamiltonian
\[H_{\mathrm{cb}}^{x}=-\sum_{c}\left(A_{c}^{x}+B_{c}\right) \tag{8}\]
obtained by replacing \(A_{c}\) in the untwisted model Eq. (2) with a modified term \(A_{c}^{x}\), to have a convenient description of excitations with \((A_{c}^{x})^{2}=1\) and the full spectrum labeled by simultaneous eigenvalues \(\{A_{c}^{x},B_{c}=\pm 1\}\), where \(x\) refers to twisting being associated with \(x\)-edges. Explicitly, we label vertices and cubes by monomials as in Fig. 1(a) and denote finite sets of vertices by polynomials with \(\mathbb{Z}_{2}=\{0,1\}\) coefficients [46]. In this notation,
\[A_{c}^{x}\coloneqq A_{c}\,\phi_{(1+x)\overline{x}c}\,\phi_{(1+x)xc} \tag{9}\]
according to the construction described in SM [45], where \(\ell=(1+x)\,\overline{x}c\) and \(\ell^{\prime}=(1+x)\,xc\) denote vertex pairs that are ends of \(x\)-edges, and
\[\phi_{\ell} \coloneqq(-1)^{\ell_{\ell}^{x}\theta_{\ell}^{x}+\pi_{\ell_{0}^{x} \overline{\ell}_{\ell_{1}}}^{x}+\pi_{\ell_{1}^{x}\overline{\ell}_{\ell_{1}}}^{ x}+\pi_{\ell_{1}^{x}\overline{\ell}_{\ell_{1}}^{x}}^{x}+\pi_{\ell_{1}^{x} \overline{\ell}_{\ell_{1}}^{x}}^{x}+\pi_{\ell_{1}^{x}\overline{\ell}_{\ell_{1}}^ {x}}^{x}}^{x}}\] \[\quad\cdot(-1)^{\ell_{\ell_{\ell_{1}}^{x}\overline{\ell_{1}}}^{x} \left(\ell_{1+x)\overline{\ell_{1}}^{x}+\overline{\ell_{1}}}\cdot\tilde{\ell}^{ -\overline{\ell_{1}}^{x}+\overline{\ell_{1}}^{x}+\overline{\ell_{1}}^{x}} \right.} \tag{10}\]
is a Dijkgraaf-Witten twisting factor, in terms of the shorthand
\[Z_{\kappa}\coloneqq\prod_{v\in\kappa}Z_{v},\qquad n_{\kappa}^{x}\coloneqq \frac{1}{2}\left(1\pm Z_{\kappa}\right), \tag{11}\]
for \(\kappa\) any finite set of vertices.
In \(H_{\mathrm{cb}}^{x}\), one example of semionic fracton is a \(B_{c}=-1\) excitation (denoted \(m_{x}\) below), which has
\[\theta_{m_{x}}^{[xy]}=\theta_{m_{x}}^{[x\overline{y}\overline{z}]}=i\quad \text{and}\quad\theta_{m_{x}}^{[\overline{y}\overline{z}]}=\theta_{m_{x}}^{[ \overline{y}\overline{z}]}=-i. \tag{12}\]
Two derivations of the statistics are given in SM [45]. In one, we construct a modified \(X\) operator that explicitly generates the statistical processes for \(B\) excitations. The modification of \(X\) is required to ensure that no \(A_{c}^{x}\) terms are flipped, and results in the above semionic self-statistics.
We emphasize that in \(H_{\mathrm{cb}}^{x}\) exotic self-statistics (\(\theta\neq\pm 1\)) are solely associated with fractons. Ref. [11] observed that non-fractonic excitations in \(H_{\mathrm{cb}}^{x}\) only exhibit either bosonic or fermionic statistics. This implies that \(H_{\mathrm{cb}}^{x}\) cannot be a tensor product of \(H_{\mathrm{cb}}\) and 2D anyon models containing semions. Therefore, the fact that only fracton self-statistics are able to distinguish the two models demonstrates the novelty of \(H_{\mathrm{cb}}^{x}\) as a distinct phase of matter. We refer to the phase of \(H_{\mathrm{cb}}^{x}\) as a _semionic fracton order_, as characterized by the presence of semionic statistics for only the fracton excitations.
The remaining six models are built by analogy to \(H_{\mathrm{cb}}^{x}\). In the definition of \(H_{\mathrm{cb}}^{x}\), the twisting factor \(\phi_{(1+x)\overline{x}c}\phi_{(1+x)xc}\) in Eq. (9)
Figure 4: Graphic proof of \(\theta_{a}^{[x\overline{y}]}\theta_{a}^{[x\overline{z}]\times 1}=S_{aa}^{x}\). The white arrows denote the direction of braiding and exchange processes. (a) Definition of \(S_{ab}^{x}\). (b) The \([x\overline{y}\overline{z}]\) process (dotted windmill) is deformed into one realized in three steps \(a\to\widehat{\sigma}\), \(\widehat{a}\to a\), and \(\widehat{\sigma}\to\widehat{a}\) using operators supported on the olive, green, and gray areas. The intermediate state \(\widehat{\sigma}\) consists of excitations at the three circles. (c) A process which is equivalent to the \([\overline{y}\overline{z}]\) process and hence produces statistics \(\theta_{a}^{[x\overline{y}]}\equiv\theta_{a}^{[x\overline{y}]}\). (d) A process that braids part of \(\widehat{\sigma}\), along on the gray ribbon, around \(a\). The statistical phase due to the presence of \(a\) is \(S_{aa}^{x}\).
is linked to \(x\)-edges. Its analogue associated with \(y\)-edges (\(z\)-edges) specifies \(H_{\mathrm{cb}}^{y}\) (\(H_{\mathrm{cb}}^{z}\)). Moreover, twisting can be applied to more than one direction simultaneously; for example, \(H_{\mathrm{cb}}^{xy}\) has twisting made along both \(x\)-edges and \(y\)-edges.
Remarkably, despite the six models having different ground states, we discover that: (1) \(H_{\mathrm{cb}}^{y}\), \(H_{\mathrm{cb}}^{z}\), and \(H_{\mathrm{cb}}^{xyz}\) represent the same semionic fracton phase as \(H_{\mathrm{cb}}^{x}\), while (2) \(H_{\mathrm{cb}}^{xy}\), \(H_{\mathrm{cb}}^{yz}\), and \(H_{\mathrm{cb}}^{zx}\) fall within the phase of \(H_{\mathrm{cb}}\). Let us first demonstrate how fracton self-statistics are matched between \(H_{\mathrm{cb}}^{xy}\) and \(H_{\mathrm{cb}}\). In \(H_{\mathrm{cb}}^{xy}\), excitation \(B_{c}=-1\) (denoted \(m_{xy}\)) is a fracton with
\[\begin{split}\theta_{m_{xy}}^{|xy|}&=i\cdot i=-1, \hskip 14.226378pt\theta_{m_{xy}}^{|xy|}=(-i)^{2}=-1,\\ \theta_{m_{xy}}^{|xy|}&=i\cdot(-i)=1,\hskip 14.226378pt \theta_{m_{xy}}^{|xy|}=(-i)\cdot i=1,\end{split} \tag{13}\]
where two twistings cause a cancellation in semionic character. Further, combining \(m_{xy}\) with an \(A\) excitation at relative position \(\overline{xy}\), denoted \(\overline{xy}e\), yields a fracton \(\overline{xy}e\times m_{xy}\) with purely bosonic self-statistics, which can be seen via Eq. (6) and its analogues.
Based on this observation, we in fact found an exact local unitary transformation relating the ground states of \(H_{\mathrm{cb}}^{xy}\) and \(H_{\mathrm{cb}}\) (see SM [45]), thus establishing rigorously that they represent the same phase. Other phase identifications in the above classification can be proved analogously.
_Self-statistics of fractal fractons._ The idea of self-statistics extends to non-foliated fractons [47]. We demonstrate this with Haah's code [5]
\[H_{\mathrm{Haah}}=-\sum_{\lambda\in\Lambda}\left(A_{\lambda}+B_{\lambda} \right), \tag{14}\]
which is an exactly solvable model defined on a cubic lattice with two qubits per vertex, where \(\Lambda=\{x^{i}y^{j}z^{k}\}\) denotes lattice vectors \((i,j,k)\in\mathbb{Z}^{3}\) in monomial form and the \(A\) (\(B\)) terms are translation-related to the representative \(A_{1}\left(B_{1}\right)\) at the origin given in Fig. 5(a). Each \(A_{\lambda}\left(B_{\lambda}\right)\) is a product of eight Pauli \(X\)'s (\(Z\)'s). It is succinct to denote collections of translationally related objects by sums of \(\Lambda\)'s elements. In this notation, \(A_{\lambda}\) and \(B_{\lambda}\) are described by Laurent polynomials with \(\mathbb{Z}_{2}=\{0,1\}\) coefficients [46]:
\[A_{\lambda} =\lambda\cdot\left(\overline{f}_{1},\ \overline{f}_{2},\ 0,\ 0\right), B_{\lambda} =\lambda\cdot\left(0,\ 0,\ \ f_{2},\ \ f_{1}\right), \tag{15}\] \[f_{1} =1+x+y+z, f_{2} =1+xy+yz+zx,\] (16) \[\overline{f}_{1} =1+\overline{x}+\overline{y}+\overline{z}, \overline{f}_{2} =1+\overline{xy}+\overline{yz}+\overline{zx}. \tag{17}\]
where the first (last) two components of \(A_{\lambda}\) and \(B_{\lambda}\) locate Pauli \(X\)'s (\(Z\)'s) for the two qubit species. The bar denotes spatial inversion by \(x\to\overline{x}\equiv x^{-1}\) etc.
Excitations can also be described by polynomials. Applying a Pauli \(Z\) to the first (or second) qubit at the origin excites \(A\)-terms in the pattern \(f_{1}\) (respectively, \(f_{2}\)). Interestingly, one may flip \(A\)-terms purely in the \(yz\) plane [6] by noting
\[(y+z)f_{1}+f_{2}=1+y+y^{2}+z+yz+z^{2}\eqqcolon g. \tag{18}\]
Consider planar fractional moves for visual clarity. The \(yz\)-planar ones are generated by \(g\), allowing \(A\) excitations to travel arbitrarily long distances toward each of the _conic_ directions \(K_{1},K_{2}\), and \(K_{3}\) in Fig. 5(a). Explicitly, for \(l=2^{n}\), one has \(1+g^{l}=y^{l}+y^{2l}+z^{l}+z^{l}y^{l}+z^{2l}\) due to the \(\mathbb{Z}_{2}\) setting of the model. Accordingly, \(1\to 1+g^{l}\) provides an instance of pushing an \(A\)-term excitation by at least \(l\) distance toward \(K_{1}\). It is realizable by fractal-shaped operator \(g^{l-1}(0,0,y+z,1)\), reflecting the excitation being a fracton of fractal nature. See Fig. 5(b). We call each \(K_{i}\) a _mobility cone_ for \(A\) excitations, as defined in SM [45]. The description of \(B\) excitations are analogous but with spatial directions inverted.
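The two polynomial identities used here are elementary to verify. The following short script (an illustrative sketch using our own set-based encoding of \(\mathbb{Z}_{2}\) polynomials as sets of exponent tuples) checks Eq. (18) and the expansion of \(1+g^{l}\) for \(l=4\).

```python
from itertools import product

def mul(P, Q):
    """Multiply two GF(2) polynomials given as sets of (i, j, k) exponent tuples."""
    out = set()
    for a, b in product(P, Q):
        m = tuple(u + v for u, v in zip(a, b))
        out ^= {m}          # XOR: coefficients live in GF(2)
    return out

def add(*polys):
    out = set()
    for P in polys:
        out ^= P
    return out

one = {(0, 0, 0)}
X, Y, Z = {(1, 0, 0)}, {(0, 1, 0)}, {(0, 0, 1)}
f1 = add(one, X, Y, Z)
f2 = add(one, mul(X, Y), mul(Y, Z), mul(Z, X))

# Eq. (18): (y + z) f1 + f2 = 1 + y + y^2 + z + yz + z^2 over GF(2)
g = add(mul(add(Y, Z), f1), f2)
assert g == {(0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 0, 1), (0, 1, 1), (0, 0, 2)}

# 1 + g^l has support {y^l, y^2l, z^l, y^l z^l, z^2l} for l = 2^n (here l = 4)
gl = one
for _ in range(4):
    gl = mul(gl, g)
assert add(one, gl) == {(0, 4, 0), (0, 8, 0), (0, 0, 4), (0, 4, 4), (0, 0, 8)}
print("identities over GF(2) verified")
```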
Based on mobility cones, we categorize fractons of \(H_{\mathrm{Haah}}\) into three types--\(A\), \(B\), and mixed--and define their windmill self-statistical processes. Type-\(A\) (type-\(B\)) are fractons with the mobility cones \(K_{i}\) (respectively, \(-K_{i}\)) for \(i=1,2,3\) shown in Fig. 5(a). The mixed are bound states of type-\(A\) and type-\(B\); they cannot be moved along any individual cone among \(K_{i}\)'s or \(-K_{i}\)'s. Self-statistics is definable using the "windmill" made of mobility cones. See Figs. 5(a) and (c). In \(H_{\mathrm{Haah}}\), non-mixed (i.e., type-\(A\) or type-\(B\)) fractons exhibit purely bosonic self-statistics, since only one type of Pauli is involved.
_Fermionic type-\(A\) fractons in a twisted Haah's code._ To further illustrate the usage of fracton self-statistics, consider a gauge-theoretic variant of Haah's code defined by applying \(H_{\mathrm{Haah}}\) to a Hilbert space that binds a _fermionic_ mode \(\psi_{\lambda}\) to \(A_{\lambda}\) via Gauss's law \(-i\gamma_{\lambda}\tilde{\gamma}_{\lambda}A_{\lambda}=1\), where \(\gamma_{\lambda}\coloneqq\psi_{\lambda}+\psi_{\lambda}^{\dagger}\) and \(\tilde{\gamma}_{\lambda}\coloneqq\frac{1}{i}(\psi_{\lambda}-\psi_{\lambda}^{\dagger})\) are Majorana operators. As detailed in SM [45], the gauge theory emerges from a spin model \(H_{\mathrm{Haah}}^{F}\), namely, the twisted Haah's code proposed in Ref. [37].
Fracton self-statistics enables us to settle the unresolved question of whether \(H_{\mathrm{Haah}}^{F}\) represents a distinct fracton order from the original Haah's code \(H_{\mathrm{Haah}}\). The expectation that the \(A\) excitation becomes fermionic can now be made precise and proved via windmill processes. The operator that excites \(A\)-excitations is modified to \(Z_{\sigma}c_{\sigma}\) due to gauge invariance, where \(Z_{\sigma}\) denotes Pauli \(Z\) on qubit \(\sigma\) while \(c_{\sigma}\) denotes a product of \(\gamma_{\lambda}\)'s that are associated with the \(Z_{\sigma}\)-flipped \(A\) terms. Still, one may wonder whether it is possible to compensate the statistics change by attaching \(B\) excitations to \(A\). Indeed, this is the case for the 2D toric code, and the checkerboard model, which we have shown above. However, it is not allowed here because attach
Figure 5: Fracton’s mobility in the Haah’s code. (a) Top: definition of \(A_{1}\) and \(B_{1}\). They are products of eight Pauli’s. Identity operators \(I\) are omitted when possible. Bottom: mobility cones (on the \(yz\)-plane) for \(A\) and \(B\) excitations. (b) Fractional moves \(1\to\eta_{i}\) of an \(A\) excitation are realized by operators of fractal support. Gray square dots represent operator \((0,0,y+z,1)\) and its translations. (c) A windmill for a composite of type-\(A\) and type-\(B\) fractons.
ing a type-_B_ fracton alters the mobility of \(A\). Thus, the presence of fermionic type-_A_ fractons distinguishes \(H_{\mathrm{Haah}}^{F}\) from \(H_{\mathrm{Haah}}\). See also SM [45] for the discreteness of this self-statistics, which confirms it as a phase distinction.
_Conclusions._ We have shown that it is possible to exchange two realizations of a fracton superselection sector via its fractional mobility. The notion of self-statistics for fractons can thus be introduced, which is essential in characterizing fracton orders. As applications, we studied a family of twisted checkerboard models and a twisted Haah's code, from which we revealed a novel phase of foliated nature--what we call a semionic fracton order--and a new fractal-type order characterized by emergent fermionic fractons. Our work marks a crucial step towards a full "algebraic theory of fractons" yet to be developed.
We thank Sheng-Jie Huang, Juven Wang and especially Ashvin Vishwanath for helpful discussions. The authors are grateful to the Banff International Research Station, where this work began in 2020 at the workshop "Fractons and Beyond." HS also acknowledges discussions with Sung-Sik Lee. HS has been supported by the Natural Sciences and Engineering Research Council of Canada and and the National Natural Science Foundation of China (Grant No. 12047503). NT is supported by the Walter Burke Institute for Theoretical Physics at Caltech. The work of MH is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (BES) under Award number DE-SC0014415. WS is supported by the Simons Collaboration on Ultra-Quantum Matter (UQM), which is a grant from the Simons Foundation (651444). The work of NT, WS and MH also benefited from meetings of the UQM Simons Collaboration supported by Simons Foundation grant number 618615.
|
2309.09796 | On a conjecture of Ramírez Alfonsín and Skałba II | Let $1<c<d$ be two relatively prime integers and $g_{c,d}=cd-c-d$. We
confirm, by employing the Hardy--Littlewood method, a 2020 conjecture of
Ram\'{\i}rez Alfons\'{\i}n and Ska{\l}ba which states that $$#\left\{p\le
g_{c,d}:p\in \mathcal{P}, ~p=cx+dy,~x,y\in \mathbb{Z}_{\geqslant0}\right\}\sim
\frac{1}{2}\pi\left(g_{c,d}\right) \quad (\text{as}~c\rightarrow\infty),$$
where $\mathcal{P}$ is the set of primes, $\mathbb{Z}_{\geqslant0}$ is the set
of nonnegative integers and $\pi(t)$ denotes the number of primes not exceeding
$t$. | Yuchen Ding, Wenguang Zhai, Lilu Zhao | 2023-09-18T14:16:40Z | http://arxiv.org/abs/2309.09796v2 | # On a conjecture of Ramirez Alfonsin and Skalba II
###### Abstract.
Let \(1<c<d\) be two relatively prime integers and \(g_{c,d}=cd-c-d\). We confirm, by employing the Hardy-Littlewood method, a 2020 conjecture of Ramirez Alfonsin and Skalba which states that
\[\#\left\{p\leq g_{c,d}:p\in\mathcal{P},\ p=cx+dy,\ x,y\in\mathbb{Z}_{\geqslant 0 }\right\}\sim\frac{1}{2}\pi\left(g_{c,d}\right)\quad(\text{as }c\to\infty),\]
where \(\mathcal{P}\) is the set of primes, \(\mathbb{Z}_{\geqslant 0}\) is the set of nonnegative integers and \(\pi(t)\) denotes the number of primes not exceeding \(t\).
Key words and phrases:Frobenius-type problems, Hardy-Littlewood method, primes, Siegel-Walfisz theorem 2010 Mathematics Subject Classification: 11N05, 11P55
## 1. Introduction
Let \(1<c<d\) be two relatively prime integers and \(g_{c,d}=cd-c-d\). As early as 1882, Sylvester [6] showed that \(g_{c,d}\) is the largest integer which cannot be represented in the form \(cx+dy\)\((x,y\in\mathbb{Z}_{\geqslant 0}).\) Furthermore, he proved that for any \(0\leq s\leq g_{c,d}\), exactly one of \(s\) and \(g_{c,d}-s\) can be written in the form \(cx+dy\)\((x,y\in\mathbb{Z}_{\geqslant 0})\). As an immediate consequence, we know that exactly half of the integers in the interval \([0,g_{c,d}]\) can be written in the desired form. Actually, Sylvester's results are the first nontrivial case of the diophantine Frobenius problem [4], which asks for the largest integer \(g_{c_{1},...,c_{n}}\) not of the form
\[c_{1}x_{1}+\cdots+c_{n}x_{n}\quad(x_{1},...,x_{n}\in\mathbb{Z}_{\geqslant 0}),\]
provided that \(c_{1},...,c_{n}\) are positive integers with \(\gcd(c_{1},...,c_{n})=1\). There is a vast literature related to the diophantine Frobenius problem. For some of these results, see e.g. the excellent monograph [4] of Ramirez Alfonsin.
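For the reader who wishes to experiment, the following short script (an illustrative sketch whose helper names are ours; it plays no role in the arguments below) verifies Sylvester's statements by brute force for a given coprime pair \(c,d\).

```python
from math import gcd

def representable(n, c, d):
    """Is n = c*x + d*y with integers x, y >= 0?"""
    return any((n - d * y) % c == 0 for y in range(n // d + 1))

def check_sylvester(c, d):
    assert 1 < c < d and gcd(c, d) == 1
    g = c * d - c - d
    assert not representable(g, c, d)          # g itself is not representable
    assert all(representable(n, c, d) for n in range(g + 1, g + c * d))
    # exactly one of s and g - s is representable, for every 0 <= s <= g
    assert all(representable(s, c, d) != representable(g - s, c, d)
               for s in range(g + 1))
    # hence exactly half of the integers in [0, g] are representable
    assert 2 * sum(representable(s, c, d) for s in range(g + 1)) == g + 1

check_sylvester(5, 7)    # g = 23
check_sylvester(12, 25)  # g = 263
print("Sylvester's statements verified for the chosen pairs")
```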
Motivated by Sylvester's theorems, Ramirez Alfonsin and Skalba [5] considered the diophantine Frobenius problem in primes. Precisely, let \(\pi_{c,d}\) be the number of primes not exceeding \(g_{c,d}\) with the form \(cx+dy\)\((x,y\in\mathbb{Z}_{\geqslant 0})\). By a very enlightening argument, Ramirez Alfonsin and Skalba proved that for any \(\varepsilon>0\), there is a constant \(k(\varepsilon)>0\) such that
\[\pi_{c,d}\geqslant k(\varepsilon)\frac{g_{c,d}}{(\log g_{c,d})^{2+\varepsilon}}.\]
On observing the antisymmetry property of the integers with the form \(cx+dy\)\((x,y\in\mathbb{Z}_{\geqslant 0})\) found by Sylvester, they naturally posed the following conjecture.
**Conjecture 1.1** (Ramirez Alfonsin and Skalba).: _Let \(1<c<d\) be two relatively prime integers, then_
\[\pi_{c,d}\sim\frac{\pi(g_{c,d})}{2}\quad(\text{as }c\to\infty),\]
_where \(\pi(t)\) is the number of primes up to \(t\)._
They gave some remarks below Conjecture 1.1: '_In the same spirit as the prime number theorem, this conjecture seems to be out of reach._' They also mentioned that this conjecture presents difficulties of the same flavor as Linnik's problem on the least primes in arithmetic progressions. Recently, the first named author made some progress on Conjecture 1.1. For a real number \(N\geqslant 2\), let \(1<c<d\) be two relatively prime integers satisfying \(cd\leqslant N\). The first named author [2] proved that for all but at most
\[O\left(N(\log N)^{1/2}(\log\log N)^{1/2+\varepsilon}\right)\]
pairs \(c\) and \(d\), we have
\[\pi_{c,d}=\frac{\pi(g_{c,d})}{2}+O\left(\frac{\pi(g_{c,d})}{(\log\log(cd))^{ \varepsilon}}\right).\]
Since
\[\frac{\pi(g_{c,d})}{2}+O\left(\frac{\pi(g_{c,d})}{(\log\log(cd))^{\varepsilon} }\right)\sim\frac{\pi(g_{c,d})}{2}\quad(\text{as }c\to\infty)\]
and the total number of relatively prime pairs \(c,d\) with \(1<c<d\) and \(cd\leqslant N\) is \(\gg N\log N\), the first named author actually showed that Conjecture 1.1 is true for almost all \(c\) and \(d\).
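As a numerical illustration of Conjecture 1.1 (a brute-force sketch with the pairs \((c,d)\) chosen here purely for illustration), one may compare \(\pi_{c,d}\) with \(\pi(g_{c,d})\):

```python
from math import gcd

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def frobenius_prime_ratio(c, d):
    """pi_{c,d} / pi(g_{c,d}) for coprime 1 < c < d."""
    assert 1 < c < d and gcd(c, d) == 1
    g = c * d - c - d
    def representable(n):
        return any((n - d * y) % c == 0 for y in range(n // d + 1))
    primes = primes_up_to(g)
    return sum(representable(p) for p in primes) / len(primes)

for c, d in [(101, 103), (181, 240)]:
    print(c, d, frobenius_prime_ratio(c, d))  # both ratios should be close to 1/2
```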
It is, however, rather surprising that the complete proof of Conjecture 1.1 follows from an application of the classical Hardy-Littlewood method. Perhaps a novel point in our argument is that, in contrast with the usual applications of the Hardy-Littlewood method, only the first coefficient of the 'singular series' contributes to the main term of the asymptotic formula. The idea of the proof presented here is in the same spirit as the one developed in a very recent article of Chen, Yang, and the third named author [1].
Now, let's record our result as the following theorem.
**Theorem 1.1**.: _Suppose that \(d>c\) are two relatively prime integers with \(c\) sufficiently large, then we have_
\[\pi_{c,d}\sim\frac{1}{2}\pi(g_{c,d}),\quad\text{as }c\to\infty.\]
As usual, we shall firstly investigate the following weighted form related to Conjecture 1.1, i.e.,
\[\psi_{c,d}=\sum_{\begin{subarray}{c}n\leq g_{c,d}\\ n=cx+dy\\ x,y\in\mathbb{Z}_{\geqslant 0}\end{subarray}}\Lambda(n),\]
where the von Mangoldt function \(\Lambda(n)\) is defined to be
\[\Lambda(n)=\left\{\begin{aligned} &\log p,&&\text{if }n=p^{\alpha}\text{ for a prime }p\text{ and an integer }\alpha\geq 1;\\ & 0,&&\text{otherwise.}\end{aligned}\right.\]
Theorem 1.1 will be proved via the following weighted formula by a fairly standard transition.
**Theorem 1.2**.: _Suppose that \(d>c\) are two relatively prime integers with \(c\) sufficiently large, then we have_
\[\psi_{c,d}\sim\frac{g_{c,d}}{2},\quad\text{as }c\to\infty.\]
As an incidental by-product of Theorem 1.2, we have the following corollary, which seems to be of some interest.
**Corollary 1.1**.: _Suppose that \(d>c\) are two relatively prime integers with \(c\) sufficiently large, then we have_
\[\sum_{\begin{subarray}{c}y\leq c\\ (y,c)=1\end{subarray}}\psi(dy;c,dy)\sim\frac{g_{c,d}}{2},\quad\text{as }c \to\infty,\]
_where_
\[\psi(N;q,m)=\sum_{\begin{subarray}{c}n\leqslant N\\ n\equiv m\,(\text{mod }q)\end{subarray}}\Lambda(n).\]
## 2. Outline of the proof: an application of the Hardy-Littlewood method
We first fix some basic notations to be used frequently. From now on, we write \(g\) instead of \(g_{c,d}\) for brevity and \(c\) is supposed to be sufficiently large. Let \(Q\) denote a positive integer depending only on \(g\) which shall be decided later. The function \(e(t)\) is used to denote \(e^{2\pi it}\) as usual. Define the major arcs to be
\[\mathfrak{M}(Q)=\bigcup_{1\leq q\leq Q}\bigcup_{\begin{subarray}{c}1\leq a \leq q\\ (a,q)=1\end{subarray}}\left\{\alpha:\left|\alpha-\frac{a}{q}\right|\leq\frac{Q }{qg}\right\} \tag{2.1}\]
We make a further provision that \(Q<(g/2)^{1/3}\) so that the above subsets are pairwise disjoint. In fact, suppose that
\[\left\{\alpha:\left|\alpha-\frac{a}{q}\right|\leq\frac{Q}{qg}\right\}\bigcap \left\{\alpha:\left|\alpha-\frac{a^{\prime}}{q^{\prime}}\right|\leq\frac{Q}{ qg}\right\}\neq\emptyset\]
for some \(\frac{a}{q}\neq\frac{a^{\prime}}{q^{\prime}}\), then
\[\frac{2Q}{g}\geq\frac{2Q}{qg}\geq\left|\frac{a}{q}-\frac{a^{\prime}}{q^{ \prime}}\right|\geq\frac{1}{Q^{2}},\]
which is certainly a contradiction with the provision that \(Q<(g/2)^{1/3}\). In addition, we note that
\[\mathfrak{M}(Q)\subseteq\left[\frac{1}{Q}-\frac{Q}{g},1+\frac{Q}{g}\right]\subseteq\left[\frac{Q}{g},1+\frac{Q}{g}\right].\]
We can now define the minor arcs to be
\[\mathfrak{m}(Q)=\left[\frac{Q+1}{g},1+\frac{Q+1}{g}\right]\setminus\mathfrak{ M}(Q). \tag{2.2}\]
For any real \(\alpha\), let
\[f(\alpha)=\sum_{0\leq n\leq g}\Lambda(n)e(\alpha n)\quad\text{and}\quad h( \alpha)=\sum_{\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(\alpha(cx+dy))\]
By the orthogonality relation, it is clear that
\[\psi_{c,d}=\int_{0}^{1}f(\alpha)h(-\alpha)d\alpha=\int_{\mathfrak{M}(Q)}f( \alpha)h(-\alpha)d\alpha+\int_{\mathfrak{m}(Q)}f(\alpha)h(-\alpha)d\alpha. \tag{2.3}\]
The remaining parts of our paper will be organized as follows: In the next section, we shall give a suitable bound of the integral on the minor arcs which shows that it contributes the error term of Theorem 1.2. For the integral on the major arcs, we have
\[\int_{\mathfrak{M}(Q)}f(\alpha)h(-\alpha)d\alpha =\sum_{1\leq q\leq Q}\sum_{\begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{\frac{q}{q}-\frac{Q}{qg}}^{\frac{a}{q}+\frac{Q}{qg }}f(\alpha)h(-\alpha)d\alpha\] \[=\sum_{1\leq q\leq Q}\sum_{\begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{|\theta|\leq\frac{Q}{qg}}f\left(\frac{a}{q}+\theta \right)h\left(-\frac{a}{q}-\theta\right)d\theta. \tag{2.4}\]
We will prove, in Section 4, that the integral for \(q=1\) in Eq. (2.4) contributes the main term of Theorem 1.2 and that the integrals for \(2\leq q\leq Q\) only contribute to the error terms. In Section 5, we will prove our theorems and corollary.
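For a concrete illustration of the identity (2.3), the following short sketch (purely illustrative; the helper functions are ours and play no role in the proofs) evaluates \(f\) and \(h\) on a grid of \(M=g+2cd+1\) equally spaced points. Since \(f(\alpha)h(-\alpha)\) is a trigonometric polynomial whose frequencies are bounded by \(g+2cd\) in absolute value, this Riemann sum reproduces the integral exactly; moreover, for \(n\leq g\) a representation \(n=cx+dy\) with \(x,y\geq 0\), when it exists, is unique, so the integral indeed equals \(\psi_{c,d}\).

```python
import cmath
from math import gcd, log

def vonmangoldt(n):
    """Lambda(n) = log p if n is a prime power p^k (k >= 1), else 0."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def psi_cd_direct(c, d):
    g = c * d - c - d
    def representable(n):
        return any((n - d * y) % c == 0 for y in range(n // d + 1))
    return sum(vonmangoldt(n) for n in range(2, g + 1) if representable(n))

def psi_cd_via_integral(c, d):
    g = c * d - c - d
    M = g + 2 * c * d + 1        # grid fine enough that the Riemann sum is exact
    e = lambda t: cmath.exp(2j * cmath.pi * t)
    total = 0.0
    for j in range(M):
        alpha = j / M
        f = sum(vonmangoldt(n) * e(n * alpha) for n in range(g + 1))
        h = sum(e(-alpha * (c * x + d * y)) for x in range(d + 1) for y in range(c + 1))
        total += (f * h).real
    return total / M

c, d = 5, 7
assert gcd(c, d) == 1
print(psi_cd_direct(c, d), psi_cd_via_integral(c, d))  # the two values agree (up to rounding)
```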
## 3. Estimates of the minor arcs
The aim of this section is to prove the following proposition.
**Proposition 3.1**.: _For estimates of the minor arcs, we have_
\[\int_{\mathfrak{m}(Q)}f(\alpha)h(-\alpha)d\alpha\ll\frac{g(\log g)^{6}}{Q^{1/ 2}}+g^{4/5}(\log g)^{6}.\]
We need two lemmas listed below.
**Lemma 3.1**.: _We have_
\[\sup_{\alpha\in\mathfrak{m}(Q)}|f(\alpha)|\ll\frac{g(\log g)^{4}}{Q^{1/2}}+g^ {4/5}(\log g)^{4}.\]
Proof.: By the Dirichlet approximation theorem (see e.g. [7, Lemma 2.1]), there exist \(a\in\mathbb{Z}\) and \(q\in\mathbb{Z}^{+}\) such that
\[(a,q)=1,\quad 1\leq q\leq\frac{g}{Q}\quad\text{and}\quad\left|\alpha-\frac{a}{ q}\right|\leq\frac{Q}{qg}\]
for any \(\alpha\in\mathfrak{m}(Q)\). First of all, we show that \(q>Q\) for these \(\alpha\in\mathfrak{m}(Q)\). Suppose the contrary, i.e., \(q\leq Q\), then from \(\left|\alpha-\frac{a}{q}\right|\leq\frac{Q}{qg}\) we know that
\[\alpha\leq\frac{a}{q}+\frac{Q}{qg}\leq\frac{Q}{qg}\leq\frac{Q}{g}<\frac{Q+1}{g }\quad(\text{if }a\leq 0)\]
and
\[\alpha\geq\frac{q+1}{q}-\frac{Q}{qg}=1+\frac{1}{q}\left(1-\frac{Q}{g}\right) \geq 1+\frac{1}{2Q}>1+\frac{Q+1}{g}\quad(\text{if }a\geq q+1),\]
which contradicts Eq. (2.2). It remains to consider the case that \(1\leq a\leq q\leq Q\). In this case, we have \(\alpha\in\mathfrak{M}(Q)\) by the definition of the major arcs, which is still a contradiction. Thus, we have proved that \(q>Q\) for these \(\alpha\in\mathfrak{m}(Q)\). We are now in a position to introduce a fairly remarkable theorem of Vinogradov (see e.g. [7, Theorem 3.1]), which states that for \((a,q)=1\) with \(1\leq q\leq g\) and \(\left|\alpha-\frac{a}{q}\right|\leq\frac{1}{q^{2}}\), we have
\[f(\alpha)\ll\left(\frac{g}{q^{1/2}}+g^{4/5}+g^{1/2}q^{1/2}\right)(\log g)^{4}.\]
Employing this estimate, we deduce from \(Q<q\leq\frac{g}{Q}\) that
\[\sup_{\alpha\in\mathfrak{m}(Q)}|f(\alpha)|\ll\left(\frac{g}{Q^{1/2}}+g^{4/5}+g^{ 1/2}(g/Q)^{1/2}\right)(\log g)^{4}\ll\frac{g(\log g)^{4}}{Q^{1/2}}+g^{4/5}(\log g )^{4}.\]
This completes the proof of Lemma 3.1.
**Lemma 3.2**.: _We have_
\[\int_{0}^{1}|h(-\alpha)|d\alpha\ll(\log g)^{2}.\]
Proof.: Recall that for any real number \(\alpha\) and integers \(N_{1}<N_{2}\), we have
\[\sum_{n=N_{1}+1}^{N_{2}}e(\alpha n)\ll\min\{N_{2}-N_{1},\|\alpha\|^{-1}\}\]
(see e.g. [3, Lemma 4.7]), where \(\|\alpha\|=\min\{|\alpha-n|:n\in\mathbb{Z}\}\), from which it follows that
\[h(-\alpha)=\sum_{x\leq d}e(-\alpha cx)\sum_{y\leq c}e(-\alpha dy)\ll\min\{d,\| c\alpha\|^{-1}\}\min\{c,\|d\alpha\|^{-1}\}.\]
The rest of the proof will be devoted to deriving the following 'elementary estimate'
\[\int_{0}^{1}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\|d\alpha\|^{-1}\}d\alpha\ll( \log g)^{2}, \tag{3.1}\]
which constitutes the main workhorse of the whole article. We make the observation that it is equivalent to prove that
\[\int_{0}^{1/2}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\|d\alpha\|^{-1}\}d\alpha\ll( \log g)^{2}.\]
For \(0\leq\alpha\leq\frac{1}{cd}\), we have the following trivial estimates that
\[\int_{0}^{1/(cd)}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\|d\alpha\|^{-1}\}d\alpha \leq\int_{0}^{1/(cd)}dc\ d\alpha\leq 1.\]
For \(\frac{1}{cd}\leq\alpha\leq\frac{1}{2\sqrt{cd}}\), it is plain that
\[\frac{1}{d}\leq c\alpha\leq\frac{1}{2}\sqrt{\frac{c}{d}}<\frac{1}{2},\quad \text{i.e.,}\quad\|c\alpha\|=c\alpha,\]
from which we deduce that
\[\int_{\frac{1}{cd}}^{\frac{1}{2\sqrt{cd}}}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\| d\alpha\|^{-1}\}d\alpha\leq\int_{\frac{1}{cd}}^{\frac{1}{2\sqrt{cd}}}\frac{1}{ c\alpha}c\ d\alpha\ll\log(cd)\ll\log g.\]
It remains to prove that
\[\int_{\frac{1}{2\sqrt{cd}}}^{1/2}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\|d\alpha\| ^{-1}\}d\alpha\ll(\log g)^{2}.\]
The above interval is contained in the following union of a few disjoint short intervals
\[\left[\frac{1}{2\sqrt{cd}},\frac{1}{2}\right]\subseteq\bigcup_{\left\lfloor \frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor }\left[\frac{\ell}{cd}-\frac{1}{2cd},\frac{\ell}{cd}+\frac{1}{2cd}\right].\]
By the above inclusion relation, it suffices to show that
\[\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd }{2}\right\rfloor}\int_{\frac{\ell}{cd}-\frac{1}{2cd}}^{\frac{\ell}{cd}+\frac{1 }{2cd}}\min\{d,\|c\alpha\|^{-1}\}\min\{c,\|d\alpha\|^{-1}\}d\alpha\ll(\log g)^{ 2}.\]
Making the change of variables \(\alpha=\frac{\ell}{cd}+\theta\), it is equivalent to prove that
\[\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor \frac{cd}{2}\right\rfloor}\int_{-\frac{1}{2cd}}^{\frac{1}{2cd}}\min\left\{d, \left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\}\min\left\{c,\left\|d \theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\ll(\log g)^{2}. \tag{3.2}\]
We separate the proof into four cases.
Case I. For \(\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd }{2}\right\rfloor\) with \(c\nmid\ell\) and \(d\nmid\ell\), we have
\[\left\|c\theta+\frac{\ell}{d}\right\|\geq\frac{1}{2}\left\|\frac{\ell}{d} \right\|\quad\text{and}\quad\left\|d\theta+\frac{\ell}{c}\right\|\geq\frac{1} {2}\left\|\frac{\ell}{c}\right\|\]
since \(|\theta|\leq\frac{1}{2cd}\). Therefore, we obtain that
\[\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq \left\lfloor\frac{cd}{2}\right\rfloor\atop c\nmid\ell\text{ and }d\nmid\ell}\int_{-\frac{1}{2cd}}^{\frac{1}{2cd}}\min \left\{d,\left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\}\min\left\{c,\left\| d\theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\] \[\leq 4\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell \leq\left\lfloor\frac{cd}{2}\right\rfloor\atop c\nmid\ell\text{ and }d\nmid\ell}\int_{-\frac{1}{2cd}}^{\frac{1}{2cd}}\left\|\frac{\ell}{d} \right\|^{-1}\left\|\frac{\ell}{c}\right\|^{-1}d\theta\] \[=\frac{4}{cd}\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor \leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\atop c\nmid\ell\text{ and }d\nmid\ell}\left\|\frac{\ell}{d}\right\|^{-1}\left\|\frac{\ell}{c}\right\|^{-1}. \tag{3.3}\]
Now, by the Euclidean division we can assume that \(\ell=ch+r\) with \(1\leq r\leq c-1\). It then follows that
\[\sum_{\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq \left\lfloor\frac{cd}{2}\right\rfloor\atop c\nmid\ell\text{ and }d\nmid\ell}\left\|\frac{\ell}{c}\right\|^{-1} \leq\sum_{\begin{subarray}{c}0\leq h\leq\frac{d}{2},\ 1\leq r\leq c-1\\ h\neq-c^{-1}r\ (\text{mod }d)\end{subarray}}\left\|\frac{ch+r}{d}\right\|^{-1} \left\|\frac{r}{c}\right\|^{-1}\] \[=\sum_{1\leq r\leq c-1}\left\|\frac{r}{c}\right\|^{-1}\sum_{0\leq h \leq\frac{d}{2},\ h\neq-c^{-1}r\ (\text{mod }d)}\left\|\frac{ch+r}{d}\right\|^{-1},\] \[\leq\sum_{1\leq r\leq c-1}\left\|\frac{r}{c}\right\|^{-1}\sum_{1 \leq r^{\prime}\leq d-1}\left\|\frac{r^{\prime}}{d}\right\|^{-1},\]
where \(c^{-1}c\equiv 1\ (\text{mod }d)\) and the last inequality follows from the fact that
\[ch+r\not\equiv ch^{\prime}+r\ (\text{mod }d)\quad(\text{for }0\leq h\neq h^{ \prime}\leq d/2).\]
It can be seen that
\[\sum_{1\leq r\leq c-1}\left\|\frac{r}{c}\right\|^{-1}\leq 2\sum_{1\leq r\leq \lfloor c/2\rfloor}\left\|\frac{r}{c}\right\|^{-1}=2\sum_{1\leq r\leq\lfloor c /2\rfloor}\frac{c}{r}\ll c\log c\ll c\log g\]
and
\[\sum_{1\leq r^{\prime}\leq d-1}\left\|\frac{r^{\prime}}{d}\right\|^{-1}\leq 2 \sum_{1\leq r^{\prime}\leq\lfloor d/2\rfloor}\left\|\frac{r^{\prime}}{d} \right\|^{-1}=2\sum_{1\leq r^{\prime}\leq\lfloor d/2\rfloor}\frac{d}{r^{\prime }}\ll d\log d\ll d\log g,\]
Combining these bounds with Eq. (3.3), we deduce that
\[\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell \leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\left|\ell\text{ and }d\right|\ell\end{subarray}}\int_{-\frac{1}{2cd}}^{ \frac{1}{2cd}}\min\left\{d,\left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\} \min\left\{c,\left\|d\theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\ll( \log g)^{2}.\]
Case II. For \(\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd }{2}\right\rfloor\) with \(c\nmid\ell\) but \(d|\ell\), we have
\[\left\|d\theta+\frac{\ell}{c}\right\|\geq\frac{1}{2}\left\|\frac{\ell}{c}\right\|\]
and then it follows that
\[\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell \leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\left|\ell\text{ and }d\right|\ell\end{subarray}}\int_{-\frac{1}{2cd}}^{ \frac{1}{2cd}}\min\left\{d,\left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\} \min\left\{c,\left\|d\theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\]
\[\leq 2\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2} \right\rfloor\leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\left|\ell\text{ and }d\right|\ell\end{subarray}}\int_{-\frac{1}{2cd}}^{ \frac{1}{2cd}}d\left\|\frac{\ell}{c}\right\|^{-1}d\theta\] \[=\frac{2}{c}\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd }}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\left|\ell\text{ and }d\right|\ell\end{subarray}}\left\|\frac{\ell}{c}\right\|^{-1}.\]
On writing \(\ell=d\ell^{*}\), we find that
\[\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\nmid\ell\text{ and }d\mid\ell\end{subarray}}\left\|\frac{\ell}{c}\right\|^{-1}\leq\sum_{1\leq\ell^{*}\leq\frac{c}{2}}\left\|\frac{d\ell^{*}}{c}\right\|^{-1}\leq\sum_{1\leq r\leq c-1}\left\|\frac{r}{c}\right\|^{-1}\ll c\log g,\]
from which it follows clearly that
\[\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell \leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c\left|\ell\text{ and }d\right|\ell\end{subarray}}\int_{-\frac{1}{2cd}}^{ \frac{1}{2cd}}\min\left\{d,\left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\} \min\left\{c,\left\|d\theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\ll\log g.\]
Case III. For \(\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\) with \(c|\ell\) but \(d\nmid\ell\), we have
\[\sum_{\begin{subarray}{c}\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell \leq\left\lfloor\frac{cd}{2}\right\rfloor\\ c|\ell\text{ and }d\ell\end{subarray}}\int_{-\frac{1}{2cd}}^{\frac{1}{2cd}}\min \left\{d,\left\|c\theta+\frac{\ell}{d}\right\|^{-1}\right\}\min\left\{c, \left\|d\theta+\frac{\ell}{c}\right\|^{-1}\right\}d\theta\ll\log g.\]
via the same argument as Case II.
Case IV. For \(\left\lfloor\frac{\sqrt{cd}}{2}\right\rfloor\leq\ell\leq\left\lfloor\frac{cd} {2}\right\rfloor\) with \(c|\ell\) and \(d|\ell\), we have \(cd|\ell\) since \((c,d)=1\), which is certainly a contradiction with \(\ell\leq\left\lfloor\frac{cd}{2}\right\rfloor\).
Gathering together Cases I to IV, we have established Eq. (3.2), and hence Eq. (3.1).
This completes the proof of Lemma 3.2.
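Numerically, the logarithmic bound of Lemma 3.2 is already visible for small moduli; the following sketch (an illustration with grid and moduli of our own choosing, not needed for the proof) approximates the integral by a Riemann sum and juxtaposes it with \((\log g)^{2}\).

```python
import cmath
from math import gcd, log

def h_abs_integral(c, d, samples_per_period=40):
    """Riemann-sum approximation of int_0^1 |h(-alpha)| d alpha."""
    assert gcd(c, d) == 1
    M = samples_per_period * c * d      # resolve oscillations on the scale 1/(cd)
    e = lambda t: cmath.exp(2j * cmath.pi * t)
    def h(alpha):
        return (sum(e(-alpha * c * x) for x in range(d + 1))
                * sum(e(-alpha * d * y) for y in range(c + 1)))
    return sum(abs(h(j / M)) for j in range(M)) / M

for c, d in [(5, 7), (11, 13), (17, 30)]:
    g = c * d - c - d
    print(c, d, round(h_abs_integral(c, d), 2), round(log(g) ** 2, 2))
# the integral stays of size comparable to (log g)^2, in line with Lemma 3.2
```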
Proof of Proposition 3.1.: The treatment of the minor arcs benefits from the following trivial estimates
\[\left|\int_{\mathfrak{m}(Q)}f(\alpha)h(-\alpha)d\alpha\right|\leq\sup_{\alpha \in\mathfrak{m}(Q)}|f(\alpha)|\int_{0}^{1}|h(-\alpha)|d\alpha\]
together with Lemma 3.1 and Lemma 3.2.
## 4. Calculations of the major arcs
We would provide, in this section, the asymptotic formula of the integral on major arcs as the following proposition.
**Proposition 4.2**.: _For \(Q<c^{1/3}\), we have_
\[\int_{\mathfrak{M}(Q)}f(\alpha)h(-\alpha)d\alpha=\frac{g}{2}+O\left(\frac{g}{ Q}(\log g)^{2}+gQ^{2}\exp\left(-\kappa_{1}\sqrt{\log g}\right)+dQ^{3}\right).\]
To this end, we first prove several lemmas.
**Lemma 4.3**.: _For any real number \(\theta\), we have_
\[f(\theta)=\sum_{0\leq n\leq g}e(n\theta)+O\left(g(1+|\theta|g)\exp\left(- \kappa_{1}\sqrt{\log g}\right)\right).\]
Proof.: Let \(\rho_{n}=\Lambda(n)-1\). Then for any real number \(\theta\), we have
\[f(\theta)-\sum_{0\leq n\leq g}e(n\theta)=\sum_{0\leq n\leq g}\rho_{n}e(n \theta). \tag{4.1}\]
Integrating by parts, we have
\[\sum_{0\leq n\leq g}\rho_{n}e(n\theta)=e(g\theta)\sum_{0\leq n\leq g}\rho_{n} -2\pi i\theta\int_{0}^{g}\left(\sum_{0\leq n\leq t}\rho_{n}\right)e(t\theta)dt. \tag{4.2}\]
Employing the elaborate form of the prime number theorem (see e.g. [7, Lemma 3.1]), we get
\[\sum_{0\leq n\leq t}\rho_{n}\ll t\exp\left(-\kappa_{1}\sqrt{\log t}\right),\]
where \(\kappa_{1}>0\) is an absolute constant. Inserting the above estimates into Eqs. (4.1) and (4.2), we obtain that for any \(\theta\),
\[f(\theta)=\sum_{0\leq n\leq g}e(n\theta)+O\left(g(1+|\theta|g)\exp\left(-\kappa_ {1}\sqrt{\log g}\right)\right).\]
This completes the proof of Lemma 4.3.
**Lemma 4.4**.: \[\int_{|\theta|\leq\frac{Q}{g}}f\left(\theta\right)h\left(-\theta\right)d\theta =\frac{g}{2}+O\left(\frac{g}{Q}(\log g)^{2}+gQ^{2}\exp\left(-\kappa_{1}\sqrt{ \log g}\right)\right).\]
Proof.: From Lemma 4.3, we have
\[\int_{|\theta|\leq\frac{Q}{g}}f\left(\theta\right)h\left(-\theta \right)d\theta=\int_{|\theta|\leq\frac{Q}{g}}\sum_{0\leq n\leq g}e(n\theta) \sum_{\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta+\mathcal{R}(\theta),\]
where the error term \(\mathcal{R}(\theta)\) can be bounded easily by the trivial estimates as
\[\mathcal{R}(\theta) \ll\int_{|\theta|\leq\frac{Q}{g}}g(1+|\theta|g)\exp\left(-\kappa_ {1}\sqrt{\log g}\right)|h(-\theta)|\ d\theta\] \[\ll\int_{|\theta|\leq\frac{Q}{g}}g(1+|\theta|g)\exp\left(-\kappa_ {1}\sqrt{\log g}\right)g\ d\theta\] \[\ll g^{2}Q\exp\left(-\kappa_{1}\sqrt{\log g}\right)\int_{|\theta |\leq\frac{Q}{g}}1d\theta\] \[\ll gQ^{2}\exp\left(-\kappa_{1}\sqrt{\log g}\right).\]
On noting that
\[\int_{|\theta|\leq\frac{Q}{g}}=\int_{-1/2}^{1/2}-\int_{\frac{Q}{g}}^{1/2}-\int _{-1/2}^{-\frac{Q}{g}},\]
we can obtain
\[\int_{|\theta|\leq\frac{Q}{g}}\sum_{0\leq n\leq g}e(n\theta)\sum_ {\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta=\int_{-1/2}^{1/2}\sum_{0 \leq n\leq g}e(n\theta)\sum_{\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta\] \[+\mathcal{R}_{1}(\theta)+\mathcal{R}_{2}(\theta),\]
where
\[\mathcal{R}_{1}(\theta)=\int_{-1/2}^{-\frac{Q}{g}}\sum_{0\leq n \leq g}e(n\theta)\sum_{\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta\]
and
\[\mathcal{R}_{2}(\theta)=\int_{\frac{Q}{g}}^{1/2}\sum_{0\leq n \leq g}e(n\theta)\sum_{\begin{subarray}{c}0\leq x\leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta.\]
It follows from the periodic property of \(\theta\) that
\[\int_{-1/2}^{1/2}\sum_{0\leq n\leq g}e(n\theta)\sum_{\begin{subarray}{c}0\leq x \leq d\\ 0\leq y\leq c\end{subarray}}e(-\theta(cx+dy))d\theta=\sum_{\begin{subarray}{c}0 \leq n\leq g\\ 0\leq x\leq d,\ 0\leq y\leq c\end{subarray}}\int_{0}^{1}e((n-cx-dy)\theta)d\theta.\]
Here comes the most interesting point of the whole proof! (The Hardy-Littlewood method reduces the diophantine Frobenius problem with prime variable to the original diophantine Frobenius problem.) By the orthogonality relation, the above integral equals \(1\) if \(n=cx+dy\) and \(0\) otherwise. Hence, by the result of Sylvester [6] as we mentioned in the introduction (see also [4, Eq. (2) of the Preface]), we get
\[\sum_{\begin{subarray}{c}0\leq n\leq g\\ 0\leq x\leq d,\ 0\leq y\leq c\end{subarray}}\int_{0}^{1}e((n-cx-dy)\theta)d \theta=\frac{g+1}{2}.\]
For \(\frac{Q}{g}\leq\theta\leq\frac{1}{2}\), using again the following estimates
\[\sum_{0\leq n\leq g}e(n\theta)\ll\min\{g,\|\theta\|^{-1}\}\ll\|\theta\|^{-1}= \frac{1}{\theta}\]
and
\[\sum_{0\leq x\leq d}e(-\theta cx)\ll\min\{d,\|c\theta\|^{-1}\},\quad\sum_{0\leq y\leq c}e(-\theta dy)\ll\min\{c,\|d\theta\|^{-1}\},\]
we deduce that
\[\mathcal{R}_{2}(\theta) \ll\int_{\frac{Q}{g}}^{1/2}\frac{1}{\theta}\min\{d,\|c\theta\|^{- 1}\}\min\{c,\|d\theta\|^{-1}\}d\theta\] \[\leq\frac{g}{Q}\int_{\frac{Q}{g}}^{1/2}\min\{d,\|c\theta\|^{-1} \}\min\{c,\|d\theta\|^{-1}\}d\theta\] \[\leq\frac{g}{Q}(\log g)^{2},\]
where the last inequality follows from Eq. (3.1). The same argument would also lead to the estimate \(\mathcal{R}_{1}(\theta)\ll\frac{g}{Q}(\log g)^{2}\). Collecting the estimates above, we obtain that
\[\int_{|\theta|\leq\frac{Q}{g}}f\left(\theta\right)h\left(-\theta\right)d \theta=\frac{g}{2}+O\left(\frac{g}{Q}(\log g)^{2}+gQ^{2}\exp\left(-\kappa_{1} \sqrt{\log g}\right)\right).\]
This completes the proof of Lemma 4.4.
**Lemma 4.5**.: _For \(Q<c^{1/3}\), we have_
\[\sum_{2\leq q\leq Q}\sum_{\begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{|\theta|\leq\frac{Q}{qg}}f\left(\frac{a}{q}+\theta \right)h\left(-\frac{a}{q}-\theta\right)d\theta\ll dQ^{3}.\]
Proof.: Recall that
\[h(-\alpha)=\sum_{x\leq d}e(-\alpha cx)\sum_{y\leq c}e(-\alpha dy)\ll\min\{d,|| c\alpha||^{-1}\}\min\{c,||d\alpha||^{-1}\},\]
hence we have
\[h\left(-\frac{a}{q}-\theta\right)\ll\min\left\{d,\left\|c\left(-\frac{a}{q}- \theta\right)\right\|^{-1}\right\}\min\left\{c,\left\|d\left(-\frac{a}{q}- \theta\right)\right\|^{-1}\right\}.\]
Since \((a,q)=1\), we have
\[\left\|c\left(-\frac{a}{q}-\theta\right)\right\|\geq\frac{1}{2q}\quad\text{if }\quad q\nmid c\]
and
\[\left\|d\left(-\frac{a}{q}-\theta\right)\right\|\geq\frac{1}{2q}\quad\text{ if}\quad q\nmid d\]
for \(q\geq 2\) and \(|\theta|\leq\frac{Q}{qg}\), provided that \(Q\leq c^{1/3}\). Recall that \((c,d)=1\), thus at least one of the above inequalities holds. Therefore, for all \(2\leq q\leq Q\), \((a,q)=1\) and \(|\theta|\leq\frac{Q}{qg}\) we have
\[h\left(-\frac{a}{q}-\theta\right)\ll qd,\]
from which we can conclude that
\[\sum_{2\leq q\leq Q}\sum_{\begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{|\theta|\leq\frac{Q}{qg}}f\left(\frac{a}{q}+\theta \right)h\left(-\frac{a}{q}-\theta\right)d\theta\ll\sum_{2\leq q\leq Q}\sum_{ \begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{|\theta|\leq\frac{Q}{qg}}gdq\ d\theta\ll dQ^{3}.\]
This completes the proof of Lemma 4.5.
Proof of Proposition 4.2.: Let's turn back to Eq. (2.4):
\[\int_{\mathfrak{M}(Q)}f(\alpha)h(-\alpha)d\alpha=\sum_{1\leq q\leq Q}\sum_{ \begin{subarray}{c}1\leq a\leq q\\ (a,q)=1\end{subarray}}\int_{|\theta|\leq\frac{Q}{qg}}f\left(\frac{a}{q}+\theta \right)h\left(-\frac{a}{q}-\theta\right)d\theta.\]
The term of the above sum for \(a=q=1\) equals
\[\int_{|\theta|\leq\frac{Q}{g}}f\left(1+\theta\right)h\left(-1-\theta\right)d \theta=\int_{|\theta|\leq\frac{Q}{g}}f\left(\theta\right)h\left(-\theta\right) d\theta.\]
Now, our proposition follows immediately from Lemma 4.4 and Lemma 4.5.
## 5. Proofs of Theorem 1.1, Theorem 1.2 and Corollary 1.1
Proof of Theorem 1.2.: By Eq. (2.3), Proposition 3.1, and Proposition 4.2, we have
\[\psi_{c,d}=\frac{g}{2}+O\left(\frac{g}{Q}(\log g)^{2}+gQ^{2}\exp\left(-\kappa _{1}\sqrt{\log g}\right)+dQ^{3}+\frac{g(\log g)^{6}}{Q^{1/2}}+g^{4/5}(\log g)^ {6}\right),\]
where \(Q\leq c^{1/3}\). We choose \(Q=(\log g)^{14}\), so that the error term \(g(\log g)^{6}/Q^{1/2}=g/\log g\) dominates the others; in particular \(dQ^{3}=d(\log g)^{42}\ll g/\log g\) and \(Q\leq c^{1/3}\) as soon as \(c\geq(\log g)^{43}\). Then
\[\psi_{c,d}=\frac{g}{2}+O\left(\frac{g}{\log g}\right), \tag{5.1}\]
provided that \(c\geq(\log g)^{43}\).
To complete the proof, it remains to consider the case \(c\leq(\log g)^{43}\). Following the proof of [2, Page 299], we have
\[\psi_{c,d} =\sum_{\begin{subarray}{c}n=cx+dy\\ n\leq g\\ x,y\in\mathbb{Z}_{\geqslant 0}\end{subarray}}\Lambda(n)\] \[=\sum_{\begin{subarray}{c}1\leq y\leq c\\ (y,c)=1\end{subarray}}\sum_{\begin{subarray}{c}n\equiv dy\,(\text{mod }c)\\ dy\leq n\leq g\end{subarray}}\Lambda(n)+O\left(1\right)\] \[=\sum_{\begin{subarray}{c}1\leq y\leq c\\ (y,c)=1\end{subarray}}\left(\psi(g;c,dy)-\psi(dy;c,dy)\right)+O\left(1\right)\] \[=\psi(g)-\sum_{\begin{subarray}{c}1\leq y\leq c\\ (y,c)=1\end{subarray}}\psi(dy;c,dy)+O\left(1\right). \tag{5.2}\]
Now, for \(c\leq(\log g)^{43}\ll(\log d)^{43}\), by the Siegel-Walfisz theorem we have
\[\sum_{\begin{subarray}{c}1\leq y\leq c\\ (y,c)=1\end{subarray}}\psi(dy;c,dy) =\sum_{\begin{subarray}{c}1\leq y\leq c\\ (y,c)=1\end{subarray}}\left(\frac{dy}{\varphi(c)}+O\left(dy\exp(-\kappa_{2} \sqrt{\log g})\right)\right)\] \[=\frac{1}{2}cd+O\left(g\exp\left(-\kappa_{3}\sqrt{\log g}\right) \right)\] \[=\frac{1}{2}g+O\left(\frac{g}{c}+g\exp\left(-\kappa_{3}\sqrt{ \log g}\right)\right),\]
where \(\kappa_{2}\) and \(\kappa_{3}\) are two positive constants with \(\kappa_{3}<\kappa_{2}\). Since
\[\psi(g)=g+O\left(g\exp\left(-\kappa_{4}\sqrt{\log g}\right)\right)\]
by the prime number theorem, we finally conclude from Eq. (5.2) that
\[\psi_{c,d}=\frac{1}{2}g+O\left(\frac{g}{c}+g\exp\left(-\kappa_{5}\sqrt{\log g }\right)\right) \tag{5.3}\]
for \(c\leq(\log g)^{43}\), where \(\kappa_{5}=\min\{\kappa_{3},\kappa_{4}\}\).
From Eqs. (5.1) and (5.3), we conclude that
\[\psi_{c,d}\sim\frac{1}{2}g,\quad\text{as }c\to\infty.\]
This completes the proof of Theorem 1.2.
Proof of Theorem 1.1.: For \(t\leqslant g\), let
\[\vartheta_{c,d}(t)=\sum_{\begin{subarray}{c}p=cx+dy\\ p\leqslant t\\ x,y\in\mathbb{Z}_{\geqslant 0}\end{subarray}}\log p\quad\text{and}\quad \vartheta_{c,d}=\vartheta_{c,d}(g).\]
Integrating by parts, we obtain that
\[\pi_{c,d}=\sum_{\begin{subarray}{c}p=cx+dy\\ p\leqslant g\\ x,y\in\mathbb{Z}_{\geqslant 0}\end{subarray}}1=\frac{\vartheta_{c,d}}{\log g}+\int_{2}^{g}\frac{\vartheta_{c,d}(t)}{t\log^{2}t}dt. \tag{5.4}\]
By the Chebyshev estimate, we have
\[\vartheta_{c,d}(t)\leqslant\sum_{p\leqslant t}\log p\ll t,\]
from which it follows that
\[\int_{2}^{g}\frac{\vartheta_{c,d}(t)}{t\log^{2}t}dt\ll\int_{2}^{g}\frac{1}{\log^{2}t}dt\ll\frac{g}{(\log g)^{2}}. \tag{5.5}\]
Again, using the Chebyshev estimate, we have
\[\vartheta_{c,d}=\psi_{c,d}+O(\sqrt{g}). \tag{5.6}\]
Thus, by Theorem 1.2 and Eqs. (5.4), (5.5), (5.6), we conclude that
\[\pi_{c,d}=\frac{\psi_{c,d}}{\log g}+O\left(\frac{\sqrt{g}}{\log g}+\frac{g}{(\log g)^{2}}\right)\sim\frac{1}{2}\pi(g),\]
as \(c\to\infty\). This completes the proof of Theorem 1.1.
Proof of Corollary 1.1.: It follows clearly from Theorem 1.2, Eq. (5.2) and the prime number theorem.
## Acknowledgments
The first named author is supported by National Natural Science Foundation of China (Grant No. 12201544), Natural Science Foundation of Jiangsu Province, China (Grant No. BK20210784), China Postdoctoral Science Foundation (Grant No. 2022M710121), the foundations of the projects "Jiangsu Provincial Double-Innovation Doctor Program" (Grant No. JSSCBS20211023) and "Golden Phoenix of the Green City-Yang Zhou" to excellent PhD (Grant No. YZLYJF2020PHD051).
The second named author is supported by the National Natural Science Foundation of China (Grant No. 11971476).
The third named author is supported by the National Key Research and Development Program of China (Grant No. 2021YFA1000700) and National Natural Science Foundation of China (Grant No. 11922113).
|
2305.19650 | Adverbs, Surprisingly | This paper begins with the premise that adverbs are neglected in
computational linguistics. This view derives from two analyses: a literature
review and a novel adverb dataset to probe a state-of-the-art language model,
thereby uncovering systematic gaps in accounts for adverb meaning. We suggest
that using Frame Semantics for characterizing word meaning, as in FrameNet,
provides a promising approach to adverb analysis, given its ability to describe
ambiguity, semantic roles, and null instantiation. | Dmitry Nikolaev, Collin F. Baker, Miriam R. L. Petruck, Sebastian Padó | 2023-05-31T08:30:08Z | http://arxiv.org/abs/2305.19650v1 | # Adverbs, Surprisingly
###### Abstract
This paper begins with the premise that adverbs are neglected in computational linguistics. This view derives from two analyses: a literature review and a novel adverb dataset to probe a state-of-the-art language model, thereby uncovering systematic gaps in accounts for adverb meaning. We suggest that using Frame Semantics for characterizing word meaning, as in FrameNet, provides a promising approach to adverb analysis, given its ability to describe ambiguity, semantic roles, and null instantiation.
## 1 Introduction
Adverbs are the part of speech (POS) that has seen the least attention in (computational) linguistics, likely due to its challenging nature (Conlon and Evens, 1992). As Huddleston and Pullum (2002, 563) state, "the adverb is a [...] residual category [...] to which words are assigned if they do not satisfy the more specific criteria for nouns, verbs, adjectives, prepositions, and conjunctions."
Syntactically, they modify many POSs, except nouns (_eat porridge quickly, hardly noticeable_), or even complete clauses (_Probably, I'll come tomorrow_). They are semantically varied (Thomason and Stalnaker, 1973), ranging from intensifiers/modifiers (_absolutely_, _beautifully_) to temporal and spatial specifications (_yesterday, forward_), to so-called _speaker-oriented adverbs_ yielding inferences about speaker attitudes, beliefs, and evaluations. Finally, adverbs can occupy different positions in sentences, creating complex issues of scoping and ambiguity (Alexiadou, 2004; Payne et al., 2010). Consider the following sentences:1
Footnote 1: Huddleston and Pullum (2002, 575)
1. [label=(0)]
2. 1. [label=(0)]
3. 2. 2. 1.
tured and characterized via the frame elements (i.e. semantic roles) of the frame. Notably, FrameNet mechanisms will account for null-instantiated roles, allowing it to hint at unexpressed content in cases like Example 2b (v. Section 4.2 for details).
1. [label=(2)]
2. 1. [\(\mathtt{S}_{\mathtt{PEAKER}}\) The Minister] **reported** [\(\mathtt{M}_{\mathtt{message}}\) that the cost had exploded].
3. [\(\mathtt{M}_{\mathtt{message}}\) The cost had] **reportedly** [\(\mathtt{M}_{\mathtt{message}}\) exploded].
In such cases specifically, FrameNet considerations of frame element realization help to explain the absence of the Speaker semantic role in 2b.
Plan of the Paper.Section 2 defines the scope of this paper (speaker-oriented adverbs) and shows the lack of accounts for adverbs in NLP through a literature review. Section 3 presents a probing dataset for speaker-oriented adverbs on the basis of which it demonstrates empirically that current large language models do not provide accounts for adverb meaning. Section 4 provides general background information on FrameNet, gives details on the framework's approach to the description of adverb meaning, and suggests its use to improve NLP models. Section 5 concludes the paper.
## 2Scope and Motivation
### Scope
Given the variety and heterogeneity of adverbs, we restrict the empirical scope of this paper to a subclass of them - even though we believe that the conceptual points apply to adverbs generally. We focus on _speaker-oriented adverbs_(Ernst, 2009). This broad class of adverbs, itself comprises several subtypes brought together by their giving rise to a range of inferences about attitudes and beliefs of the speaker, such as epistemic beliefs (Ex. 3), evaluations (Ex. 1 and 4), and speech acts (Ex. 5):
1. [label=(3)]
2. Peter says: "Paul is **certainly** right". \(\models\) Peter is certain that Paul is right.
3. [label=(4)]
4. Peter says: "**Unfortunately**, Paul arrived". \(\models\) Peter is unhappy that Paul arrived.
5. Peter says: "**Frankly**, Paul annoys me." \(\models\) Peter voices his frank opinion.
Structurally, these entailments are similar to entailments that arise from implicative verbs (Karttunen, 1971). As sources of information about how speakers assess states of affairs, they are highly relevant for tasks like opinion mining (Pang and Lee, 2008) and stance detection (Thomas et al., 2006). However, while implicative verbs have received considerable attention in the context of textual entailment (Karttunen, 2012; Lotan et al., 2013), speaker-oriented adverbs have not.
### Treatment of Adverbs in Computational Linguistics
This section summarizes work on adverbs in computational linguistics in the four most relevant areas: WordNets, applications, distributional modeling, and semantic annotation. Section 3 covers large language models separately.
WordNets.Princeton WordNet (WN, version 1.3) (Miller et al., 1990) covers about 4,500 English adverbs, comprising both single words and adverbial multi-word expressions like _a priori_. The information recorded includes senses (although most adverbs are monosemous) and semantic relations: almost all single-word adverbs are linked to the adjectives from which they are derived,and some adverbs have antonyms. However, WN has no information on the adverbs' syntactic or semantic behavior. The approach of corresponding WordNet resources varies substantially: GermanNet, for German, does not treat adverbs at all (Hamp and Feldweg, 1997). In contrast, plWordNet (Maziarz et al., 2016) provides a considerably richer description of adverbs, notably regarding lexical relations, but is only available for Polish.
NLP applications.Apparently, sentiment and emotion analysis are the NLP applications that have paid the most attention to adverbs (Benamara et al., 2007; Dragut and Fellbaum, 2014; Chauhan et al., 2020). Hedge detection, that is, the recognition of expressions that modulate speaker confidence in their statements boasts additional work on adverbs (Jeon and Choe, 2009; Islam et al., 2020). However, these studies, are generally limited to two specific subtypes: scalar adverbs that modify sentiment strength (intensifiers/minimizers: _very/hardly nice_) and adverbs that modify confidence (_certainly/apparently_). Haider et al. (2021) also considers locative and temporal adverbs. Confidence-modifying adverbs form a subtype of the speaker-oriented adverbs addressed here, but existing studies do not offer a general account of these adverbs beyond the requirements of specific tasks.
Studies on structured sentiment and emotion analysis (Barnes et al., 2021; Kim and Klinger, 2018) assume a different perspective. These works
concentrate on defining and modeling the relations between sentiment- and emotion- introducing expressions and their semantic arguments, such as the experiencer of the affect and its target. As the comparison with Example 2 shows, these relations are at times tied to adverb meanings. However, we are not aware of studies in this area that deal specifically with adverbs.
Distributional modeling.A number of studies investigated the interplay between word embeddings and morphology, analyzing similarity by parts of speech [14] or investigating meaning shifts corresponding to morphological derivation [1, 15]. Typically, these studies include adverbs, and not surprisingly find that adverbs behave highly inconsistently.
Semantic annotation.In principle, frameworks for the annotation of (semantic) argument structure are promising sources for information about adverb meaning, but they differ widely in the information that they offer. The PropBank [13] annotation scheme offers a range of modifier roles (ARGM) for the annotation of modifiers, including adverbs. However, the most fitting of these roles, ARGM-ADV, is a "catch-all" category. In addition, the PropBank analysis does not treat adverbs as predicates in their own right and does not assign roles to them. Thus, _fortunately, she accepted_ and _even she accepted_ would receive the same analysis.
In contrast, UCCA [1] explicitly splits adverbs into adverbial modifiers proper (D) and ground elements (G), where the latter expresses the speaker's attitude toward the event. However, UCCA does not make the structural relations explicit either.
AMR [1] offers a more nuanced approach: many adverbs are mapped to their underlying predicates and endowed with complete argument structure,2 while others are interpreted as degree, manner, or time modifiers. However, no provision exists in the representation for speaker-oriented adverbs. To illustrate, the AMR annotation of _thankfully, she accepted the present_ either treats the adverb as describing a general state of affairs (_it is good that she accepted_) or simply omits it.
Footnote 2: For example, AMR treats _sing_ in _sing beautifully_ as the first argument of beautiful-02.
Finally, Frame Semantics [11] offers the conceptual infrastructure to improve on these treatments and avoid their limitations. Section 4 provides justification of this understanding.
## 3 Case Study: Modeling Adverb Meaning as Natural Language Inference
One possibility, so far not mentioned, is that the knowledge inherent in large neural language models might provide a sufficient account of the meaning of (speaker-oriented) adverbs. In that case, at least from the NLP perspective, no (new) specific treatment would be required. However, this state of affairs is not the case, as we show below.
### Creating Probing Datasets
To operationalize "a sufficient account," we ask language models to distinguish between valid and invalid inferences along the lines of Examples 3-5. As input data, we constructed probing examples with inferences for speaker-oriented adverbs.
We examined four classes of adverbs, motivated by current FrameNet frames containing adverbs (see Section 4.3 for details). These are: likelihood adverbs (e.g. _undoubtedly_, _probably_); unattributed-information adverbs (_reported_ly, _allegedly_, _supposedly_); degree adverbs (_at least_, _approximately_); and obviousness adverbs (_blatantly_, _conspicuously_).
We built the datasets from combinations of premises and hypotheses containing such adverbs, formulated as templates with sets of fillers for the adverbs and different participant positions. In this manner, we assessed the LM's capabilities irrespective of specific word choice. We paired each premise with two to four unambiguous hypotheses depending on the adverb class. The premise either implies or contradicts the hypothesis. Table 1 shows an example. Hypothesis 1 negates the premise and constitutes a contradiction. Hypothesis 2 is a valid inference about speaker evaluation; and Hypothesis 3 is a valid inference about the uncertainty inherent in the premise.
We report studies on two datasets with different emphases. We designed the first to be _naturalstic_, based on existing sentences for adverbs in FrameNet. Given the limited size of this dataset, we also created a larger _synthetic_ dataset with simpler, more varied, sentences. The Appendix lists full details on both datasets.
Naturalistic Dataset. As stated, we created this dataset based on sentences in the FrameNet database containing adverbs of the four classes enumerated above. We "templatized" the sentences by treating the position of the adverb as a slot that can be filled by all semantically congruent adverbs from the respective class. In sentences where the subject is a personal name, we also treated the subject position as a slot, which we filled with twenty female and male names popular in the United States. Because of the low number of sentences of each type in the FrameNet database, and because most templates have only one slot, viz. the adverb, the size of this dataset is limited. See Table 3 for example counts by adverb class.
Synthetic Dataset. The goal of this dataset was to test if the performance of the model is robust with regard to the replacement of the main-event description and varying syntactic complexity of the premises and hypotheses. It covers three of the four adverb classes: unattributed-information, degree, and obviousness, where the templates from the first dataset were most restricted. In these templates, subjects are always exchangeable. In addition, we also varied the description of the main action or relation described in the sentence.
Table 2 shows the template set for unattributed-information adverbs. The set of adverbs for this class comprises _reportedly_, _allegedly_, _supposedly_, _apparently_, and _ostensibly_. Fillers of the action slot include both gerund phrases (e.g. _selling the house_) and noun phrases (e.g. _the wedding_). Entailments and contradictions are produced in pairs. For entailments, we test two valid inferences triggered by the adverb. For contradictions, we test embedded clauses with and without negation. Table 5 shows the example count for each input type.
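To make the template expansion concrete, the following minimal Python sketch generates premise-hypothesis pairs from the unattributed-information templates of Table 2. The filler lists and helper names are illustrative assumptions, not the exact sets used to build the dataset.

```python
from itertools import product

# Illustrative fillers; the experiments used larger sets of names, adverbs,
# and actions (both gerund phrases and noun phrases).
SUBJECTS = ["James", "Mary", "Linda", "Robert"]
ADVERBS = ["reportedly", "allegedly", "supposedly", "apparently", "ostensibly"]
ACTIONS = ["the wedding", "selling the house"]

# Hypothesis templates paired with their gold labels (cf. Table 2).
HYPOTHESES = [
    ("{s1} said that {s2} may have opposed {action}", "entailment"),
    ("{s1} is not sure that {s2} opposed {action}", "entailment"),
    ("{s1} is sure that {s2} opposed {action}", "contradiction"),
    ("{s1} is sure that {s2} did not support {action}", "contradiction"),
]

def generate_pairs():
    """Yield (premise, hypothesis, gold_label) triples for probing."""
    for s1, s2, adv, action in product(SUBJECTS, SUBJECTS, ADVERBS, ACTIONS):
        if s1 == s2:
            continue  # keep the two participant slots distinct
        premise = f"{s1} said that {s2} {adv} opposed {action}"
        for template, label in HYPOTHESES:
            yield premise, template.format(s1=s1, s2=s2, action=action), label

if __name__ == "__main__":
    for triple in list(generate_pairs())[:3]:
        print(triple)
```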
### Probing Setup: NLI models
Arguably the best match for these types of datasets is the family of language models optimized for the task of natural-language inference (Storks et al., 2019). Concretely, we evaluated the series of NLI models released by Nie et al. (2020), the SNLI or Stanford Natural Language Inference models. These models carry out a three-way classification between entailment, contradiction, and neutral. The authors fine-tuned their models on a data set created in an iterative, adversarial, human-in-the-loop fashion, designed to remedy the shortcomings of previous NLI datasets (Belinkov et al., 2019). Preliminary experiments with different available base architectures (RoBERTa, ALBERT, BART, ELECTRA, and XLNet) showed that RoBERTa-large3 was the best-performing variant. Thus, we used this model for evaluations. We used our probing datasets solely for evaluation, not for further fine-tuning.
Footnote 3: ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
For analysis, we checked the labels that the model predicted with their corresponding probabilities. In several cases, we performed additional tests to verify whether the adverbs or other properties of the sentence determined the model predictions.
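As a rough illustration of this probing loop, the sketch below scores one premise-hypothesis pair with the checkpoint named in footnote 3 via the HuggingFace transformers library. The label order (entailment, neutral, contradiction) is assumed from that model's released configuration and should be checked against the model card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Checkpoint from footnote 3; assumes the transformers library and weights are available.
MODEL_NAME = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

LABELS = ["entailment", "neutral", "contradiction"]  # order assumed from the model card

def classify(premise: str, hypothesis: str) -> dict:
    """Return the probability of each NLI label for one premise-hypothesis pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0).tolist()
    return dict(zip(LABELS, probs))

print(classify(
    "The celebration had been postponed, ostensibly because of the Gulf War",
    "Someone said that the celebration was postponed because of the Gulf War",
))
```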
### Evaluation on a Naturalistic Dataset
#### 3.3.1 Overall results
Table 3 shows overall results of the SNLI model on the naturalistic dataset for the four adverb classes. The adverb classes are not strictly comparable because they are represented by different input sentences (as described above), which include all types of lexical and syntactic confounds. Nevertheless, our experiments showed two consistent results: (i) there are sets of adverbs for which the model cannot draw correct inferences; (ii) the presence of adverbs increases the difficulty for the model to draw correct inferences in general. What follows is a survey of the evidence for these two claims.
#### 3.3.2 Failure to Understand Adverbs
Degree adverbs. The model does not understand that _at least as big_ is incompatible with _smaller_.
| Premise | The celebration had been postponed, **ostensibly** because of the Gulf War |
| :--- | :--- |
| Hyp 1 | The Gulf War **ostensibly** had no effect on the celebration (contradiction) |
| Hyp 2 | Someone said that the celebration was postponed because of the Gulf War (entailment) |
| Hyp 3 | The Gulf War may have had no effect on the celebration (entailment) |

Table 1: Naturalistic dataset: Probing items
| Premise | SUBJ1 said that SUBJ2 **ADV** opposed ACTION |
| :--- | :--- |
| Hyp 1 | SUBJ1 said that SUBJ2 may have opposed ACTION (entailment) |
| Hyp 2 | SUBJ1 is not sure that SUBJ2 opposed ACTION (entailment) |
| Hyp 3 | SUBJ1 is sure that SUBJ2 opposed ACTION (contradiction) |
| Hyp 4 | SUBJ1 is sure that SUBJ2 did not support ACTION (contradiction) |

Table 2: Synthetic dataset: Probing items
While it correctly labels the pair _Lantau covers nearly twice the area of Hong Kong Island - Lantau is at least as big as Hong Kong Island_ as entailment and the same premise with _Lantau is much smaller than Hong Kong Island_ as contradiction, it considers that this premise also entails _Hong Kong Island is at least as big as Lantau_, which is also a straightforward contradiction.
The quantifier-adverb combination _almost every_ constitutes another weak point of the model. While it correctly labels the pair _Almost all assignments are challenging in different ways_ vs. _Most of the assignments are difficult_ as entailment, it labels _Almost every assignment is a challenge in a different way_ vs. the same hypothesis as neutral.4
Footnote 4: The model answers correctly only when there is a larger lexical overlap, as in _Most of the assignments are challenging_.
Unattributed-information adverbs. The correct analysis of these adverbs is subtle since valid inferences may be expressed in ways that differ from the premise both lexically and syntactically.
Sometimes the model answers incorrectly with extremely high confidence. The example from Table 1 is a case in point. _The Gulf War ostensibly had no effect on the celebration_ is always correctly labeled as contradiction. The _Someone said_... hypothesis is also correctly labelled as entailment with **any** adverb in the premise. Strikingly, the model gives the same result when the adverb is omitted. This suggests that the model does not take the adverb in the premise into account.
The experiments with Hypothesis 3 (cf. Table 1) corroborated that understanding: regardless of the combination of the adverb in the premise and the hypothesis, the model confidently marks the pair as contradiction or neutral with almost zero probability attached to the prediction of entailment. This finding shows that while the model may be able to draw a positive inference from the hearsay adverb (the reported event may have happened), it is completely unaware of the possibility of the negative inference, i.e. that the reported event may not have taken place: 12 times out of 16, the model confidently predicts contradiction.
#### 3.3.3 Adverbs Complicate Inference
In another analysis, we investigate the impact of the sentences' structural complexity on prediction quality. We frequently found that the model correctly inferred when the hypothesis is structurally simple or no adverb is given, but failed when the hypothesis had an embedded clause and the premise had an adverb. Table 4 shows a concrete example, which permits three observations:
1. The model is sensitive to whether the hypothesis contains an embedded clause: the confidence for the correct prediction drops from \(\approx\)1 to \(\approx\)0.8 for all verbs in the no-adverb case.
2. The presence of the adverb is not noticeable with structurally simple hypotheses: the confidence in the correct answer remains \(>\)0.9.
3. The combination of an adverb and an embedded clause can derail the model - paradoxically most so for the verb _support_, where the model was most confident without an adverb. Furthermore, note that an adverb can force the model to change its decision even in the presence of a strong lexical cue. Given the hypothesis _The students were obviously drunk_, the model correctly identifies _The students abhor/forswore/renounced alcohol_ as contradiction. While the model labels _The students abjured alcohol_ as entailment, (perhaps) because of an incorrect analysis of the verb, when we change the hypothesis to _The students were conspicuously drunk_, the model confidently and correctly labels _The students abjured alcohol_ as contradiction.
### Evaluation on a Synthetic Dataset
The results for the application of the same model on the larger synthetic dataset are shown in Table 5. They demonstrate that in general the task of drawing correct inferences from adverbs is very difficult for the model. Instead, the model tends to consistently predict the same relation (entailment / neutral / contradiction) for all sentences for an adverb class. It is able to correctly predict inference for the quantity degree class (_at least two dozen people_ \(\models\) _many people_ and \(\not\models\) _nobody_). However, even syntactically trivial entailments and contradictions in other classes lead to systematic failures. E.g., while the model can correctly identify the inference _James said that Mary reportedly opposed the wedding_ \(\models\) _James said that Mary may have opposed the wedding_,
| Adverb class | Error rate (%) | # sentences |
| :--- | ---: | ---: |
| Likelihood | 2 | 5,880 |
| Unattributed information | 6 | 90 |
| Obviousness | 25 | 35 |
| Degree | 23 | 16 |

Table 3: Naturalistic dataset: SNLI model error rates by adverb class
it fails on the entailment of the type _James is not sure that Mary opposed the wedding_.
Similarly, with obviousness adverbs, while the examples of the type _James blatantly criticized Mary_ \(\models\) _James disparaged Mary_ are easy for the model, entailments like _James tried to disparage Mary_ lead to near-chance performance. In the domain of adverb-modulated relations, while the model seems to do well on entailments (_James is at least twice as rich as Mary_ \(\models\) _James's net worth is at least as big as Mary's_), in fact it does not understand that the relation is not symmetric and therefore cannot correctly identify contradictions (_Mary's net worth is at least as big as James's_).
### Discussion
Taken together, these experiments demonstrate systematic shortcomings in the ability of current large language models to account for adverb meaning, either glossing over adverbs completely or making rather random inferences about their meaning. Arguably, this study only looked at a specific type of language model, and other types of language models might fare better. However, converging evidence from the literature exists.
For instance, Nikolaev and Pado (2023) analyzed sentence transformers, which might be expected to provide the most nuanced understanding of adverbs. Instead, the study found that the sentences' main participants (subjects and objects) primarily determine the semantic similarity of sentence pairs, which is largely independent of adverbs. The paper argues that this behavior arises from the structure of the training data for sentence transformers (online conversations, duplicate questions on WikiAnswers), where sentence pairs labelled as semantically similar
| Verb | Prediction | Hypothesis | obviously | clearly | publicly | blatantly | no adverb |
| :--- | :--- | :--- | ---: | ---: | ---: | ---: | ---: |
| _aid_ | Entailment | Simple | 0.94 | 0.94 | 0.95 | 0.96 | 0.97 |
| | | Complex | 0.60 | 0.62 | 0.70 | 0.71 | 0.85 |
| | Neutral | Simple | 0.05 | 0.05 | 0.05 | 0.04 | 0.02 |
| | | Complex | 0.39 | 0.38 | 0.29 | 0.27 | 0.15 |
| _help_ | Entailment | Simple | 0.92 | 0.92 | 0.92 | 0.95 | 0.97 |
| | | Complex | 0.53 | 0.52 | 0.58 | 0.61 | 0.77 |
| | Neutral | Simple | 0.07 | 0.08 | 0.08 | 0.05 | 0.03 |
| | | Complex | 0.47 | 0.47 | 0.41 | 0.38 | 0.22 |
| _support_ | Entailment | Simple | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| | | Complex | 0.41 | 0.43 | 0.57 | 0.39 | 0.85 |
| | Neutral | Simple | 0.01 | 0.01 | 0.01 | 0.01 | 0 |
| | | Complex | **0.55** | **0.53** | 0.40 | **0.40** | 0.15 |

Table 4: Prediction of NLI model given _Castro ADV backed the rebels_ as premise and _Castro VERBed the rebels_ or _Castro tried to VERB the rebels_ as hypothesis (_simple_ and _complex_ respectively). Boldface indicates wrong model predictions; underline indicates "borderline correct" cases where an incorrect label received a probability > 40%.
| Semantic type | Test | Entailment | Neutral | Contradiction | Error rate (%) | # sentences |
| :--- | :--- | ---: | ---: | ---: | ---: | ---: |
| Unattributed information | Entailment 1 | 70,188 | 12 | 0 | ≈ 0 | 70,200 |
| | Entailment 2 | 134 | 70,066 | 0 | ≈ 100 | 70,200 |
| | Contradiction 1 | 7,940 | 62,260 | 0 | 100 | 70,200 |
| | Contradiction 2 | 567 | 69,633 | 0 | 100 | 70,200 |
| Degree (properties of people) | Entailment | 31,200 | 0 | 0 | 0 | 31,200 |
| | Contradiction | 12,390 | 3,980 | 14,830 | 52 | 31,200 |
| Degree (properties of objects) | Entailment | 840 | 0 | 0 | 0 | 840 |
| | Contradiction | 547 | 0 | 293 | 65 | 840 |
| Degree (quantities) | Entailment | 38,400 | 0 | 0 | 0 | 38,400 |
| | Contradiction | 0 | 0 | 38,400 | 0 | 38,400 |
| Obviousness | Entailment 1 | 54,600 | 0 | 0 | 0 | 54,600 |
| | Entailment 2 | 33,217 | 21,383 | 0 | 39 | 54,600 |
| | Contradiction 1 | 61 | 0 | 54,539 | ≈ 0 | 54,600 |
| | Contradiction 2 | 0 | 1,615 | 52,985 | 3 | 54,600 |

Table 5: Synthetic dataset: Model predictions (cells with correct predictions have gray background) for each template class and error rates.
often have similar sets of main participants (subjects and objects) and can vary widely in other respects.
If a similar bias is at play in the NLI models in the present study, creating larger, richer training sets that involve adverbs in a systematic manner is a way forward. However, given the relative scarcity of adverbs and their complex behavior (cf. Section 1), it seems unlikely that this effect will emerge naturally by pre-training on ever larger datasets. Instead, the evidence provided here indicates that adverb data must be created intentionally. The following section outlines a proposal to do so.
## 4 Describing Adverbs in FrameNet
This section will provide a brief background to FrameNet (Section 4.1), show how FrameNet can be useful for the analysis of adverbs (Section 4.2), survey the data on adverbs contained in the current version of the dataset (Section 4.3), and propose concrete directions for next steps (Section 4.4).
### Background to FrameNet
FrameNet (_FN_, Ruppenhofer et al. 2016) is a research and resource-development project in corpus-based computational lexicography grounded in the theory of _Frame Semantics_ (Fillmore, 1985).
At the heart of the work is the _semantic frame_, a script-like knowledge structure that facilitates inferencing within and across events, situations, states-of-affairs, relations, and objects. FN defines a semantic frame in terms of its _frame elements_ (_FEs_), or participants (and other concepts) in the scene that the frame captures; a _lexical unit_ (LU) is a pairing of a lemma and a frame, characterizing that LU in terms of the frame that it evokes. FN frames may include more than one POS, and FrameNet does not claim that the LUs of a frame are synonymous, merely that they are semantically similar in referring to the same situation. Additionally, FN distinguishes between core FEs and non-core FEs; the former uniquely define a frame and the latter identify concepts that characterize events or situations more generally, such as time and place. To illustrate, Example 6 shows annotation for the verb _BUY_, defined in the Commerce_buy frame, with the FEs Buyer, Seller, Goods, and Money.5
Footnote 5: This paper uses the following typographical conventions: frame names appear in typewriter font; FE names are in small caps; and lexical units are in **BOLD CAPS**.
(6) [Chuck \({}_{\text{Buyer}}\)] **BOUGHT** [a car \({}_{\text{Goods}}\)] [from Jerry \({}_{\text{Seller}}\)] [for \$2,000 \({}_{\text{Money}}\)]
FrameNet annotators label approximately 20 sentences for each LU in each frame; and automatic processes tabulate the results to produce _valence_ descriptions, or _semantic-syntactic combinatorial possibilities_ of each LU. These also include _null-instantiated_ core FEs, i.e. FEs that uniquely define a frame, even when not realized linguistically. Such valence descriptions provide information about meaning-form mappings that are important for natural-language understanding. FrameNet data, or semantic parsers built from them, have proven useful for tasks such as recognizing paraphrases (Ellsworth and Janin, 2007), drawing inferences (Ben Aharon et al., 2010), machine translation (Zhai et al., 2013), question answering (Khashabi et al., 2018), or paraphrasing (Wang et al., 2019).
At present, the FrameNet database (Release 1.7) holds 1,224 frames, defined in terms of 10,478 frame-specific FEs, and 13,686 LUs. Of those lexical units, 61% have _lexicographic_ annotation, i.e. annotation for one target lemma per sentence.
### FrameNet for the Analysis of Adverbs
We now outline how the descriptive devices of FrameNet, as outlined in Section 4.1, can capture the relevant facts about adverb meaning and address the core challenges of adverb classes, ambiguity, inferences, and null instantiation of roles.
**Frames.** Since frame definitions encompass much of the meaning of each LU, many FN frames already offer fine-grained, semantically motivated descriptions of adverb classes. For example, the Emotion_directed frame captures the semantic similarity of _happy_, _happily_, _happiness_, _sad_, and _sadly_ and offers a starting point for the description of emotion-related adverbs, by exploiting the fact that these adverbs evoke the same background knowledge as the corresponding LUs of other parts of speech (Ruppenhofer et al., 2016).
When a lemma is ambiguous, each sense gets mapped to a different frame; each mapping is a separate lexical unit (LU). For instance, Example 1 in Section 1 includes the lemma _happily_, which is ambiguous: In Example 1a, _happily_ is defined in the Luck frame (along with _fortunately_ and _luckily_). The definition of this frame indicates that there is someone, the Protagonist, for whom a particular state of affairs is surprisingly good or
bad. But this sentence does not express the Protagonist; this is a case of null instantiation or NI (see below for details). The other three sentences, Examples 1b-1d, illustrate _happily_ in the Emotion_directed frame. This involves an emotional response of someone, the Experiencer, to a stimulus, the Stimulus FE (here, watching TV), which evokes the emotional response, specifically happiness (recoverable from the definition of the LU _happily_). In these examples, the Experiencer is explicit, so no inference is required (although coreference resolution will be required to resolve the referent of _they_). Example 7 shows the annotations of the sentences in the Luck frame (Ex. 7a) and in the Emotion_directed frame (Ex. 7b):
* (7a) **HAPPILY**, [they watched TV until dinner \({}_{\text{State\_of\_affairs}}\)] (Protagonist: NI)
* (7b) [They \({}_{\text{Experiencer}}\)] **HAPPILY** [watched TV until dinner \({}_{\text{Stimulus}}\)].
Frame Elements. In FrameNet, frame elements are associated with (classes of) inferences (Chang et al., 2002). Such inferences can capture important aspects of adverb meaning, as we have shown in Section 2. The valence patterns for the two senses of _happily_ shown above lead to different inferences via the two sets of frame elements:
* A State_of_affairs is evaluated as good (or bad) [...] for a particular Protagonist.
* An Experiencer [feels or experiences] a particular emotional response to a Stimulus or about a Topic.
While such natural language descriptions were traditionally hard to formalize, the recent advances in "prompting" language models (Shin et al., 2020) have reestablished natural language descriptions as sufficient in many conditions (cf. also our template-based probing dataset in Section 3).
Null instantiation. FrameNet annotates information about the conceptually required "core" semantic roles of a frame even if absent from the text. FN distinguishes three types of null instantiation, one licensed by a construction and the others licensed lexically. FrameNet includes approximately 55,700 NI labels in its annotations; and roughly one-quarter of these omissions are licensed constructionally, with the remaining 75% licensed lexically (Petruck, 2019).
This capability of FrameNet is particularly important for adverbs, notably speaker-oriented adverbs. By definition, these adverbs welcome inferences about the speaker, who is typically not realized unless the statement is part of reported speech or thought: _The father thought: "Happily they are all watching TV."_
Returning to Example 2 (above), 2a illustrates an instantiated Speaker and 2b illustrates a _null-instantiated_ Speaker, a fact that FN records in its database. No other lexical resource used extensively in computational linguistics records such information.
### Current Status of Adverbs in FrameNet
Currently, FrameNet (Release 1.7) contains 217 adverb LUs. Of these adverbs, 158 have annotation, with a total of 2,475 annotations of adverbs on sentences in the database, yielding a mean of 16 annotations per LU. However, like many linguistic phenomena, the annotations exhibit a highly skewed (Zipfian) distribution: 41 of the 158 LUs have only one annotation while nine have more than 50 annotations each. In line with its general principles, FrameNet chose not to define one single frame to capture all speaker-oriented adverbs, instead defining each such adverb according to the specific frame it evokes. At the same time, the class of speaker-oriented adverbs is arguably recoverable from the union of a set of frames all of which support inferences about the speaker by way of describing the speaker through a certain frame element. In this way, the existing frames and their annotations provide a suitable basis for creating data for this (and future) research.
Table 6 shows the four FrameNet frames used to suggest adverbs for the experiment described in Section 3 together with the adverbs listed, illustrative example sentences, and their definitions.
### Next Steps
As the numbers show (Section 4.3), FrameNet has not attended to adverbs either. Perhaps this fact represents a principal incompatibility: the description of adverbs may not welcome using concepts that FN developed for traditional predicates with clear-cut valence. Yet, we believe that including adverbs in FrameNet both follows the spirit of what Fillmore (1985) called "semantics of understanding" and is in line with FrameNet practice. Still, it will require work on two principal levels: theoretical development and practical lexicographic analysis.
At the theoretical level, the FrameNet approach has seen constant development over the 25 years of the project's existence. In initial verb-centered frames, nominals tended to fill FEs, with additional attributes realized as adverbs. Next, FN added deverbal nouns to frames, which largely take the same frame elements. To expand to other types of nouns, like natural kinds and artifacts, FrameNet broadened the concept of FE to encompass _qualia_ such as substance or purpose (Pustejovsky, 1991). Layering the annotation of nouns as FEs of verbs, and modifiers of nouns as _their_ FEs provided a richer semantic representation. Next, FrameNet included adjectives as frame-evoking elements, permitting generalizations over domains like speed or temperature. While most aspects of adverbs description are already present in FrameNet (cf. above), theoretical analysis must make precise the implications of annotating null instantiated adverbial frame elements at scale.
At the practical level, the time is ripe to add many more adverbs to appropriate existing frames and to create new frames for adverbs as needed. The principles of annotating naturally occurring text and extracting valence descriptions for LUs established on the other parts of speech carry over to adverbs. The combination of valence descriptions and annotated instances constitute essential inputs to characterize inferences.
Clearly, the more annotation, the better, but large-scale expert annotation is slow and resource-intensive. Using crowdsourcing, which permits parallelizing (thus, speeding up) annotation, is a possible mitigation. Fossati et al. (2013) and Feizabadi and Pado (2014) demonstrated success with crowdsourcing for frame-semantic annotation when the task is narrowed down appropriately. Substantial promise exists to extract adverb annotation automatically from comparable corpora (Roth and Frank, 2015) and paraphrasing models (Wang et al., 2019). Even for the core task of FrameNet analysis, defining frames, Ustalov et al. (2018) proposed automatic methods. Still, full automation remains hard, given concerns of quality and consistency.
## 5 Conclusion
Conlon and Evens (1992) stated that adverbs are under-researched in computational linguistics; this statement is still true. Adverbs have received attention only in two applications: sentiment analysis and hedging detection. The large language models used here show systematic gaps in capturing adverb meaning. The problem is **not** solved.
We propose that Frame Semantics, as embodied in FrameNet, along with improved techniques to mitigate the annotation effort to extend FN with new frames and annotations, can capture the meaning and implicatures of adverbs. Considering frames as lexical constructions (Fillmore, 2008), this proposal fits well with recent work to combine language models and construction grammar (Weissweiler et al., 2023).
Multiple ways exist for computational modeling to use such a resource, e.g., by extending the coverage of semantic role labellers to a larger range of adverbs, or by fine-tuning language models on large annotated datasets for which our probing dataset can serve as a blueprint.
| Frame name | Adverbial lexical units & example sentence | Definition |
| :--- | :--- | :--- |
| **Unattributed_information** | _allegedly.adv, ostensibly.adv, purportedly.adv, reportedly.adv_ | A speaker presents a Reported fact as deriving from statements (made directly to them or to others) of third parties. |
| **Likelihood** | _certainly, likely, probably, possibly_ | This frame concerns the likelihood of a Hypothetical event occurring, the only core frame element in the frame. |
| **Obviousness** | _audibly.adv, clearly.adv, evidently.adv, noticeably.adv, obviously.adv, visibly.adv_ | A Phenomenon is portrayed in terms of the Degree of likelihood that it will be perceived and known, given the (usually implicit) Evidence, Perceiver, and Circumstances in which it is considered. |
| **Degree** | _a little (bit).adv, a lot.adv, absolutely.adv, as hell.adv, far.adv, fully.adv, in part.adv, kind of.adv, so.adv, somewhat.adv, that.adv, totally.adv, very.adv, way.adv_ **Ex.** I had ABSOLUTELY nothing to say. | LUs in this frame modify a Gradable attribute and describe intensities at the extreme positions on a scale. |

Table 6: FrameNet Frames characterizing Speaker-Oriented Adverbs
## Limitations
We only used English data in the study, so we cannot guarantee that the findings will generalize to other languages (cf. Bender, 2019). The English NLI datasets are, as usual, larger than those for other languages, so we should expect models targeting other languages to have worse performance. We do, however, believe that the challenges of adverbs are comparable in other languages, particularly in typologically similar languages.
## Ethics Statement
The paper argues for a new approach to the treatment of adverbs in the development of resources and applications in NLP. We consider better understanding of language by computational models as not posing a significant societal risk in itself. The dataset used for the computational experiment in Section 3 was created based on the data contained in the publicly available FrameNet corpus and, as far as we are aware, does not contain sensitive elements. Implementation of our proposed methodology has the same risks as any data-driven approach in computational linguistics, but we assume that we cannot safeguard against its possible misuse due to its very general nature.
|
2309.15397 | Short-Term Postsynaptic Plasticity Facilitates Predictive Tracking in
Continuous Attractors | The N-methyl-D-aspartate receptor (NMDAR) is a crucial component of synaptic
transmission, and its dysfunction is implicated in many neurological diseases
and psychiatric conditions. NMDAR-based short-term postsynaptic plasticity
(STPP) is a newly discovered postsynaptic response facilitation mechanism. Our
group has suggested that long-lasting glutamate binding of NMDAR allows input
information to be held for up to 500 ms or longer in brain slices, which
contributes to response facilitation. However, the implications of STPP in the
dynamics of neuronal populations remain unknown. In this study, we implemented
STPP in a continuous attractor neural network (CANN) model to describe the
neural information encoded in neuronal populations. Unlike short-term
facilitation, which is a kind of presynaptic plasticity, the temporally
enhanced synaptic efficacy induced by STPP destabilizes the network state of
the CANN by increasing the mobility of the system. This nontrivial dynamical
effect enables a CANN with STPP to track a moving stimulus predictively, i.e.,
the network state responds to the anticipated stimulus. Our findings reveal a
novel STPP-based mechanism for sensory prediction that can help develop
brain-inspired computational algorithms for prediction. | Huilin Zhao, Sungchil Yang, Chi Chung Alan Fung | 2023-09-27T04:40:51Z | http://arxiv.org/abs/2309.15397v1 | # Short-Term Postsynaptic Plasticity Facilitates Predictive Tracking in Continuous Attractors
###### Abstract
The N-methyl-D-aspartate receptor (NMDAR) is a crucial component of synaptic transmission, and its dysfunction is implicated in many neurological diseases and psychiatric conditions. NMDAR-based short-term postsynaptic plasticity (STPP) is a newly discovered postsynaptic response facilitation mechanism. Our group has suggested that long-lasting glutamate binding of NMDAR allows input information to be held for up to 500 ms or longer in brain slices, which contributes to response facilitation. However, the implications of STPP in the dynamics of neuronal populations remain unknown. In this study, we implemented STPP in a continuous attractor neural network (CANN) model to describe the neural information encoded in neuronal populations. Unlike short-term facilitation, which is a kind of presynaptic plasticity, the temporally enhanced synaptic efficacy induced by STPP destabilizes the network state of the CANN by increasing the mobility of the system. This nontrivial dynamical effect enables a CANN with STPP to track a moving stimulus predictively, i.e., the network state responds to the anticipated stimulus. Our findings reveal a novel STPP-based mechanism for sensory prediction that can help develop brain-inspired computational algorithms for prediction.
Computational Models, Synaptic Plasticity, Attractor Models, Predictive Coding, NMDA Receptors
## 1 Introduction
The N-methyl-D-aspartate receptor (NMDAR) is a Ca\({}^{2+}\)-permeable, ligand-gated ion channel found mainly in the postsynaptic membrane of neurons, facilitating synaptic transmission (Hunt and Castillo, 2012; Paoletti et al., 2013; Yao et al., 2022). It consists of two GluN1s and two GluN2s (GluN2A-D) and becomes active when two glutamates simultaneously bind to two GluN2s (Monyer et al., 1992; Paoletti et al., 2013; Vyklicky et al., 2014). When activated, it produces a regenerative Ca\({}^{2+}\) spike, which contributes to signal integration occurring in postsynaptic dendrites (Schiller et al., 2000; Branco and Hausser, 2011; Yang et al., 2015; Noh et al., 2019). The heterogeneity in NMDAR subunits generates the diversity of its properties and functions (Paoletti et al., 2013). NMDARs are crucial for neurotransmission and neuronal communication in the nervous system (Paoletti and Neyton, 2007; Hansen et al., 2018). Their dysfunction resulting from hyperactivity, hypofunction, abnormal subunit expression, altered receptor trafficking or localization may contribute to a variety of neurological diseases and psychiatric conditions (Zhou and Sheng, 2013), such as Huntington's disease (Burtscher et al., 2021), Alzheimer's disease (Liu et al., 2019), depression (Marsden, 2011; Adell, 2020), schizophrenia
(Lisman et al., 2008; Nakazawa and Sapkota, 2020; Adell, 2020), and ischemic stroke (Chen et al., 2008). NMDARs also play an essential role in synaptic plasticity, which refers to the strengthening or weakening of electrical postsynaptic responses over time in response to past synaptic activities (Citri and Malenka, 2007). Furthermore, NMDARs profoundly influence synaptic functions that underlie high-level cognitive functions (Paoletti et al., 2013; Bertocchi et al., 2021). For example, selective modulation of subunits of NMDARs impaired long-term potentiation, a typical type of long-term synaptic plasticity, in striatal synapses and thus caused a visuospatial learning deficit (Durante et al., 2019). Another example is the repetitive-training enhanced NMDAR-mediated synaptic transmission in the medial prefrontal cortex that was involved in social memory retrieval (Zhang et al., 2022).
Although it is widely accepted that NMDAR-mediated Ca\({}^{2+}\) signaling contributes to long-term synaptic plasticity in postsynaptic neurons (Hunt and Castillo, 2012; Granger and Nicoll, 2014; Volianskis et al., 2015), its effects on short-term synaptic plasticity are gradually being uncovered and studied (Santos et al., 2012; Yang et al., 2014, 2016). Typically, short-term synaptic plasticity is attributed to the difference in the time constants between neuronal signaling and recovery of neurotransmitter availability (Zucker and Regehr, 2002; Mongillo et al., 2008). Neurotransmitter release from presynaptic neurons is primarily responsible for this neurobiological mechanism. Although neuronal plasticity is commonly attributed to presynaptic factors governing neurotransmitter release, the receptors that directly modulate the efficacy of postsynaptic currents, namely the postsynaptic NMDARs, are situated on the postsynaptic side. According to our previous studies (Yang et al., 2014, 2016, 2018), NMDAR-dependent short-term postsynaptic plasticity (STPP) is proposed to serve as a neurobiological mechanism for signal amplification, particularly in linearly connected circuits such as the hippocampus and cortices. Such signal amplification can be achieved in a timely manner through STPP, which enables the faithful transmission of extrinsic information-bearing sensory inputs and the integration of extrinsic sensory inputs (i.e., a priming input) with intrinsic activity (e.g., a brain rhythm or gating input). Unlike the presynaptic mechanism underlying a feedback circuit for signal amplification, this STPP executes a feedforward process to carry out an efficient and timely signal amplification and cascade. One observed piece of evidence for STPP in a hippocampal dendrite (Yang et al., 2016) is shown in Figure 1A. The postsynaptic response was higher when a second (gating input) glutamate uncaging followed the prior (priming input) uncaging (cyan trace) than when a gating input alone (green trace) was applied. This enhancing effect was eliminated by Ifenprodil, a blocker of GluN2B (Yang et al., 2016). An underlying mechanism (Figure 1B) for short-term signal amplification has been proposed in our earlier publication (Yang et al., 2018). The \(\alpha\)-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) is another kind of glutamate receptor on the postsynaptic membrane (Diering and Huganir, 2018). Initially, NMDARs and AMPARs are closed because the membrane is in the resting state. The AMPARs are then activated by a priming input, i.e., the first glutamate release from the presynaptic neurons, whereas the NMDARs do not open because they are blocked by magnesium ions. However, the input information can be stored via glutamate binding, and the NMDARs enter the bound-but-blocked state for up to 500 ms (or longer) (Yang et al., 2018, 2014). Subsequently, the gating input, i.e., the second glutamate release, is strong enough to further depolarize the postsynaptic membrane, and thus magnesium is removed. Simultaneous membrane depolarization and glutamate binding shift the NMDARs from the bound-but-blocked state to the open state. That is, NMDARs are more likely to open with the previously stored priming input plus gating input than with the gating input alone. The NMDAR-mediated Ca\({}^{2+}\) current, which is much stronger than the AMPAR current, is thought to contribute to the response enhancement.
Additionally, when the priming input was large enough to induce an NMDAR-mediated Ca\({}^{2+}\) current, there was no significant enhancement of the second response under the same level of gating input (Figure 1B in (Yang et al., 2016)). Therefore, STPP depends on the state of postsynaptic NMDARs
and the evocation history of postsynaptic currents, but not necessarily the firing history of a _particular_ presynaptic neuron. In this regard, STPP has been suggested to enable signal amplification and integration in synaptic transmission, but the role of STPP in the dynamics of neuronal populations remains unknown.
We aim to investigate the possible effects of STPP in neuronal populations on neural information processing. To this end, we introduce the continuous attractor neural network (CANN) as our model for neural information representation (Wu and Amari, 2005; Wu et al., 2016). The CANN is capable of extracting the information encoded by a population of neurons, which allows neural information processing to be analyzed and simulated computationally (Deneve et al., 1999; Wu and Amari, 2005). This model has been successfully used to depict the encoding of continuous stimuli in neural systems, such as movement direction (Georgopoulos et al., 1993), head direction (Ben-Yishai et al., 1995; Zhang, 1996; Stringer et al., 2002a), the spatial location of objects (Samsonovich and McNaughton, 1997; Stringer et al., 2002b) and spatial integrated information including location, direction and distance (Burak and Fiete, 2009). A CANN is shown in Figure 1C. In the network, a bump-shaped tuning curve (blue curve) represents the neural activities of the population of neurons in response to the external stimuli (red curve) at a given time. Because different neurons can be characterized by their preferred stimuli, e.g., different head directions represent different head-direction (HD) cells, the response curve is a function of the preferred stimuli of the neurons in a population. Among neurons, there are excitatory connections that are translationally invariant in the space of stimulus values. These translation-invariant connections enable a CANN to hold a continuous family of stationary states (attractors). The stationary states of the neural system form a continuous parameter space in which the system is neutrally stable. This property allows the neural system to track time-varying stimuli smoothly (Wu et al., 2008; Fung et al., 2010). However, the tracking always lags behind the stimulus due to the time needed for neuronal responses and neuronal interactions (Wu and Amari, 2005; Wu et al., 2016). Because CANNs can track moving objects smoothly, they have been used to shed light on the potential neural correlates of effective tracking of moving objects (Zhang, 1996; Zhang and Wu, 2012; Fung et al., 2012a; Mi et al., 2014; Fard et al., 2015). For instance, inhibitory feedback modulations such as short-term depression (STD) of neuronal synapses (Fung et al., 2012a), spike frequency adaptation (SFA) in neural firing (Mi et al., 2014), and negative feedback from a connected network (Zhang and Wu, 2012) have enabled predictive tracking of sensory input with CANNs. A path integration mechanism combined with CANNs has predicted future movement locations (Fard et al., 2015). These neural predictions, i.e., anticipations, of continuously moving objects are powerful strategies for time delay compensation (Nijhawan, 1994; Bassett et al., 2005; Sommer and Wurtz, 2006) and thus maintain effective and efficient perceptual functions and visual/motor control.
In this study, we applied the STPP mechanism to CANNs and examined the tracking dynamics when they tracked moving stimuli. The stimuli could be head directions, object locations or navigation information such as speeds and directions of movements. To simplify our study, we built a one-dimensional CANN model and used head directions as our representative stimuli. Through the simulation experiments and analysis, we found that the STPP-induced enhancing effect on neurons around the hillside of the synaptic input profile enables CANNs to anticipate the movements of moving stimuli. Unlike the mechanisms for anticipatory tracking based on inhibitory feedback modulations (Zhang and Wu, 2012; Fung et al., 2012a; Mi et al., 2014), STPP is a feedforward modulation driven by the inherent features of NMDARs. Our findings suggest a novel mechanism for anticipatory tracking. The reliable signal transmission enabled by STPP provides a new framework that has the potential to help conceptualize a network mechanism for sensory prediction and develop brain-inspired computational algorithms for prediction.
## 2 Model and Simulation
### The Model
In our study, we used CANNs (Wu and Amari, 2005; Fung et al., 2010) to investigate the influence of STPP (Yang et al., 2016) on the dynamics of neuronal populations. In this model, the dynamics of synaptic input \(u\left(x,t\right)\) of neurons with preferred stimuli \(x\) at time \(t\) is defined as
\[\tau_{\mathrm{s}}\frac{\partial u\left(x,t\right)}{\partial t}=-u\left(x,t \right)+\left[1+S(x,t)\right]I^{\text{tot}}\left(x,t\right), \tag{1}\]
where \(\tau_{\mathrm{s}}\approx 10\) ms is the neuronal time constant (Koch et al., 1996). \(\left[1+S(x,t)\right]\) models the efficacy of presynaptic neurons in the postsynaptic input evocations. \(I^{\text{tot}}\left(x,t\right)\), the total input to the neurons from the external input and lateral interactions within the neuronal system, is given by
\[I^{\text{tot}}\left(x,t\right)=\int dx^{\prime}J\left(x,x^{\prime}\right)r \left(x^{\prime},t\right)+I^{\text{ext}}\left(x,t\right). \tag{2}\]
\(I^{\text{ext}}\left(x,t\right)\) is the external input, which is defined in the later subsection. \(J\left(x,x^{\prime}\right)\), the excitatory connection between neurons at \(x\) and \(x^{\prime}\), is given by
\[J\left(x,x^{\prime}\right)=\frac{1}{\sqrt{2\pi}a}\exp\left(-\frac{\text{dist} \left(x,x^{\prime}\right)^{2}}{2a^{2}}\right), \tag{3}\]
where \(\text{dist}(x,x^{\prime})\) describes the distance between \(x\) and \(x^{\prime}\) depending on the characteristics of the stimulus, which is defined in the following subsection. \(a=0.5\) represents the range of the connection in the preferred stimuli space (Fung et al., 2010) and also controls the tuning width of the attractor states. \(r\left(x,t\right)\) is the neuronal response of neurons with preferred stimuli \(x\) at time \(t\). It also encodes the firing rate and is defined as a function of \(u\left(x,t\right)\):
\[r\left(x,t\right)=\Theta\left[u\left(x,t\right)\right]\frac{u\left(x,t\right) ^{2}}{1+\frac{1}{8\sqrt{2\pi}a}k\int dx^{\prime}u\left(x^{\prime},t\right)^{2 }}, \tag{4}\]
where \(k\) controls the divisive inhibition modeled in the denominator of \(r\left(x,t\right)\)(Deneve et al., 1999; Wu and Amari, 2005). \(\Theta\) is the step function. One should note that \(I^{\text{tot}}\left(x,t\right)\) is a total input integrating the contributions of excitatory and inhibitory signals from a population of neurons regardless of the type of receptors. Also, since the inhibition is divisive, \(\left[1+S\left(x,t\right)\right]\) in Equation (1) modulates the excitatory input, which is consistent with the experimental results by Yang et al. (Yang et al., 2016).
In Equation (1), \(S\left(x,t\right)\) is the enhancing modulation that abstractly models the effect on the synaptic input due to the opening of NMDARs from the bound-but-blocked state. It represents the temporal enhancement due to STPP. Therefore, \(\left[1+S\left(x,t\right)\right]I^{\text{tot}}\left(x,t\right)\) models the synaptic input evocation triggered by presynaptic neuronal activity with an enhancement due to STPP (Yang et al., 2016). This term will be further discussed in the Discussion section. The corresponding latent modulation \(Q\left(x,t\right)\) of \(S\left(x,t\right)\) abstractly models the proportion of NMDARs that enter the bound-but-blocked state. The
dynamics of \(S\left(x,t\right)\) and \(Q\left(x,t\right)\) are defined by
\[\frac{\partial S\left(x,t\right)}{\partial t} =-\frac{S\left(x,t\right)}{\tau_{1}}+\alpha Q\left(x,t\right)f_{S} \left[r\left(x,t\right)\right], \tag{5}\] \[\frac{\partial Q\left(x,t\right)}{\partial t} =-\frac{Q\left(x,t\right)}{\tau_{2}}-\alpha Q\left(x,t\right)f_{S }\left[r\left(x,t\right)\right]+\beta\left[1-Q\left(x,t\right)\right]f_{Q} \left[I^{\text{tot}}\left(x,t\right)\right], \tag{6}\]
where \(\tau_{1}=50\) ms is the time constant of NMDAR (Hestrin et al., 1990), which controls the effective duration of the enhancing effect on postsynaptic input. \(\tau_{2}=500\) ms is the time constant of the latent modulation of STPP (Yang et al., 2018), which controls its effective duration. The parameters \(\alpha\) and \(\beta\), which control the rates of \(S\left(x,t\right)\) and \(Q\left(x,t\right)\), respectively, are critical for adjusting the effects of STPP. From a physiological point of view, \(\alpha\) can be considered the average opening rate of NMDARs and \(\beta\) is the average transition rate of NMDARs from the glutamate-unbound state to the bound-but-blocked state, which is determined by the speed and efficiency of glutamates to bind to NMDARs. \(f_{S}\) and \(f_{Q}\) are activation functions of \(S\left(x,t\right)\) and \(Q\left(x,t\right)\), respectively. \(f_{S}\) defines the raise of the enhancing modulation \(S\left(x,t\right)\). We designed its form based on two considerations: (1) in accordance with the STPP mechanism, the removal of magnesium, which depends on the membrane potential, enhances the excitatory postsynaptic potential by opening NMDARs (Jahr and Stevens, 1990; Vargas-Caballero and Robinson, 2004). Hence, \(f_{S}\) should be sigmoid-shaped based on membrane potential, modeling the magnesium removal efficacy of the postsynaptic membrane. (2) However, in the CANN, the membrane potential is implicit. Based on the study conducted by Latimer's group (Latimer et al., 2019), the average membrane potential can be approximated by the firing rate because they share a rectified-linear relation. In CANNs, \(r\left(x,t\right)\) represents the neuronal activity, which corresponds to the firing rate of spiking neurons. In contrast, the synaptic input \(u\left(x,t\right)\) integrates neuronal activities from lateral neurons and external input, which has a non-linear relation with the firing rate in the presence of divisive inhibition. Therefore, we chose \(r\left(x,t\right)\) as a proxy to represent the average membrane potential for a population of neurons sharing the same preferred stimulus \(x\). As a consequence, \(f_{S}\) is defined to be a cumulative distribution function of \(r\left(x,t\right)\):
\[f_{S}\left[r\left(x,t\right)\right]=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{ \frac{r\left(x,t\right)-r_{0}}{\sigma_{S}}}dt^{\prime}\exp\left(-\frac{{t^{ \prime}}^{2}}{2}\right), \tag{7}\]
where \(r_{0}\) and \(\sigma_{S}\) are the mode and scale of its probability density distribution, respectively. \(f_{Q}\) is a function that drives the latent modulation \(Q\left(x,t\right)\) depending on the total synaptic input. Given that insufficient input prevents NMDARs from entering the bound-but-blocked state and too much input results in strong depolarization and subsequent opening of NMDARs, moderate input can drive as many NMDARs as possible to enter the bound-but-blocked state. Nevertheless, strong input would have a chance to leave a portion of NMDARs in the bound-but-blocked state. Therefore, we considered a log-normal distribution with a right skew to strong input to describe the relationship between the probability of entering the bound-but-blocked state and total synaptic input \(I^{\text{tot}}\left(x,t\right)\):
\[f_{Q}\left[I^{\text{tot}}\left(x,t\right)\right]=\frac{1}{I^{\text{tot}}\left( x,t\right)\sigma_{Q}\sqrt{2\pi}}\exp\left[-\frac{\left|\ln\left[I^{\text{tot}} \left(x,t\right)\right]-\mu_{Q}\right|^{2}}{2\sigma_{Q}^{2}}\right], \tag{8}\]
where \(\mu_{Q}\) and \(\sigma_{Q}\) are the mode and scale of the natural logarithm of \(I^{\text{tot}}\left(x,t\right)\), respectively.
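For concreteness, a minimal numerical sketch of the STPP variables is given below: it implements the activation functions of Equations (7) and (8) and one forward-Euler step of Equations (5) and (6), using the empirical parameter values assigned in the next subsection. The Euler scheme and the time step are assumptions of the sketch rather than details stated in the text.

```python
import numpy as np
from scipy.stats import norm

TAU_1, TAU_2 = 50.0, 500.0   # ms, time constants of S and Q
R0, SIGMA_S = 6.0, 2.0       # parameters of f_S (Eq. 7)
MU_Q, SIGMA_Q = 0.25, 0.5    # parameters of f_Q (Eq. 8)

def f_S(r):
    """Sigmoid-shaped activation of the enhancing modulation S (Eq. 7)."""
    return norm.cdf((r - R0) / SIGMA_S)

def f_Q(I_tot):
    """Log-normal drive of the latent modulation Q (Eq. 8); defined for positive input."""
    I = np.clip(I_tot, 1e-12, None)      # guard against non-positive total input
    return np.exp(-(np.log(I) - MU_Q) ** 2 / (2 * SIGMA_Q ** 2)) / (I * SIGMA_Q * np.sqrt(2 * np.pi))

def step_stpp(S, Q, r, I_tot, alpha, beta, dt=1.0):
    """One forward-Euler step (dt in ms) of Eqs. (5) and (6)."""
    dS = -S / TAU_1 + alpha * Q * f_S(r)
    dQ = -Q / TAU_2 - alpha * Q * f_S(r) + beta * (1.0 - Q) * f_Q(I_tot)
    return S + dt * dS, Q + dt * dQ

if __name__ == "__main__":
    # Example call with an STPP regime used in the simulations (alpha = 0.02, beta = 0.1).
    print(step_stpp(S=0.0, Q=0.5, r=8.0, I_tot=1.5, alpha=0.02, beta=0.1))
```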
### Simulation Experiments
Our aim is to explore the effects of STPP on the tracking dynamics of neuronal populations by using CANNs. To study the dynamics of a CANN with STPP, we built a model using the equations described in the previous subsection and conducted simulation experiments.
#### 2.2.1 Building the Model
For \(f_{Q}\) in Equation (8) and \(f_{S}\) in Equation (7), we assigned the following empirical values: \(\sigma_{Q}=0.5\), \(\mu_{Q}=0.25\), \(r_{0}=6\) and \(\sigma_{S}=2\). As shown in Figure 2A, \(f_{Q}\) is log-normally distributed, with the latent modulation \(Q\) increasing most strongly when the total input \(I^{\text{tot}}\left(x,t\right)\) is relatively weak. \(f_{S}\) is sigmoid-shaped, enabling \(S\left(x,t\right)\) to be increased at strong membrane potential. Here, the membrane potential is represented by \(r\left(x,t\right)\)(Latimer et al., 2019). The profile of the synaptic input \(u\left(x,t\right)\) is shown in Figure 2B, which is Gaussian shaped where the center of the external input \(I^{\text{ext}}\left(x,t\right)\) (not plotted) is at 0. The corresponding \(Q\left(x,t\right)\) and \(S\left(x,t\right)\) are initially symmetric with respect to the center of \(u\left(x,t\right)\) when there is no translational separation between \(I^{\text{ext}}\left(x,t\right)\) and \(u\left(x,t\right)\) (Figure 2C).
In our simulation experiments, we took the head directions as the example stimuli. Hence, \(x\) is restricted in the space of \(-\pi<x\leq\pi\), and the distance between \(x\) and \(x^{\prime}\) is in a periodic condition that is calculated by
\[\text{dist}\left(x,x^{\prime}\right)=\begin{cases}x-x^{\prime}+2\pi,&\text{if } \left(x-x^{\prime}\right)\leq-\pi,\\ x-x^{\prime},&\text{if}-\pi<\left(x-x^{\prime}\right)\leq\pi,\\ x-x^{\prime}-2\pi,&\text{if}\left(x-x^{\prime}\right)>\pi.\end{cases} \tag{9}\]
One should also note that the model we used is a re-scaled model (Fung et al., 2012). Therefore, the number of neurons is not a factor determining the behavior of the system.
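A corresponding sketch of the network dynamics of Equations (1)-(4) on a discretized ring is given below. The grid size, the time step, and the value of the divisive-inhibition parameter \(k\) are illustrative assumptions (the text does not fix \(k\)); the STPP variable \(S\) can be supplied by the previous sketch or set to zero for a plain CANN, and the example at the end uses the Gaussian external input defined in the following subsection with magnitude \(A=3.0\).

```python
import numpy as np

N = 200                # grid points on the ring; illustrative choice (the model is re-scaled)
TAU_S = 10.0           # ms, neuronal time constant
a = 0.5                # range of excitatory connections
k = 0.5                # divisive-inhibition strength; illustrative value, not fixed in the text

x = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred stimuli on the ring
dx = 2 * np.pi / N

def dist(x1, x2):
    """Periodic distance on the ring (Eq. 9)."""
    return (x1 - x2 + np.pi) % (2 * np.pi) - np.pi

# Translation-invariant excitatory connectivity J(x, x') (Eq. 3).
J = np.exp(-dist(x[:, None], x[None, :]) ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)

def firing_rate(u):
    """Neuronal response with divisive inhibition (Eq. 4)."""
    u_pos = np.maximum(u, 0.0)
    denom = 1.0 + k / (8 * np.sqrt(2 * np.pi) * a) * np.sum(u ** 2) * dx
    return u_pos ** 2 / denom

def step_network(u, S, I_ext, dt=1.0):
    """One forward-Euler step (dt in ms) of Eqs. (1)-(2); returns u, r, and I_tot."""
    r = firing_rate(u)
    I_tot = J @ r * dx + I_ext          # lateral plus external input (Eq. 2)
    u_new = u + dt / TAU_S * (-u + (1.0 + S) * I_tot)
    return u_new, r, I_tot

# Example: relax toward a bump state under a static Gaussian input centred at z0 = 0.
A = 3.0
I_ext = A * np.exp(-dist(x, 0.0) ** 2 / (4 * a ** 2))
u, S = np.zeros(N), np.zeros(N)
for _ in range(500):
    u, r, I_tot = step_network(u, S, I_ext)
```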
#### 2.2.2 Tracking the External Stimulus
To see how the network reacts to a stimulus, we set the external input to be
\[I^{\text{ext}}\left(x\right)=A\exp\left(-\frac{\left|x-z_{0}\right|^{2}}{4a^{2 }}\right), \tag{10}\]
where \(A\) is the magnitude of the input. \(z_{0}\) is the stimulus position. When considering a continuously moving stimulus, \(I^{\text{ext}}\left(x,t\right)\) is defined as
\[I^{\text{ext}}\left(x,t\right)=A\exp\left[-\frac{\left|x-z_{0}\left(t\right) \right|^{2}}{4a^{2}}\right]. \tag{11}\]
Without loss of generality, we considered the stimulus position at time \(t=0\) to be \(z_{0}=0\), and the stimulus to move at a constant angular velocity \(v_{\text{ext}}\) thereafter, i.e., \(z_{0}\left(t\right)=v_{\text{ext}}t\).
With regard to the tracking behavior simulations, we altered the strength of STPP by adjusting \(\alpha\) and \(\beta\) and altered the external factors by adjusting \(v_{\text{ext}}\) and \(A\) to see how these factors influence the network's tracking performance. To measure the network's tracking performance when it is exposed to a continuously moving stimulus, we used the displacement between the network state and the stimulus position as an index. Note that the network state trails behind the stimulus when \(s\) and \(v_{\text{ext}}\) have opposite signs, whereas it predicts the position of the stimulus when they have the same sign. The displacement is
given by \(s=z\left(t\right)-z_{0}\left(t\right)\), where \(z\left(t\right)\) is the center of mass of \(u\left(x,t\right)\), which is calculated by
\[z\left(t\right) =\tilde{x}+\frac{\int dx\text{dist}\left(x,\tilde{x}\right)u\left( x,t\right)}{\int dxu\left(x,t\right)}, \tag{12}\] \[\tilde{x} =\underset{x}{\arg\max}\;u\left(x,t\right). \tag{13}\]
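The decoding step in Equations (12) and (13) can be sketched as follows; it locates the peak of \(u\), adds the center-of-mass correction using the periodic distance of Equation (9), and wraps the result back onto the ring. This is a minimal illustration rather than the authors' exact implementation.

```python
import numpy as np

def center_of_mass(u, x):
    """Center of mass z(t) of the bump (Eqs. 12-13) on the periodic stimulus space."""
    x_peak = x[np.argmax(u)]                          # Eq. (13)
    d = (x - x_peak + np.pi) % (2 * np.pi) - np.pi    # dist(x, x_peak), Eq. (9)
    z = x_peak + np.sum(d * u) / np.sum(u)            # Eq. (12)
    return (z + np.pi) % (2 * np.pi) - np.pi          # wrap back onto the ring

def displacement(z, z0):
    """Signed separation s = z(t) - z0(t) between bump and stimulus, wrapped to the ring."""
    return (z - z0 + np.pi) % (2 * np.pi) - np.pi
```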
#### 2.2.3 Measuring the Intrinsic Speed and the Anticipatory Time
A study (Fung et al., 2015) on dynamical behaviors in neural fields suggested that the models with inhibitory feedback modulations could support spontaneously moving profiles without any persistent external input. Therefore, we asked the following question: in the absence of external input, does a CANN with STPP have intrinsic motion? If the answer is yes, the follow-up question arises: is intrinsic motion caused by STPP? Note that without external input, when the network state becomes translationally unstable, it moves with a constant speed, i.e., intrinsic speed (Fung et al., 2015). To investigate the intrinsic dynamics of a CANN with STPP, we measured the intrinsic speeds, denoted as \(v_{\text{int}}\), of the models when the STPP strength was varied. In the simulations, we first let the system reach its stationary state. After that, we removed the external input and manually shifted \(u\) in the direction of positive \(x\) by \(2\pi/200\) for every \(\tau_{\text{s}}\). After 100 \(\tau_{\text{s}}\), we terminated all manual intervention and let the system evolve. When the motion of the system state became steady, we recorded the speed of the intrinsic motion as \(v_{\text{int}}\). For the STPP regimes, both \(\alpha\) and \(\beta\) were selected from 0 to 0.2 with a step of 0.004.
By analyzing the dynamical properties of the system, the study (Fung et al., 2015) also found that intrinsic motion is the internal drive of anticipation. Hence, after obtaining the intrinsic speeds of the CANNs under various STPP regimes, we considered the following question: if intrinsic motion is present, is it also the internal drive of anticipatory behavior in our model? Therefore, to investigate the causality of anticipation, we measured the maximal anticipatory time \(T_{\text{ant}}\) of the CANNs under the same STPP regimes as those used in the simulations for intrinsic speeds. Here, the anticipatory time \(\tau_{\text{ant}}\) is defined as \(s/v_{\text{ext}}\), which is a constant when the moving bump is steady. \(T_{\text{ant}}\) is the maximum of \(\tau_{\text{ant}}\) over a range of \(v_{\text{ext}}\) under a given STPP regime. In the simulations, \(\alpha\) and \(\beta\) were also selected from 0 to 0.2 with a step of 0.004. \(v_{\text{ext}}\) was altered from 0 to 0.008 rad/ms with a step of 0.0002. The magnitude \(A\) of the stimulus is 3.0.
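A sketch of the anticipatory-time measurement is given below; `simulate_tracking` is a hypothetical driver, assumed to run the full model at a given stimulus speed and return the steady-state displacement \(s\) (for example, by composing the update and decoding sketches above). The scan starts at the smallest nonzero speed to avoid dividing by zero.

```python
import numpy as np

def max_anticipatory_time(alpha, beta, simulate_tracking,
                          v_min=0.0002, v_max=0.008, dv=0.0002):
    """Return T_ant = max over v_ext of tau_ant = s / v_ext for one STPP regime.

    `simulate_tracking(alpha, beta, v_ext)` is a hypothetical helper that must
    return the steady-state displacement s (in rad) for stimulus speed v_ext (rad/ms).
    """
    speeds = np.arange(v_min, v_max + dv / 2, dv)
    tau_ant = [simulate_tracking(alpha, beta, v) / v for v in speeds]
    return max(tau_ant)
```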
## 3 Results
### Tracking Behaviors
To observe the tracking behaviors of the CANNs with STPP, we visualized the outlines of the network's motions, first in the presence of an abruptly changing stimulus and then in the presence of a continuously moving stimulus.
#### 3.1.1 In the Presence of an Abruptly Changing Stimulus
We first compared the tracking pattern of a CANN with STPP (\(\alpha=0.02\), \(\beta=0.1\)) to that without STPP (\(\alpha=0\), \(\beta=0\)) when the stimulus abruptly changed its position \(z_{0}\). The stimulus position was initially at \(z_{0}=0\) and then shifted to \(z_{0}=1\). As expected, the synaptic input \(u\left(x,t\right)\) moved to align the center of mass \(z\) with the stimulus position \(z_{0}\) (Figure 3A). Further, the snapshot shows an overshoot of the destination, which is a sign of translational instability of the CANN with STPP. A clearer comparison of
\(z\) was made between a network with STPP and that without STPP (Figure 3B). This instability indicates the potential of STPP for anticipatory tracking.
#### 3.1.2 In the Presence of a Continuously Moving Stimulus
Next, we explored the tracking behaviors of the networks in the presence of a continuously moving stimulus by allowing the stimulus to shift with a constant angular velocity \(v_{\text{ext}}\). The CANNs were able to track the moving stimuli by using an approximate speed, which was consistent with how the HD cell system tracks rotating visual cues (Ajabi et al., 2023). The simulations show that the networks without STPP always trailed behind the stimulus regardless of the speed. However, after STPP was applied, three cases of tracking patterns were mainly observed, depending on \(v_{\text{ext}}\): (1) delayed tracking (Figure 4A), (2) perfect tracking with zero lag (Figure 4B), and (3) anticipatory tracking with a constant lead (Figure 4C). Notably, even when there was a delay, the CANN with STPP performed better in tracking the stimulus than pure CANN. In our cases, delayed, perfect and anticipatory trackings occurred in order from fast to slow \(v_{\text{ext}}\). However, delay and alignment may occur in both slow and fast zones, which is discussed in the next subsection. These results suggest that the predictions of the CANNs with STPP depend on the speeds of the stimulus.
### Anticipatory Tracking
Next, we examined the dependencies of anticipatory performance on internal (i.e., STPP parameters) and external (i.e., the strength of the stimulus) factors. The displacement \(s\) is a measure of the tracking performance (see the section titled "Simulation Experiments" for the equation), as shown in Figure 5A. First, we investigated how the STPP parameters affect the model's tracking performance. We compared \(s\) among different STPP regimes, the results of which indicate that anticipatory tracking is achieved in a certain range of STPP strength. As shown in Figure 5B, the network without STPP (\(\alpha=0\), \(\beta=0\)) linearly lagged behind the stimulus (the grey dashed line at \(s=0\)). Moreover, when the STPP was relatively weak due to the small scale of parameters, e.g., the dark green line (\(\alpha=0.02\), \(\beta=0.01\)), the network failed to make predictions although it showed less lag. In contrast, the networks with strong STPP elicited prediction successfully over a considerable range of angular velocities. For instance, the network with \(\alpha=0.02\) and \(\beta=0.1\) could anticipate stimuli with velocities ranging from 69\({}^{\circ}/\text{s}\) to 240\({}^{\circ}/\text{s}\). A stronger STPP regime (\(\alpha=0.06\), \(\beta=0.06\)) facilitated anticipation of a broader range of velocities ranging from 92\({}^{\circ}/\text{s}\) to 338\({}^{\circ}/\text{s}\). Interestingly, we found that at slower velocities, the networks performed with a tiny delay, and the threshold of the velocity for anticipation was related to both the STPP regime and the stimulus (Figure 5B, C). These results reveal the possible implications of STPP for prediction. On the one hand, effective anticipation of stimuli over an extensive range of velocities gives the neural system great flexibility to adapt to a varying stimulus. On the other hand, the correspondence between the range of anticipation-achievable velocities and the diverse intensities of STPP may imply that distinct brain regions or neurons equipped with distinct synaptic plasticity enable diverse anticipatory tracking performance levels.
Second, to understand how the strength of a stimulus affects anticipatory performance, we applied different magnitudes \(A\) of a stimulus to the same network and compared the results. According to the results shown in Figure 5C, the anticipatory lead was greater and the anticipatory velocity range was larger under the weaker stimulus. Moreover, these traces were confluent at a specific velocity, where the displacement was independent of the intensity of the stimulus. Interestingly, this velocity was the same as the intrinsic speed \(v_{\text{int}}\) of the network, which is described and discussed in the next subsection. In line with the results of natural tracking obtained in an earlier study (Fung et al., 2015), our results also
showed confluent behavior, which reveals a certain relationship between intrinsic dynamics and tracking dynamics. That is, when the stimulus speed matches the intrinsic speed, the displacement depends only on the network itself.
### Intrinsic Dynamics and Anticipatory Time
To work out the relationship between the intrinsic dynamics and the tracking dynamics of the network, we next explored the intrinsic dynamics in the absence of a persistent stimulus (we had already tested the tracking dynamics in its presence). The contour map of the intrinsic speeds \(v_{\text{int}}\) under various combinations of \(\alpha\) and \(\beta\) is shown in Figure 6A. \(v_{\text{int}}\) increased as \(\alpha\) or \(\beta\) (or both) increased. Here, \(v_{\text{int}}\leq 0.00001\) can be considered static because 0.00001 rad/ms is approximately equal to 0.57 \({}^{\circ}\)/s, which is much lower than the majority of the velocities we recorded. In the region where \(v_{\text{int}}\leq 0.00001\), the network became practically static after manual intervention was terminated. In the moving region, the network became translationally unstable and moved with an intrinsic speed. The intrinsic motion was induced by strong STPP, whereas the static region was located in parameter regions that lacked STPP or were under weak STPP regimes. Figure 6B shows the separate effects of \(\alpha\) and \(\beta\). \(v_{\text{int}}\) increased notably along \(\alpha\) when \(\beta\) was fixed, whereas when \(\alpha\) was fixed, it became almost invariant after \(\beta\) reached a relatively high value. Combining the top and bottom sub-figures of Figure 6B reveals that \(v_{\text{int}}\) was not clearly distinguishable with respect to \(\beta\) but exhibited a clearly hierarchical response to \(\alpha\). This behavior implies that \(v_{\text{int}}\) is more sensitive to \(\alpha\) than to \(\beta\).
A certain relationship between the intrinsic dynamics and the tracking dynamics was shown in the previous subsection. In view of this, the question arises: is intrinsic motion the internal drive of anticipatory behavior? To answer this question, we measured the maximal anticipatory time \(T_{\text{ant}}\) of the CANNs under the same STPP regimes as those in Figure 6A to understand the causality of the anticipation. As shown in Figure 6C, the anticipation (\(T_{\text{ant}}>0\)) appeared after \(v_{\text{int}}>0.00001\). Also, the contour map of \(T_{\text{ant}}\) and that of the intrinsic speeds shared a similar trend. These results indicate that stimulus prediction occurs only when there is intrinsic motion and the network is not in an internally static state. Hence, we assume that STPP-induced intrinsic motion is an internal drive of anticipation and a necessary condition for it. The pattern of the maximal anticipatory time map does not perfectly match the intrinsic speed map owing to the nontrivial influence of external input.
### Analysis of Translational Stability of the System
In a dynamical system, spontaneous movement without an external input occurs when the static solution becomes unstable to positional displacement in some parameter regions. To understand the intrinsic dynamics of a CANN with STPP, we studied the translational stability of static solutions of our model. We considered network states with a positional displacement to be
\[u\left(x,t\right) =u_{0}\left(x\right)+u_{1}\left(t\right)\frac{du_{0}\left(x \right)}{dx}, \tag{14}\] \[S\left(x,t\right) =S_{0}\left(x\right)+S_{1}\left(t\right)\left(x-z\right)S_{0}\left( x\right),\] (15) \[Q\left(x,t\right) =Q_{0}\left(x\right)+Q_{1}\left(t\right)\left(x-z\right)Q_{0} \left(x\right), \tag{16}\]
where \(u_{0}\left(x\right)\), \(S_{0}\left(x\right)\), and \(Q_{0}\left(x\right)\) and \(u_{1}\left(t\right)\), \(S_{1}\left(t\right)\), and \(Q_{1}\left(t\right)\) are the stationary states and the displacements of \(u\left(x,t\right)\), \(S\left(x,t\right)\), and \(Q\left(x,t\right)\), respectively. \(z\) is the center of mass of \(u_{0}\left(x\right)\). As derived
in Supplementary Material,
\[\frac{d}{dt}\left(\begin{array}{c}u_{1}\left(t\right)\\ S_{1}\left(t\right)\\ Q_{1}\left(t\right)\end{array}\right)=\left(\begin{array}{ccc}M_{uu}&M_{uS}&0 \\ M_{Su}&M_{SS}&M_{SQ}\\ M_{Qu}&0&M_{QQ}\end{array}\right)\left(\begin{array}{c}u_{1}\left(t\right)\\ S_{1}\left(t\right)\\ Q_{1}\left(t\right)\end{array}\right). \tag{17}\]
For the dynamical system described by Equation (17), the eigenvalue \(\lambda\) of the matrix composed of \(M_{\psi\varphi}\) (\(\psi,\varphi\in\{u,S,Q\}\)) and 0 determines its stability. We calculated the eigenvalues of networks with varying STPP regimes, letting the maximum of the eigenvalues for each regime be denoted as \(\tilde{\lambda}\). In the static phase, \(\tilde{\lambda}\leq 0\), the network states are stable and static. In the moving phase, \(\tilde{\lambda}>0\), the system would diverge due to positional displacement, i.e., perturbation, leading to spontaneously moving bumps. As illustrated in Figure 7, the network states were static when \(\tilde{\lambda}\leq 0\) in some STPP regimes, specifically weak STPP regimes. Otherwise, the network state bump moved spontaneously when there was interference. That is, STPP increases the translational instability, i.e., the mobility, of the CANNs. Remarkably, the parameter region of intrinsic motion is in the moving phase of the system. The heatmap of \(\tilde{\lambda}\) has a similar pattern to that of the contour map of the intrinsic speeds shown in Figure 6A. Overall, the results indicate that the intrinsic motion of the network is caused by the translational instability of this system, which is induced by STPP.
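As an illustration, the stability check of Eq. (17) amounts to computing the leading eigenvalue of a 3x3 matrix. In the sketch below the matrix entries are made up, and taking the largest real part of the eigenvalues as \(\tilde{\lambda}\) is our assumption.

```python
import numpy as np

def max_growth_rate(M_uu, M_uS, M_Su, M_SS, M_SQ, M_Qu, M_QQ):
    """Largest real part of the eigenvalues of the matrix in Eq. (17):
    > 0 means translational instability (moving phase), <= 0 a static bump."""
    M = np.array([[M_uu, M_uS, 0.0],
                  [M_Su, M_SS, M_SQ],
                  [M_Qu, 0.0, M_QQ]])
    return np.max(np.linalg.eigvals(M).real)

# made-up entries mimicking a weak- and a strong-STPP regime
print(max_growth_rate(-0.5, 0.1, 0.2, -1.0, 0.3, 0.1, -0.8))  # negative: static phase
print(max_growth_rate(0.2, 0.4, 0.6, -0.5, 0.8, 0.5, -0.2))   # positive: moving phase
```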
## 4 Discussion
In this study, we implemented the feedforward signal amplification mechanism of STPP in CANNs. In the model, the STPP effect is modeled by adding two dynamic functions (\(Q\left(x,t\right)\) and \(S\left(x,t\right)\)) to the original CANN (Wu and Amari, 2005). \(Q\left(x,t\right)\) abstractly models the proportion of NMDARs that are in the bound-but-blocked state, while \(S\left(x,t\right)\) abstractly models the temporal enhancement dependent on the opening of NMDARs from the bound-but-blocked state. To trigger the bound-but-blocked state, a sufficient but limited synaptic input should be applied (Figure 1B in (Yang et al., 2016)). To model this effect, we chose the function evoking \(Q\left(x,t\right)\), i.e., \(f_{Q}\), to be a bump-shaped log-normal distribution function. On the other hand, when the input to the postsynaptic neuron is uncommonly large, the postsynaptic membrane potential can be sufficient to remove magnesium ions (Jahr and Stevens, 1990; Vargas-Caballero and Robinson, 2004). As a result, the evoked postsynaptic current becomes stronger than that with NMDARs blocked (Yang et al., 2016). To model this magnesium removal efficacy, we chose the cumulative distribution function \(f_{S}\) to trigger the transition from the bound-but-blocked state to the open state. As the average membrane potential can be represented by a firing rate when reasonably assuming that the below-threshold membrane potentials of the postsynaptic neurons are normally distributed around the average value (Latimer et al., 2019), the cumulative distribution function of the firing rate (\(r\left(x,t\right)\)) is an appropriate choice to model magnesium removal efficacy on a population of neurons sharing the same preferred stimulus. In summary, the functions \(f_{Q}\) and \(f_{S}\), motivated by our previous evidence, help visualize the STPP effect.
The additional term \(\left[1+S\left(x,t\right)\right]\) in Equation (1) may raise concerns about its appropriateness due to the different time constants of AMPARs and NMDARs. Even though the decay time constants of AMPAR and NMDAR are different, the STPP modulation takes effect when NMDARs are primed and glutamates are re-released. When NMDARs and AMPARs open simultaneously, the whole evoked postsynaptic current can be modulated by STPP. Moreover, \(\left[1+S\left(x,t\right)\right]\) models the efficacy of presynaptic neuronal activity in the evocation of postsynaptic input. If a neuron was primed, the
following information-bearing glutamate release could induce a surplus depolarization due to NMDAR-dependent STPP. Therefore, the modulation is on the evocation of postsynaptic response, not the synaptic current corresponding to receptors of particular types. However, the inhibition in this model is divisive. Nonetheless, the modulation still acts on the cortical and hippocampal networks in the preserved excitatory and inhibitory circuits, as in our previous study (Yang et al., 2016). Although we model the modulating effect on the excitatory synaptic currents, this modeling setting is sufficient to mimic the effect of STPP on CANNs.
Our simulation experiments showed that the implementation of STPP in CANNs successfully enabled anticipatory tracking of a moving stimulus and we also explored the relationship of the model dynamics and anticipation. First, we tested the tracking behaviors of CANNs with STPP when they were exposed to an abruptly changing stimulus and a continuously moving stimulus, and found that this model could predict the future locations of the stimulus. Second, we investigated the intrinsic dynamics of the networks with varying STPP regimes; the results showed that a certain level of STPP supported the intrinsic motion in this model and the intrinsic speed was more sensitive to \(\alpha\). Interestingly, when the angular velocity of the moving stimulus reached the intrinsic speed of the given model, the tracking delay was independent of the strength of the stimulus. Third, by comparing the pattern of the maximal anticipatory time with that of the intrinsic speeds, we found that intrinsic motion is the internal drive of anticipation. Lastly, we analyzed the translational stability of the stationary states of our networks with different STPP parameters. The results implied that strong STPP enhanced the mobility of the system and thus induced intrinsic motion. Taking all the results into account, we noticed that strong \(\alpha\) (opening rate of NMDARs) and \(\beta\) (transition rate of NMDARs from the glutamate-unbound state to the bound-but-blocked state) enable anticipation with broader coverage of stimulus velocities and larger maximal anticipatory time. Physiologically, the transition rate of NMDARs to the bound-but-blocked state reflects how fast and efficiently glutamates bind to NMDARs, which is basically dependent on glutamate-NMDAR binding rate and affinity (Paoletti and Neyton, 2007; Singh et al., 2011). Generally, these factors are synaptic specific. Intuitively, the fast opening of NMDARs can help neurons react expeditiously for stimulus chasing, and fast binding can prime NMDARs promptly for further opening. Given that the opening rate (or rise time), the binding rate and affinity are associated with the subunits of NMDAR (Paoletti and Neyton, 2007; Singh et al., 2011; Tu and Kuo, 2014), natural differences in NMDAR subtypes (Buller et al., 1994; Paoletti et al., 2013) would possibly lead to diverse tracking performance for stimuli in various brain regions. In contrast, an abnormal subunit expression of NMDARs may impair perception by the failure of time delay compensation. Collectively, our findings indicate that NMDAR-dependent STPP may underlie sensory prediction, effectively compensating for neural delay and efficiently supporting sensorimotor control.
The underlying mechanism of how STPP can facilitate anticipation can be intuitively understandable. In Figure 8, eight frames of network states were extracted for visualization. From the time-varying states and Equations (1), (4), (5) and (6), we can understand how anticipation occurs. When the stimulus starts moving, the biased distortion of \(Q\left(x,t\right)\) induces a unimodal distribution of the enhancing modulation \(S\left(x,t\right)\) with the peak on neighboring neurons that prefer future stimulus positions. This consequently skews \(u\left(x,t\right)\) toward future positions. This asymmetry leads to stronger activation of the neighboring neurons that exceed the peak of the stimulus, thereby facilitating prediction. In this case, after around 1200 ms, the states became translationally stable with the same speed as the stimulus; only the locations (not shapes) of the states changed, and thus a constant anticipation time was maintained. Overall, the essence of the anticipatory phenomenon is rooted in the effect of NMDAR-dependent STPP. The information on
stimulus positions can be latent in neighboring neurons that are less activated than the stimulus peak-aligned neurons, due to the small input. As the input shifts, the stronger input induces enhanced activation of the neighboring neurons that have stored information. This feedforward effect eventually skews the activation of neurons toward future positions and enables the neural system to sense future events.
Interestingly, despite being a response facilitation mechanism, STPP has the opposite function to that of short-term facilitation (STF). STF refers to a type of short-term synaptic plasticity that boosts the neuronal postsynaptic response as a result of the increased likelihood of neurotransmitter release due to the influx of calcium into the axon terminal after spike generation (Zucker and Regehr, 2002). In a computational study (Fung et al., 2012b) of STF in CANNs, it was discovered that STF improved the behavioral stability of CANNs and served as a noise filter. STF can maintain strong activation of neurons in the active region in accordance with the stimulus position and strengthen interactions among neurons that are tuned to the stimulus. Unlike STPP, this stimulus-specific facilitation stabilizes the network, which makes it incapable of spontaneously moving to anticipate an external input. Inhibitory feedback modulations were considered the basic mechanisms of anticipatory tracking. The modulations include STD, which depresses the activation of postsynaptic neurons by depleting the available resources of neurotransmitters released from presynaptic neurons (Fung et al., 2012a), and SFA, which refers to the reduction of neuronal excitability after prolonged stimulation (Mi et al., 2014). In both STD and SFA, the most active neurons receive the strongest negative feedback to counterbalance their responses, thereby increasing the probability that the locally active network state shifts to the continuously moving stimulus in a sequence. In contrast to inhibitory feedback modulations, which emphasize self-suppression by a feedback system, STPP spotlights the response facilitation on adjacent neurons in a feedforward manner to increase the mobility of the network and achieve anticipatory tracking. Moreover, STPP operates by utilizing the natural properties of NMDARs, which play a significant role in synaptic transmission, rather than relying on recurrent feedback systems, which may involve more auxiliary pathways and systems and may depend on repetitive or prolonged firing (Zucker and Regehr, 2002; Gutkin and Zeldenrust, 2014). It appears that the STPP-dominant feedforward model is simpler and more energy-efficient for anticipatory tracking.
Animal experimental results have suggested that the HD cell system can anchor to local cues, stably tracking rotating cues and maintaining traces when the cues are removed (Taube et al., 1990; Goodridge et al., 1998; Zugaro et al., 2003; Ajabi et al., 2023). However, the mechanism underlying the anchoring and trace maintenance remains unresolved. In 2023, Ajabi's team (Ajabi et al., 2023) found a second dimension in addition to a single angular dimension, namely network gain, to help represent the HD cell system. This discovery indicates that additional information is necessary to fully comprehend how the system adapts to changeable sensory input although the origins of network gain are yet to be identified. They also reported that the system followed the cue's rotation speed to track it and that the system exhibited anticipation. Additionally, the system sustained the drift for some time after the cue was turned off. These findings support the idea that the HD cell system possesses an effective predictive tracking capacity and exhibits spontaneous drift based on experience. To some degree, our model achieved results that were similar to those obtained by these experiments. Our CANN-based model also tracked a moving stimulus with the same velocity (Fung et al., 2010, 2015). Moreover, the incorporation of STPP enabled our model to move spontaneously without persistent external input, i.e., it exhibited intrinsic motion, and showed the anticipatory tracking of a moving stimulus. A very early study (Blair et al., 1997) reported a skew of the peak of the tuning curve toward future positions in the HD cell system rather than just a translational shift to future positions with an invariant bump shape, a finding that was interpreted as suggesting anticipation. This asymmetry can be induced by STPP in our model. As for
the anticipatory time, different studies obtained varying average values such as 17 ms (Blair et al., 1997), 23 ms (Taube and Muller, 1998), and 25 ms (Goodridge and Touretzky, 2000) in the anterior thalamus. The anticipatory time of HD cells can vary in different brain regions and even in different cells in the same region (Blair et al., 1997; Taube and Muller, 1998; Goodridge and Touretzky, 2000). In a large STPP parameter region, our model produced anticipations with maximal anticipatory time ranging from 0 to 30 ms, which were of the same order as that observed in animal experiments. Although STD (Fung et al., 2012a) and SFA (Mi et al., 2014) can achieve tracking performance comparable to that obtained by us, the effect of NMDAR-based STPP should not be neglected.
In addition, to prove the robustness of our model to different values of the STPP time constant \(\tau_{2}\), which represents the lifespan of latent information, we obtained contour maps of \(v_{\text{int}}\) when \(\tau_{2}=300\) ms, \(\tau_{2}=400\) ms, \(\tau_{2}=600\) ms, and \(\tau_{2}=700\) ms through simulations. The results, which are illustrated in Supplementary Material Figure S1, show that the patterns and achievable ranges of intrinsic speeds in other conditions resemble those obtained with \(\tau_{2}=500\) ms (Figure 6A), demonstrating the robustness of the intrinsic property of our model to anticipation.
Although STPP shows promise as an underlying mechanism for sensory prediction, certain results of our model cannot yet be corroborated by current animal experiments. In our model, the anticipatory time varied with the angular velocities of the stimuli, which usually rose first and then fell. However, the average anticipatory time measured in a population of HD cells in the anterior thalamus tended to be approximately constant over a broad range of head turn velocities (Goodridge and Touretzky, 2000). Differently, some studies reported other thought-provoking issues. The anticipatory time varied among different HD cells in the anterior thalamus (Blair et al., 1997), and the complex results obscured the dependency of the anticipation time on the angular velocity (Bassett et al., 2005). Briefly, due to the considerable variability in real data obtained from the HD cell system, it is challenging to determine the stability of the anticipatory time. More notably, if our proposed mechanism is correct, the differing STPP strength of cells would explain the great complexity of the dependence of the anticipatory time.
In conclusion, our work offers a novel feedforward framework that potentially encodes neural information processing based on STPP in the HD cell system. A consideration of heterogeneous interactions in the neural system shows that NMDAR-dependent STPP may coordinate with other mechanisms, an understanding of which would provide a more comprehensive account of neural information processing. An improved understanding of brain processing and networking may also inspire more computational algorithms for prediction. In the future, assessing the effects of our proposed feedforward framework with other sensory inputs, such as sounds with increasing frequencies and predictable spatial locations, will be a crucial step in validating its applicability.
## Author Contributions
H.Z., S.C.Y., and C.C.A.F. conceptualized and designed the study; H.Z. performed the computational modeling; H.Z. and C.C.A.F. performed the data analysis; H.Z., S.C.Y., and C.C.A.F. wrote the paper; H.Z., S.C.Y., and C.C.A.F. revised the paper; S.C.Y. and C.C.A.F. acquired funding.
## Funding
This work was supported by grants from the Research Grants Council of Hong Kong (11102120 and 11101922) to S.C.Y. and a startup grant (9610591) from City University of Hong Kong to C.C.A.F.
## Acknowledgments
Figure 1B was created with BioRender.com.
## Supplementary Material
The Supplementary Material for this article can be found online at: (to be confirmed).
## Source Code Availability Statement
The source codes used in this study are available at: [https://github.com/fccaa/cann_stpp_2023](https://github.com/fccaa/cann_stpp_2023).
|
2305.19678 | Smooth-Trajectron++: Augmenting the Trajectron++ behaviour prediction
model with smooth attention | Understanding traffic participants' behaviour is crucial for predicting their
future trajectories, aiding in developing safe and reliable planning systems
for autonomous vehicles. Integrating cognitive processes and machine learning
models has shown promise in other domains but is lacking in the trajectory
forecasting of multiple traffic agents in large-scale autonomous driving
datasets. This work investigates the state-of-the-art trajectory forecasting
model Trajectron++ which we enhance by incorporating a smoothing term in its
attention module. This attention mechanism mimics human attention inspired by
cognitive science research indicating limits to attention switching. We
evaluate the performance of the resulting Smooth-Trajectron++ model and compare
it to the original model on various benchmarks, revealing the potential of
incorporating insights from human cognition into trajectory prediction models. | Frederik S. B. Westerhout, Julian F. Schumann, Arkady Zgonnikov | 2023-05-31T09:19:55Z | http://arxiv.org/abs/2305.19678v2 | # Smooth-Trajectron++: Augmenting the Trajectron++ behaviour prediction model with smooth attention
###### Abstract
Understanding traffic participants' behaviour is crucial for predicting their future trajectories, aiding in developing safe and reliable planning systems for autonomous vehicles. Integrating cognitive processes and machine learning models has shown promise in other domains but is lacking in the trajectory forecasting of multiple traffic agents in large-scale autonomous driving datasets. This work investigates the state-of-the-art trajectory forecasting model Trajectron++ which we enhance by incorporating a smoothing term in its attention module. This attention mechanism mimics human attention inspired by cognitive science research indicating limits to attention switching. We evaluate the performance of the resulting Smooth-Trajectron++ model and compare it to the original model on various benchmarks, revealing the potential of incorporating insights from human cognition into trajectory prediction models.
## I Introduction
In a world where the demand for intelligent vehicles is increasing rapidly [1], the concern for the safety of passengers and other road users should grow with it [2]. According to the World Health Organization, approximately 1.3 million people die each year due to road traffic accidents, and this number is expected to increase if proper measures are not taken [3]. Therefore, it should be paramount for future autonomous vehicles to improve traffic safety.
One of the most critical factors for ensuring a safe environment around intelligent vehicles is accurately predicting the future movements of surrounding traffic participants. These predictions allow for a better assessment of the environment and anticipation of potentially dangerous situations at an early stage, lowering the risk of accidents. The accurate predictions of interactive behaviours are especially important, as those comprise the most challenging situations.
Numerous methods have been used to tackle the human behaviour prediction problem [4, 5, 6], with examples ranging from reasoning-based methods to data-driven techniques. Over the last few years, data-driven approaches have shown great potential [7, 8, 9, 10, 11, 12, 13], using machine learning algorithms to learn from large amounts of data to predict the trajectories of traffic participants. One of these data-driven models is _Trajectron++_[10], which stands out due to its public code availability, the general applicability and the results it achieved on multiple datasets (including _nuScenes_[14], and _highD_[15]).
Other methods to predict future behaviour of traffic participants are based on theories from cognitive science. Instead of learning merely from data, the model is constructed to mimic human cognition. One class of such models based on the concept of evidence accumulation [16] have proved useful specifically in predicting binary decisions in traffic interactions [17, 18]. However, these models are not yet applicable to trajectory forecasting in a more general setting. Another example of using insights from cognitive science for behaviour prediction in traffic is the use of a quantum-like Bayesian model, a mathematical framework that combines elements of quantum theory and Bayesian probability theory to describe decision-making and information processing in complex and uncertain environments [19]. It is used in [20] to more accurately predict human street crossing behaviour, compared to the more data-driven model _Social-LSTM_[7]. Yet another insight from cognitive science suggests that the brain has a limited capacity for shifting attention rapidly between different tasks [21]. This is used in [22], where the application of the smoothing term to the attention module of a machine-learning prediction model - referred to as _Smooth-Attention_ - which mimics human cognition, allows for better predictive performance.
Recent work demonstrated that integrating insights from cognitive science is a promising way of improving the performance of trajectory prediction models, but such cognitively inspired models need to be explored in a much more comprehensive way. Specifically, Cao et al. [22] emphasize the need to combine smooth attention with more advanced interaction modelling network architectures. Here we aim to address this challenge by applying smooth attention to a state-of-the-art behaviour prediction model. Namely, we aim to improve upon the performance of _Trajectron++ (T++)_[10] by leveraging the method of _smooth attention_ proposed in [22]. Applying a smoothness constraint on the attention module significantly reduces changes in attention, thereby ensuring that the module's output mimics the natural human cognitive processing. We name our approach of this combined model _Smooth-Trajectron++_. We test this new model on the _nuScenes_[14] and _highD_[15] datasets.
## II Methods
In this section, we provide an overview of the various elements of the _Trajectron++_ model (_T++_) as well as the functioning of the _Smooth-Attention_ module.
### _Trajectron++_
We use _Trajectron++_[10] as the baseline model, for several reasons. Firstly, the model showed state-of-the-art performance on various public datasets [10] while including an attention module. Secondly, the authors have made the
source code publicly available, including proper documentation regarding its application. This offers the opportunity to potentially reproduce the originally reported results while minimizing deviations from the original setup used by the authors. Lastly, the model has been tested on a large-scale public autonomous driving dataset, _nuScenes_[14]. This provides evidence of the applicability of the model to real-world scenarios concerning interactions between multiple traffic participants.
_Trajectron++_ is a graph-based conditional variational autoencoder model comprising an encoder and a decoder. The encoder uses various modules representing different influences on the trajectory forecast Figure 1. First, the past location and speed of the chosen traffic agent \(i\), for which predictions must be made, is fed into the Node History Encoder, whose main component is a long-short-term memory cell (LSTM). Second, the _T++_ model also uses road map data to make its predictions more feasible and realistic. The Map Encoder module takes relevant environmental information, fed to a convolutional neural network (CNN) and then added to the feature vector. Third, the Edge Influence Encoder models the influence of neighbouring agents on a given agent in the considered past time from \(t-T\) to the current time \(t\). To encode graph edges, the method follows a two-step process. Firstly, edge information is collected from neighbouring agents belonging to the same semantic class, such as pedestrian-pedestrian and car-car semantic classes. Summation is used for feature aggregation inside these classes to handle varying numbers of neighbouring nodes while preserving count information, following [23]. The encodings of all the connections between the modelled agent and its neighbours are combined to create an "influence" representation vector, which represents the overall impact of the neighbouring nodes, which is done using an additive attention module [24]. Finally, the output is concatenated with the node's history and road map data to produce a unified node representation vector \(e_{x}\) fed to a decoder that constructs a predicted trajectory.
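To make the aggregation step concrete, the sketch below implements a generic additive (Bahdanau-style) attention over per-class edge encodings in plain numpy. The dimensions and weight matrices are made up for illustration and do not correspond to the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def additive_attention(edge_encodings, node_history, W_e, W_h, v):
    """Combine per-semantic-class edge encodings into one 'influence' vector,
    weighting each class by an additive-attention score conditioned on the
    node-history encoding."""
    scores = np.array([v @ np.tanh(W_e @ e + W_h @ node_history)
                       for e in edge_encodings])
    alpha = softmax(scores)                    # attention weights over edge classes
    return alpha @ edge_encodings, alpha       # influence vector, weights

rng = np.random.default_rng(0)
d_e, d_h, d_a, n_classes = 8, 6, 5, 2          # e.g. car-car and car-pedestrian edges
edges = rng.normal(size=(n_classes, d_e))
hist = rng.normal(size=d_h)
W_e, W_h, v = rng.normal(size=(d_a, d_e)), rng.normal(size=(d_a, d_h)), rng.normal(size=d_a)
influence, alpha = additive_attention(edges, hist, W_e, W_h, v)
print(alpha, influence.shape)
```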
### _Smooth Attention_
The _smooth attention_ approach [22] presents a novel perspective on attention modules. Unlike traditional methods, it applies attention at each time step, following [25]. Emulating human attention during deliberate tasks incorporates a smoothness constraint based on the hypothesis that attention does not frequently change over time. Previous research [21] shows that deliberate attention movements are slower due to internal limitations. This implies that attention does not frequently fluctuate during driving, as it falls under intentional movement. By incorporating the smoothness constraint, the _smooth attention_ approach enhances the attention mechanism, improving the selection of important information while disregarding less relevant input variables and aligning better with the characteristics of human attention.
## III Smooth-Trajectron++
In this section, we propose a way to apply the smooth attention module [22] specifically to the _Trajectron++_ model. To do this, we alter the Edge Influence Encoder module of _Trajectron++_ (Figure 1), as this is the part where the social interactions are modelled and the attention is applied.
Our approach, which we call _Smooth-Trajectron++_, is illustrated in Figure 2. At a high level, the original Edge Influence Encoder is expanded by applying the attention module at each time step in a similar fashion as in the _smooth attention_ model (the green highlighted box in Figure 2), where the outputs \(\alpha^{\tau}_{i,j_{ab}}\) are the attention weights that are used to rank the importance the human agent \(i\) assigns to the semantic class \(j_{ab}\) for neighbouring agents of types \(a\) and \(b\) (\(a\) and \(b\) can stand for agent types such as cars or pedestrians) at time \(\tau\). All these attention weights from every time step are then used as input to the added smoothing term in the loss function, which regularises the attention by imposing a vectorial total variation penalty:
Fig. 1: Encoder part of _Trajectron++_[10] that encodes various past input information into the representation vector \(e_{x}\).
Fig. 2: Edge Influence Encoder including Smooth-Attention (in green)
\[\mathcal{L}_{\mathrm{smooth}}(\alpha)=\sum_{\tau=t-T+1}^{t}\sum_{i=1}^{N}\sqrt{\sum_{j}\left(\alpha_{ij}^{\tau}-\alpha_{ij}^{\tau-1}\right)^{2}}. \tag{1}\]
To ensure that the attention weights are utilised during the model training process, we incorporate \(\mathcal{L}_{\mathrm{smooth}}\) into the original loss function \(\mathcal{L}_{0}\)[10]:
\[\mathcal{L}_{new}=\mathcal{L}_{0}+\beta\mathcal{L}_{smooth}. \tag{2}\]
The scaling factor \(\beta\) is introduced to fine-tune the influence of \(\mathcal{L}_{smooth}\). By adjusting \(\beta\), the model can be trained to effectively balance the contribution of the attention weights with the original loss function.
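A minimal numpy sketch of Eqs. (1)-(2) is given below; the tensor shapes and the attention weights themselves are made up for illustration (in the model they are produced by the Edge Influence Encoder, and the loss is differentiated through during training).

```python
import numpy as np

def smooth_attention_loss(alpha):
    """Vectorial total-variation penalty of Eq. (1); alpha has shape (T, N, J):
    attention weights per time step tau, modelled agent i and edge class j."""
    diffs = alpha[1:] - alpha[:-1]                   # differences over time
    return np.sqrt((diffs ** 2).sum(axis=2)).sum()   # sum over tau and i

def total_loss(base_loss, alpha, beta):
    """Eq. (2): original Trajectron++ loss plus the scaled smoothness term."""
    return base_loss + beta * smooth_attention_loss(alpha)

rng = np.random.default_rng(1)
alpha = rng.dirichlet(np.ones(3), size=(8, 4))       # T=8 steps, N=4 agents, J=3 classes
print(total_loss(1.234, alpha, beta=0.1))
```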
The extra loss term \(\mathcal{L}_{smooth}\) and associated additional calls to the attention module increase the number of computations and therefore have an effect on the training time, which is approximately 1.5 times slower compared to the original version of _Trajectron++_.
## IV Results
Our method is evaluated on two publicly available datasets: _nuScenes_[14] and _highD_[15]. In both scenarios, we trained and assessed both the original _Trajectron++_ model (which is a special case of _Smooth-Trajectron++_ for \(\beta=0\)) and the expanded model, with multiple versions of the latter, differentiated by five \(\beta\)-values ranging from \(0.01\) to \(10\). We also compared the obtained results with those originally reported for _T++_[10], as these turned out to be substantially different from the results we obtained after directly reproducing _T++_ using the available source code and the same hyperparameters mentioned in the original article for the model's training.
The experiments were performed on DelftBlue, the high-performance cluster of Delft University of Technology.
### _nuScenes dataset_
The _nuScenes_ dataset [14] consists of 1000 driving scenes in Boston and Singapore, characterised by their high traffic volumes and challenging driving situations. The driving scenes span 20 seconds each and are annotated at \(2\,\mathrm{Hz}\).
For this dataset, we evaluated the models according to three metrics: Final Displacement Error (FDE), Average Displacement Error (ADE) and Kernel Density Estimation of Negative Log Likelihood (KDE-NLL). These metrics are chosen as they were used in the original paper [10]. First, FDE indicates how far off the model's predicted location is from the actual location at the end of a predicted trajectory. Second, ADE is particularly useful for evaluating the overall accuracy of a model's trajectory predictions, as it considers the entire predicted trajectory rather than just the final location. Finally, KDE-NLL is a valuable metric for evaluating the uncertainty of a model's predictions, as it measures how well the model can capture the true distribution of the data. Following [10], we calculate the three above metrics at prediction horizons of 1, 2, 3, and 4s. The FDE and the ADE outputs comprise the most likely single trajectory prediction, using the "Most Likely" output configuration as in [10].
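For reference, ADE and FDE for a single predicted trajectory can be computed as in the minimal sketch below, with made-up positions; KDE-NLL additionally requires a kernel density estimate over sampled trajectories and is omitted here.

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error; pred, gt have shape (T, 2)."""
    d = np.linalg.norm(pred - gt, axis=1)
    return d.mean(), d[-1]

T = 8                                              # e.g. a 4 s horizon at 2 Hz
gt = np.stack([np.linspace(0, 7, T), np.zeros(T)], axis=1)
pred = gt + np.stack([np.zeros(T), np.linspace(0, 0.4, T)], axis=1)
ade, fde = ade_fde(pred, gt)
print(f"ADE={ade:.3f} m, FDE={fde:.3f} m")
```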
There are two main agent classes in _nuScenes_, pedestrians and vehicles. As their behaviour is significantly different, we evaluate the models on these classes separately.
#### Iv-A1 Pedestrian-only predictions
Leftmost numbers in each column of Table I-Table III show the results for the predicted pedestrian trajectories. The numbers in bold represent the lowest metric values per prediction horizon, compared to the reproduced T++ results (the "T++ (rep)" row), which serve as a reference for comparative analysis.
First, we found a significant gap between the reproduced T++ performance and the results reported and _T++_ paper. The FDE and ADE exhibit notable differences, especially at shorter prediction horizons. The reproduced KDE-NLL values also diverge significantly from the reported values. Several factors may contribute to this deviation; for example, a different version of _nuScenes_, a discrepancy in used and reported hyper-parameters and model settings, or possible deviations introduced during the reproduction process, such as data downloading or package installation. Future research should more closely examine reproducibility of the original results and clarify potential causes of mismatches with the original findings.
Second, the _smooth attention_ extensions of the reproduced _T++_ (\(\beta=0.01\) to \(\beta=10\)) consistently outperform the baseline reproduced version of _T++_. Tuning the scaling factor \(\beta\) influences the error. Regarding the FDE, the parameter \(\beta=0.1\) has the lowest error in all cases, except for the shared lowest error at the first prediction horizon (@1s) of \(\beta=1.0\). The higher the \(\beta\)-value, the more it resembles the "T++ (rep)" reference values. However, in Table III, the opposite seems to be happening; the "T++ (rep)" row shows the lowest values for almost all cases. An exception is the smooth version with \(\beta=0.01\) @4s, where a marginal performance increase is seen. However, in general, in this pedestrian-only case, smooth attention does not improve this metric, although the decline for the smooth versions is minimal. The smoothing term might decrease the variety of predicted trajectory distributions, affecting the average and making it less similar to the ground truth. Further research is needed to explore this hypothesis in other pedestrian-only scenarios.
#### Iii-A2 Vehicle-only predictions
Rightmost numbers in each column of Table I-Table III show the results for the predicted vehicle trajectories. Similarly to the previous pedestrian-only case, a general FDE and ADE decline is seen along the \(\beta\)-versions of the _Smooth-Trajectron++_. The ADE values of the _T++_ paper are missing in Table II, as they are not reported by the authors in the original article. The version with \(\beta=0.01\) holds the lowest value for the prediction horizon of 1 second, while \(\beta=0.1\) has a minor error for the remaining prediction horizons. Contrary to the pedestrian-only predictions in Table III, _Smooth-Trajectron++_ on the vehicle-only forecasts has better KDE-NLL numbers than the reproduced model, which indicates that the model is better able to match the original distribution of predicted trajectories with the inclusion of the smooth-attention term in the loss function. Furthermore, this can be said for all \(\beta\)-factors, while in this case, the \(\beta=0.01\) has the lowest values.
### _highD dataset_
To evaluate the models on the _highD_ dataset, we used the previously proposed benchmarking framework [26]. This framework was designed to benchmark prediction models in _gap acceptance_ scenarios, i.e. situations where drivers decide whether to enter a gap in traffic or wait for the next opportunity, such as when a car approaches an intersection and decides whether to turn left immediately or wait for a break in oncoming traffic. In the case of the highD dataset, we investigated the predictions of gap acceptance in lane-change decisions using a restricted version of highD (see [26] for details).
The framework [26] allowed us to use two methods of splitting the highD data into training and testing sets: the random split and the critical split. The first method randomly splits the data for testing and training. In contrast, the second method deliberately selects the most unusual behaviour for testing, such as accepting a very small gap or rejecting a large gap. This latter testing scenario, therefore, tests the model's ability to extrapolate to situations that lie outside its training distribution, which is generally considered to be a more difficult task [27]. Also, small accepted gaps can be regarded as safety-critical scenarios, which is especially important when developing safe and reliable prediction models.
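The two splitting strategies can be illustrated as follows; the criticality criterion used here (smallest accepted gaps and largest rejected gaps) is a simplified stand-in for the definition in [26].

```python
import numpy as np

def random_and_critical_split(gap_sizes, accepted, test_frac=0.2, seed=0):
    """Return test indices for a uniformly random split and for a 'critical'
    split that holds out the most unusual gap-acceptance behaviour."""
    gap_sizes = np.asarray(gap_sizes, dtype=float)
    accepted = np.asarray(accepted, dtype=bool)
    n_test = int(test_frac * len(gap_sizes))
    random_test = np.random.default_rng(seed).choice(len(gap_sizes), n_test, replace=False)
    acc, rej = np.where(accepted)[0], np.where(~accepted)[0]
    crit_acc = acc[np.argsort(gap_sizes[acc])][: n_test // 2]                   # smallest accepted gaps
    crit_rej = rej[np.argsort(gap_sizes[rej])[::-1]][: n_test - n_test // 2]    # largest rejected gaps
    return random_test, np.concatenate([crit_acc, crit_rej])

gaps = np.array([1.5, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
acc = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)
print(random_and_critical_split(gaps, acc))
```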
Furthermore, the framework allowed us to test the models with varying numbers of input time steps (\(n_{I}\)) to study how the tested models depend on the input length; we used \(n_{I}=2\) and \(n_{I}=10\).
In addition to the metrics used for the _nuScenes_ dataset, the gap acceptance benchmark includes an additional metric, the Area under the Receiver-Operator Curve (AUC), used to evaluate the performance of binary classification models (here between accepted and rejected gaps).
First, we analyse the performance of the models at predicting lane-change decisions at initial gaps, i.e. at the start
Fig. 3: The performance of _Smooth-Trajectron++_ and _T++_ on the _highD_ dataset: \(\beta_{1-5}\) refer to the \(\beta\)-values of 0.01, 0.1, 0.5, 1.0 and 10 respectively
of the interaction. Here, only in the case of \(\beta=0.5\) is there a notable increase in AUC for the random split compared to _T++_ in both \(n_{I}\)-instances. For the critical split, almost all AUC values are lower than those of _T++_, except for \(\beta=0.1\) and \(\beta=0.5\) at \(n_{I}=10\), where they are slightly higher. Generally, the \(\beta\)-term does not seem to increase the performance of the base model.
Second, we investigated models' predictions of lane changes in _highD_ at the fixed-size gaps, as defined in [26]. Looking at the random splits, again for \(\beta=0.5\) there is an increase in AUC for both \(n_{I}\)-situations. The changes for the other \(\beta\)-versions are not consistently different when compared to _T++_, showing minor fluctuations towards slightly better or slightly worse performance. Concerning the critical split, all \(\beta\)-versions but \(\beta=10\) perform better than the base model, where \(\beta=10\) performs very similarly to _T++_. Also, the difference between \(n_{I}=2\) and \(n_{I}=10\) is logical, as the latter consistently has a higher AUC than the former.
For both the FDE and ADE, all the \(\beta\)-versions outperform the _T++_ model at \(n_{I}=10\) on the random split. However, this seems to be due to one extremely high value of one of the random splits of _T++_ (each random split consists of three sub-splits, which are averaged to minimize the effect of randomness). This could be an outlier, caused by an error in the training process. At \(n_{I}=2\), the \(\beta\)-values under-perform compared to _T++_ for the random split, indicating no significant improvement. At the critical split, only for \(\beta=0.1\) and \(\beta=10\) are both ADE and FDE values lower at \(n_{I}=10\). In general, there is no clear improvement regarding these metrics across the various \(\beta\)-values.
Overall, in _highD_ lane-change prediction experiments, there are instances of both better and worse performance of _Smooth-Trajectron++_ compared to T++, indicating no consistent benefits of adding smooth attention to T++. This is in contrast to the nuScenes results, which may stem from fundamental differences in the datasets. While the _nuScenes_ dataset encompasses a wide range of data with cars and pedestrians, _highD_ mainly focuses on cars. The application of the smoothing term \(\beta\) in the _Smooth-Trajectron++_ model relies on the attention module that compares different semantic classes of traffic participants. In datasets where one class dominates, the smoothing term may not yield tangible improvements.
## V Conclusion
This paper proposed _Smooth-Trajectron++_, a trajectory prediction model based on an existing state-of-the-art model _Trajectron++_[10] in which we incorporated a cognitively-inspired smooth attention module [22]. We demonstrated that our smooth-attention version of T++ can achieve increased performance on the large-scale dataset nuScenes, but does not result in tangible improvements on the highD dataset. This suggests that the smooth attention approach seems to be more suitable for large-scale multi-agent datasets with multiple agent types rather than on datasets with few traffic agents of mostly the same type. Hence, the concept of _smooth attention_ might be better applied to models where the attention module is implemented over individual agents and not semantic classes. Nevertheless, our results further strengthen previous work [18, 20, 22], indicating that including cognitive insights can allow better predictions of human behavior in traffic.
|
2309.10589 | Unbiased Parameter Estimation for Partially Observed Diffusions | In this article we consider the estimation of static parameters for partially
observed diffusion process with discrete-time observations over a fixed time
interval. In particular, we assume that one must time-discretize the partially
observed diffusion process and work with the model with bias and consider
maximizing the resulting log-likelihood. Using a novel double randomization
scheme, based upon Markovian stochastic approximation we develop a new method
to unbiasedly estimate the static parameters, that is, to obtain the maximum
likelihood estimator with no time discretization bias. Under assumptions we
prove that our estimator is unbiased and investigate the method in several
numerical examples, showing that it can empirically out-perform existing
unbiased methodology. | Elsiddig Awadelkarim, Ajay Jasra, Hamza Ruzayqat | 2023-09-19T13:04:28Z | http://arxiv.org/abs/2309.10589v1 | ###### Abstract
In this article we consider the estimation of static parameters for partially observed diffusion process with discrete-time observations over a fixed time interval. In particular, we assume that one must time-discretize the partially observed diffusion process and work with the model with bias and consider maximizing the resulting log-likelihood. Using a novel double randomization scheme, based upon Markovian stochastic approximation we develop a new method to unbiasedly estimate the static parameters, that is, to obtain the maximum likelihood estimator with no time discretization bias. Under assumptions we prove that our estimator is unbiased and investigate the method in several numerical examples, showing that it can empirically out-perform existing unbiased methodology.
**Keywords**: Unbiased estimation, Markovian stochastic approximation, Parameter estimation, Diffusion processes.
**AMS subject classifications**: 60J22, 62M05, 65C40, 62M20
**Corresponding author**: Hamza Ruzayqat. E-mail: [email protected]
## 1 Introduction
Let \((\mathsf{X},\mathcal{X})\) be a measurable space, \(\Theta\subseteq\mathbb{R}^{d_{\theta}}\) and introduce the family of probability measures \(\{\pi_{\theta}\}_{\theta\in\Theta}\), that is, for each \(\theta\in\Theta\), \(\pi_{\theta}\) is a probability measure on \((\mathsf{X},\mathcal{X})\). Let \(H:\Theta\times\mathsf{X}\to\mathbb{R}^{d_{h}}\) be a measurable mapping such that for each \(\theta\in\Theta\), \(h(\theta):=\int_{\mathsf{X}}H(\theta,x)\pi_{\theta}(dx)\) is finite. The objective is to solve \(h(\theta)=0\). This problem appears routinely in many applications such as maximum likelihood estimation of parameters associated to partially observed diffusion processes, where \(\pi_{\theta}\) represents the posterior measure of the partially observed process for a fixed \(\theta\) and \(H\) is an expression associated to the gradient of the log-likelihood obtained by Fisher's identity; more details are given later on - see also [14]. Other applications include parameter estimation for Bayesian inverse problems (e.g. [15, 17]) although this is not the focus of this article. The afore-mentioned problems have a wide variety of applications in statistics, finance and engineering; see [6, 12] for example.
We assume that in practice, one requires a discretization (approximation) associated to the probability \(\pi_{\theta}\) that is controlled by a scalar parameter \(l\in\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). We denote such a probability as \(\pi_{\theta}^{l}\) on a measurable space \((\mathsf{X}^{l},\mathcal{X}^{l})\) (which may be different from \((\mathsf{X},\mathcal{X})\) but need not be) and suppose that there is a corresponding functional \(H_{l}:\Theta\times\mathsf{X}^{l}\to\mathbb{R}^{d_{h}}\), writing \(h_{l}(\theta):=\int_{\mathsf{X}^{l}}H_{l}(\theta,x)\pi_{\theta}^{l}(dx)\). We will assume explicitly that \(\lim_{l\to\infty}h_{l}(\theta)=h(\theta)\) for every \(\theta\in\Theta\) and again this is made precise in the next section. In the context of diffusion processes, which is the example considered in this article, \(l\) will control the level of time discretization and this example will be detailed in the next section. The objective of this article is still to solve \(h(\theta)=0\), despite the fact that one can only work with \(h_{l}(\theta)\). The solution we obtain is unbiased, in the sense that the expectation of the estimator is equal to the (assumed unique) solution of \(h(\theta)=0\).
One of the main methods that can be used to solve \(h_{l}(\theta)=0\) is based upon the well-known stochastic approximation (SA) method and its many variants; see for instance [3, 4, 9, 11, 22]. This is an iterative scheme that updates parameter estimates using often (but not-necessarily) unbiased estimates of \(h_{l}(\theta)\), which are independent at each update. The main complication in our context (partially observed diffusions) is that unbiased estimation w.r.t. \(\pi_{\theta}^{l}\) is often very challenging as one is working with a discrete-time state-space model. The posteriors \(\pi_{\theta}^{l}\) are notoriously challenging to sample from and there have been a large number of methods designed to do exactly this; see for instance [1, 13]. Such methods are based upon Markov chain Monte Carlo (MCMC) and at least in the context of [1], SA has been combined with using MCMC, to yield a Markovian update in the SA method to yield a type of Markovian SA (MSA) method. This can be far more efficient
than obtaining an independent (and unbiased) estimator of \(h_{l}(\theta)\) as it typically needs only one or few updates from a single Markov kernel; details are given in the next section. Note that the MSA method will only give solutions of \(h_{l}(\theta)=0\) and so the resulting solution is likely to have a bias.
In this article, based upon the randomization methods in [21], extended to so-called doubly randomized schemes in [14, 15, 17, 18, 23], we develop a new unbiased parameter estimation method for partially observed diffusion processes that uses the MSA method that is mentioned above. By unbiased, we mean a method that can obtain the solution of \(h(\theta)=0\), despite the fact that one only works with \(\pi_{\theta}^{l}\) and \(h_{l}(\theta)\). The approach consists, roughly, of randomizing over the level of discretization \(l\) and then running an MSA method at two consecutive levels of time discretization for a random number of iterations of an MSA algorithm. Using this approach one can show, under assumptions, that one can solve \(h(\theta)=0\). Key to this methodology is being able to run couplings of MCMC kernels that target \(\pi_{\theta}^{l}\) and \(\pi_{\theta}^{l-1}\), where the couplings are 'suitably good'. Such methods have been developed in [14] and are utilized here. We note that the work here is not the first to provide unbiased parameter estimates associated to partially observed diffusions with discrete-time observations. Some of the first methods are based upon exact simulations of diffusions (e.g. [5]), which are rather elegant when they can be applied; we assume that this is not possible, which is the case for a reasonably wide class of diffusion processes. The approach in [14] is more general than that of [5], using a methodology to unbiasedly estimate \(h(\theta)\) at each time step of the SA method and returning unbiased parameter estimates. Note that the estimator of [14] is unbiased in terms of discretization error, but has a bias from the SA iterations; this is a bias that we will remove. The method in [14] is in contrast to the approach in this article, where we focus explicitly on unbiasedly estimating the parameters themselves. The advantage of the latter approach is that often the cost-per-iteration of the MSA method will be less than having to unbiasedly estimate \(h(\theta)\) and hence one has a faster computational algorithm for parameter estimation. It should be noted, however, that unbiased estimation of the gradient of the log-likelihood (i.e. \(h(\theta)\)) is of independent interest in itself and can be used inside MCMC methods and so on, whereas our approach is limited to parameter estimation.
To summarize the contributions of this article are as follows:
1. To provide a new method for unbiased estimation of static parameters of partially observed diffusions associated to discrete-time observations over a fixed interval.
2. To prove that (a version of) the estimator is indeed unbiased, under mathematical assumptions.
3. To implement the method numerically on several examples and to establish faster empirical performance for parameter estimation than that in [14].
We note that the analysis in 2. is non-trivial and relies on proving that the MSA method with reprojection (which is detailed later on) converges in our context. We use the main convergence theorem of [3] which requires several conditions to be verified, associated to drift and minorization conditions of the Markov kernels that are used (amongst other conditions). This is non-trivial as we use a type conditional particle filter [1] that is in turn coupled. We do not prove that our estimator has finite variance although we conjecture exactly how that may be proved.
This article is structured as follows. In Section 2 we detail the mathematical problem and our methodology. Section 3 outlines the methodology in the context of diffusions and contains within it Section 3.5 which gives our main theoretical result on the unbiasedness of the parameter estimates in the case of partially observed diffusion processes. In Section 4 we provide numerical simulations which investigate our methodology. Appendix A houses all of our technical proofs.
## 2 Methodology
### Problem Formulation
Recall that \(h(\theta):=\int_{\mathsf{X}}H(\theta,x)\pi_{\theta}(dx)\). The objective is to solve, \(h(\theta)=0\), which can be achieved using Markovian stochastic approximation (e.g. [4]) as we will now describe. Let \(K_{\theta}:\mathsf{X}\times\mathcal{X}\to[0,1]\) be a Markov kernel, such that for any \(\theta\in\Theta\), it admits \(\pi_{\theta}\) as an invariant measure and, for each \(\theta\in\Theta\), let \(\nu_{\theta}\) be a probability
measure on \((\mathsf{X},\mathcal{X})\). Set \(\theta_{0}\in\Theta\) and generate \(X_{0}\sim\nu_{\theta_{0}}^{l}\), \(n=1\).
```
1: Set \(\theta_{0}^{l}\in\Theta\) and generate \(X_{0}\sim\nu_{\theta_{0}}^{l}\), \(n=1\).
2: Sample \(X_{n}|(\theta_{0}^{l},x_{0}),\ldots,(\theta_{n-1}^{l},x_{n-1})\) from \(K_{\theta_{n-1}^{l},l}(x_{n-1},\cdot)\).
3: Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\]
Set \(n=n+1\) and go to the start of \(2..\).
```
**Algorithm 1** Markovian Stochastic Approximation
The Markovian stochastic approximation iterates at time \(\theta\in\mathbb{N}\), as follows:
1. Sample \(X_{n}|(\theta_{0},x_{0}),\ldots,(\theta_{n-1},x_{n-1})\) from \(K_{\theta_{n-1}}(x_{n-1},\cdot)\).
2. Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\]
3. Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\]
Set \(n=n+1\) and go to the start of \(2..\).
The Markovian stochastic approximation iterates at time \(\theta\in\mathbb{N}\), as follows:
1. Sample \(X_{n}|(\theta_{0},x_{0}),\ldots,(\theta_{n-1},x_{n-1})\) from \(K_{\theta_{n-1}}(x_{n-1},\cdot)\).
2. Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\]
3. Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\]
4. Set \(n=n+1\) and go to the start of \(2..\). ```
**Algorithm 2** Markovian Stochastic Approximation
The update:
\[\theta_{n}^{l}=\theta_{n-1}^{l}-\gamma_{n}H(\theta_{n-1},X_{n})\]
where \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) is a sequence of step-sizes, \(\gamma_{n}>0\), \(\sum_{n\in\mathbb{N}}\gamma_{n}=\infty\), \(\sum_{n\in\mathbb{N}}\gamma_{n}^{2}<\infty\).
Typically, to prove convergence one often needs reprojections and ergodicity conditions on \(K_{\theta}\), but for now we shall ignore this to simplify the exposition.
We now assume that working directly with \(\pi_{\theta}\) is not possible and one can only work with a family of approximations \(\{\pi_{\theta}^{l}\}_{l\in\mathbb{N}_{0}}\), on a measurable space \((\mathsf{X}^{l},\mathcal{X}^{l})\), \(l\in\mathbb{N}_{0}\). This approximation is such that as \(l\to\infty\) (\(\mathsf{X}^{l},\mathcal{X}^{l}\)) and \((\mathsf{X},\mathcal{X})\) will be identical. Now for any \(\pi_{\theta}^{l}-\)integrable \(\varphi_{\theta}^{l}:\mathsf{X}^{l}\to\mathbb{R}\)
\[\pi_{\theta}^{l}(\varphi^{l}):=\int_{\mathsf{X}^{l}}\varphi_{\theta}^{l}(x^{l })\pi_{\theta}^{l}(dx),\]
we will assume that there exists a \(\varphi_{\theta}:\mathsf{X}\to\mathbb{R}\) that is \(\pi_{\theta}-\)integrable and
\[\lim_{l\to\infty}\pi_{\theta}^{l}(\varphi^{l})=\pi_{\theta}(\varphi),\]
and is such that \(|\pi_{\theta}^{l}(\varphi^{l})-\pi_{\theta}(\varphi)|\leq|\pi_{\theta}^{l-1}( \varphi^{l-1})-\pi_{\theta}(\varphi)|\). We note that the context here exists in partially observed diffusions and Bayesian inverse problems; see [14, 15] and the former is described in detail in Section 3.
A Markovian stochastic approximation (MSA) scheme would work as follows. Let \(K_{\theta,l}:\mathsf{X}^{l}\times\mathcal{X}^{l}\to[0,1]\) be a Markov kernel, such that for any \(\theta\in\Theta\), it admits \(\pi_{\theta}^{l}\) as an invariant measure and, for each \(\theta\in\Theta\), let \(\nu_{\theta}^{l}\) be a probability measure on \((\mathsf{X},\mathcal{X})\). Then one can run Algorithm 1. In Algorithm 1 we do not specify any stopping rule, although of course in practice one must specify one. We note that, assuming that the zero of the functional \(h_{l}(\theta)=\int_{\mathsf{X}^{l}}H_{l}(\theta,x)\pi_{\theta}^{l}(dx)\) exists, one expects it to be different to the zero of \(h(\theta)\).
### Debiasing Markovian Stochastic Approximation
Let \(\theta_{\star}^{l}\) be the solution of \(h_{l}(\theta)\) and \(\theta_{\star}\) be the solution of \(h(\theta)=0\) and suppose that \(\lim_{l\to\infty}\theta_{\star}^{l}=\theta_{\star}\). We shall assume that these solutions are unique, although this is mainly for simplicity and we need only assume that there exist a collection of solutions. Let \(\mathbb{P}_{L}(l)\) be a positive probability on \(\mathbb{N}_{0}\) then, we know from [21] if one samples \(l\) from \(\mathbb{P}_{L}\) and then computes, independently of \(L\), \(\theta_{\star}^{l}-\theta_{\star}^{l-1}\) (with \(\theta_{\star}^{-1}:=0\)), we have that
\[\widehat{\theta}_{\star}=\frac{\theta_{\star}^{l}-\theta_{\star}^{l-1}}{\mathbb{P }_{L}(l)}\]
is an unbiased estimator of \(\theta_{\star}\). The estimator returned by the MSA procedure after \(N\) steps, \(\theta_{N}^{l}\) is not equal in expectation to \(\theta_{\star}^{l}\), which would be a requirement for unbiasedness (see e.g. [24]), but under conditions, one would have
\[\lim_{N\to\infty}\mathbb{E}[\theta_{N}^{l}]=\theta_{\star}^{l}.\]
This latter result, suggests the following double randomization scheme used in [18] and is now detailed in the sequel.
We begin by assuming that one can find a coupling, \(\check{K}_{\theta,\theta^{\prime},l,l-1}\), of \((K_{\theta,l},K_{\theta^{\prime},l-1})\) for any \((\theta,\theta^{\prime})\in\Theta^{2}\) and \(l\in\mathbb{N}\); examples in the context of partially observed diffusions and Bayesian inverse problems can be found in [14, 15] and we describe a particular scheme in Section 3. Let \(\check{\nu}_{\theta,\theta^{\prime},l,l-1}\) be any coupling of \((\nu_{\theta}^{l},\nu_{\theta^{\prime}}^{l-1})\). Then let \(\{N_{p}\}_{p\in\mathbb{N}_{0}}\) be any sequence of increasing natural numbers, converging to infinity. Let \(\mathbb{P}_{P}\) be any positive probability on \(\mathbb{N}_{0}\). Then Algorithm 2 will, under assumptions, give unbiased estimates of \(\theta_{*}\). Algorithm 2 can be run \(M-\)times in parallel and averaged to reduce the variance of the estimator, if it exists. We remark that Algorithm 2 is completely generic, in that at this stage we have not explicitly specified any model \(\pi_{\theta}\) and \(\pi_{\theta}^{l}\), however, of course the details on \(K_{\theta,l}\) and \(\check{K}_{\theta,\theta^{\prime},l,l-1}\) are rather important and this needs to be explained in a particular context.
## 3 Application to Partially Observed Diffusions
### Model
Consider the diffusion process on the filtered probability space \((\Omega,\mathscr{F},\{\mathscr{F}_{t}\}_{t\geq 0},\mathbb{P}_{\theta})\), \(\theta\in\Theta\subseteq\mathbb{R}^{d_{\theta}}\)
\[dX_{t}=a_{\theta}(X_{t})dt+\sigma(X_{t})dW_{t}\qquad X_{0}\sim\mu_{\theta} \tag{3.1}\]
where \(X_{t}\in\mathbb{R}^{d}\), \(\mu_{\theta}\) a probability measure on \((\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))\) with Lebesgue density denoted \(\mu_{\theta}\) also, \(a:\Theta\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), \(\{W_{t}\}_{t\geq 0}\) is a standard \(d-\)dimensional Brownian motion and \(\sigma:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d\times d}\). Denote \(a_{\theta}^{j}\) as the \(j^{th}-\)component of \(a_{\theta}\), \(j\in\{1,\ldots,d\}\) and \(\sigma^{j,k}\) as the \((j,k)^{th}-\)component of \(\sigma\), \((j,k)\in\{1,\ldots,d\}^{2}\). We assume the following, referred to as (D1) from herein:
The coefficients, for any fixed \(\theta\in\Theta\), \(a_{\theta}^{j}\in\mathcal{C}^{2}(\mathbb{R}^{d})\) (twice continuously differentiable real-valued functions on \(\mathbb{R}^{d}\)) and \(\sigma^{j,k}\in\mathcal{C}^{2}(\mathbb{R}^{d})\), for \((j,k)\in\{1,\ldots,d\}\). Also for any fixed \(x\in\mathbb{R}^{d}\), \(a_{\theta}^{j}(x)\in\mathcal{C}(\Theta)\), \(j\in\{1,\ldots,d\}\). In addition, \(a_{\theta}\) and \(\sigma\) satisfy
* **uniform ellipticity**: \(\Sigma(x):=\sigma(x)\sigma(x)^{*}\) is uniformly positive definite over \(x\in\mathbb{R}^{d}\);
* **globally Lipschitz**: for any fixed \(\theta\in\Theta\), there exist a \(C<+\infty\) such that \(|a_{\theta}^{j}(x)-a_{\theta}^{j}(x^{\prime})|+|\sigma^{j,k}(x)-\sigma^{j,k}( x^{\prime})|\leq C\|x-x^{\prime}\|_{2}\) (\(\|\cdot\|_{2}\) is the \(L_{2}-\)norm) for all \((x,x^{\prime})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\), \((j,k)\in\{1,\ldots,d\}^{2}\).
The assumption (D1) is not referred to again and strictly is not a requirement to apply any of the methodology that is to be discussed. However, some of the algorithms (e.g. Algorithm 5) to be described, do not work well without this assumption, which is why it is stated.
By using the Girsanov theorem, for any \(\varphi:\Theta\times\mathbb{R}^{nd}\rightarrow\mathbb{R}\), \(\varphi_{\theta}(x_{t_{1}},\ldots,x_{t_{n}})\), for \(0\leq t_{1}<\cdots<t_{n}=T>0\) and \(\mathbb{P}_{\theta}-\)integrable,
\[\mathbb{E}_{\theta}[\varphi_{\theta}(X_{t_{1}},\ldots,X_{t_{n}})]=\mathbb{E}_ {\mathbb{Q}\otimes\mathrm{Leb}}\left[\mu_{\theta}(X_{0})\varphi_{\theta}(X_{t _{1}},\ldots,X_{t_{n}})\frac{d\mathbb{P}_{\theta}}{d\mathbb{Q}}\right]\]
where \(\mathbb{E}_{\theta}\) denotes expectations w.r.t. \(\mathbb{P}_{\theta}\),
\[\frac{d\mathbb{P}_{\theta}}{d\mathbb{Q}}=\exp\Big{\{}-\frac{1}{2}\int_{0}^{T} \|b_{\theta}(X_{s})\|_{2}^{2}ds+\int_{0}^{T}b_{\theta}(X_{s})^{*}dW_{s}\Big{\}}\]
\(\mathbb{Q}\otimes\mathrm{Leb}\) is the product measure of a probability \(\mathbb{Q}\) on the path \(\{X_{t}\}_{t>0}\) and the Lebesgue measure on \(X_{0}\), \(b_{\theta}(x)=\Sigma(x)^{-1}\sigma(x)^{*}a_{\theta}(x)\) is a \(d-\)vector and under \(\mathbb{Q}\), \(\{X_{t}\}_{t>0}\) solves \(dX_{t}=\sigma(X_{t})dW_{t}\) where \(\{W_{t}\}_{t\geq 0}\) is a standard \(d-\)dimensional Brownian motion under \(\mathbb{Q}\) also. It is assumed that \(a_{\theta},\varphi_{\theta}\) and \(\sigma\) are such that \(\mu_{\theta}(X_{0})\varphi_{\theta}(X_{t_{1}},\ldots,X_{t_{n}})\frac{d\mathbb{P }_{\theta}}{d\mathbb{Q}}\) is \(\mathbb{Q}-\)integrable for each fixed \(T\geq 0\). As it will be useful below, we write
\[\frac{d\mathbb{P}_{\theta}}{d\mathbb{Q}}=\exp\Big{\{}-\frac{1}{2}\int_{0}^{T} \|b_{\theta}(X_{s})\|_{2}^{2}ds+\int_{0}^{T}b_{\theta}(X_{s})^{*}\Sigma(X_{s}) ^{-1}\sigma(X_{s})^{*}dX_{s}\Big{\}}.\]
```
1:Sample \(l\) from \(\mathbb{P}_{L}\) and \(p\) from \(\mathbb{P}_{P}\).
2:If \(l=0\) perform the following: * Set \(\theta_{0}^{l}\in\Theta\), \(n=1\) and generate \(X_{0}\sim\nu_{\theta_{0}}^{l}\). * Sample \(X_{n}|(\theta_{0}^{l},x_{0}),\ldots,(\theta_{n-1}^{l},x_{n-1})\) from \(K_{\theta_{n-1}^{l},l}(x_{n-1},\cdot)\). * Update: \[\theta_{n}^{l}=\theta_{n-1}^{l}-\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}).\] If \(n=N_{p}\) go to the next bullet point, otherwise go back to the second bullet point. * If \(p=0\) return \[\widehat{\theta}_{\star}=\frac{\theta_{N_{p}}^{l}}{\mathbb{P}_{p}(p)\mathbb{P} _{L}(l)},\] otherwise return \[\widehat{\theta}_{\star}=\frac{\theta_{N_{p}}^{l}-\theta_{N_{p-1}}^{l}}{ \mathbb{P}_{p}(p)\mathbb{P}_{L}(l)}.\]
3:Otherwise perform the following: * Set \(\theta_{0}^{l}=\theta_{0}^{l-1}\in\Theta\), \(n=1\) and generate \((X_{0}^{l},X_{0}^{l-1})\sim\tilde{\nu}_{\theta_{0}^{l},\theta_{0}^{l-1},l,l-1}^ {l}\). * Sample \(X_{n}^{l},X_{n}^{l-1}\Big{|}(\theta_{0}^{l},\theta_{0}^{l-1},x_{0}^{l},x_{0}^ {l-1}),\ldots,(\theta_{n-1}^{l},\theta_{n-1}^{l-1},x_{n-1}^{l},x_{n-1}^{l-1})\) from \(\tilde{K}_{\theta_{n-1}^{l},\theta_{n-1}^{l-1},l,l-1}\left((x_{n-1}^{l},x_{n-1 }^{l-1}),\cdot,\right)\). * Update: \[\theta_{n}^{l} = \theta_{n-1}^{l}+\gamma_{n}H_{l}(\theta_{n-1}^{l},X_{n}^{l}),\] \[\theta_{n}^{l-1} = \theta_{n-1}^{l-1}+\gamma_{n}H_{l-1}(\theta_{n-1}^{l-1},X_{n}^{l- 1}).\] If \(n=N_{p}\) go to the next bullet point, otherwise go back to the second bullet point. * If \(p=0\) return \[\widehat{\theta}_{\star}=\frac{\theta_{N_{p}}^{l}-\theta_{N_{p}}^{l-1}-\theta_ {N_{p}}^{l}}{\mathbb{P}_{p}(p)\mathbb{P}_{L}(l)},\] otherwise return \[\widehat{\theta}_{\star}=\frac{\theta_{N_{p}}^{l}-\theta_{N_{p}}^{l-1}-\{ \theta_{N_{p-1}}^{l}-\theta_{N_{p-1}}^{l-1}\}}{\mathbb{P}_{p}(p)\mathbb{P}_{L }(l)}.\]
```
**Algorithm 2** Unbiased Markovian Stochastic Approximation (UMSA)
Now if \(\mu_{\theta}\varphi_{\theta}\) is differentiable w.r.t. \(\theta\), one has, under very minor regularity conditions that
\[\nabla_{\theta}\log\left\{\mathbb{E}_{\theta}[\varphi_{\theta}(X_{t_{1}},\ldots, X_{t_{n}})]\right\}=\mathbb{E}_{\overline{\theta}_{\theta}}\left[\nabla_{\theta} \log\left(\mu_{\theta}(X_{0})\varphi_{\theta}(X_{t_{1}},\ldots,X_{t_{n}}) \frac{d\mathbb{P}_{\theta}}{d\mathbb{Q}}\right)\right]\]
where \(\overline{\mathbb{E}}_{\theta}=\varphi_{\theta}\mathbb{P}_{\theta}/\mathbb{P}_ {\theta}(\varphi_{\theta})\) and \(\mathbb{P}_{\theta}(\varphi_{\theta})=\mathbb{E}_{\theta}[\varphi_{\theta}(X _{t_{1}},\ldots,X_{t_{n}})]\). In this notation, the previously mentioned \(\pi_{\theta}\) is the probability measure \(\overline{\mathbb{P}}_{\theta}\), with \(\varphi_{\theta}\) to be determined below.
Consider a sequence of random variables \((Y_{1},\ldots,Y_{T})\), where \(Y_{p}\in\mathbb{R}^{d_{y}}\), that are assumed to have joint Lebesgue density (\(T\in\mathbb{N}\) is assumed from herein)
\[p_{\theta}(y_{1},\ldots,y_{T}|\{x_{s}\}_{0\leq s\leq T})=\prod_{k=1}^{T}g_{ \theta}(x_{k},y_{k})\]
where, \(g:\Theta\times\mathbb{R}^{d}\times\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}^{+}\), for any \((\theta,x)\in\Theta\times\mathbb{R}^{d}\), \(\int_{\mathbb{R}^{d_{y}}}g_{\theta}(x,y)dy=1\) and \(dy\) is the Lebesgue measure. If one considers realizations of the random variables, \((Y_{1},\ldots,Y_{T})\), then we have the state-space model with marginal likelihood:
\[p_{\theta}(y_{1},\ldots,y_{T})=\mathbb{E}_{\theta}\left[\prod_{k=1}^{T}g_{ \theta}(X_{k},y_{k})\right].\]
Now, following the construction that is developed above, one would have the function \(\varphi_{\theta}(x_{1},\ldots,x_{T})=\prod_{k=1}^{T}g_{\theta}(x_{k},y_{k})\) and the expression:
\[H(\theta,\{X_{s}\}_{s\in[0,T]})=\nabla_{\theta}\log\left(\mu_{\theta}(x_{0}) \varphi_{\theta}(X_{1},\ldots,X_{T})\frac{d\mathbb{P}_{\theta}}{d\mathbb{Q}} \right)=\]
\[\nabla_{\theta}\log\{\mu_{\theta}(X_{0})\}+\sum_{k=1}^{n}\nabla_{\theta}\log \{g_{\theta}(X_{k},y_{k})\}-\frac{1}{2}\int_{0}^{T}\nabla_{\theta}\|b_{\theta }(X_{s})\|_{2}^{2}ds+\int_{0}^{T}\nabla_{\theta}b_{\theta}(X_{s})^{*}\Sigma(X_ {s})^{-1}\sigma(X_{s})^{*}dX_{s}.\]
#### 3.1.1 Time Discretization
Let \(l\in\mathbb{N}_{0}\) be given and consider an Euler discretization of step-size \(\Delta_{l}=2^{-l}\), \(k\in\{1,2,\ldots,\Delta_{l}^{-1}T\}\):
\[\widetilde{X}_{k\Delta_{l}} = \widetilde{X}_{(k-1)\Delta_{l}}+a_{\theta}(\widetilde{X}_{(k-1) \Delta_{l}})\Delta_{l}+\sigma(\widetilde{X}_{(k-1)\Delta_{l}})[W_{k\Delta_{l}} -W_{(k-1)\Delta_{l}}]. \tag{3.2}\]
One can also define the function \(H_{l}:\Theta\times(\mathbb{R}^{d})^{\Delta_{l}^{-1}T+1}\rightarrow\mathbb{R}^ {d_{\theta}}\)
\[H_{l}(\theta,\widetilde{x}_{0},\ldots,\widetilde{x}_{T})=\sum_{k=0}^{\Delta_{ l}^{-1}T-1}\Big{\{}-\frac{\Delta_{l}}{2}\nabla_{\theta}\|b_{\theta}( \widetilde{x}_{k\Delta_{l}})\|_{2}^{2}+\nabla_{\theta}b_{\theta}(\widetilde{x }_{k\Delta_{l}})^{*}\Sigma(\widetilde{x}_{k\Delta_{l}})^{-1}\sigma(\widetilde {x}_{k\Delta_{l}})^{*}[\widetilde{x}_{(k+1)\Delta_{l}}-\widetilde{x}_{k\Delta_{ l}}]\Big{\}}+ \tag{3.3}\]
\[\sum_{k=1}^{T}\nabla_{\theta}\log\{g_{\theta}(\widetilde{x}_{k},y_{k})\}+ \nabla_{\theta}\log\{\mu_{\theta}(\widetilde{x}_{0})\}.\]
Then we have
\[h_{l}(\theta)=\nabla_{\theta}\log(p_{\theta}^{l}(y_{1},\ldots,y_{T})):=\frac{ \mathbb{E}_{\theta}[\varphi_{\theta}(\widetilde{X}_{1},\ldots,\widetilde{X}_{T })H_{l}(\theta,\widetilde{X}_{0},\ldots,\widetilde{X}_{T})]}{\mathbb{E}_{ \theta}[\varphi_{\theta}(\widetilde{X}_{t_{1}},\ldots,\widetilde{X}_{t_{n}})]}.\]
The probability measure \(\pi_{\theta}^{l}\) is simply the smoother of path \(\widetilde{x}_{0},\ldots,\widetilde{x}_{T}\) given the observations \(y_{1},\ldots,y_{T}\). To be more precise, set \(U_{0}=X_{0}\) and denote by \(U_{k}\) the discretized path \(\widetilde{X}_{k-1+\Delta_{l}},\ldots,\widetilde{X}_{k}\), \(k\in\{1,\ldots,T\}\) and denote the transition kernel of \(U_{k}\) given \(U_{k-1}\) (as induced by (3.2)) as \(M_{\theta,l}\) then one has that:
\[\pi_{\theta}^{l}\left(d(u_{0},\ldots,u_{T})\right)=\frac{\prod_{k=1}^{T}g_{ \theta}(\tilde{x}_{k},y_{k})\mu_{\theta}(u_{0})du_{0}\prod_{k=1}^{T}M_{\theta,l }(u_{k-1},du_{k})}{\int_{(\mathbb{R}^{d})^{\Delta_{l}^{-1}T+1}}\prod_{k=1}^{T}g_ {\theta}(\tilde{x}_{k},y_{k})\mu_{\theta}(u_{0})du_{0}\prod_{k=1}^{T}M_{\theta,l }(u_{k-1},du_{k})} \tag{3.4}\]
where we have omitted dependence on the data in \(\pi_{\theta}^{l}\).
### Conditional Particle Filter
To construct our algorithm for partially observed diffusions, we begin by describing the conditional particle filter (see [1]) which is a Markov kernel of invariant measure \(\pi_{\theta}^{l}\). The simulation of the kernel is described in Algorithm 3.
1. Input \(U_{0}^{\prime},\ldots,U_{T}^{\prime}\). Set \(k=1\), sample \(U_{0}^{i}\) independently from \(\mu_{\theta}\), \(a_{0}^{i}=i\) for \(i\in\{1,\ldots,N-1\}\).
2. Sampling: for \(i\in\{1,\ldots,N-1\}\) sample \(U_{k}^{i}|U_{k-1}^{a_{k-1}^{i}}\) using the Markov kernel \(M_{\theta,l}\). Set \(U_{k}^{N}=U_{k}^{\prime}\) and for \(i\in\{1,\ldots,N-1\}\), \((U_{0}^{i},\ldots,U_{k}^{i})=(U_{0}^{a_{k-1}^{i}},\ldots,U_{k-1}^{a_{k-1}^{i}}, U_{k}^{i})\). If \(k=T\) go to \(4..\).
3. Resampling: Construct the probability mass function on \(\{1,\ldots,N\}\): \[r_{1}^{i}=\frac{g_{\theta}(\widetilde{x}_{k}^{i},y_{k})}{\sum_{j=1}^{N}g_{ \theta}(\widetilde{x}_{k}^{j},y_{k})}.\] For \(i\in\{1,\ldots,N-1\}\) sample \(a_{k}^{i}\) from \(r_{1}^{i}\). Set \(k=k+1\) and return to the start of \(2..\)
4. Construct the probability mass function on \(\{1,\ldots,N\}\): \[r_{1}^{i}=\frac{g_{\theta}(\widetilde{x}_{k}^{i},y_{T})}{\sum_{j=1}^{N}g_{ \theta}(\widetilde{x}_{T}^{j},y_{T})}.\] Sample \(i\in\{1,\ldots,N\}\) using this mass function and return \((U_{0}^{i},\ldots,U_{T}^{i})\).
**Algorithm 3** Conditional Particle Filter at level \(l\in\mathbb{N}_{0}\).
### Coupled Conditional Particle Filter
To describe the coupled conditional particle filter (CCPF), which is essentially described in [14] and is a conditional particle filter associated to the algorithm in [16], we require several objects that we shall now detail.
We begin with simulating the maximal coupling of two probability mass functions on \(\{1,\ldots,N\}\) in Algorithm 4. This will be needed in the resampling operation of the CCPF. Next we describe a coupling of \(M_{\theta,l}\) and \(M_{\theta^{\prime},l-1}\), which we denote by \(\hat{M}_{\theta,\theta^{\prime},l,l-1}\). This is given in Algorithm 5. It will be needed in the sampling step of the CCPF and is the well-known'synchronous coupling' of Euler discretizations. In Algorithm 5, \(\mathcal{N}_{d}(0,\Delta_{l}I_{d})\) is the \(d-\)dimensional Gaussian, \(0\) mean, \(\Delta_{l}I_{d}\) covariance, \(I_{d}\) is the \(d\times d\) identity matrix. Given Algorithm 4 and Algorithm 5 we are now in a position to describe the simulation of one step of the CCPF kernel \(K_{\theta,\theta^{\prime},l,l-1}\). The kernel will take as its input two trajectories at levels \(l\) and \(l-1\), denote them as \(U_{0:T}^{l}\in\mathbb{R}^{T\Delta_{l}^{l-1}d+1}\) and \(U_{0:T}^{l-1}\in\mathbb{R}^{T\Delta_{l-1}^{l-1}d+1}\) and produce two new such trajectories.
### Final Algorithm
The method that we use is then reasonably simple, given appropriate choices of \(\mathbb{P}_{L}\) and \(\mathbb{P}_{p}\), which is a topic to be discussed in the next section.
* Run Algorithm 2 with the choice of \(\pi_{\theta}^{l}\) as in (3.4) and \(H_{l}(\theta,U_{0:T})\) as in (3.3).
* The kernel \(K_{\theta,l}\) is sampled as in Algorithm 3.
* The kernel \(\check{K}_{\theta,\theta^{\prime},l,l-1}\) is sampled as Algorithm 6.
At this stage several remarks are important. Firstly, the estimator that we have employed is the so-called single term estimator for randomization (see [21]). This can be improved by using independent sum estimators and we refer to [14, 15, 21, 24] for details. Secondly, in Algorithm 4, step 4. can be improved by sampling from any coupling of \((r_{4}^{i},r_{5}^{j})\), although for simplicity, we do not do this. Thirdly, in Algorithm 6, we can improve step 1. by sampling \((U_{0}^{i,l},U_{0}^{i,l-1})\) from a coupling of \((\mu_{\theta},\mu_{\theta^{\prime}})\). Finally, for the probability measure \(\nu_{\theta}^{l}\) (see e.g. Algorithm 2) we use the simulation of \(U_{0}\) from \(\mu_{\theta}\) and the rest of the path is generated using the recursion (3.2). To sample the coupling \(\hat{\nu}_{\theta,\theta}^{l}\) we simply copy \(U_{0}^{l}\) to obtain \(U_{0}^{l-1}\) and generate the two trajectories all the way to time \(T\) using \(T\) applications of \(\tilde{M}_{\theta,\theta^{\prime},l,l-1}\) in Algorithm 5.
### Theoretical Results
We now give the main theoretical result of the paper which is that, under assumptions and modifications, the estimator that we have introduced is unbiased. By unbiased, we mean that the expected value of the estimator is exactly \(\theta_{*}\). The estimator that we analyze, however, is slightly modified from the procedure that is discussed in Section 3.4 as we use the method of reprojection (see e.g. [3] and the references therein) within the stochastic approximation method. This is essentially a minor modification which makes the mathematical analysis of stochastic approximation methods far easier. We remark, that in practice we never use the reprojection method and hence its description is relegated to the Appendix A.3.
The theoretical result is given under a collection of assumptions that are listed and discussed in the Appendix A.4 and are termed (A1-9). Below, we write \(\mathbb{E}[\cdot]\) to denote the expectation w.r.t. the randomness of our estimator that has been produced under the modification detailed in Appendix A.3. We then have the following result.
**Theorem 3.1**.: _Assume (A1-9). Then we have that \(\mathbb{E}[\widehat{\theta}_{*}]=\theta_{*}\)._
Proof.: This follows by Theorem A.2 in the Appendix, the bounded convergence theorem and several simple calculations that are omitted for brevity.
The result that is given is essentially the minimum one would like to provide. One would also like to show that the estimator also has a finite variance and a finite expected cost (or cost that this finite 'with high
probability') as is the case for several other doubly randomized estimators [14, 15, 17, 18]. The challenge here is to show that the coupling of the estimators across consecutive levels of discretizations are sufficiently close as a function of \(l\); typically this is measured through the second moment of the difference, and bounds are often of the type \(\mathcal{O}(\Delta_{l}^{\beta})\) - see [14, 15, 17, 18]. In our context, this require a rather intricate analysis of the coupling of the conditional particle filter and its interplay with the stochastic approximation method. In the proofs given in the Appendix, all of the bounds that are proved, would explode with the discretization level and so, despite the technical sophistication of our proofs an even more intricate proof would be needed. As a result, in this paper we simply consider the empirical performance of our estimator and leave further mathematical analysis to future work.
## 4 Numerical Simulations
In this section, we test our algorithm on two models and compare it against the methodology proposed in [14].
### Ornstein-Uhlenbeck Process
Consider the Ornstein-Uhlenbeck (OU) process \(\{X_{t}\}_{t\geq 0}\) defined by
\[dX_{t}=-\theta X_{t}dt+\sigma dW_{t},\quad X_{0}=x_{0}\in\mathbb{R},\ \ t\in[0,T],\]
where \(\theta,\sigma\in\mathbb{R}_{+}\) and \(T\in\mathbb{N}\). Let \(\varsigma\in\mathbb{R}_{+}\), then the observations \(\{Y_{k}\}_{k=1}^{T}\) are taken at unit times as \(Y_{k}|X_{k}=x_{k}\sim\mathcal{N}(x_{k},\varsigma^{2})\), \(k\in\{1,...,T\}\), generated from the true parameter \(\theta^{*}=0.5\). We set \(x_{0}=100\), \(T=25\), \(\sigma=0.4\) and \(\varsigma=1\). We consider the OU process in this example because the likelihood can be computed analytically, which then can be used in the gradient descent method to obtain the maximum likelihood estimator (MLE). We apply our methodology presented in Algorithm 2 to estimate \(\theta\) and compare our estimator to the MLE obtained from the exact model. Let \(\tilde{\theta}\) be the MLE obtained from running the gradient descent. For each \(M\in\{2^{k}\ :\ 3\leq k\leq 13\}\), we run \(M\) independent copies of our algorithm in parallel. Let \(\hat{\theta}_{1},...,\hat{\theta}_{M}\) be the estimates obtained from each run. For each \(M\), define \(\hat{\theta}_{M}^{*}=\frac{1}{M}\sum_{i=1}^{M}\hat{\theta}_{i}\). The MSE is then calculated as
\[\frac{1}{100}\sum_{i=1}^{100}|\hat{\theta}_{M}^{*,i}-\tilde{\theta}|^{2},\]
which was estimated by running the described procedure above 100 times. The number of particles used in the CPF or the CCPF is 50. We set \(\mathbb{P}_{L}(l)=2^{-1.5l}\mathbb{I}_{\{l_{\min},\cdots,l_{\max}\}}(l)\) for some \(l_{\min},l_{\max}\in\mathbb{N}\cup\{0\}\), where
denotes the indicator function on a set \(A\). Given \(l\) sampled from \(\mathbb{P}_{L}\), we sample \(p\) from \(\mathbb{P}_{P|L}(p|l)\propto g(p|l)\) and set \(N=N_{0}\ 2^{p}\), where
\[g(p|l)=\left\{\begin{array}{ll}2^{5-p}&\text{if}\quad p\in\{p_{\min},\cdots,5 \wedge(l_{\max}-l)\},\\ 2^{-p}\ p\ [\log_{2}(p)]^{2}&\text{if}\quad 5<p\leq p_{\max},\end{array}\right.\]
for some \(p_{\min},p_{\max}\in\mathbb{N}\cup\{0\}\). This choice is very similar to the one used in [18]. We set \(N_{0}=10\), \(l_{\min}=3\), \(l_{\max}=12\), \(p_{\min}=1\) and \(p_{\max}=12\). In Figure 1, we plot the MSE against both the CPU run time and \(M\). The run time here is the sum of CPU run times of of all the \(M\) processes that were run in parallel. Figure 1 shows that the MSE scales as \(M^{-1}\) which agrees with our theory that the estimator \(\hat{\theta}_{M}^{*}\) is unbiased and has a variance that scales as \(M^{-1}\).
Next, we compare our UMSA algorithm against the algorithm proposed in [14], where the latter gives an unbiased estimator for the score function, that is, the gradient of the log-likelihood, which is subsequently utilized in a stochastic gradient method to estimate the static parameters. Since usually it is difficult to decide when is the right time to stop the stochastic gradient method in the algorithm of [14], we only measure the time needed to compute an unbiased estimate of the score function in a neighborhood of \(\theta=0.5\). Notice however that the run time of the algorithm in [14] that is needed to estimate \(\theta\) is in general much more (the average cost is proportional to the number of iterations used in the stochastic gradient method times the average time needed to unbiasedly estimate the score function). On the other hand, as for our algorithm, we compute the median time of running Algorithm 2 only once. We run 1000 simulations of each of the aforementioned procedures to generate box-plots for the run time. As we can see in Figure 2, the median time needed to provide one estimate of \(\theta\) using UMSA is less than that needed to generate an unbiased estimate of the score function in a neighborhood of \(\theta=0.5\). This indicates that the run time needed to compute the estimator \(\hat{\theta}_{M}^{*}\) is indeed going to be less than that needed to estimate \(\theta\) using the method in [14] since the \(M\) simulations of UMSA algorithm are run in parallel, and therefore, the median run time to compute \(\hat{\theta}_{M}^{*}\) is almost the same as the median run time of running Algorithm 2 only once.
### Diffusion Model for Red Kangaroos
In this example, we look at an application from population ecology to predict the dynamics of a population of red kangaroos (Macropus rufus) in New South Wales, Australia [7]. The latent population size \(Z=\{Z_{t}\}_{t\geq t_{1}}\) (\(t_{1}>0\)) is assumed to follow a logistic diffusion process with environmental variance [8, 19], given as
\[dZ_{t}=(\theta_{3}^{2}/2+\theta_{1}-\theta_{2}Z_{t})Z_{t}dt+\theta_{3}Z_{t}dW_ {t},\qquad Z_{t_{1}}\sim\mathcal{LN}(5,10^{2}), \tag{4.1}\]
where \(\mathcal{LN}\) denotes the log-normal distribution. The parameters, \(\theta_{1}\in\mathbb{R}\) and \(\theta_{2}>0\) can be thought of as coefficients that describe how the growth rate varies with population size. However, as \(\theta_{3}\) appears in the diffusion coefficient of (4.1) we apply the Lamperti transformation \(X_{t}=\log(Z_{t})/\theta_{3}\). Applying Ito's formula, the new process \((X_{t})_{t\geq 0}\) satisfies
\[dX_{t}=a_{\theta}(X_{t})dt+dW_{t},\qquad X_{t_{1}}\sim\mathcal{N}\left(\frac{5} {\theta_{3}},\frac{100}{\theta_{3}^{2}}\right), \tag{4.2}\]
with \(a_{\theta}(x)=\theta_{1}/\theta_{3}-(\theta_{2}/\theta_{3})\exp\left(\theta_{ 3}x\right)\). The observations are double transect counts (pairs of non-negative integers) at irregular times \(t_{1},...,t_{P}\), with \(P=41\), denoted by \(Y_{t_{1}},\ldots,Y_{t_{P}}\) and \(Y_{t}\in\mathbb{R}^{2}\). It is assumed that the
Figure 1: (OU Model) The UMSA algorithm is applied to estimate the drift parameter of the OU process. Left: MSE against run time. Right: MSE against \(M\).
Figure 2: (OU Model) Comparison of the time needed to compute one unbiased estimate of \(\theta\) using UMSA and the time needed to compute an unbiased estimate of the score function in a small neighborhood of \(\theta^{*}=0.5\) using the method in [14]. The box-plots are generated from 1000 runs of each procedure.
observations \(\{Y_{t}\}_{t=t_{1}}^{t=t_{P}}\) are conditionally independent given \(\{X_{t}\}_{t_{1}\leq t\leq t_{P}}\) and are negative binomial distributed, precisely the density of \(Y_{t}\) given \(X_{t}\) is
\[g_{\theta}(x_{t},y_{t})=\mathcal{NB}(y_{t}^{1};\theta_{4},\exp\{\theta_{3}x_{t} \})\ \mathcal{NB}(y_{t}^{2};\theta_{4},\exp\{\theta_{3}x_{t}\}),\quad\theta_{4}>0, \quad t\in\{t_{1},\cdots,t_{P}\},\]
where \(\mathcal{NB}(y;r,\mu)=\frac{\Gamma(y+r)}{\Gamma(r)y!}\left(\frac{r}{r+\mu} \right)^{r}\left(\frac{\mu}{r+\mu}\right)^{y}\), for \(y\in\mathbb{N}\cup\{0\}\), \(r\in(0,\infty)\), and \(\mu\in(0,\infty)\). The goal is to estimate the parameters \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},\theta_{3},\theta_{4})\in\mathbb{R }\times(0,\infty)^{3}\).
We employ 3 as the minimum discretization level. For every \(l\geq 3\) set the step size \(\Delta_{l}=2^{-l}\) and replace the irregular times \(t_{1},...,t_{P}\) by rounding each of them to the closest number of the form \(t_{1}+k\Delta_{l}\) for integers \(k\), that is, we replace the irregular time \(t_{i}\) with \(\tilde{t}_{i}=\lfloor\frac{t_{i}-t_{1}}{\Delta_{l}}+\frac{1}{2}\rfloor+t_{1}\), \(i\in\{1,\cdots,P\}\). For each \(M\in\{2^{k}\ :\ 3\leq k\leq 13\}\), we run \(M\) independent repeats of our algorithm in parallel. Let \(\hat{\boldsymbol{\theta}}_{1},...,\hat{\boldsymbol{\theta}}_{M}\) be the estimates obtained from running \(M\) copies of our algorithm. For each \(M\), define \(\hat{\boldsymbol{\theta}}_{M}^{*}=\frac{1}{M}\sum_{i=1}^{M}\hat{\boldsymbol{ \theta}}_{i}\). Figure 3 is a log-log scale plot of the MSE versus the CPU run time of the algorithm. For each \(M\in\{2^{k}\ :\ 3\leq k\leq 12\}\), the MSE is calculated as
\[\text{MSE}_{j}=\frac{1}{100}\sum_{i=1}^{100}|\hat{\boldsymbol{\theta}}_{M,j}^ {*,i}-\hat{\boldsymbol{\theta}}_{2^{13},j}^{*}|^{2},\quad\text{for }1\leq j \leq 4,\]
where \(\hat{\boldsymbol{\theta}}_{k,j}\) denotes the \(j\)-th component of the the vector \(\hat{\boldsymbol{\theta}}_{k}\). Here, the reference value \(\hat{\boldsymbol{\theta}}_{2^{13},j}^{*}\) is computed by running Algorithm 2\(2^{13}\) times. The reason of considering \(\hat{\boldsymbol{\theta}}_{2^{13}}\) as a proxy for the true MLE is that the likelihood for this model is intractable. We set \(\mathbb{P}_{L}\), \(\mathbb{P}_{P}\) and the number of particles used in CPF and CCPF similar to those in the previous example.
Figure 3 shows that the run time scales approximately as \(\text{MSE}^{-1}\). The fact that the estimator has the same rate as Monte-Carlo is a consequence of the unbiasedness property of our estimator as proven in Theorem 3.1. Next, we compare our UMSA algorithm against the algorithm proposed in [14], similar to what we did in the previous example. Figure 4 shows that the median run time needed to compute \(\hat{\boldsymbol{\theta}}_{M}^{*}\) is less than that needed to compute one unbiased estimate of the score function in a neighborhood of \(\boldsymbol{\theta}^{*}\). Again, this example also shows that our method outperforms the methodology of [14].
where for a matrix \(A\) the notation \(A^{\top}\) denotes the transpose of \(A\), \(|A|\) denotes its determinant, and \(dz\) is the Lebesgue measure on \(\mathbb{R}^{d}\). For every \(\theta\in\Theta\) we define the unit-step kernel \(M_{\theta,l}:(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\times\mathcal{B}((\mathbb{R}^{ d})^{\Delta_{l}^{-1}})\to[0,1]\)
Figure 4: (Red Kangaroo Model) Comparison of the time needed to run the UMSA algorithm only once and the time needed to compute an unbiased estimate of the score function in a small neighborhood of \(\boldsymbol{\theta}^{*}=(2.397,4.429\times 10^{-03},0.84,17.631)\) using the method in [14]. The box-plots are generated from 1000 runs of each procedure.
Figure 3: (Red Kangaroo Model) The UMSA algorithm applied to estimate the parameters of the red Kangaroo model. The plots show the MSE vs. run time for each parameter on a log-log scale.
as
\[M_{\theta,l}(x,dz)=Q_{\theta,l}(x_{\Delta_{l}^{-1}},dz_{1})\prod_{i=2}^{\Delta_{l} ^{-1}}Q_{\theta,l}(z_{i-1},dz_{i}).\]
Let \(\{X_{i}\}_{i=0}^{T\Delta_{l}^{-1}}\) be the path of the process generated by \(Q_{\theta,l}\) until time \(T\). For every \(1\leq k\leq\Delta_{l}^{-1}-1\) and a set \(A\in\mathcal{B}(\mathbb{R}^{d})^{\Delta_{l}^{-1}})\) of discrete paths the values of \(M_{\theta,l}(x,A)\) is the probability of the event \(\{X_{i}\}_{i=1+\Delta_{l}^{-1}k}^{\Delta_{l}^{-1}(k+1)}\in A\) given \(\{X_{i}\}_{i=1+\Delta_{l}^{-1}(k-1)}^{\Delta_{l}^{-1}k}=x\). Without loss of generality and for ease of notation we extend the definition of \(M_{\theta,l}\) to \(\left((\mathbb{R}^{d})^{\Delta_{l}^{-1}}\cup\mathbb{R}^{d}\right)\times \mathcal{B}(\mathbb{R}^{d})^{\Delta_{l}^{-1}})\) since \(M_{\theta,l}\) uses only the last component of \(x\). This allows us to think of \(M_{\theta,l}(x,A)\) as the transition probability of the event \(\{\{X_{i}\}_{i=1}^{\Delta_{l}^{-1}}\in A\}\) given \(X_{0}=x\in\mathbb{R}^{d}\).
Let \(y_{1},...,y_{T}\) be the partial observations at times \(1,...,T\). For every \(l\in\mathbb{N}_{0}\) and \(\theta\in\Theta\) define \(G_{\theta,t}^{l}:\left(\mathbb{R}^{d}\right)^{\Delta_{l}^{-1}}\to\mathbb{R}\) by \(G_{\theta,t}^{l}(x)=g_{\theta}(x_{\Delta_{l}^{-1}},y_{t})\) for \(1\leq t\leq T\). Let \(A_{1:T-1}^{1:N}\) be the resampling indices and \((F_{t})_{t=0}^{T}\) be the indices of the path returned by conditional particle filter, these indices satisfy \(A_{t}^{N}=N\) and \(F_{t}=A_{t}^{F_{t+1}}\) for \(1\leq t\leq T-1\) and \(F_{0}=F_{1}\), hence deciding the value of \(F_{T}\) and the resampling indices \(A_{1:T-1}^{1:N-1}\) decides the indices of the whole path that conditional particle filter will return. The kernel of the conditional particle filter \(K_{\theta,l}\) at level \(l\) is defined by its action on bounded functions \(\varphi:\left(\mathbb{R}^{d}\right)^{T\Delta_{l}^{-1}+1}\to\mathbb{R}\) given a discrete path \(z\in\left(\mathbb{R}^{d}\right)^{T\Delta_{l}^{-1}+1}\) as
\[K_{\theta,l}(\varphi)(z)=\sum_{1\leq F_{T},A_{1:T-1}^{1:N-1}\leq N }\int\delta_{z}(dx_{0:T}^{N})\frac{G_{\theta,T}^{l}(x_{T}^{F_{T}})}{\sum_{k=1 }^{N}G_{\theta,T}^{l}(x_{T}^{k})}\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{ \theta,l}(x_{0}^{j},dx_{1}^{j})\] (A.1) \[\times\prod_{t=2}^{T}\frac{G_{\theta,t-1}^{l}(x_{t-1}^{A_{t-1}^{ j}})}{\sum_{k=1}^{N}G_{\theta,t-1}^{l}(x_{t-1}^{k})}M_{\theta,l}(x_{t-1}^{A_{t-1} ^{j}},dx_{t}^{j})\varphi(x_{0}^{F_{0}},...,x_{T}^{F_{T}}),\]
where the \(x_{t}^{j}\) are elements in \(\left(\mathbb{R}^{d}\right)^{\Delta_{l}^{-1}}\) for \(1\leq t\leq T\), \(1\leq j\leq N\) and \(x_{0}^{j}\in\mathbb{R}^{d}\) for \(1\leq j\leq N\), and the notation \(\delta_{z}(dx_{0:T}^{N})=\prod_{t=0}^{T}\delta_{z_{t}}(dx_{t}^{N})\). The integral (A.1) is finite and well-defined for any bounded function \(\varphi\) since assumption (A5) guarantees that the functions \(G_{\theta,t}^{l}\) is bounded away from zero for every \(0\leq t\leq T\). It is well-known that for every \(\theta\) the kernel \(K_{\theta,l}\) is \(\psi\)-irreducible and aperiodic with \(\pi_{\theta}^{l}\) as its stationary distribution [2], therefore asymptotically we are able to sample from the filtering distribution \(\pi_{\theta}^{l}\) by iteratively sampling from \(K_{\theta,l}\).
### Modified Algorithm
We use the function \(-H_{l}\) defined in (3.3) to run the Markovian stochastic approximation algorithm in [3] as follows. Let \(\{\gamma_{n}\}_{n\in\mathbb{N}_{0}}\) be a sequence of positive real numbers such that \(\sum_{n\in\mathbb{N}_{0}}\gamma_{n}=\infty\) and \(\sum_{n\in\mathbb{N}_{0}}\gamma_{n}^{2}<\infty\). Suppose we have sequence of increasing compact sets \(\{\Theta_{n}\}_{n\in\mathbb{N}_{0}}\) such that \(\bigcup_{n}\tilde{\Theta}_{n}=\Theta\) and \(\Theta_{n}\subset\text{int}(\Theta_{n+1})\). Let \(\{\epsilon_{n}\}_{n\in\mathbb{N}}\) be a sequence of positive real numbers that converges to \(0\). For every \(l\in\mathbb{N}_{0}\) we define the stochastic approximation with re-projections defined in [3, Section 3.3] as a sequence of pairs \((\theta_{n}^{l},X_{n}^{l})\in\Theta\times\left(\mathbb{R}^{d}\right)^{T\Delta_{l }^{-1}+1}\) defined iteratively by
\[\text{Sample }X_{n+1}^{l}\sim K_{\theta_{n}^{l},l}(X_{n}^{l},\cdot)\] (A.2) \[\tilde{\theta}_{n+1}^{l}=\theta_{n}^{l}+\gamma_{n}H_{l}(X_{n+1}^{ l},\theta_{n}^{l})\] \[\theta_{n+1}^{l}=\begin{cases}\tilde{\theta}_{n+1}^{l},\quad| \tilde{\theta}_{n+1}^{l}-\theta_{n}^{l}|<\epsilon_{n}\ \text{ and }\ \theta_{n+1}^{l}\in\Theta_{n+1}\\ \theta_{0},\quad\text{otherwise}\end{cases}\]
where \((\theta_{0},X_{0})\in\Theta_{0}\times\left(\mathbb{R}^{d}\right)^{T\Delta_{l}^{- 1}+1}\) is an arbitrary initial pair. Under appropriate conditions the sequence \(\{\theta_{n}^{l}\}_{n\in\mathbb{N}_{0}}\) will converge almost surely to a root of \(\nabla_{\theta}\log p_{\theta}^{l}(y_{1:T})\). Theorem 5.5 in [3] provides us with these
appropriate conditions to guarantee the almost sure convergence of these iterations for every \(l\in\mathbb{N}_{0}\). Below we will state the theorem in the context used in this paper along with the assumptions we impose in our paper which we will use to verify the conditions of the theorem. We will also explain how the restated theorem relates to the original theorem given our assumptions.
### Assumptions and Main Theorem
For a measurable space \(S\) and a measurable function \(V:S\to[1,\infty)\) we define the operator \(\|.\|_{V}\) on the space of measurable functions \(f:S\to\mathbb{R}\) by \(\|f\|_{V}=\sup_{x}|\frac{f(x)}{V(x)}|\). We define the space of functions
\[\mathcal{L}_{V}=\left\{f:S\to\mathbb{R}\text{ measurable }:\sup_{x\in S}\frac{|f(x)|}{V(x)}< \infty\right\},\]
and \(\|.\|_{V}\) is a norm on \(\mathcal{L}_{V}\). We impose the assumptions (A1-3) below on \(\nabla_{\theta}\log p^{l}_{\theta}(y_{1:T})\) and \(\Theta\) to guarantee that they are well-behaved.
* The set \(\Theta\) is bounded.
* For every \(l\in\mathbb{N}_{0}\) the function \(\theta\to\nabla_{\theta}\log p^{l}_{\theta}(y_{1:T})\) is twice continuously differentiable. Moreover there exists a unique root for \(\nabla_{\theta}\log p^{l}_{\theta}(y_{1:T})\). This root is the unique maximizer of \(\log p^{l}_{\theta}(y_{1:T})\) and we denote it by \(\theta^{l}_{\star}\).
* For every \(l\in\mathbb{N}_{0}\) there exists a constant \(M\) such that \[\sup_{\tilde{\theta}\in\partial\Theta}\limsup_{\theta\to\tilde{\theta}}\log p ^{l}_{\theta}(y_{1:T})<M<\log p^{l}_{\theta^{l}_{\star}}(y_{1:T}).\]
The following is [3, Theorem 5.5], slightly modified to our notation.
**Theorem A.1**.: _Let \(l\in\mathbb{N}_{0}\) and consider the sequence \(\{\theta^{l}_{n}\}_{n\in\mathbb{N}_{0}}\) defined by iteration (A.2). Assume (A1-3). Suppose that there exist a function \(V_{l}:(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\to[1,\infty)\) and constants \(p\geq 2\), \(\beta\in(0,1]\), \(\lambda,\delta\in(0,1)\), \(b_{l}>0\), a non-empty set \(\mathsf{C}\), and a measure \(\eta_{l}\) that satisfy the following: (TA1) \(\sup_{\theta\in\Theta}(K_{\theta,l}V^{p}_{l})(x)\leq\lambda(V_{l}(x))^{p}+b \mathbb{1}_{\mathsf{C}}(x)\). (TA2) \(\inf_{\theta\in\Theta}K_{\theta,l}(x,A)\geq\delta\eta_{l}(A)\quad\forall x \in\mathsf{C}\) and \(A\in\mathcal{B}((\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1})\). (TA3) There exists \(C\) such that for all \(x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\):_
\[\sup_{\theta\in\Theta}|H_{l}(\theta,x)|\leq CV_{l}(x),\]
\[\sup_{\begin{subarray}{c}(\theta,\theta^{\prime})\in\Theta^{2}\\ \theta\neq\theta^{\prime}\end{subarray}}\|\theta-\theta^{\prime}\|^{-\beta}| H_{l}(\theta,x)-H_{l}(\theta^{\prime},x)|\leq CV_{l}(x).\]
_(TA4) There exists \(C\) such that for all \((\theta,\theta^{\prime})\in\Theta^{2}\):_
\[\|K_{\theta,l}\varphi-K_{\theta^{\prime},l}\varphi\|_{V_{l}}\leq C\left\| \varphi\right\|_{V_{l}}|\theta-\theta^{\prime}|^{\beta}\quad\forall\varphi \in\mathcal{L}_{V_{l}},\]
\[\|K_{\theta,l}\varphi-K_{\theta^{\prime},l}\varphi\|_{V^{p}_{l}}\leq C\left\| \varphi\right\|_{V^{p}_{l}}|\theta-\theta^{\prime}|^{\beta}\quad\forall\varphi \in\mathcal{L}_{V^{p}_{l}}.\]
_(AT5) There exists \(\alpha\in(0,\beta)\) such that_
\[\sum_{n}(\gamma_{n}^{2}+\gamma_{n}\epsilon_{n}^{\alpha}+(\epsilon_{n}^{-1} \gamma_{n})^{p})<\infty.\]
_We have_
\[\theta^{l}_{n}\to\theta^{l}_{\star}\quad\text{ a.s. }\]
Theorem A.1 imposes requirements on the functions \(K_{\theta,l}\), \(H_{l}\), and the sequences \(\{\gamma_{n}\}_{n\in\mathbb{N}_{0}}\), and \(\{\epsilon_{n}\}_{n\in\mathbb{N}_{0}}\). To satisfy these condition we need more assumptions on the afore-mentioned functions and sequences, we state these additional assumptions below and denote them by (A4-8). Let \(\eta\) be a finite signed measure on a probability space. By the Hahn-Jordan decomposition there exist finite measures \(\eta^{+}\) and \(\eta^{-}\) such that \(\eta=\eta^{+}-\eta^{-}\). Denote \(|\eta|=\eta^{+}+\eta^{-}\).
* For every \(l\in\mathbb{N}_{0}\) there exist a function \(W_{l}:\mathbb{R}^{d}\to[1,\infty)\), a pair \((\kappa,\rho)\in(0,1)\times\mathbb{R}^{+}\) and a set \(\mathsf{C}\in\mathcal{B}(\mathbb{R}^{d})\) such that for any \((x,\theta)\in\mathbb{R}^{d}\times\Theta\): \[Q_{\theta,l}(e^{W_{l}})(x)\leq\exp\{\kappa W_{l}(x)+\rho\mathbb{1}_{\mathsf{C }}(x)\},\] where \(W_{l}\) satisfies the growth condition \[\lim_{\|x\|\to\infty}\frac{W_{l}(x)}{\log(\|x\|)}=\infty.\]
* There a constant \(C>0\) such that \(\frac{1}{C}\leq g_{\theta}(x,y_{t}),|\nabla_{\theta}g_{\theta}(x,y_{t})|\leq C\) for all \(1\leq t\leq T\), \((x,y,\theta)\in\mathbb{R}^{d}\times\mathbb{R}^{d_{y}}\times\Theta\).
* For every \(l\in\mathbb{N}_{0}\) there exists \(C\in\mathbb{R}\) such that for every \(\theta\in\Theta:\mu_{\theta}\left(\exp\{W_{l}\}\right)<C\).
* For every \(l\in\mathbb{N}_{0}\) there exist \(\zeta>0\) and \(C>0\) such that for every \((\theta,\theta^{\prime},x)\in\Theta\times\Theta\times\mathbb{R}^{d}\): \[|Q_{\theta,l}-Q_{\theta^{\prime},l}|\left(\exp\{W_{l}\}\right)(x)\leq C\| \theta-\theta^{\prime}\|^{\zeta}\exp\{W_{l}(x)\}\] and \[|\mu_{\theta}-\mu_{\theta^{\prime}}|\left(\exp\{W_{l}\}\right)\leq C\|\theta- \theta^{\prime}\|^{\zeta}.\]
* For every \(l\in\mathbb{N}_{0}\) there are finite non-zero measures \(\Xi_{l}\) and \(\Psi\) such that for every \((\theta,x,A)\in\Theta\times\mathbb{R}^{d}\times\mathcal{B}(\mathbb{R}^{d})\): \[\mu_{\theta}(A)\geq\Psi(A)\] and \[Q_{\theta,l}(x,A)\geq\Xi_{l}(A).\]
* For every \(l\in\mathbb{N}_{0}\) and \(x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\) the function \(\theta\mapsto H_{l}(\theta,x)\) is differentiable. Moreover there exist constants \(C,q>0\) such that for every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}:\) \[|H_{l}(\theta,x)|<C(1+|x|^{q}),\] \[\|\nabla_{\theta}H_{l}(\theta,x)\|<C(1+|x|^{q}).\]
In [3] the authors state four conditions which they named (A1)-(A4). Here we breifly show that these four conditions follow from the theorem assumptions (TA1-4) and our assumptions (A1-3). The notation \(h(\theta)\) in [3] is the negative of the gradient of the log likelihood in our case \(-\nabla\log p_{\theta}^{l}(y_{1:T})\). (A1) in [3] follows from our assumptions (A1-3). Indeed, the function \(-\log p_{\theta}^{l}(y_{1:T})\) is lower bounded and our (A3) assumes that it is continuously twice differentiable which allows the choice \(w=-\log p_{\theta}^{l}(y_{1:T})\) as remarked by the authors of [3]. Assume our (A2) and (A3) and choose \(M_{0}=\frac{1}{2}M_{1}=\frac{1}{2}M\). This choice with the boundedness of \(\Theta\) and the uniqueness of the root of \(\nabla_{\theta}\log p_{\theta}^{l}(y_{1:T})\) implies the assumptions (A1)-(i),(iii),(iii),(iv) in [3]. In [3] the authors introduce conditions (DRI)-1,2,3 which implies (A2) and (A3) in their paper. Assumption (DRI1) is just our theorem's assumptions (TA1) and (TA2) but with \(x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\) instead of \(x\in\mathsf{C}\) in (TA2). Assumptions (DRI2) and (DRI3) are (TA3) and (TA4) in Theorem A.1 above. (A4) in [3] is really (TA5) in Theorem A.1, but \(\alpha\) can be any value in the interval \((0,\beta)\) because of the stronger assumption (DRI).
As remarked in [3] an easy choice of \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) and \(\{\epsilon_{n}\}_{n\in\mathbb{N}}\) to satisfy (TA5) is to choose \(\delta\in\left(1,\frac{1+\alpha}{1+\alpha/p}\right)\) and \(\eta\in\left(\frac{\delta-1}{\alpha},1-\frac{\delta}{p}\right)\), then take \(\{\gamma_{n}\}_{n\in\mathbb{N}}\) such that \(\sum_{n}\gamma_{n}^{\delta}<\infty\) and \(\epsilon_{n}=\mathcal{L}\gamma_{n}^{\eta}\) for any constant \(\mathcal{L}>0\). Therefore we only need to verify conditions (TA1)-(TA4).
**Theorem A.2**.: _Assume (A1-9). For every \(l\in\mathbb{N}_{0}\) the conditions (TA1-4) of Theorem A.1 hold._
### Proofs for Theorem a.2
We will prove Theorem a.2 in a sequence of lemmata. In general, for a transition kernel \(P\) we say that a function \(V\) is a drift function for \(P\) if there exist constants \(a\in(0,1),b>0\) and a small set \(\mathsf{C}\) such that \(PV\leq aV+b\mathbb{1}_{\mathsf{C}}\). If \(V\) rather satisfies \(P\exp\{V\}\leq\exp\{aV+b\mathbb{1}_{\mathsf{C}}\}\) we call it a multiplicative drift function. Assumption (A4) supposes that for every \(l\in\mathbb{N}_{0}\) the existence of a common multiplicative drift function for the kernels \(\{Q_{\theta,l}\}_{\theta\in\Theta}\). Lemma 1 below shows that this property can be extended to unit-step kernels \(M_{\theta,l}\). Define the function \(\mathcal{W}_{l}:(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\to\mathbb{R}\) by
\[\mathcal{W}_{l}(x)=\log(\sum_{k=1}^{\Delta_{l}^{-1}}\exp\{W_{l}(x_{k})\}).\]
we show that it is a multiplicative drift function for \(M_{\theta,l}\). Multiplicative drift conditions have been considered in [20] and in particular for particle methods in [10, 25].
**Lemma 1**.: _Assume (A4). Then for any \(l\in\mathbb{N}_{0}\) there exist \((\xi,b)\in(0,1)\times\mathbb{R}\) and a set \(\mathsf{C}\) such that for any \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\):_
\[M_{\theta,l}(\exp\{\mathcal{W}_{l}\})(x)\leq\exp\{\xi\mathcal{W}_{l}(x)+b \mathbb{1}_{\mathsf{C}}(x)\}\]
_with \(\xi=(1+\kappa)/2\)._
Proof.: For each \(1\leq i\leq\Delta_{l}^{-1}\) let \(p_{i}:(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\to\mathbb{R}^{d}\) be the \(i\)-th component projection.
\[M_{\theta,l}\left(\exp\{\mathcal{W}_{l}\}\right)=\sum_{i=1}^{\Delta_{l}^{-1}} M_{\theta,l}\left(\exp\{W_{l}\circ p_{i}\}\right).\]
For each \(1\leq i\leq n\) we have
\[M_{\theta,l}\left(\exp\{W_{l}\circ p_{i}\}\right)=\left(Q_{\theta,l}\right)^{ \Delta_{l}^{-1}}\left(\exp\{W_{l}\circ p_{i}\}\right)=\left(Q_{\theta,l}\right) ^{i}\left(\exp\{W_{l}\circ p_{i}\}\right)\]
where the powers associated to the operator \(Q_{\theta,l}\) mean repeated composition (or iteration). Let \(i>1\), by assumption (A4)
\[\left(Q_{\theta,l}\right)^{i}\left(\exp\{W_{l}\circ p_{i}\}\right)\leq\left(Q_ {\theta,l}\right)^{i-1}\left(\exp\{\kappa W_{l}\circ p_{i-1}+\rho\}\right) \leq e^{\rho}\left(Q_{\theta,l}\right)^{i-1}\left(\exp\{W_{l}\circ p_{i-1}\} \right).\]
Let \(x\in(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\), for \(i=1\) we have
\[Q_{\theta,l}\left(\exp\{W_{l}\circ p_{1}\}\right)(x)\leq\exp\{\kappa W(p_{ \Delta_{l}^{-1}}(x))+\rho\}\leq e^{\rho}\exp\{\kappa\mathcal{W}_{l}(x)\}.\]
It is clear inductively that there exists a constant \(A\) such that for every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\)
\[M_{\theta,l}\left(\exp\{\mathcal{W}_{l}\}\right)(x)\leq A\exp\{\kappa\mathcal{ W}_{l}(x)\}.\]
Let \(\mathsf{C}=\left\{x\in(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\ :\ \mathcal{W}_{l}(x)<\frac{2\log(A)}{1- \kappa}+1\right\}\), \(b=\log(A)\), and \(\xi=\frac{1+\kappa}{2}\) where \(A\) is chosen large enough to guarantee that the set \(\mathsf{C}\) is non-empty. With this choice we have that for every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\)
\[M_{\theta,l}\left(\exp\{\mathcal{W}_{l}\}\right)(x)\leq A\exp\{\kappa\mathcal{ W}_{l}(x)\}\leq\exp\{\xi\mathcal{W}_{l}(x)+b\mathbb{1}_{\mathsf{C}}(x)\}.\]
**Remark 1**.: _The definition of \(M_{\theta,l}(x,dz)\) depends only on the last component of \(x\). If we consider \(x\) as an element of \(\mathbb{R}^{d}\) (as discussed above) then the following the proof of lemma 1 shows that there exists a small set \(\mathsf{C}\subset\mathbb{R}^{d}\), such that the following inequality holds_
\[M_{\theta,l}\left(\exp\{\mathcal{W}_{l}\}\right)(x)\leq\exp\{\xi W_{l}(x)+b \mathbb{1}_{\mathsf{C}}(x)\}\]
_with the same values for \(\xi\) and \(b\) chosen in Lemma 1._
We are now in a position to prove that condition (TA1) is satisfied. The function \(V_{l}:(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\to\mathbb{R}\) defined by
\[V_{l}(x)=\left(\frac{1}{T+1}\exp\{W_{1}(x_{0})\}+\frac{1}{T+1}\sum_{t=1}^{T}\exp \{\mathcal{W}_{l}(x_{t})\}\right)^{1/2}\]
where \(x_{t}\in(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\) for every \(1\leq t\leq T\) and \(x_{0}\in\mathbb{R}^{d}\). The function \(V_{l}\) is our candidate function that will satisfy the condition (TA1) with \(p=2\). Many of the constants in the statements of the Lemmata below depend upon \(T\), but that dependence is never recorded as it is not needed in our analysis.
**Lemma 2**.: _Assume (A4-6). For every \(l\in\mathbb{N}_{0}\) there exists \((\lambda,b,m)\in(0,1)\times\mathbb{R}^{+}\times\mathbb{R}^{+}\) such that \((\theta,z)\in\Theta\times(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\)_:__
\[(K_{\theta,l}V_{l}^{2})(z)\leq\lambda(V_{l}(z))^{2}+b1_{\mathbb{C}_{m}}(z)\]
_where \(\mathbb{C}_{m}=\left\{z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}|V_{l}(z)<m \right\}\)._
Proof.: Let \(\mathcal{V}_{l}=\exp\{\mathcal{W}_{l}\}\) and \(\mathcal{V}_{l,0}=\exp\{W_{l}\}\). We have \(\mathcal{V}_{l},\mathcal{V}_{l,0}\geq 1\), from assumption (A6) the function \(\mathcal{V}_{l,0}\) is \(\mu_{\theta}\)-integrable for every \(\theta\in\Theta\), and from Lemma 1 we have that there exist constants \(C>1\) and \(0<\xi<1\) such that for every \(\theta\in\Theta\), \(x\in(\mathbb{R}^{d})^{\Delta^{-1}}\), and \(w\in\mathbb{R}^{d}\) the inequalities \(M_{\theta,l}(\mathcal{V}_{l})(x)\leq C(\mathcal{V}_{l}(x))^{\xi}\) and \(M_{\theta,l}(\mathcal{V}_{l})(w)\leq C(\mathcal{V}_{l,0}(w))^{\xi}\) hold. Define the function \(\bar{\mathcal{V}}_{l}:(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\to[T+1,\infty)\) by \(\bar{\mathcal{V}}_{l}(x)=\mathcal{V}_{l,0}(x_{0})+\sum_{t=1}^{T}\mathcal{V}_{ l}(x_{t})\), we will show that there exists \((\lambda,\bar{b},m)\in\mathbb{R}^{3}\) such that for every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\):
\[(K_{\theta,l}\bar{\mathcal{V}}_{l})(x)\leq\lambda\bar{\mathcal{V}}_{l}(x)+ \bar{b}1_{\mathbb{C}_{m}}(x)\]
where \(\bar{\mathbb{C}}_{\bar{m}}=\left\{x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}| \bar{\mathcal{V}}_{l}(x)<\bar{m}\right\}\). Proving this inequality is equivalent to proving the lemma.
We calculate the action of \(K_{\theta,l}\) on the functions \(x\mapsto\mathcal{V}_{l}(x_{i})\) for every \(1\leq i\leq T\) and \(x\mapsto\mathcal{V}_{l,0}(x_{0})\). Let \(0\leq i\leq T\) define the sets \(S_{i}=\{(F_{T},A_{1:T-1}^{1:N-1})|F_{i}=N\}\subset\{1,...,N\}^{1+(T-1)(N-1)}\). The sets \(S_{i}\) are non-empty because \(F_{T}=N\) implies \(F_{t}=N\) for all \(0\leq t<T\). Moreover the sets \(S_{i}\) are strict subsets of \(\{1,...,N\}^{1+(T-1)(N-1)}\). We begin by observing the following bounds. For \(0\leq i\leq T\) and \(\theta\in\Theta\):
\[\sum_{(F_{T},A_{1:T-1}^{1:N-1})\in S_{i}}\int\delta_{z}(dx_{0:T}^{N})\frac{G_{ \theta,T}^{l}(x_{T}^{F_{T}})}{\sum_{k=1}^{N}G_{\theta,T}^{l}(x_{T}^{Q})}\prod_ {j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j},dx_{1}^{j})\prod_{t =2}^{T}\frac{G_{\theta,t-1}^{l}(x_{t-1}^{j})}{\sum_{k=1}^{N}G_{\theta,t-1}^{l }(x_{t-1}^{Q})}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\]
\[= 1-\sum_{(F_{T},A_{1:T-1}^{1:N-1})\not\in S_{i}}\int\delta_{z}(dx_ {0:T}^{N})\frac{G_{\theta,T}^{l}(x_{T}^{F_{T}})}{\sum_{k=1}^{N}G_{\theta,T}^{l} (x_{t}^{Q})}\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j}, dx_{1}^{j})\] \[\times\prod_{t=2}^{T}\frac{G_{\theta,t-1}^{l}(x_{t-1}^{A_{t-1}^{j} })}{\sum_{k=1}^{N}G_{\theta,t-1}^{l}(x_{t-1}^{Q})}M_{\theta,l}(x_{t-1}^{A_{t-1}^ {j}},dx_{t}^{j})\] \[\leq 1-(|\{1,\ldots,N\}^{1+(T-1)(N-1)}\backslash S_{i}|)\left(\frac{1 }{C^{2}N}\right)^{1+(T-1)(N-1)}\] \[\leq 1-\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)}\]
where we used (A5) to go from lines 2 to 3. From assumption (A6) we have that for every \(\theta\in\Theta\), \(z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\), and \(F_{0}\neq N\):
\[\int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}( x_{0}^{j},dx_{1}^{j})\prod_{t=2}^{i}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j}) \mathcal{V}_{l,0}(x_{0}^{F_{0}})=\int\mu_{\theta}(dx_{0}^{F_{0}})\mathcal{V}_{l,0 }(x_{0}^{F_{0}})\leq C.\] (A.4)
Using (A.4) and assumption (A5) we can further bound
\[\sum_{(F_{T},A^{i:N-1}_{1:T-1})\not\in S_{i}}\int\delta_{z}(dx^{N}_{0:T})\frac{G^ {l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l}_{\theta,T}(x^{Q}_{T})}\prod_{j =1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1})\prod_{t=2}^ {T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t-1})}{\sum_{k=1}^{N}G^{l}_{\theta,t-1}(x^{Q}_{t-1})}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_{l }(x^{F_{0}}_{0})\]
\[\leq \left(\frac{C^{2}}{N}\right)^{1+(T-1)(N-1)}\sum_{(F_{T},A^{i:N-1}_ {1:T-1})\not\in S_{i}}\int\delta_{z}(dx^{N}_{0:T})\prod_{j=1}^{N-1}\mu_{\theta }(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1})\] \[\times\prod_{t=2}^{i}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t })\mathcal{V}_{l}(x^{F_{0}}_{0})\] \[\leq \left(\frac{C^{2}}{N}\right)^{1+(T-1)(N-1)}N^{1+(T-1)(N-1)}C=C^{3 +2(T-1)(N-1)}.\]
Suppose \(i=0\). For \(\theta\in\Theta\) we have
\[\sum_{1\leq F_{T},A^{i:N-1}_{1:T-1}\leq N}\int\delta_{z}(dx^{N}_{0: T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l}_{\theta,T}(x^{Q}_{T})} \prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1}) \prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t-1})}{\sum_{k=1}^{N} G^{l}_{\theta,t-1}(x^{Q}_{t-1})}\] \[\times M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_ {l,0}(x^{F_{i}}_{i})\] \[=\mathcal{V}_{l,0}(z_{i})\sum_{(F_{T},A^{i:N-1}_{1:T-1})\in S_{i} }\int\delta_{z}(dx^{N}_{0:T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1} ^{N}G^{l}_{\theta,T}(x^{Q}_{T})}\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{ \theta,l}(x^{j}_{0},dx^{j}_{1})\prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j }_{t-1}}_{t-1})}{\sum_{k=1}^{N}G^{l}_{\theta,t-1}(x^{Q}_{t-1})}\] \[\times M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_ {l,0}(x^{F_{i}}_{i}).\]
Using the bounds (A.3) and (A.5) we bound the quantities in (A.6) from above by
\[\left(1-\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)}\right)\mathcal{V}_{l,0}(z_{0})+C^{3+2(T-1)(N-1)}.\] (A.7)
Now suppose \(1\leq i\leq T\). For \(\theta\in\Theta\) the action of \(K_{\theta,l}\) on \(x\mapsto\mathcal{V}_{l}(x_{i})\) is
\[\sum_{1\leq F_{T},A^{i:N-1}_{1:T-1}\leq N}\int\delta_{z}(dx^{N}_{0: T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l}_{\theta,T}(x^{Q}_{T})} \prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1}) \prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t-1})}{\sum_{k=1}^{N}G ^{l}_{\theta,t-1}(x^{Q}_{t-1})}\] \[\times M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_ {l}(x^{F_{i}}_{i})\]
\[= \mathcal{V}_{l}(z_{i})\sum_{(F_{T},A^{1:N-1}_{1:T-1})\in S_{i}}\int \delta_{z}(dx^{N}_{0:T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l }_{\theta,T}(x^{Q}_{T})}\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x ^{j}_{0},dx^{j}_{1})\prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t -1})}{\sum_{k=1}^{N}G^{l}_{\theta,t-1}(x^{Q}_{t-1})}\] \[\times M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})+\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx^{N}_{0:T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l}_{\theta,T}(x^{Q}_{T})}\prod_{j=1}^{N-1} \mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1})\] \[\times\prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t- 1})}{\sum_{k=1}^{N}G^{l}_{\theta,t-1}(x^{Q}_{t-1})}M_{\theta,l}(x^{A^{j}_{t-1} }_{t-1},dx^{j}_{t})\mathcal{V}_{l}(x^{F_{i}}_{i}).\]
From (A.3) the first term in (A.8) is bounded above by \(\left(1-\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)}\right)\mathcal{V}_{l}(z_{i})\). For the second term in (A.8) we use the uniform boundedness of \(G^{l}_{\theta,T}\)
\[\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx^{N}_ {0:T})\frac{G^{l}_{\theta,T}(x^{F_{T}}_{T})}{\sum_{k=1}^{N}G^{l}_{\theta,T}(x^ {Q}_{T})}\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j }_{1})\prod_{t=2}^{T}\frac{G^{l}_{\theta,t-1}(x^{A^{j}_{t-1}}_{t-1})}{\sum_{k= 1}^{N}G^{l}_{\theta,t-1}(x^{Q}_{t-1})}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j }_{t})\mathcal{V}_{l}(x^{F_{i}}_{i})\] \[\leq\left(\frac{C^{2}}{N}\right)^{1+(T-1)(N-1)}\sum_{(F_{T},A^{1: N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx^{N}_{0:T})\prod_{j=1}^{N-1}\mu_{ \theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1})\prod_{t=2}^{i}M_{\theta, l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_{l}(x^{F_{i}}_{i}).\]
Using the multiplicative drift property of \(\mathcal{V}_{l}\) we have
\[\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx^{N}_ {0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1 })\prod_{t=2}^{i}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V}_{l} (x^{F_{i}}_{i})\] \[\leq C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx ^{N}_{0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j }_{1})\prod_{t=2}^{i-1}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})(\mathcal{ V}_{l}(x^{F_{i-1}}_{i-1}))^{\xi}1_{i>1}\] (A.9) \[+C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\int\delta_{z}(dx^{N }_{0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{ 1})(\mathcal{V}_{l,0}(x^{F_{i-1}}_{i-1}))^{\xi}1_{i=1}\]
We consider the cases \(F_{i-1}=N\) and \(F_{i-1}\neq N\). Suppose \(F_{i-1}=N\). The quantities in (A.9) are bounded above by
\[C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i}}\max((\mathcal{V}_{l,0}(z_{0}))^{ \xi},(\mathcal{V}_{l}(z_{i-1}))^{\xi})\ \leq\ CN^{1+(T-1)(N-1)}(\bar{\mathcal{V}}_{l}(z))^{\xi}.\]
Now consider the case \(F_{i-1}\neq N\). Using \(\mathcal{V}_{l},\mathcal{V}_{l,0}\geq 1\) and (A.4) we can bound the quantities in (A.9) by
\[C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i-1}}\int\delta_{z}(dx^ {N}_{0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j }_{1})\prod_{t=2}^{i-1}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V} _{l}(x^{F_{i-1}}_{i-1})1_{i>1}\] \[+C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i-1}}\int\delta_{z}(dx^ {N}_{0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j }_{1})\mathcal{V}_{l,0}(x^{F_{i-1}}_{i-1})1_{i=1}\] \[\leq C\sum_{(F_{T},A^{1:N-1}_{1:T-1})\notin S_{i-1}}\int\delta_{z}(dx ^{N}_{0:T})\prod_{j=1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j }_{1})\prod_{t=2}^{i-1}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})\mathcal{V} _{l}(x^{F_{i-1}}_{i-1})\] \[+C^{2}N^{1+(T-1)(N-1)}.\]
Therefore we have shown that
\[\sum_{(F_{T},A_{1:T-1}^{1:N-1})\not\in S_{i}}\int\delta_{z}(dx_{0:T}^{N})\prod_{j= 1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j},dx_{1}^{j})\prod_{t=2}^{i }M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\mathcal{V}_{l}(x_{i}^{F_{i}})\]
\[\leq C\left\{\bar{\mathcal{V}}_{l}(z)^{\xi}+C^{2}\right\}N^{1+(T-1)(N-1)}+C \sum_{(F_{T},A_{1:T-1}^{1:N-1})\not\in S_{i-1}}\int\delta_{z}(dx_{0:T}^{N})\prod _{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j},dx_{1}^{j})\] \[\prod_{t=2}^{i-1}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j}) \mathcal{V}_{l}(x_{i-1}^{F_{i-1}}).\]
Hence, by induction, the bound (A.5), and the fact that \(\mathcal{V}_{l},\mathcal{V}_{l,0}\geq 1\), we have shown that there is a constant \(C\) such that for every \(0\leq i\leq T\) and \(z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\):
\[\sum_{(F_{T},A_{1:T-1}^{1:N-1})\not\in S_{i}}\int\delta_{z}(dx_{0:T}^{N})\prod _{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j},dx_{1}^{j})\prod_{t =2}^{i}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\mathcal{V}_{l}(x_{i}^{F_ {i}})\leq C\bar{\mathcal{V}}_{l}(z)^{\xi}.\] (A.10)
Let \(C_{1}>\max\left(\left(C^{2}N\right)^{1+(T-1)(N-1)},C(T+1)\right)\). Combining the bounds (A.3) and (A.10) we have
\[\left(K_{\theta,l}\bar{\mathcal{V}}_{l}\right)(z)\leq\left(1-\frac{1}{C_{1}} \right)\mathcal{V}_{l,0}(z_{0})+\sum_{i=1}^{T}\left(\left(1-\frac{1}{C_{1}} \right)\mathcal{V}_{l}(z_{i})+\frac{C_{1}}{T+1}(\bar{\mathcal{V}}_{l}(z))^{ \xi}\right)=\left(1-\frac{1}{C_{1}}\right)\bar{\mathcal{V}}_{l}(z)+C_{1}(\bar {\mathcal{V}}_{l}(z))^{\xi}.\]
Let \(\lambda=\left(1-\frac{1}{2C_{1}}\right)\), \(\bar{m}>\left(2C_{1}^{2}\right)^{\frac{1}{1-\xi}}\), \(\bar{b}>C_{1}(2C_{1}^{2})^{\frac{\xi}{1-\xi}}\), and define the set \(\mathsf{C}_{\bar{m}}=\{x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\ :\ \bar{\mathcal{V}}_{l}(x)<\bar{m}\}\). We have
\[\left(K_{\theta,l}\bar{\mathcal{V}}_{l}\right)(z)\leq\lambda\bar{\mathcal{V}} _{l}(z)+\bar{b}\mathbbm{1}_{\mathsf{C}_{\bar{m}}}(z).\]
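To check that these choices give the displayed inequality (a short verification; we assume in addition that \(\bar{b}\) is taken at least \(C_{1}\bar{m}^{\xi}\)): if \(\bar{\mathcal{V}}_{l}(z)\geq\bar{m}>(2C_{1}^{2})^{\frac{1}{1-\xi}}\), then \(\bar{\mathcal{V}}_{l}(z)^{1-\xi}>2C_{1}^{2}\) and hence

\[C_{1}(\bar{\mathcal{V}}_{l}(z))^{\xi}=\frac{C_{1}}{\bar{\mathcal{V}}_{l}(z)^{1-\xi}}\,\bar{\mathcal{V}}_{l}(z)<\frac{1}{2C_{1}}\,\bar{\mathcal{V}}_{l}(z),\]

so that \(\left(1-\frac{1}{C_{1}}\right)\bar{\mathcal{V}}_{l}(z)+C_{1}(\bar{\mathcal{V}}_{l}(z))^{\xi}<\left(1-\frac{1}{2C_{1}}\right)\bar{\mathcal{V}}_{l}(z)=\lambda\bar{\mathcal{V}}_{l}(z)\). If instead \(\bar{\mathcal{V}}_{l}(z)<\bar{m}\), then \(z\in\mathsf{C}_{\bar{m}}\) and the remainder satisfies \(C_{1}(\bar{\mathcal{V}}_{l}(z))^{\xi}\leq C_{1}\bar{m}^{\xi}\leq\bar{b}\).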
Finally, taking \(m=\sqrt{\frac{\bar{m}}{T+1}}\), \(b=\frac{\bar{b}}{T+1}\), and \(\mathsf{C}_{m}=\{x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\ :\ V_{l}(x)<m\}\), we obtain
\[\left(K_{\theta,l}V_{l}^{2}\right)(z)\leq\lambda V_{l}^{2}(z)+b\mathbb{1}_{\mathsf{C}_{m}}(z).\]
The constant \(C_{1}\) can be taken as large as needed to guarantee that the set \(\mathsf{C}_{m}\) is non-empty. Furthermore, the constant \(C_{1}\) is independent of \(\theta\) and \(z\); consequently \(\lambda\), \(m\), and \(b\) are also independent of \(\theta\) and \(z\) which completes the proof.
**Corollary 1**.: _Assume (A4-6). For every \(l\in\mathbb{N}_{0}\), \(A_{1:T-1}^{1:N-1}\in\{1,...,N\}^{(T-1)(N-1)}\), \(F_{T}\in\{1,...,N\}\), and \(r\in\{1,2\}\):_
\[\sup_{z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}}\sup_{\theta\in\Theta}\left\| \int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta, l}(x_{0}^{j},dx_{1}^{j})\prod_{t=2}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}}, dx_{t}^{j})V_{l}^{r}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\right\|_{V_{l}^{r}}<\infty.\]
Proof.: As Lemma 2 holds, we have
\[\int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})M_{\theta,l}(x_{0}^{j},dx_{1}^{j})\prod_{t=2}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})V_{l}^{2}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\] \[\leq C^{1+(T-1)(N-1)}(K_{\theta,l}V_{l}^{2})(z)\] \[\leq C^{1+(T-1)(N-1)}(\lambda V_{l}^{2}(z)+b\mathbb{1}_{\mathsf{C}_{m}}(z))\] \[\leq C^{1+(T-1)(N-1)}(\lambda+b)V_{l}^{2}(z)\]
where we used (A5) and the definition of \(K_{\theta,l}\) to move to the second line. This proves the inequality for \(r=2\). To prove it for \(r=1\) we show that \(V_{l}\) itself is a drift function. Using Jensen's inequality we have
\[(K_{\theta,l}V_{l})(z)\leq\sqrt{(K_{\theta,l}V_{l}^{2})(z)}\leq\sqrt{\lambda V_{l}^{2}(z)+b\mathbb{1}_{\mathsf{C}_{m}}(z)}\leq\sqrt{\lambda}\,V_{l}(z)+\sqrt{b}\,\mathbb{1}_{\mathsf{C}_{m}}(z).\]
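The last inequality uses only elementary estimates: for \(a,c\geq 0\),

\[\sqrt{a+c}\leq\sqrt{a}+\sqrt{c}\qquad\text{and}\qquad\sqrt{\mathbb{1}_{\mathsf{C}_{m}}(z)}=\mathbb{1}_{\mathsf{C}_{m}}(z),\]

applied here with \(a=\lambda V_{l}^{2}(z)\) and \(c=b\,\mathbb{1}_{\mathsf{C}_{m}}(z)\).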
The proof of Lemma 2 did not require the \(M\)'s, \(\mu\)'s and \(G\)'s to have the same \(\theta\) subscripts. This observation immediately allows us to generalize Corollary 1 to mixed \(\theta\) subscripts, which will be helpful in verifying the theorem condition (TA4).
**Corollary 2**.: _Assume (A4-6). For every \(l\in\mathbb{N}_{0}\), \(A^{1:N-1}_{1:T-1}\in\{1,...,N\}^{(T-1)(N-1)}\), \(F_{T}\in\{1,...,N\}\), and \(r\in\{1,2\}\), there exists a constant \(C\) such that for every \(\vartheta\in\Theta^{(T+1)\times(N-1)}\) and \(z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\):_
\[\left\|\int\delta_{z}(dx^{N}_{0:T})\prod_{j=1}^{N-1}\mu_{\vartheta_{1,j}}(dx^{j}_{0})M_{\vartheta_{2,j},l}(x^{j}_{0},dx^{j}_{1})\prod_{t=2}^{T}M_{\vartheta_{t+1,j},l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{t})V^{r}_{l}(x^{F_{0}}_{0},...,x^{F_{T}}_{T})\right\|_{V^{r}_{l}}<C.\]
We have proven that (TA1) holds.
**Lemma 3**.: _Assume (A5) and (A8). For every \(l\in\mathbb{N}_{0}\) the assumption (TA2) holds with \(\eta_{l}\) defined above._
Proof.: Let \(z\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\) and let \(S\in\mathcal{B}((\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1})\) be a measurable set. We have
\[\begin{split} K_{\theta,l}(z,S)&=(K_{\theta,l} \mathbb{1}_{S})(z)\\ \geq&\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)} \sum_{1\leq F_{T},A^{1:N-1}_{1:T-1}\leq N}\int\delta_{z}(dx^{N}_{0:T})\prod_{j =1}^{N-1}\mu_{\theta}(dx^{j}_{0})M_{\theta,l}(x^{j}_{0},dx^{j}_{1})\\ &\quad\prod_{t=2}^{T}M_{\theta,l}(x^{A^{j}_{t-1}}_{t-1},dx^{j}_{ t})\mathbb{1}_{S}(x^{F_{0}}_{0},...,x^{F_{T}}_{T})\\ \geq&\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)} \int\mu_{\theta}(dx^{1}_{0})M_{\theta,l}(x^{1}_{0},dx^{1}_{1})\prod_{t=2}^{T} M_{\theta,l}(x^{1}_{t-1},dx^{1}_{t})\mathbb{1}_{S}(x^{1}_{0},...,x^{1}_{T}).\end{split}\] (A.11)
Recall that \(x_{t}\in(\mathbb{R}^{d})^{\Delta_{l}^{-1}}\) for every \(1\leq t\leq T\) and \(x_{0}\in\mathbb{R}^{d}\). We write \(x_{t,k}\in\mathbb{R}^{d}\) for the components of \(x_{t}\) for \(1\leq t\leq T\) and \(1\leq k\leq\Delta_{l}^{-1}\). From assumption (A8) and the definition of \(M_{\theta,l}\) we have
\[M_{\theta,l}(x^{1}_{t-1},dx^{1}_{t})=Q_{\theta,l}(x_{t-1,\Delta_{l}^{-1}},dx^{ 1}_{t,1})\prod_{i=2}^{\Delta_{l}^{-1}}Q_{\theta,l}(x_{t,i-1},dx_{t,i})\geq\prod _{i=1}^{\Delta_{l}^{-1}}\Xi_{l}(dx_{t,i}).\]
From assumption (A8) we also have
\[\mu_{\theta}(dx^{1}_{0})\geq\Psi(dx^{1}_{0}).\]
Therefore the last quantity in (A.11) is bounded below by
\[\left(\frac{1}{C^{2}N}\right)^{1+(T-1)(N-1)}\int\Psi(dx^{1}_{0})\prod_{t=1}^{T }\prod_{i=1}^{\Delta_{l}^{-1}}\Xi_{l}(dx_{t,i})\mathbb{1}_{S}(x^{1}_{0},...,x^ {1}_{T}).\]
This measure is independent of \(\theta\) and \(z\) which proves the minorization condition (TA2).
**Lemma 4**.: _Assume (A1), (A4), and (A9). Assumption (TA3) holds with \(\beta=1\)._
Proof.: From assumption (A9) there are \(q,C_{1}>0\) such that for every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\) the inequality \(\max\left(\|\nabla_{\theta}H_{l}(\theta,x)\|,|H_{l}(\theta,x)|\right)<C_{1}(1+ |x|^{q})\) holds. From assumption (A4) and the definition of \(V_{l}\) we have that \(\lim_{\|x\|\to\infty}\log V_{l}(x)/\log\|x\|=\infty\). Hence there exists a constant \(I>0\) such that \(V_{l}(x)>1+\|x\|^{q}\) for \(\|x\|>I\). On the compact set \(\|x\|\leq I\) the function \(V_{l}(x)/(1+\|x\|^{q})\) is continuous and positive hence has a minimum \(m>0\). Let \(C=C_{1}/\min(1,m)\). For every \((\theta,x)\in\Theta\times(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\) we have
\[CV_{l}(x)\geq C_{1}(1+\|x\|^{q})>\max\left(\|\nabla_{\theta}H_{l}(\theta,x)\|, |H_{l}(\theta,x)|\right)\]
from which the first inequality of (TA3) follows. For the second inequality let \(\theta,\theta^{\prime}\in\Theta\) and \(x\in(\mathbb{R}^{d})^{T\Delta_{l}^{-1}+1}\). Applying the mean value theorem, the Cauchy-Schwarz inequality, and the previous inequality we have
\[|H_{l}(\theta,x)-H_{l}(\theta^{\prime},x)|\leq\|\theta-\theta^{\prime}\|\sup_ {\theta\in\Theta}\|\nabla_{\theta}H_{l}(\theta,x)\|\ <CV_{l}(x)\|\theta-\theta^{\prime}\|.\]
**Lemma 5**.: _Assume (A1) and (A4-7). Assumption (TA4) holds with \(\beta=\min(1,\zeta)\) and \(p=2\)._
Proof.: From Lemma 2 the assumption (TA1) holds with \(p=2\). Let \(r\in\{1,2\}\) and \(\varphi\in\mathcal{L}_{V_{l}^{r}}\). Denote
\[G_{\theta}^{l}(x_{0:T}^{1:N},F_{T},A_{1:T-1}^{1:N-1})=\frac{G_{T,\theta}^{l}(x _{T}^{F_{T}})}{\sum_{k=1}^{N}G_{T,\theta}^{l}(x_{T}^{k})}\prod_{t=2}^{T}\frac {G_{t-1,\theta}^{l}(x_{t-1}^{A_{t-1}^{j}})}{\sum_{k=1}^{N}G_{t-1,\theta}^{l}( x_{t-1}^{k})}.\]
The functions \(\{G_{\theta}^{l}\}_{\theta\in\Theta}\) and \(\{\nabla_{\theta}G_{\theta}^{l}\}_{\theta\in\Theta}\) are uniformly bounded. For ease of notation define \(\mathcal{S}=\{(F_{T},A_{1:T-1}^{1:N-1})\::\:1\leq F_{T},A_{1:T-1}^{1:N-1}\leq N\}\) and \(A_{0}^{j}=j\) for \(1\leq j\leq N\). We write the decomposition
\[(K_{\theta,l}\varphi)(z)-(K_{\theta^{\prime},l}\varphi)(z) = \sum_{\mathcal{S}}\int\left(G_{\theta}^{l}(x_{0:T}^{1:N},F_{T},A_ {1:T-1}^{1:N-1})-G_{\theta^{\prime}}^{l}(x_{0:T}^{1:N},F_{T},A_{1:T-1}^{1:N-1 })\right)\delta_{z}(dx_{0:T}^{N})\] (A.12) \[\times\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})\prod_{t=1}^{T}M_ {\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\varphi(x_{0}^{F_{0}},...,x_{T}^{ F_{T}})\] \[+\sum_{\mathcal{S}}\int G_{\theta^{\prime}}^{l}(x_{0:T}^{1:N},F_{ T},A_{1:T-1}^{1:N-1})\delta_{z}(dx_{0:T}^{N})\]
\[\times\left(\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})-\prod_{j=1} ^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\prod_{j=1}^{N-1}\prod_{t=1}^{T }M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\varphi(x_{0}^{F_{0}},...,x_{T} ^{F_{T}})\] \[+\sum_{\mathcal{S}}\int G_{\theta^{\prime}}^{l}(x_{0:T}^{1:N},F_{ T},A_{1:T-1}^{1:N-1})\delta_{z}(dx_{0:T}^{N}))\] \[\times\prod_{j=1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\left( \prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})-\prod_{t=1}^{T}M_ {\theta^{\prime}}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\right)\varphi(x_{0}^{F_{0}},...,x_{T}^{F_{T}}).\]
Applying the mean value theorem to the function \(G_{\theta}^{l}\), along with the uniform boundedness of its derivative from assumption (A5), we obtain
\[|G_{\theta}^{l}(x_{0:T}^{1:N},F_{T},A_{1:T-1}^{1:N-1})-G_{\theta^{\prime}}^{l}( x_{0:T}^{1:N},F_{T},A_{1:T-1}^{1:N-1})|=|\nabla_{\theta}G_{\theta^{\prime\prime}}(x_{0:T}^{ 1:N},F_{T},A_{1:T-1}^{1:N-1})\cdot(\theta-\theta^{\prime})|\leq C\|\theta- \theta^{\prime}\|.\]
Hence the first term is bounded from above by
\[\begin{split}& C\|\theta-\theta^{\prime}\|\int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\varphi(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\\ &=C\|\theta-\theta^{\prime}\|\int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\frac{\varphi(x_{0}^{F_{0}},...,x_{T}^{F_{T}})}{V_{l}^{r}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})}V_{l}^{r}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\\ &\leq C\|\theta-\theta^{\prime}\|\|\varphi\|_{V_{l}^{r}}\int\delta_{z}(dx_{0:T}^{N})\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})V_{l}^{r}(x_{0}^{F_{0}},...,x_{T}^{F_{T}}).\end{split}\]
By Corollary 1, this is bounded above by \(CV_{l}^{r}(z)\). For the second term we can further decompose the term \(\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})-\prod_{j=1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\) as
\[\begin{split}&\prod_{j=1}^{N-1}\mu_{\theta}(dx_{0}^{j})-\prod_{j=1 }^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\\ &=\sum_{i=1}^{N-1}\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j} )\right)(\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i}))\left( \prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\end{split}\]
with the convention that an empty product is \(1\). Similarly to the argument for the first term, it is enough to bound the second term for \(\varphi=V_{l}^{r}\). Since \(G\) is bounded it is sufficient to study the quantity
\[\begin{split}\sum_{i=1}^{N-1}\sum_{\mathcal{S}}\int\delta_{z}(dx _{0:T}^{N})\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{ \theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1 }^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\\ \prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})V_{l}^{r}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\end{split}\] (A.13)
where \(|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\) is, as defined above, the sum of the measures in Hahn-Jordan decomposition of the signed measure \(\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})\). By the Cauchy-Schwarz inequality
\[\begin{split}\int\delta_{z}(dx_{0:T}^{N})\left(\prod_{j=1}^{N-1-i} \mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime }}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j}) \right)\times\\ \prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j} },dx_{t}^{j})V_{l}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\end{split}\]
\[\begin{split}\leq&\left(\int\delta_{z}(dx_{0:T}^{N}) \left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N -i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1}^{N-1}\mu_{ \theta^{\prime}}(dx_{0}^{j})\right)\prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{\theta,l }(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\right)^{\frac{1}{2}}\times\\ &\qquad\qquad\qquad\left(\int\delta_{z}(dx_{0:T}^{N})\left(\prod_{ j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N-i})-\mu_{ \theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime }}(dx_{0}^{j})\right)\times\\ &\qquad\qquad\qquad\qquad\prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{ \theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})V_{l}^{2}(x_{0}^{F_{0}},...,x_{T}^ {F_{T}})\right)^{\frac{1}{2}}\end{split}\]
\[=\left(\int\delta_{z}(dx_{0:T}^{N})|\mu_{\theta}(dx_{0}^{N-i})-\mu_{ \theta^{\prime}}(dx_{0}^{N-i})|\right)^{\frac{1}{2}}\Bigg{(}\int\delta_{z}(dx_{0: T}^{N})\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N-i })-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\times\] \[\qquad\qquad\qquad\left(\prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime} }(dx_{0}^{j})\right)\prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t -1}^{j}},dx_{t}^{j})V_{l}^{2}(x_{0}^{F_{0}},...,x_{T}^{F_{T}})\Bigg{)}^{\frac{ 1}{2}}\] \[\leq\Bigg{(}\int\delta_{z}(dx_{0:T}^{N})|\mu_{\theta}(dx_{0}^{N-i })-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\mathcal{V}_{l}(x_{0}^{N-i})\Bigg{)}^{ \frac{1}{2}}\Bigg{(}\int\delta_{z}(dx_{0:T}^{N})\Bigg{(}\prod_{j=1}^{N-1-i}\mu _{\theta}(dx_{0}^{j})\Bigg{)}\times\] \[\qquad\qquad\qquad\qquad\qquad\prod_{j=1}^{N-1}\prod_{t=1}^{T}M_ {\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})V_{l}^{2}(x_{0}^{F_{0}},...,x_{T}^ {F_{T}})\Bigg{)}^{\frac{1}{2}}\]
where we have used \(V_{l}\geq 1\). Hence we only need to consider \(r=2\). Recall that \(V_{l}^{2}(x)\) is defined by \(\frac{1}{T+1}\sum_{k=0}^{T}\mathcal{V}_{l}(x_{k})\). In a manner exactly similar to the proof of Lemma 2, we start with a \(0\leq k\leq T\) and keep descending with the indices as we integrate with respect to the \(M_{\theta,l}\)'s until either some \(F_{r}\), for some \(0\leq r\leq k\), hits \(N\), the index of the conditioned path \(z\), or we end up with \(F_{0}\neq N\). In the first case we can find \(C_{1}>0\) such that
\[\int\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\prod_{j=1}^{N-1}\prod_{t=1}^{T}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\mathcal{V}_{l}(x_{k}^{F_{k}})\] \[\leq C_{1}\mathcal{V}_{l}(z_{r})\int\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N-i+1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\prod_{j=1}^{N-1}\prod_{t=1}^{r}M_{\theta,l}(x_{t-1}^{A_{t-1}^{j}},dx_{t}^{j})\] \[=C_{1}\mathcal{V}_{l}(z_{r})\int|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\] \[\leq C_{1}\mathcal{V}_{l}(z_{r})\int|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\mathcal{V}_{l}(x_{0}^{N-i})\] \[\leq C_{2}\mathcal{V}_{l}(z_{r})\|\theta-\theta^{\prime}\|^{\zeta}\]
for a constant \(C_{2}\) where we employed assumption (A7) in the last line. Thus the sum in (A.13) is bounded by \(CV_{l}^{2}(z)\|\theta-\theta^{\prime}\|^{\zeta}\). In the case where \(F_{0}\neq N\) the previous bound becomes
\[C_{1}\int\left(\prod_{j=1}^{N-1-i}\mu_{\theta}(dx_{0}^{j})\right)| \mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\left(\prod_{j=N- i+1}^{N-1}\mu_{\theta^{\prime}}(dx_{0}^{j})\right)\mathcal{V}_{l}(x_{0}^{F_{0}})\] \[= \ \mathbb{1}_{\{F_{0}=N-i\}}\int|\mu_{\theta}(dx_{0}^{N-i})-\mu_{ \theta^{\prime}}(dx_{0}^{N-i})|\mathcal{V}_{l}(x_{0}^{N-i})+\mathbb{1}_{\{F_{0 }\neq N-i\}}\int|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i})|\] \[\leq \ \int|\mu_{\theta}(dx_{0}^{N-i})-\mu_{\theta^{\prime}}(dx_{0}^{N-i })|\mathcal{V}_{l}(x_{0}^{N-i})\] \[\leq C\|\theta-\theta^{\prime}\|^{\zeta}.\]
The last term of equation (A.12) can be bounded in a similar manner to the second term. Finally, because the set \(\Theta\) is bounded, the choice \(\beta=\min(1,\zeta)\) works.
2307.16454 | A cork of the rational surface with the second Betti number 9 | We provide the first explicit example of a cork of $\mathbf{CP}^2 \#
8\overline{\mathbf{CP}^2}$. This result gives the current smallest second Betti
number of a standard simply-connected closed $4$-manifold for which an explicit
cork has been found. | Yohei Wakamaki | 2023-07-31T07:26:07Z | http://arxiv.org/abs/2307.16454v3 | # A CORK OF THE RATIONAL SURFACE WITH THE SECOND BETTI NUMBER 9
###### Abstract.
We provide the first explicit example of a cork of \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). This result gives the current smallest second Betti number of a standard simply-connected closed \(4\)-manifold for which an explicit cork has been found.
Key words and phrases: \(4\)-manifold; cork; rational surface. 2020 Mathematics Subject Classification: 57R55, 57R65
## 1. Introduction
Throughout this article, we assume that all manifolds are smooth and oriented, and all maps are smooth unless otherwise stated.
One of the fascinating problems in \(4\)-dimensional topology asks whether a simply-connected closed \(4\)-manifold with a small second Betti number \(b_{2}\) admits an exotic smooth structure. The interest in this problem stems from the fact that constructing an exotic smooth structure on such a \(4\)-manifold is much more challenging than on those with large \(b_{2}\). The first example of an exotic smooth structure on simply-connected closed \(4\)-manifolds was discovered by Donaldson [7, 8]. A few years later, Friedman and Morgan [10] proved that for each integer \(m\geq 10\), there exists a simply-connected closed \(4\)-manifold with \(b_{2}=m\) such that it admits infinitely many exotic smooth structures. After their result, many experts [19, 24, 26, 25, 3, 4] contributed to lowering the known minimal value of \(b_{2}\) where an exotic smooth structure on a simply-connected closed \(4\)-manifold exists. Consequently, we now know that for each \(m\geq 3\), there exists a simply-connected closed \(4\)-manifold with \(b_{2}=m\) that admits infinitely many exotic smooth structures.
Regarding the exotic smooth structures on simply-connected closed \(4\)-manifolds, the study of corks of \(4\)-manifolds (see Definition 3.1) plays an important role. Due to the work of Curtis-Freedman-Hsiang-Stong [6] and independently Matveyev [23], for any exotic pair \((X,Y)\) of simply-connected closed \(4\)-manifolds, there exists a cork \((C,\tau)\) of \(X\) such that \(Y\) is diffeomorphic to the _cork twist_ of \(X\) along an embedded copy of \(C\), i.e., the \(4\)-manifold obtained by cutting out the embedded copy of \(C\) in \(X\) and regluing it via the involution \(\tau\). In other words, one can obtain any exotic smooth structure of \(X\) by twisting a cork.
However, despite the importance of corks of simply-connected closed \(4\)-manifolds, we have relatively few explicit examples of them. In particular, the minimal value of \(b_{2}\) of standard simply-connected closed \(4\)-manifolds for which an explicit cork had been found to
date was \(10\) [1]. A simply-connected closed \(4\)-manifold is called standard if it is obtained as the connected sum of finitely-many copies of \(\mathbf{CP}^{2},\overline{\mathbf{CP}^{2}},S^{2}\times S^{2},K3\) and \(\overline{K3}\). As is well-known, if the celebrated \(11/8\)_-conjecture_[22] is true, then it follows that any simply-connected closed \(4\)-manifold is homeomorphic to one of the standard simply-connected closed \(4\)-manifolds.
This situation naturally leads us to the following problem, which we can regard as a cork version of the problem at the beginning of this paper.
**Problem 1.1**.: _Find an example of a cork of a standard simply-connected closed \(4\)-manifold with \(b_{2}\leq 9\)._
We remark that a cork of a non-standard simply-connected closed \(4\)-manifold with \(b_{2}=9\) has already been found by Akbulut and Yasui [2, Remark 6.2], but it is unknown whether its cork twist results in a standard simply-connected closed \(4\)-manifold. By the definition of cork twist, if the cork twist along their cork results in a standard simply-connected closed \(4\)-manifold, it immediately follows that their cork is also a cork of a standard simply-connected closed \(4\)-manifold.
In this article, we prove the following theorem, providing an answer to Problem 1.1 in the case \(b_{2}=9\).
**Theorem 1.2**.: _The cork \((W_{2},f_{2})\) is a cork of \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\)._
The proof of this theorem is divided into two parts. The first is giving an explicit Kirby diagram (Figure 19) of an exotic \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) obtained by Yasui's construction [29, Corollary 5.2] which uses the rational blowdown technique [9]. We denote this manifold as \(R_{8}\). Note that Yasui [29] explicitly described a procedure to give a Kirby diagram of a \(4\)-manifold obtained by his construction. However, no explicit Kirby diagram of an exotic \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) obtained by Yasui's construction was given before. We follow Yasui's procedure to give a diagram of \(R_{8}\) with some modifications. The second is finding a diagram that contains an embedded copy of \(W_{2}\) such that the cork twist along \((W_{2},f_{2})\) results in \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). Such a diagram is described in Figure 22.
The following follows from Theorem 1.2.
**Corollary 1.3**.: _The manifold \(R_{8}\) becomes diffeomorphic to \(2\mathbf{CP}^{2}\#9\overline{\mathbf{CP}^{2}}\) after taking a connected sum with \(S^{2}\times S^{2}\)._
Figure 1. The cork \((W_{2},f_{2})\). The involution \(f_{2}\) of \(\partial W_{2}\) is defined by exchanging the zero and the dot in this diagram.
This corollary is related to the famous open problem asking whether every exotic pair of simply-connected closed \(4\)-manifolds becomes diffeomorphic after one stabilization, i.e., taking a connected sum with \(S^{2}\times S^{2}\). It is well-known that, due to the theorem of Wall [28], every exotic pair of simply-connected closed \(4\)-manifolds becomes diffeomorphic after sufficiently many stabilizations, and it has been proved that only one stabilization is enough in many cases. To the best of the author's knowledge, among the exotic pairs of simply-connected closed \(4\)-manifolds whose Kirby diagrams are explicitly given, the pair \((\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}},R_{8})\) is currently the smallest example in terms of \(b_{2}\) that becomes diffeomorphic after one stabilization. It is not clear to the author whether other examples of exotic pairs of simply-connected closed \(4\)-manifolds with \(b_{2}\leq 8\) known to date become diffeomorphic after one stabilization. We note that many variants of the problem about one stabilization have recently been answered negatively. For details, see [20, 21, 12, 18, 15, 13, 14, 16, 17, 5].
**Remark 1.4**.: We can also prove that an exotic \(\mathbf{CP}^{2}\#k\overline{\mathbf{CP}^{2}}\) \((k=5,6,7,9)\) obtained by Yasui's construction becomes diffeomorphic to \(2\mathbf{CP}^{2}\#(k+1)\overline{\mathbf{CP}^{2}}\) after one stabilization. Furthermore, after posting the first version of this article on arXiv, Rafael Torres informed the author that a stronger version of Corollary 1.3 holds. Namely, the manifold \(R_{8}\) becomes diffeomorphic to \(2\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) after taking a connected sum with \(\mathbf{CP}^{2}\). It is possible to apply his idea to an exotic \(\mathbf{CP}^{2}\#k\overline{\mathbf{CP}^{2}}\) \((k=6,7,9)\) obtained by Yasui's construction. These results will be discussed in the forthcoming paper [27].
### Acknowledgement
The author wishes to express his deepest gratitude to his advisor, Kouichi Yasui, for his patience, encouragement, and for suggesting the intriguing topic of this paper. He is grateful to Rafael Torres for generously informing the author of a stronger version of Corollary 1.3 mentioned in Remark 1.4. He also thanks Natsuya Takahashi for many valuable conversations and comments on the draft of this paper, and Yuichi Yamada for his interest in this study.
## 2. Rational blowdown and Yasui's small exotic rational surfaces
We start this section by reviewing the definition of the rational blowdown of \(4\)-manifold, which was introduced by Fintushel and Stern [9]. Then, we recall Yasui's construction [29] of an exotic \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) that uses the rational blowdown. To find a cork embedded in an exotic \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) in section 3, we follow Yasui's construction [29] with small modifications (see Remark 2.6) and give an explicit diagram of the \(4\)-manifold.
**Definition 2.1**.: For each integer \(p\geq 2\), let \(C_{p}\) and \(B_{p}\) be the compact \(4\)-manifolds with boundary defined by the Kirby diagrams in Figure 2. Here \(u_{i}\) in Figure 2 represents the elements of \(H_{2}(C_{p};\mathbf{Z})\) given by the corresponding \(2\)-handles. (i.e., \(u_{p-1}^{2}=-p-2,u_{i}^{2}=-2,\) and \(u_{i}\cdot u_{i+1}=+1\)\((1\leq i\leq p-2)\).)
**Definition 2.2** ([9],[11]).: Let \(X\) be a compact \(4\)-manifold and \(C\) be an embedded copy of \(C_{p}\) in \(X\). The \(4\)-manifold \(X_{(p)}=X-(\mathrm{int}C)\cup_{\partial}B_{p}\) is called the _rational blowdown_ of
\(X\) along \(C\). This operation is well-defined since any self-diffeomorphism of \(\partial B_{p}\) extends over \(B_{p}\) (For details, see [11, Section 8.5]).
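For later bookkeeping it is useful to record the standard homological effect of this operation: since \(C_{p}\) is negative definite with \(b_{2}(C_{p})=p-1\) and \(B_{p}\) is a rational homology ball, one has

\[\chi(X_{(p)})=\chi(X)-(p-1),\qquad\sigma(X_{(p)})=\sigma(X)+(p-1),\qquad b_{2}(X_{(p)})=b_{2}(X)-(p-1).\]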
Yasui's construction [29] starts from the following proposition. Although he constructed exotic \(\mathbf{CP}^{2}\#k\overline{\mathbf{CP}^{2}}\) for \(5\leq k\leq 9\), we only focus on the case \(k=8\).
**Proposition 2.3** ([29, Proposition 3.1 (1)]).: _For \(a\geq 1\), there exists a handle decomposition of \(\mathbf{CP}^{2}\) as in Figure 3._
**Conventions 2.4**.: (1) In the figures below, we often draw only the local pictures of Kirby diagrams. We assume that the parts not drawn in the diagrams are naturally inherited from the previous diagrams and always fixed.
(2) In order to indicate the bands for the handle slides, we sometimes draw arrows in the diagrams as on the left in Figure 4. Figure 4 only shows the cases when the attaching circles of \(2\)-handles are unknots and unlinked. As in Figure 4, the shape of the arrow determines the band for the handle slide. The arrow with a positive twist on the left in Figure 4 (b) will only appear in Figure 26 and Figure 28. As usual, the boxes with integer \(m\) in figures stand for \(m\) right-handed full twists if \(m\) is positive and \(|m|\) left-handed full twists if \(m\) is negative.

Figure 2.
(3) We often represent the framings of the 2-handles by the second homology classes corresponding to the 2-handles. This is because we need the information of the homology classes of certain 2-handles to prove Theorem 2.12. When the second homology classes of 2-handles are exhibited in Kirby diagrams, we refer to the 2-handles or the attaching circles of 2-handles by their homology classes. Note that one can obtain the usual framing coefficients of the 2-handles by squaring their homology classes.
(4) We denote the natural orthogonal basis of \(H_{2}(\mathbf{CP}^{2}\#k\overline{\mathbf{CP}^{2}};\mathbf{Z})=H_{2}(\mathbf{ CP}^{2};\mathbf{Z})\oplus_{k}H_{2}(\overline{\mathbf{CP}^{2}};\mathbf{Z})\) by \(h,e_{1},e_{2},\ldots,e_{k}\) (i.e., \(h^{2}=1,e_{i}^{2}=-1,h\cdot e_{i}=0,\text{ and }e_{i}\cdot e_{j}=0(1\leq i\neq j\leq k)\)).
**Proposition 2.5** (cf. [29, Proposition 3.2 (1), \(a=4\)]).: \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\) _admits a handle decomposition as in Figure 5._
Proof.: The top left picture of Figure 6 shows the neighborhood of the full twist in Figure 3 for the case \(a=4\). By the procedure described in Figure 6, we obtain a Kirby diagram of \(\mathbf{CP}^{2}\#2\overline{\mathbf{CP}^{2}}\) shown in Figure 7. We isotope this diagram to obtain Figure 8. Then we can move one of the kinks on the left side of this diagram to obtain Figure 9. By another isotopy, we obtain Figure 10 and slide \(5h-3e_{1}-2e_{2}\) over \(2h\) to obtain Figure 11. Now we isotope this diagram and perform blowups 11 times to obtain Figure 12. We slide \(e_{9}\) over \(e_{10}\), \(e_{10}\) over \(e_{11}\), \(e_{11}\) over \(e_{12}\), and \(e_{12}\) over \(e_{13}\). Then we obtain Figure 13. If we perform a blowup here, we obtain Figure 5, and this diagram represents a handle decomposition of \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\).
**Remark 2.6**.: The reader may notice that there are some differences between the Kirby diagrams in Figure 5 and [29, Proposition 3.2 (1), \(a=4\)]. First, we kept the attaching circles of all the 2-handles, whereas in [29] the 2-handles \(h,2h,e_{1},e_{2},\ldots,e_{8}\), and \(e_{14}\) are omitted for simplicity. Furthermore, we did not perform handle slides of \(e_{3},e_{4},\ldots,e_{8}\) since we will use two of them to construct a cork in Lemma 3.2. The consequence of this change is that we have a shorter plumbing in the bottom right part of Figure 5 compared to [29, Proposition 3.2 (1), \(a=4\)].

Figure 4. The arrows (on the left) determine the bands (on the right) for handle slides.

Figure 6.
The next corollary follows from the Kirby diagram of \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\) in Figure 5.
**Corollary 2.7** ([29, Corollary 3.3 (1), \(a=4\)]).: \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\) _contains a copy of \(C_{7}\) such that the elements \(u_{1},u_{2},\ldots,u_{6}\) of \(H_{2}(C_{7};\mathbf{Z})\) in \(H_{2}(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}};\mathbf{Z})\) satisfy_
\[u_{i} =e_{8+i}-e_{9+i}\ (1\leq i\leq 5),\] \[u_{6} =7h-3e_{1}-2e_{2}-2e_{3}-\cdots-2e_{13}-e_{14}.\]
Proof.: In the bottom right part of Figure 5, we can find a part of the diagram of \(C_{7}\). Their homology classes satisfy the above equations. We can also find the \(2\)-handle \(u_{6}\), so what we need to check is that the attaching circle of \(u_{6}=7h-3e_{1}-2e_{2}-2e_{3}-2e_{4}-2e_{5}-2e_{6}-2e_{7}-2e_{8}-2e_{9}-2e_{10} -2e_{11}-2e_{12}-2e_{13}-e_{14}\) is the unknot. If we delete all the other \(2\)-handles from Figure 5, it looks like Figure 14. We can isotope this diagram to obtain Figure 15. We can find four positive full twists on the right side of Figure 15. By an isotopy, we obtain Figure 16. We can find another four positive full twists from the left side of Figure 16, and we obtain Figure 17 by canceling those full twists again. The reader can check that this diagram represents the unknot.
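As a quick consistency check, the classes listed in Corollary 2.7 do realize the intersection pattern of \(C_{7}\) from Definition 2.1:

\[u_{i}^{2}=(e_{8+i}-e_{9+i})^{2}=-2\ (1\leq i\leq 5),\qquad u_{i}\cdot u_{i+1}=+1\ (1\leq i\leq 5),\qquad u_{i}\cdot u_{j}=0\ (|i-j|\geq 2),\]

while \(u_{6}^{2}=49-9-12\cdot 4-1=-9=-7-2\), as required for \(u_{p-1}\) with \(p=7\).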
**Definition 2.8** ([29, Definition 3.5, \(a=4\)]).: The rational blowdown of \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\) along the embedded copy of \(C_{7}\) in Corollary 2.7 is denoted as \(R_{8}\).
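As a sanity check on the characteristic numbers: blowing down \(C_{7}\) removes \(b_{2}(C_{7})=6\) from the second homology and replaces a negative definite piece by the rational homology ball \(B_{7}\), so

\[b_{2}(R_{8})=15-6=9,\qquad\chi(R_{8})=17-6=11,\qquad\sigma(R_{8})=-13+6=-7,\]

which agree with the corresponding invariants of \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\).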
**Remark 2.9**.: Hereafter, we will not need to track the information of the homology classes of \(2\)-handles. So, we will use framing coefficients to represent the framings of \(2\)-handles in the following diagrams. We obtain Figure 18 by squaring the homology classes in Figure 5.
Now we will draw a diagram of \(R_{8}\).
**Theorem 2.10**.: \(R_{8}\) _admits a handle decomposition of Figure 19._
Proof.: Recall that there is a procedure to draw a diagram of a rational blowdown [2, Figure 16]. If we apply this procedure to the copy of \(C_{7}\) in Figure 5, we obtain Figure 19.
**Remark 2.11**.: In [29, Proposition 3.9], Yasui showed that a \(4\)-manifold obtained by his construction is homeomorphic to \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). To prove that it is homeomorphic to \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\), he checked the simply-connectedness of his manifold to apply Rochlin's theorem and Freedman's theorem. He also proved that a \(4\)-manifold obtained by his construction is not diffeomorphic to \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) by showing that its Seiberg-Witten invariant is non-trivial [29, Lemma 5.1 (1)]. To prove the non-triviality of the Seiberg-Witten invariant, he first calculated a non-trivial value of the Seiberg-Witten invariant of \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\). Then he used the information of the homology classes of the \(2\)-handles of \(C_{7}\) to apply the theorems of Fintushel and Stern [9] on the Seiberg-Witten invariant of the rational blowdown, and proved that a value of the Seiberg-Witten invariant of his manifold coincides with the above non-trivial value of the Seiberg-Witten invariant of \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\).
Figure 16.

Figure 17.

Figure 15.
We followed Yasui's procedure [29] to construct \(C_{7}\) in \(\mathbf{CP}^{2}\#14\overline{\mathbf{CP}^{2}}\) with small modifications. However, we can still apply the same arguments to prove that \(R_{8}\) is an exotic \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). First, by sliding a \(-1\)-framed unknot over the \(0\)-framed unknot in the diagram in Figure 19, we obtain a diagram of \(R_{8}\) where we can cancel the unique \(1\)-handle. So \(R_{8}\) is simply-connected, and one can check that \(R_{8}\) is homeomorphic to \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) by applying Rochlin's theorem and Freedman's theorem as in Yasui's argument. Also, the homology classes of the \(2\)-handles of \(C_{7}\) in Figure 5 and those in [29, Corollary 3.3 (1)] are the same. Therefore, the non-triviality of the Seiberg-Witten invariant of \(R_{8}\) has also been proved by Yasui's argument. Thus, we obtain the following theorem.

Figure 19. A diagram of \(R_{8}\).

Figure 18.
**Theorem 2.12** (cf. [29, Corollary 5.2 (1)]).: \(R_{8}\) _is homeomorphic but not diffeomorphic to \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\)._
## 3. Finding a cork
In this section, we prove Theorem 1.2 and Corollary 1.3. To prove Theorem 1.2, we find an embedding of a cork into \(R_{8}\) by using the diagram in Theorem 2.10, and show that the cork twist of \(R_{8}\) along this cork results in the standard \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). First, let us briefly review the basic terminology associated with corks.
**Definition 3.1**.: Let \((C,\tau)\) be a pair of a compact contractible \(4\)-manifold with boundary and an involution \(\tau\) on the boundary \(\partial C\). We call \((C,\tau)\) a _cork_ if \(\tau\) does not extend to any self-diffeomorphism on \(C\). A cork \((C,\tau)\) is called a cork of a \(4\)-manifold \(X\) if \(C\) is embedded in the interior of \(X\) and the \(4\)-manifold \(X_{(C,\tau)}:=X-\operatorname{int}(C)\cup_{\tau}C\) is not diffeomorphic to \(X\). The \(4\)-manifold \(X_{(C,\tau)}\) is called the _cork twist_ of \(X\) along \((C,\tau)\). Note that for any cork \((C,\tau)\) and any embedding of \(C\), the cork twist \(X_{(C,\tau)}\) is homeomorphic to \(X\).
**Lemma 3.2**.: _There is an embedding of \(W_{2}\) into \(R_{8}\)._
Proof.: Figure 20 (a) shows a local part of Figure 19. In the following argument, we often draw only the local parts of the diagrams when the full diagrams are not necessary. As mentioned in Conventions 2.4, the parts of the diagrams not drawn in the figures are always fixed. We slide a \(-1\)-framed unknot over the \(0\)-framed unknot, and then we isotope the diagram to obtain Figure 20 (b). We can isotope this diagram to Figure 20 (c). We slide the \(0\)-framed unknot over the \(6\)-framed unknot to obtain Figure 20 (d). By isotopies, we obtain Figure 20 (e), (f), and then Figure 21 (a). We slide the \(4\)-framed unknot over the \(-1\)-framed unknot and then isotope the diagram to obtain Figure 21 (b). By another isotopy, we obtain Figure 21 (c). By sliding the \(0\)-framed unknot in the diagram over one of the \(-1\)-framed unknots, we obtain Figure 21 (d). Now the entire diagram looks like Figure 22. This diagram shows that \(R_{8}\) contains a submanifold described in Figure 23. By isotoping this diagram as in Figures 14, 15, 16, and 17, we obtain Figure 24 (a). We isotope this diagram to obtain Figure 24 (b). By another isotopy, we can deform this diagram to Figure 24 (c), a diagram of \(W_{2}\). Therefore there is an embedding of \(W_{2}\) in \(R_{8}\).
Now we are ready to prove Theorem 1.2.
Proof of Theorem 1.2.: It is enough to show that the cork twist of \(R_{8}\) along \((W_{2},f_{2})\) results in \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\). Figure 25 shows a diagram of the cork twist of \(R_{8}\) along the embedded copy of \((W_{2},f_{2})\) in Lemma 3.2. Figure 26 (a) shows the bottom right part of this diagram. By following the steps described in Figure 26, we obtain Figure 26 (f). After performing a handle slide as shown in this figure, we can isotope the diagram to obtain Figure 27
(a). By following the steps described in Figure 27, we obtain Figure 27 (f). Note that the deformation from Figure 27 (b) to Figure 27 (c), in which the dot and 0 are exchanged, gives a diffeomorphism because the 0-framed unknot geometrically links with the dotted circle once, so this exchange represents cutting out a 4-ball and pasting it back. After a handle slide and isotopy, we obtain Figure 28 (a). By following the steps described in Figure 28, we obtain Figure 28 (f). After a handle slide and isotopy, we obtain Figure 29 (f).
Proof of Corollary 1.3.: First, recall that if \(M\) is a simply-connected 4-manifold, the result of the surgery along any embedded \(S^{1}\) must be diffeomorphic to either \(M\#S^{2}\!\times\!S^{2}\) or \(M\#S^{2}\tilde{\times}S^{2}\) ([11, Proposition 5.2.3]). Furthermore, if \(M\) is a non-spin simply-connected 4-manifold, then \(M\#S^{2}\!\times\!S^{2}\) and \(M\#S^{2}\tilde{\times}S^{2}\) are diffeomorphic ([11, Proposition 5.2.4]). Since the 4-manifolds \(R_{8}\) and \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) are non-spin and simply-connected, we only have to show that the results of the surgeries along embedded \(S^{1}\) in those 4-manifolds are diffeomorphic.
By Theorem 1.2, we know that there exists a diagram of \(R_{8}\) and a diagram of \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) such that each diagram contains a copy of the diagram of \(W_{2}\) (in Figure 1), and each diagram becomes identical after changing a dotted circle into a 0-framed unknot. In general,
Figure 22. Another diagram of \(R_{8}\).
changing a dotted circle into a \(0\)-framed unknot corresponds to a surgery along an embedded \(S^{1}\) [11, Section 5.4]. Therefore the results of the surgeries along embedded \(S^{1}\) in \(R_{8}\) and \(\mathbf{CP}^{2}\#8\overline{\mathbf{CP}^{2}}\) are diffeomorphic.
|
2309.14937 | Virtual Reality as a Tool for Studying Diversity and Inclusion in
Human-Robot Interaction: Advantages and Challenges | This paper investigates the potential of Virtual Reality (VR) as a research
tool for studying diversity and inclusion characteristics in the context of
human-robot interactions (HRI). Some exclusive advantages of using VR in HRI
are discussed, such as a controllable environment, the possibility to
manipulate the variables related to the robot and the human-robot interaction,
flexibility in the design of the robot and the environment, and advanced
measurement methods related e.g. to eye tracking and physiological data. At the
same time, the challenges of researching diversity and inclusion in HRI are
described, especially in accessibility, cyber sickness and bias when developing
VR-environments. Furthermore, solutions to these challenges are being discussed
to fully harness the benefits of VR for the studying of diversity and
inclusion. | André Helgert, Sabrina C. Eimler, Carolin Straßmann | 2023-09-26T13:48:30Z | http://arxiv.org/abs/2309.14937v1 | Virtual Reality as a Tool for Studying Diversity and Inclusion in Human-Robot Interaction: Advantages and Challenges
###### Abstract
This paper investigates the potential of Virtual Reality (VR) as a research tool for studying diversity and inclusion characteristics in the context of human-robot interactions (HRI). Some exclusive advantages of using VR in HRI are discussed, such as a controllable environment, the possibility to manipulate the variables related to the robot and the human-robot interaction, flexibility in the design of the robot and the environment, and advanced measurement methods related e.g. to eye tracking and physiological data. At the same time, the challenges of researching diversity and inclusion in HRI are described, especially in accessibility, cyber sickness and bias when developing VR-environments. Furthermore, solutions to these challenges are being discussed to fully harness the benefits of VR for the studying of diversity and inclusion.
## I Introduction
Virtual Reality (VR) has established itself as a powerful research tool in human-computer interaction through the use of unique capabilities and immersive experiences [1]. This is not only due to the fact that VR can be used to create dynamic and controllable environments, but also because experimental studies have shown that VR-based assessments are just as effective compared to traditional surveys and represent human behavior and physiological patterns just as they do in real-world scenarios [2]. For this reason, VR is also increasingly used as a research tool for human-robot interactions (HRI) [3, 4]. Since communication between humans and robots can be very different and individual due to various diversity characteristics, it is important that robots can respond to them appropriately. As society becomes more diverse, diversity-related challenges arise that increasingly affect us in the private and public places. This development is not only challenging in human-human interaction, but also affects all computer systems with which humans interact directly, including robots. Therefore, robotic systems must be able to respond to and interact with each individual, taking into account individual diversity factors. In this paper, we will discuss the capabilities of VR in the study of HRI, especially in terms of diversity and inclusion research, with advantages in the field of controllability, manipulability, flexibility and extended measuring methods as well as its challenges and possible solutions.
## II VR as a Research Tool
VR is an important subset of immersive technologies that is increasingly being used in HCI and psychological studies. One reason is the possibility to conduct studies which are difficult to perform or control in the real world. Thus, in today's world, VR appears as one of the most beneficial tools to achieve effective results in the field of therapy and rehabilitation in patients [5], such as in anxiety disorders [6] or in developing skills to deal with pain [7]. Furthermore, the use of VR in the fields of educational [8], social [9] and experimental psychology [10], among others, has produced advantages in data collection methods and measurement data that are difficult to implement in real-world scenarios.
VR is also already being used extensively in HRI, providing researchers with new ways to explore and understand the complexity of human-robot interaction. For example, Shariati et al. designed a VR-robot based on a real social robot, which was specifically designed to improve learning, clinical therapy for children with chronic diseases, and education [11]. The virtual robot was compared with a real robot in an experimental study and the results indicate that the acceptance of a VR-robot is the same as that of a real robot, and that the virtual robot did not present a significant difference in performance from the real robot. Shariati et al. conclude that the VR-platform has the potential to have an important auxiliary solution for social robot research. Another controlled study was conducted to investigate whether VR can be a suitable platform for exploring social interactions between humans and robots [12]. The quantitative results suggest that the core aspects of human-robot social interactions are preserved in a VR-simulation. These examples show that the medium VR is already used in HRI to produce valid research results. It is therefore natural to explore VR in relation to diversity and inclusion in HRI. Common advantages of VR applications can be applied to the diversity topic, which will be explained in detail in the next chapter.
## III Advantages in Exploring Diversity and Inclusion in VR
In addition to the countless applications opportunities of VR, it can also be used to explore diversity and inclusion by providing exclusive characteristics of VR, but also by extending common measurement methods. Since VR is used for a wide variety of research purposes, the following characteristics are presented in the context of HRI-studies.
### _Controllable Environment_
The use of VR as a research tool in HRI provides a significant advantage by allowing studies to be conducted that are difficult to control in the real world [13, 14]. Conducting studies in a controlled virtual environment helps minimize or even eliminate confounding factors that can
occur in field studies. This has the advantage that research results can be generated without the influence of external factors. In addition, every variable can be controlled, from visual to auditory stimuli and even haptic impressions. This level of controllability allows researchers to simulate and manipulate a variety of scenarios to investigate specific questions. Specifically in relation to diversity and inclusion, for example, social interactions can be controlled in their nature of communications and adaptivity to diversity factors. Investigations can be limited to specific situations, which may not be as easy to conduct due to a non-controllable environment. This circumstance ensures a high degree of standardization and reproducibility, which are crucial aspects of controlled studies.
### _Manipulability_
A key advantage of using VR in HRI research is the high degree of manipulability it offers. Unlike physical robots, VR-simulations can encompass a wide range of robot variations, allowing researchers to explore various factors such as robot appearance, behavior, and capabilities [15]. Researchers can manipulate these variables to study how different robot attributes affect human perception, attitude, and behavior. For example, they can study how variations in the robot's gender, ethnicity, or voice affect users' trust, engagement, and willingness to interact. Of course, this manipulability can also be used to study diversity and inclusion characteristics, as different people have different expectations and perceptions based on their background, prior knowledge and so on. However, in addition to manipulating the robot, the environment itself can also be dynamically adapted. Thus, the robot and the interaction with humans can be tested and evaluated in different scenarios and use cases. This allows possible conclusions to be drawn about a wide variety of variables in the environment and an interaction with respect to the perception of the HRI.
### _Flexibility_
VR-environments provide researchers with the flexibility to tailor interactions e.g. based on specific diversity factors [16] which are difficult or impossible to implement in the real world. By integrating features such as language preferences, cultural backgrounds, or even physical impairments, researchers can create personalized experiences for participants. This customization promotes inclusivity and enables the examination of human-robot interactions across various demographic groups. This may help to counteract discriminatory bias and minimize the reproduction of stereotypes by the robot. In the area of accessibility, the use of VR offers a wide range of possibilities, which can be explored and tested. Features related to accessibility could be, for example, speech or gesture recognition or adaptive interfaces for people with physical disabilities [17]. Understanding how different individuals or communities interact with robots can inform the design of more inclusive and culturally sensitive robotic systems. For example, needs of people with physical limitations or other physical characteristics such as different body sizes can be captured by simulating these situations on their own game character in VR. Through these simple technical adjustments, different diversity characteristics and their needs can be identified, which could provide for a more inclusive experience with the robot in future iterations.
### _Extended Measuring Methods_
VR is commonly used as a research method to study human behavior and cognition because, when properly utilized, it provides an expanded data foundation through specialized measurement methods [1]. There are several measurement methods for researching human-robot interaction in VR. Common methods include eye tracking [18] and physiological measurements [19]. For example, eye-tracking technology can be used to collect data on a person's eye movements and what they are looking at and for how long. From this, behavior patterns and focal points of attention can be identified. Physiological surveys could unobtrusively collect data [20] on the person's emotional state or measure stress in interaction with the robot and cognitive load. With regard to diversity factors, conclusions could be drawn from people with different characteristics on the perception of the robot and the environment, which can subsequently help to make the human-robot interaction more diversity-friendly. Both methods can be an extension of conventional measurement methods and reveal implications for HRI.
## IV Challenges in Exploring Diversity and Inclusion in VR
In addition to the advantages of VR, which can represent a more diversified exploration and design of different scenarios, the technology can also have an exclusive or even discriminatory effect. In the following sub-chapters, these challenges are discussed, in particular the accessibility problems, the cyber sickness phenomenon and the bias that developers bring to VR-systems.
### _Accessibility_
The VR-technology may not be accessible for all individuals, especially for people with disabilities. Since conventional VR-devices consist of a headset and two controllers, the use may work better or worse for different limitations. For example, visual impairments affect the perception of the virtual world and, depending on their severity, may be a criterion for exclusion from VR. Hearing impairments can quickly make the virtual world more difficult to use if the design is not inclusive [21]. Furthermore, limitations in the mobility of the person can be a problem, since common VR-applications provide for two controllers to move around in the virtual world [22].
### _Cyber Sickness_
Cyber Sickness (CS) is a syndrome that can occur when using VR, when there is a discrepancy between the visual information projected through VR-goggles and the sensory information perceived by the body. This can lead to nausea and dizziness and affects the VR-experience to a great extent.
CS is considered a major barrier to the acceptance and adoption of VR, as there is not yet a fully comprehensive solution for the non-occurrence of the syndrome [23]. There are also gender differences in the occurrence of CS. For example, women are more likely to experience cyber sickness when using VR-applications because they have a different average IPD (the distance between the pupils of both eyes) than men [24] and they cannot adjust their IPD on VR-goggles because they only support the mean IPD of men [25].
### _Bias in VR-environments_
Researching diversity and inclusion issues can be challenging because, for example, prejudiced perceptions of developers and researchers and a non-sensitive perception on other social groups can influence the VR-environment itself. These prejudices could possibly be (also less) obviously reflected in the development and design process [26]. Not only can this be potentially discriminatory, but it can also stigmatize and exclude different social groups. Algorithms based on artificial intelligence (e.g. robotic systems), which are needed for a huge amount of projects in VR can be discriminatory towards certain social groups. For example, a hiring algorithm from Amazon systematically downgraded applications and resumes of female applicants based on biased training data [27]. The training set was primarily male centered. Similar discriminatory occurrences exist, for example, in facial recognition [28] and in social communication with people [29].
## V Discussion
VR offers various ways for researching diversity and inclusion in different settings and scenarios, including HRI. Although we are provided with many new and effective ways to recognize and implement diversity factors, the medium itself can discriminate and could have an exclusive effect. As already discussed, accessibility, cyber sickness and a bias in VR-environments are a few relevant variables. The question is how we can overcome these challenges in order to exploit the opportunities of VR in terms of diversity and inclusion-friendly implementations.
When looking at accessibility challenges, there is a very diverse and broad issue. Restrictions on the person are as individual as the person himself. However, special implementations can help to make VR-applications inclusive. For example, there are already solutions in the field of vision impairments, such as those of Zhao et al. who have developed a VR-toolkit, with which various low vision limitations can be improved by the use of e.g. edge measurement and depth enhancement [30]. For hearing impairments, the design of the VR-environments could be adapted to have possible implemented speech outputs put onto the user's glasses via text. For limited mobility or motor impairments it would be possible to use an adaptive controller as a replacement for the two conventional controllers. An adaptive controller such as the Xbox Adaptive Controller by Microsoft [31] can be adjusted and individualized to various constraints. This allows people who are unable to operate a conventional game controller to explore VR-environments and interact with them. Only the virtual representation of the VR-controllers and thus the hand tracking are not (yet) possible. However, this limitation can be minimized by an inclusive design of the VR-environment.
Cyber sickness can be mitigated by ensuring a minimal latency between physical movement in the real world and the corresponding virtual movement in the VR-world. Additionally, optimizing movement settings to avoid abrupt movements and make them feel more natural, like in real life, can help. For example, a jerky motion when climbing stairs in unoptimized VR-applications. Approaches to reduce cyber sickness are currently being investigated. Tian et al., for instance, examined the seating position and whether the virtual coordinate system contradicts the received real-world coordinates through our vestibular system [32]. Promising results were achieved in reducing cyber sickness when the real vertical axis aligned with the virtual one.
As many of the previous challenges can be attributed to a certain bias among developers, it is also important to generate a certain awareness among developers. This could not only address existing challenges in the area of diversity and inclusion but also avoid bias in future systems. For this reason, it is important to ensure a certain sensitivity to diversity factors among researchers and developers, but also to constantly question one's own knowledge with regard to diversity.
## VI Conclusion
VR has established itself as an effective research tool in the field of human-computer interaction. In particular, VR shows its unique capabilities in human-robot interaction research focusing on diversity and inclusion. Through controllable environments, flexibility in customization of interactions and manipulability of robot attributes, VR enables detailed investigation of the complex interactions between humans and robots. Another advantage of VR lies in the advanced measurement methods it offers. Eye-tracking and physiological measurements can be used to capture behavioral patterns and cognitive processes of humans during HRI studies. This allows for analysis of individual perceptions and reactions to the robot and the environment. This helps to make human-robot interactions more diverse and inclusive.
Nevertheless, there are also challenges in exploring diversity and inclusion in HRI with VR. Biases and prejudices of developers and researchers can negatively impact the design and development process and lead to discrimination and exclusion of certain social groups. Therefore, it is important for researchers and developers to be sensitive to diversity factors and continuously challenge their steps regarding diversity. Furthermore, the topics of accessibility and cyber sickness are still challenges today, where there are already initial implications or solutions, which nevertheless represent a hurdle. Addressing these challenges and promoting diversity and inclusion in VR-based HRI research can lead to more equitable and unbiased human-robot interactions. By harnessing the potential of VR, researchers can help
develop robotic systems that address individual diversity factors, promote inclusion, and minimize discriminatory biases. This will ultimately contribute to improved quality and effectiveness of human-robot interactions.
|
2309.13080 | SPICED: News Similarity Detection Dataset with Multiple Topics and
Complexity Levels | The proliferation of news media outlets has increased the demand for
intelligent systems capable of detecting redundant information in news articles
in order to enhance user experience. However, the heterogeneous nature of news
can lead to spurious findings in these systems: Simple heuristics such as
whether a pair of news are both about politics can provide strong but deceptive
downstream performance. Segmenting news similarity datasets into topics
improves the training of these models by forcing them to learn how to
distinguish salient characteristics under more narrow domains. However, this
requires the existence of topic-specific datasets, which are currently lacking.
In this article, we propose a novel dataset of similar news, SPICED, which
includes seven topics: Crime & Law, Culture & Entertainment, Disasters &
Accidents, Economy & Business, Politics & Conflicts, Science & Technology, and
Sports. Furthermore, we present four different levels of complexity,
specifically designed for news similarity detection task. We benchmarked the
created datasets using MinHash, BERT, SBERT, and SimCSE models. | Elena Shushkevich, Long Mai, Manuel V. Loureiro, Steven Derby, Tri Kurniawan Wijaya | 2023-09-21T10:55:26Z | http://arxiv.org/abs/2309.13080v3 | # SPICED: News Similarity Detection Dataset with Multiple Topics and Complexity Levels
###### Abstract.
Nowadays, the use of intelligent systems to detect redundant information in news articles has become especially prevalent with the proliferation of news media outlets in order to enhance user experience. However, the heterogeneous nature of news can lead to spurious findings in these systems: Simple heuristics such as whether a pair of news are both about politics can provide strong but deceptive downstream performance. Segmenting news similarity datasets into topics improves the training of these models by forcing them to learn how to distinguish salient characteristics under more narrow domains. However, this requires the existence of topic-specific datasets, which are currently lacking. In this article, we propose a new dataset of similar news, SPICED, which includes seven topics: Crime & Law, Culture & Entertainment, Disasters & Accidents, Economy & Business, Politics & Conflicts, Science & Technology, and Sports. Futhermore, we present four distinct approaches for generating news pairs, which are used in the creation of datasets specifically designed for news similarity detection task. We benchmarked the created datasets using MinHash, BERT, SBERT, and SimCSE models.
datasets, neural networks, news similarity
Footnote †: The work was conducted during the internship at Huawei IRC.
## 1. Introduction
The internet has led to a rise in online publishers and an overflow of news content. Users spend more time sifting through multiple articles on the same events in news aggregator services, making it harder to find relevant information.
Publicly available training resources are scarce for developing systems for similar news article detection. Existing semantic textual similarity (STS) datasets are not suitable for news similarity detection, as they are specific to a single topic, such as MedSTS (Krishnan et al., 2016) and CORD19STS (Krishnan et al., 2016). However, news similarity detection is inherently influenced by the high degree of heterogeneity in news content and structure, which follows a well-understood taxonomy based on news categories or genres. These categories affect how easy or hard it is to compare news articles. For example, sports news generally contains more distinctive features and less ambiguity than political news. Therefore, we need to compare news across different categories as well as within the same category, to assess the performance of different models on different levels of similarity. To this end, high-quality datasets are crucial for improving news similarity detection in complex cases, where similarity within the same topic is harder to discern than between unrelated topics.
In this work, we propose SPICED (**S**cience, **S**ports, **P**olitics, **C**rime, **C**ulture, **E**conomy, **D**issaters), a multi-topic dataset addressing the mentioned problems. It includes Crime & Law, Culture & Entertainment, Disasters & Accidents, Economy & Business, Politics & Conflicts, Science & Technology, and Sports topics. By utilizing the original dataset, we propose four distinct approaches for creating pairs in the context of the news similarity detection task. Each approach offers a unique combination of true similar and false similar news pairs. Our contributions are as follows:
* We provide an original dataset of 977 similar news pairs in English (1,954 news articles), devoted to the seven different popular news topics.
* We provide 32 datasets, all derived from an original gold-standard dataset. These datasets represent four different approaches for creating news pairs within the context of both single-topic and multi-topic similar news detection.
* We benchmark these created datasets using four algorithms for STS tasks which are prevalent in the literature: MinHash, BERT, SBERT, and SimCSE.
## 2. Related Works
Even if to the best of our knowledge there are no news similarity datasets built using categorized news article pairs, there exist other datasets that are still pertinent for this task.
SemEval-2022 Task 8 (DevDev et al., 2019) provides a multilingual news article similarity dataset of around 10,000 news article pairs. Human annotators rated similarity using a Likert scale across seven dimensions, including geographic, temporal, and narrative aspects. However, the dataset lacks news article classification based on a taxonomy.
SentEval is a toolkit that evaluates universal sentence representations across tasks like classification, natural language inference, and sentence similarity (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). The datasets, collected from various sources including news articles and forum discussions, are labeled with similarity scores. SentEval measures sentence distance using cosine distance and reports Pearson (Pearson, 1959) and Spearman (Spearman, 1969) correlations.
The SICK dataset (SICK, 2016) includes 10,000 English sentence pairs annotated for entailment (SICK-E) and relatedness detection (SICK-R). SICK-E has three labels (entailment, contradiction, neutral), while SICK-R uses a 5-point rating scale.
The MRPC dataset(Serban et al., 2017; Chen et al., 2018) consists of 5,801 paraphrase pairs, capturing sentence variations with synonymy and syntactic changes. Derived from millions of web sentence pairs, it achieves 67% semantic equivalence when compared with human annotators.
## 3. Dataset creation
This section covers the creation of the news article similarity dataset, consisting of paired articles and a binary similarity label.
### Collecting News Articles
_WikiNews1_, a collaborative journalism project by the Wikimedia Foundation, follows guidelines2 that require news articles to have a topic category and be supported by at least two independent and authoritative sources. We only considered the sources that had valid and accessible URLs. Since these sources cover the same news events and share salient information, they can be considered similar and hence can be used to create this proposed news article similarity dataset.
Footnote 1: [https://en.wikinews.org/wiki/Main_Page](https://en.wikinews.org/wiki/Main_Page)
Footnote 2: [https://en.wikinews.org/wiki/Wikinews:Pillars_of_writing](https://en.wikinews.org/wiki/Wikinews:Pillars_of_writing)
In April 2022, we collected the WikiNews articles using _BeautifulSoup3_ of the 7 most populous categories: Crime & Law, Culture, Disasters & Accidents, Economy & Business, Politics & Conflicts, Science & Technology, and Sports. Table 1 records statistics of the collected articles.
Footnote 3: [https://www.crummy.com/software/BeautifulSoup/bsd/doc/](https://www.crummy.com/software/BeautifulSoup/bsd/doc/)
### Measuring Similar News
We utilize baseline similarity models to query news article similarity, supplementing our raw data. This approach enables us to efficiently identify suitable examples of similar news by offloading some of the work to oracles.
The _SimHash_ algorithm4(Krishnaman et al., 2017) is employed to identify pairs of news articles with high similarity. The determination of similarity or dissimilarity is based on a threshold specified within the SimHash implementation. Subsequently, a validation process is conducted to ensure that both news articles in a pair originate from the same WikiNews webpage.
Footnote 4: [https://github.com/1eong/simhash](https://github.com/1eong/simhash)
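The SimHash filtering step can be illustrated with a small self-contained sketch (this is not the implementation of the linked library; the fingerprint width, whitespace tokenization and the Hamming-distance cut-off below are illustrative assumptions):

```python
import hashlib
from itertools import combinations

def simhash(text, bits=64):
    """Build a SimHash fingerprint from whitespace tokens.

    Real pipelines often use weighted n-gram shingles instead of plain tokens.
    """
    v = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

def candidate_pairs(articles, max_distance=6):
    """Keep only article pairs whose fingerprints differ in at most max_distance bits."""
    fps = [simhash(a) for a in articles]
    return [(i, j) for i, j in combinations(range(len(articles)), 2)
            if hamming(fps[i], fps[j]) <= max_distance]
```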
Next, for the subset of similar news articles (according to the SimHash filtering step) originating from the same WikiNews webpage, we utilize the transformer-based model _SBERT_(Serban et al., 2017), specifically the paraphrase-multilingual-mpnet-base-v25 model, to identify the most similar news articles within the dataset. The approach of creating SimHash pairs separately for each topic is applied consistently.
Footnote 5: We use paraphrase-multilingual-mpnet-base-v2.
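A minimal sketch of this second filtering step with the sentence-transformers package is shown below; the checkpoint name comes from the text, while the helper name and the assumption that one closest pair is returned per WikiNews page are ours.

```python
from sentence_transformers import SentenceTransformer, util

# Checkpoint named in the text; any SBERT model exposes the same interface.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

def most_similar_pair(source_articles):
    """Embed the source articles of one WikiNews page and return the closest pair."""
    emb = model.encode(source_articles, convert_to_tensor=True, normalize_embeddings=True)
    scores = util.cos_sim(emb, emb)
    best_pair, best_score = None, -1.0
    for i in range(len(source_articles)):
        for j in range(i + 1, len(source_articles)):
            if float(scores[i][j]) > best_score:
                best_pair, best_score = (i, j), float(scores[i][j])
    return best_pair, best_score
```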
### Dataset Annotation
Experts review and assess rudimentary approximated news pairs to gather appropriate samples for our final gold-standard annotations. Two annotators evaluate all proposed pairs and agree on similarity. They resolve discrepancies through discussion to determine whether to keep or discard the pair.
For experts annotation, we define the following criteria that any similar pair of news articles must satisfy:
1. Both news articles in a pair must be about the same topic and event (for example, topic - sports, event - UEFA Champions League final);
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline
**Topics** & **CL** & **CE** & **DA** & **EB** & **PC** & **ST** & **SP** \\ \hline \multicolumn{6}{l}{**Document statistics**} \\ \hline \# Webpages & 4,419 & 2,129 & 2,757 & 2,320 & 7,675 & 2,064 & 2,423 \\ \# Source articles & 7,495 & 3,759 & 4,716 & 3,881 & 14,075 & 3,681 & 3,738 \\ \hline \multicolumn{6}{l}{**Words per source**} \\ \hline Mean & 606.9 & 518.4 & 544.7 & 605.1 & 629.6 & 662.1 & 579.3 \\ Median & 553 & 409 & 487 & 519 & 563 & 583 & 472 \\ Minimum & 34 & 26 & 37 & 31 & 34 & 42 & 43 \\ Maximum & 2,420 & 2,974 & 2,918 & 3,663 & 2,514 & 3,092 & 2,200 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Summary of statistics describing the number of collected documents per topics and their lengths, as measured by the number of words.
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline
**Topics** & **CL** & **CE** & **DA** & **EB** & **PC** & **ST** & **SP** \\ \hline \multicolumn{6}{l}{**Filters**} \\ \hline SimHash & 76,996 & 8,672 & 24,015 & 30,291 & 123,791 & 8,916 & 14,954 \\ Source of the same Wikinews page. & 511 & 259 & 316 & 312 & 852 & 273 & 334 \\ SBERT & 501 & 230 & 300 & 279 & 779 & 249 & 318 \\ Experts annotation & 238 & 95 & 137 & 120 & 364 & 136 & 94 \\ Duplicates removal & 192 & 90 & 124 & 107 & 259 & 111 & 94 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Statistics show the number of similar pairs at each sequential filtering step. As we move down through each filtering step the number of articles is reduced to build our gold-standard dataset.
2. Both news articles should have similar lengths to avoid information asymmetry, where one article contains significantly more information than the other;
3. Opinion articles, prone to biases, should be excluded from similar news classifications. Similar news should be factual and not influenced by the authors' interpretations;
4. Any numerical values cited in the articles should be consistent. For example, if one article mentions 10 road accident victims and its pair states "more than 8 people," they should still be considered similar;
5. The time of publication must be close. News articles discussing the same event but published at significantly different times are considered dissimilar.
Duplicate pairs are eliminated in the filtering process when news articles cover multiple topics. We remove the pair from the topic with more samples to ensure dataset balance.
### Statistics
The publicly available dataset6 contains 977 similar pairs across 7 topics. Table 2 shows the number of similar news pairs at each sequential filtering step, including SimHash, source confirmation, SBERT, expert annotation, and duplicate removal. These steps, which involve machine learning and manual checks, culminate in the final number of similar pairs for each topic.
Footnote 6: You can downloaded the dataset at [https://zenodo.org/record/8044777](https://zenodo.org/record/8044777)
Table 1 shows statistics counts in the dataset, with averages ranging from 518.4 to 662.1 words and a maximum of 3663 words in the Economics and Business (EB) topic. This poses a challenging task for similarity computation, making the dataset valuable for developing models that can handle news articles of different lengths.
### Approaches for creating datasets
We present an additional contribution and novelty by introducing several approaches for creating news pairs within our datasets. The Table 3 provides the number of training and test instances for each dataset corresponding to each approach.
**Inter-Topic** This set includes similar news pairs as positive samples and dissimilar news pairs from different topics as negative samples. This approach proposes distinguishing dissimilar pairs when they belong to different topics.
**Intra-Topic** This set contains positive and negative pairs within the same topic, split into seven separate subsets corresponding to different topics. We also remove the challenging examples from the negative pairs, as they will belong to the next approach's datasets.
**Hard Examples** This set consists of all positive pairs and the 3,000 most similar negative pairs, according to SimHash, within each intra-topic. These examples are the most similar among the dissimilar pairs. We also exclude these pairs from their corresponding intra-topic set, ensuring that there is no overlap between the negative pairs in the intra-topic and hard examples sets.
**Combined** This set includes all possible positive and negative pairs. Concretely, these pairs consist of the union of all negative pairs from the previous three datasets: Inter-Topic, Intra-Topic and Hard Negatives.
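As a rough illustration of the four pairing approaches above, the sketch below assembles the negative pair sets; the function names, the `similarity` oracle (standing in for the SimHash ranking) and the representation of gold pairs are illustrative assumptions, while the per-topic cut-off of 3,000 hard negatives follows the text.

```python
from itertools import combinations

def build_pair_sets(positives, articles_by_topic, similarity, k_hard=3000):
    """positives: set of frozenset({a, b}) gold similar pairs;
    articles_by_topic: dict topic -> list of article ids;
    similarity: function (a, b) -> rough similarity score (e.g. SimHash-based)."""
    topics = list(articles_by_topic)

    # Inter-Topic negatives: pair articles drawn from two different topics.
    inter_neg = [(a, b) for t1, t2 in combinations(topics, 2)
                 for a in articles_by_topic[t1] for b in articles_by_topic[t2]]

    intra_neg, hard_neg = [], []
    for t in topics:
        # Same-topic pairs that are not gold positives.
        neg = [(a, b) for a, b in combinations(articles_by_topic[t], 2)
               if frozenset((a, b)) not in positives]
        # Hard Examples: the k_hard most similar-looking negatives of this topic,
        # removed from the plain Intra-Topic set so the two sets do not overlap.
        neg.sort(key=lambda p: similarity(*p), reverse=True)
        hard_neg += neg[:k_hard]
        intra_neg += neg[k_hard:]

    # Combined: union of all negative pairs from the three sets above.
    combined_neg = inter_neg + intra_neg + hard_neg
    return inter_neg, intra_neg, hard_neg, combined_neg
```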
## 4. Benchmarking
This section describes the models, experiments and benchmark results of our novel similarity news dataset.
### Pretrained Models
**Minhash**(Mih et al., 2017) is an efficient method for estimating set similarity using the Jaccard coefficient. We used the snapy library7 to obtain a simple baseline for more complex algorithms.
Footnote 7: [https://libraries.io/pypi/snapy](https://libraries.io/pypi/snapy)
**BERT**(Dong et al., 2018) - a classical way to obtain embeddings for the news and find the cosine similarities between them afterwards. We used the BERT-base-uncased model8.
\begin{table}
\begin{tabular}{l r r} \hline \hline
**Model** & Train & Test \\ \hline \hline
**Inter Topic** & & \\ \hline All & 767,587 & 148,382 \\ \hline \hline
**Intra Topic** & & \\ \hline Crime \& Law (CL) & 33,678 & 5,770 \\ Culture \& Entertainment (CE) & 5,526 & 640 \\ Disaster \& Accidents (DA) & 12,606 & 1,950 \\ Economy \& Business (EB) & 8,778 & 1,245 \\ Politics \& Conflict (PC) & 63,241 & 11,190 \\ Science \& Technology (ST) & 9,681 & 1,378 \\ Sporting Activities (SP) & 6,285 & 753 \\ \hline
**Hard Examples** & & \\ \hline Crime \& Law (CL) & 2,234 & 958 \\ Culture \& Entertainment (CE) & 2,162 & 928 \\ Disaster \& Accidents (DA) & 2,186 & 938 \\ Economy \& Business (EB) & 2,174 & 933 \\ Politics \& Conflict (PC) & 2,281 & 978 \\ Science \& Technology (ST) & 2,177 & 934 \\ Sporting Activities (SP) & 2,165 & 929 \\ \hline \hline
**Combined** & & \\ \hline All & 921,403 & 177,310 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Statistics include the number of similar pairs for each dataset corresponding to the four approaches.
**SBERT**(Kumar et al., 2017) - a modified version of BERT that employs siamese and triplet network structures, to obtain semantically meaningful sentence embeddings. The all-mpnet-base-v2\({}^{\prime}\) model was used for our experiments.
**SimCSE**(Kumar et al., 2017) - a simple contrastive learning framework that greatly advances the state-of-the-art sentence embeddings. The model has shown superior performance over BERT-base on several STS datasets. We use bert-base-uncased as the encoder for training SimCSE using the original authors implementation10.
Footnote 10: [https://huggingface.co/sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
Footnote 10: [https://github.com/princeton-nlp/SimCSE](https://github.com/princeton-nlp/SimCSE)
For BERT, SBERT, and SimCSE, we computed cosine similarity with threshold selection, while for MinHash, we used the Jaccard similarity coefficient with an optimal threshold.
### Experiment Configurations
The best threshold was chosen based on the highest F1-score on the training set, and it was used to evaluate the F1-score on the testing set. BERT, SBERT, and SimCSE experiments were conducted on a single V100-32GB GPU, while MinHash utilized 72 CPU cores. The entire process took approximately 12 hours per model across all levels and topics.
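The threshold selection described above amounts to a simple sweep, sketched below (scikit-learn is assumed for the F1 computation and the candidate grid is an illustrative choice):

```python
import numpy as np
from sklearn.metrics import f1_score

def select_threshold(train_scores, train_labels, candidates=np.linspace(0.0, 1.0, 101)):
    """Return the threshold that maximizes F1 on the training pairs."""
    f1s = [f1_score(train_labels, np.asarray(train_scores) >= t) for t in candidates]
    return float(candidates[int(np.argmax(f1s))])

# scores: cosine similarities (BERT/SBERT/SimCSE) or Jaccard estimates (MinHash);
# labels: 1 for similar pairs, 0 otherwise.
# t = select_threshold(train_scores, train_labels)
# test_f1 = f1_score(test_labels, np.asarray(test_scores) >= t)
```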
### Results
We conducted the experiments with all the models we described above on the datasets created through four different approaches: Intra Topic, Inter Topic, Hard Examples, and Combined. All results are displayed in Table 4.
**Inter-Topic** SBERT achieved the highest F1-score (0.920) followed by SimCSE with an F1-score of 0.896, while MinHash attained the lowest F1-score of 0.707.
**Intra-Topic** In intra-topic experiments, SimCSE showed a lower average F1-score (0.890) compared to the inter-topic approach. However, MinHash, BERT, and SBERT demonstrated superior performance compared to their inter-topic results.
**Hard Examples** The results for the intra-similarity approach are lower than those for the inter-similarity approach for each model. SBERT performs the best, followed by SimCSE, BERT, and MinHash with the lowest average result.
**Combined** In the case of the combined approach, SBERT (F1-score: 0.922) exhibits the highest results, similar to the other approaches, while MinHash (F1-score: 0.757) demonstrates the lowest results, also consistent with the other approaches.
## 5. Conclusion and Future work
In this paper, we introduce a novel news dataset for semantic textual similarity that accounts for emergent semantic categories within the text. Our curated dataset comprises 32 training and test sets, according to four approaches for news pairs creation: Inter-Topic Similarity, Intra-Topic Similarity, Hard Example Mining, and Combined Similarity. The dataset is publicly available and aims to foster future model improvements.
For future work, we would like to extend the dataset to make it not only multi-topic but also multi-lingual, using news in different languages for training/testing. We also plan to compare our dataset with other existing ones, such as SemEval-2022, to evaluate how interchangeable these datasets are.
|
2309.14923 | ML-based PBCH symbol detection and equalization for 5G Non-Terrestrial
Networks | This paper delves into the application of Machine Learning (ML) techniques in
the realm of 5G Non-Terrestrial Networks (5G-NTN), particularly focusing on
symbol detection and equalization for the Physical Broadcast Channel (PBCH). As
5G-NTN gains prominence within the 3GPP ecosystem, ML offers significant
potential to enhance wireless communication performance. To investigate these
possibilities, we present ML-based models trained with both synthetic and real
data from a real 5G over-the-satellite testbed. Our analysis includes examining
the performance of these models under various Signal-to-Noise Ratio (SNR)
scenarios and evaluating their effectiveness in symbol enhancement and channel
equalization tasks. The results highlight the ML performance in controlled
settings and their adaptability to real-world challenges, shedding light on the
potential benefits of the application of ML in 5G-NTN. | Inés Larráyoz-Arrigote, Marcele O. K. Mendonca, Alejandro Gonzalez-Garrido, Jevgenij Krivochiza, Sumit Kumar, Jorge Querol, Joel Grotz, Stefano Andrenacci, Symeon Chatzinotas | 2023-09-26T13:32:18Z | http://arxiv.org/abs/2309.14923v1 | # ML-based PBCH symbol detection and equalization for 5G Non-Terrestrial Networks
###### Abstract
This paper delves into the application of Machine Learning (ML) techniques in the realm of 5G Non-Terrestrial Networks (5G-NTN), particularly focusing on symbol detection and equalization for the Physical Broadcast Channel (PBCH). As 5G-NTN gains prominence within the 3GPP ecosystem, ML offers significant potential to enhance wireless communication performance. To investigate these possibilities, we present ML-based models trained with both synthetic and real data from a real 5G over-the-satellite testbed. Our analysis includes examining the performance of these models under various Signal-to-Noise Ratio (SNR) scenarios and evaluating their effectiveness in symbol enhancement and channel equalization tasks. The results highlight the ML performance in controlled settings and their adaptability to real-world challenges, shedding light on the potential benefits of the application of ML in 5G-NTN.
Machine Learning, 5G Non-Terrestrial Networks, Satellite Communications, Channel estimation, Symbol Enhancement, Equalization, Physical Broadcast Channel.
## I Introduction and Background
The application of Machine Learning (ML) techniques in wireless communications is continuously proving its enormous potential towards performance enhancement and acceleration of complex signal processing algorithms. Recently, Non-Terrestrial Networks (NTN) have gained significant momentum among the research community, especially after the inclusion of NTN as part of the 3GPP ecosystem from the recent Release-17 onwards [1]. Besides, 3GPP Release-18 [2] will natively embrace artificial intelligence and machine learning based technologies for providing data-driven and intelligent network solutions.
Several studies have explored the application of ML in the context of 5G-NTN. The authors in [3] apply reinforcement learning to determine appropriate scheduling policy for link selection in a LEO based 5G-NTN. Their simulations show the effectiveness of this approach in terms of improving end-to-end loss rates and bandwidth utilization for a non-static channel. Authors in [4] focus on application of ML techniques to address the problem of handovers in a LEO based 5G-NTN. Location of the UE is taken as the important feature to train the ML model for improving the conditional handover decisions. In a survey article [5], the authors have provided deep insights into the applications of artificial intelligence (AI) empowered techniques for 5G and 6G NTN which include: channel estimation, mobility management, doppler estimation and compensation, resource management, network procedures to name a few. Moreover, ML has been extensively investigated in order to address fundamental physical layer challenges in wireless communication systems. In [6], joint channel estimation and symbol detection is performed by one DNN in an end-to-end manner. However, domain-specific knowledge is exploited in [7, 8] by breaking a single DNN in two. Notably, these studies primarily rely on simulations and lack real-world data validation, highlighting the need for practical testing.
In this work, we focus on ML for symbol detection and equalization in 5G-NTN's Physical Broadcast Channel (PBCH). The PBCH plays a crucial role in conveying essential data via the Master Information Block (MIB), which is necessary for the initial access procedure in the User Equipment (UE); this includes acquisition of System Information Block 1 (SIB1) and the location of resources in the Physical Downlink Control Channel (PDCCH). To the best of the authors' knowledge, ML-based techniques have not been used for symbol detection in 5G-NTN over real-world data. We benefit from the 5G-NTN testbed at the University of Luxembourg to record live Over-the-Satellite (OTS) IQ samples for the functional validation and performance verification of our ML algorithms.
## II System model and Testbed
### _5G NR Synchronization System Block_
This research aims to improve MIB decoding. In Fifth Generation (5G), the MIB is embedded in the Synchronization Signal Block (SSB), and to help the UE decode it, it is sent in bursts. These bursts have a period of 20 ms in most configurations, and each burst consists of several repetitions of the SSB. The number of repetitions is also controlled by the system configuration. All these parameters are described in detail in the 5G standard document [9].
Therefore, the first task of the UE is to find the position of the SSB in the Resource Grid (RG). Fig. 1 shows the receiver designed to extract the MIB from the received signal using ML-enhanced equalization. The receiver functionalities are described in the following paragraphs, whereas the ML model is further detailed in the next section.
The process to locate the SSB in the 5G signal starts with a blind search of the Primary Synchronization Signal (PSS) and the Secondary Synchronization Signal (SSS) based on the Global Synchronization Channel Number (GSCN) raster. In our experiment, we skip the blind search over the GSCN raster, as we have control of the transmitter.
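For illustration, the time synchronization on the PSS can be sketched as a sliding correlation against the three candidate sequences; `pss_refs` is assumed to hold precomputed time-domain PSS waveforms (their generation from the 5G specification is not reproduced here).

```python
import numpy as np

def pss_search(rx, pss_refs):
    """Return (NID2, sample offset) of the strongest PSS correlation peak.

    rx: captured complex baseband samples.
    pss_refs: dict mapping NID2 in {0, 1, 2} to a time-domain PSS reference.
    """
    best_nid2, best_offset, best_peak = None, 0, -np.inf
    for nid2, ref in pss_refs.items():
        # numpy conjugates the second argument, so this acts as a matched filter.
        corr = np.abs(np.correlate(rx, ref, mode="valid"))
        offset = int(np.argmax(corr))
        if corr[offset] > best_peak:
            best_nid2, best_offset, best_peak = nid2, offset, corr[offset]
    return best_nid2, best_offset
```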
Once we have found the PSS and SSS, the receiver knows the position in time and frequency of the SSB within the RG and a coarse estimation of the Carrier Frequency Offset (CFO). With this information, the receiver is able to locate the Resource Elements (REs) that correspond to the MIB and the ones used for the DeModulation Reference Signal (DMRS).
The next step in our receiver is to estimate the channel. In 5G this is done by using the DMRS pilots. Our receiver uses these pilots for the Neural Network (NN) equalization and further enhancement of the MIB decoding process. This process is the core of this research and is detailed in the next section.
### _Testbed_
We are using a system composed of a USRP (N310 model), which is a software-defined radio device that can capture and transmit various types of wireless signals over a wide range of frequencies. The USRP is connected to a laptop via an Ethernet cable, which allows us to control the USRP settings and process the captured signals using MATLAB. The USRP is used to capture 5G signals from a terrestrial base station located at the 6GSpace Lab, which is a research facility that aims to develop and test innovative solutions for future wireless communications. The base station has an antenna that targets the SES satellites in geostationary orbit, which provide global coverage and high data rates for 5G services.
The captured signals contain SS/PBCH blocks that carry the MIB, which is mandatory system information that provides basic parameters for initial cell selection and access. To decode the MIB, we need to perform several steps using MATLAB.
We configure the USRP to capture a certain length of samples at a specific frequency and gain. The center frequency of the 5G signal is 2029.25 MHz, but there is an offset between the carrier and the USRP, so we set the configured frequency to 2029.2 MHz. This offset introduces some phase error in the received signal, which we need to compensate for in the later stages. We also need to choose a capture length that is long enough to contain at least one SS/PBCH block, which occurs every 20 ms according to the 5G standard.
Fig. 1: Block diagram of the 5G signal capturing and decoding system with proposed NNs.
## III Methodology
We have considered two distinct NN algorithms to enhance various aspects of the on-ground 5G UE receiving chain. The first one, denoted as Symbol Enhancement NN, is dedicated to refining received symbols post-equalization. The second one, referred to as Equalization NN, is designed to improve channel estimation and, critically, the equalization process itself. Figure 2 depicts the block diagram of the 5G UE PBCH receiving chain indicating where the proposed NNs are placed within it.
The Symbol Enhancement NN just focuses on enhancing symbols following the equalization process, performing a task of relatively low complexity. Conversely, the Equalization NN needs to perform a more challenging task due to the multifaceted nature of its objective (i.e. joint channel estimation, noise characterization, equalization and the enhancement of the equalization process applied to received data).
In subsection III-A, we describe the model architecture and the training process common to both NNs. We use synthetic and real data from our 6GSPACE Lab testbed [10] for training, with specific procedures described in subsections III-B and III-C for synthetic and real data, respectively.
### _NN architecture and training process_
For both NNs, we employ a fully connected architecture with three hidden layers, each containing \(K\) neurons. The hidden layers use the hyperbolic tangent activation function, whereas the output layer employs a linear activation function. The NNs take real-valued block versions of the received complex symbols as input. For training, both NNs employ the Adam optimizer with a learning rate of \(\mu=0.001\) to minimize the mean squared error (MSE) between transmitted symbols and predictions. The training process employs mini-batches of 50 samples, requiring 40 epochs to train the NNs.
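A PyTorch sketch consistent with this description is given below; the paper does not state the framework, the hidden width \(K\), or the exact block dimensions, so the value of `K` and the 2 × 432 input/output size (real and imaginary parts of the 432 PBCH symbols) are assumptions.

```python
import torch
import torch.nn as nn

K = 128                 # hidden width (unspecified in the text, assumed here)
N_IO = 2 * 432          # assumed: real/imaginary parts of the 432 PBCH symbols

class SymbolNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_IO, K), nn.Tanh(),
            nn.Linear(K, K), nn.Tanh(),
            nn.Linear(K, K), nn.Tanh(),
            nn.Linear(K, N_IO),        # linear output layer
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=40, lr=1e-3):
    """Minimize the MSE between predicted and transmitted symbol blocks (mini-batches of 50)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for rx_block, tx_block in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(rx_block), tx_block)
            loss.backward()
            optimizer.step()
    return model
```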
### _Synthetic Data_
Both transmitted and received synthetic data have been generated using MATLAB in order to train and validate the NNs. To generate the transmitted samples, we meticulously defined specific parameters to replicate the 5G signal used in the real data case. These parameters include selecting the SSB "Case A" block pattern, corresponding to a sub-carrier spacing (SCS) of 15 kHz, and establishing a minimum channel bandwidth of 5 MHz. As a channel model, we have considered Additive White Gaussian Noise (AWGN), a carrier frequency offset (CFO) uniformly distributed within the SCS, and an integer and fractional delay corresponding to a GEO satellite delay. Additionally, we introduced the standard variability in the MIB by changing the cell identification number and the frame sequence number on the synthetically generated 5G samples.
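Although the generation itself was done in MATLAB, the listed impairments can be approximated with a short NumPy sketch; the sample rate and the integer-only delay are simplifying assumptions, and a fractional delay would additionally require interpolation.

```python
import numpy as np

def impair(tx, snr_db, scs_hz=15e3, fs_hz=7.68e6, delay_samples=0, rng=np.random):
    """Apply a CFO uniform within one sub-carrier spacing, an integer delay and AWGN."""
    cfo = rng.uniform(-scs_hz / 2, scs_hz / 2)
    n = np.arange(len(tx))
    rx = tx * np.exp(2j * np.pi * cfo * n / fs_hz)                 # carrier frequency offset
    rx = np.concatenate([np.zeros(delay_samples, complex), rx])    # integer propagation delay
    noise_power = np.mean(np.abs(rx) ** 2) / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(rx.size)
                                        + 1j * rng.standard_normal(rx.size))
    return rx + noise
```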
Both NNs were trained using a dataset comprising 3024 transmitted and received SSB signals, with each dataset tailored to a specific SNR. For the Symbol Enhancement NN, the ML model was trained using the post-equalization symbols, where the channel estimation and equalization were performed with a classical Minimum Mean Squared Error (MMSE) algorithm. On the other hand, the Equalization NN was trained with the symbols just after the SSB synchronization algorithm, leaving the NN in charge of the complete channel estimation and equalization tasks.
Furthermore, to gain a more comprehensive understanding of the performance of these ML models, we performed the ML test using three distinct configurations:
* Configuration 1: Test multiple ML models, each trained with a different SNR.
* Configuration 2: Test a single ML model trained across a range of SNRs.
* Configuration 3: Test a single ML model trained with a fixed SNR of 20 dB.
The SNRs considered for Configurations 1 and 2 included 0 dB, 2 dB, 5 dB, 7 dB, 10 dB, 15 dB, and 20 dB.
### _Real Data_
In the real data scenario, several differences arise compared to synthetic data. The 5G UE receiver
Fig. 2: Block diagram of the 5G UE PBCH receiving chain with proposed NNs
lacks real-time access to transmitted data due to dynamic factors. For example, the frame sequence number is not a priori known at the receiver since it is constantly increasing per each transmitted frame. On the other hand, the channel conditions tend to fluctuate over time and introduce dynamic variations into the experiments. Those are coming from both the satellite payload (e.g. non-linearities) and the over-the-air channel effects (e.g. tropospheric fading).
Perfect knowledge of the transmitted data is required to train the NNs, and as mentioned above, this is not possible with the current setup. To address this issue, we have adopted the synthetic regeneration-after-decoding approach depicted in Figure 3. This process involves decoding the PBCH bits using the 5G standard approach until the 32-bit payload data is obtained. The 32-bit payload data comprises 24 bits corresponding to the MIB and an additional 8 bits from various parameters. In the 5G standard, the 32-bit payload data has attached a 24-bit cyclic redundancy check (CRC) code. If the CRC determines that the 32-bit payload data are correct, the regeneration-after-decoding is executed. The 32-bit payload data are fed into the Bose-Chaudhuri-Hocquenghem (BCH) encoder, yielding 864 bits. These bits are then modulated with a Quadrature Phase-Shift Keying (QPSK) scheme obtaining the 432 symbols originally transmitted through the satellite, which are used for NNs training purposes. The regeneration-after-decoding approach is acceptable in this case since the SNR level at which the decoder operates is larger than the SNR ranges at which the NN are evaluated.
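The last step of this regeneration chain, mapping the 864 encoded bits back to the 432 reference symbols, amounts to the standard QPSK mapping of TS 38.211, sketched below; the `encode_pbch` helper in the usage comment is a hypothetical stand-in for the standard PBCH encoding and rate matching.

```python
import numpy as np

def qpsk_modulate(bits):
    """TS 38.211 QPSK mapping: two coded bits per symbol, so 864 bits give 432 symbols."""
    b = np.asarray(bits).reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

# coded_bits = encode_pbch(payload_bits)          # hypothetical helper: 32-bit payload -> 864 bits
# reference_symbols = qpsk_modulate(coded_bits)   # 432 symbols used as NN training targets
```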
For the testing phase of the NNs with the real data over the satellite, the USRP gain settings were configured to three different values: 70 dB, 25 dB, and 20 dB. Such gain values correspond to SNR values of 20 dB, 10 dB and 3 dB respectively.
## IV Results
In this section, we evaluate the proposed NN models for symbol enhancement and equalization tasks using both synthetic and real data. As both NNs consider the tasks as regression problems, the ML metric used to evaluate the models is the MSE loss.
The NNs produce real-valued output blocks, which are then converted back into complex symbols. These symbols can be represented as constellations (see, for instance, Figure 5). We demodulate the complex symbols to obtain received bits, which are compared to transmitted bits to calculate the bit error rate (BER), serving as a system performance metric.
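A small sketch of this post-processing is given below; the convention that the real-valued block stacks all real parts before all imaginary parts is our assumption, and the hard-decision demapping matches the QPSK mapping used earlier.

```python
import numpy as np

def to_complex(real_block):
    """Rebuild complex symbols from a real-valued block [Re_1..Re_N, Im_1..Im_N]."""
    half = len(real_block) // 2
    return np.asarray(real_block[:half]) + 1j * np.asarray(real_block[half:])

def qpsk_demod(symbols):
    """Hard-decision QPSK demapping consistent with the TS 38.211 mapping."""
    bits = np.empty(2 * len(symbols), dtype=int)
    bits[0::2] = (np.real(symbols) < 0).astype(int)
    bits[1::2] = (np.imag(symbols) < 0).astype(int)
    return bits

def bit_error_rate(tx_bits, rx_bits):
    return float(np.mean(np.asarray(tx_bits) != np.asarray(rx_bits)))
```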
### _Synthetic data_
In this subsection, we consider a synthetic dataset for both the training and testing phases of the proposed NN models. This synthetic dataset is essential for assessing how well our models function in a controlled environment. The obtained MSE is shown in Figures 4(a) and 4(b) for Symbol Enhancement NN and Equalization NN, respectively, trained and tested with SNR = 20 dB. The NNs exhibit optimal performance when evaluated under SNR conditions matching their training data. For instance, improved symbol constellations are evident in Figures 5(a) and 5(b) when the NNs are tested with samples that have the same SNR as the training data.
On the other hand, testing the models at SNRs different from their training data leads to significant performance degradation. For instance, when Equalization NN is trained with samples with a SNR of 20 dB and then tested with samples with a SNR of 10 dB, the gap between the MSE obtained during training and validation increases, as illustrated in Figure 6(a).
A similar degradation in performance can be noticed in Figure 6(b) when the models are trained across a range of SNRs but are evaluated under a specific, fixed SNR condition.
Fig. 3: Transmitted real data regeneration-after-decoding approach.
The models' ability to generalize and maintain their efficacy across a variety of real-world scenarios is directly impacted by the nature of the samples used during training.
We compare the BER without ML techniques to the BER with our NN models in Figure 7. We explore three training scenarios. In the first scenario, individual training for each SNR results in specialized models for each SNR setting. In the second scenario, training across various SNRs leads to a single unified model for symbol enhancement and another for equalization, testing the models' adaptability. In the third and final scenario, exclusive training at SNR = 20 dB produces a model specialized for this SNR setting. These trained models are then tested at specific SNR levels.
### _Real data_
In this subsection, we consider a dataset obtained from real-world experimental tests to train and test the proposed NN models. The obtained MSE is shown in Figures 8(a) and 8(b) for Symbol Enhancement NN and Equalization NN, respectively.
Symbol Enhancement NN demonstrates a satisfactory level of performance, maintaining acceptable MSE values, and its constellations in Figures 9(a) and 10(a) closely resemble the transmitted symbols. In terms of BER, Figure 11 shows Symbol Enhancement NN outperforming traditional equalizers without ML. This indicates Symbol Enhancement NN's effectiveness and robustness in real-world data scenarios.
Fig. 4: Learning curves for NNs trained and tested with SNR = 20dB with synthetic data.
Fig. 5: Constellations for NNs trained and tested with SNR = 20dB with synthetic data.
Fig. 8: Learning curves for NNs trained and tested with SNR = 20dB with real data.
Fig. 6: Learning curves for Equalization NN with synthetic data. (a) Trained with SNR = 20 dB and tested with SNR = 10 dB. (b) Trained with several SNRs and tested with SNR = 20 dB
Fig. 7: BER curves before and after Symbol Enhancement NN and Equalization NN for synthetic data.
On the other hand, Equalization NN exhibits a larger gap between training and validation MSE in Figure 8(b), indicating poorer performance with real data compared to synthetic data. While the constellations in Figures 9(b) and 10(b) suggest Equalization NN's ability to equalize symbols, its BER is less favorable than that of conventional equalizers without ML, as shown in Figure 11. This underscores the challenge of addressing equalization problems when training data inadequately represents the issues targeted by the Equalization NN.
## V Conclusion and Future Work
In this work, we've highlighted the potential of machine learning to improve 5G satellite signal decoding. We proposed two NNs to handle symbol enhancement and equalization tasks. The results demonstrate significant improvements for both synthetic and real datasets. Our future work aims to broaden our training dataset by capturing a wider range of signals. We expect to refine our machine learning models and enhance their adaptability in real-world satellite communication environments.
|
2307.16440 | Towards Head Computed Tomography Image Reconstruction Standardization
with Deep Learning Assisted Automatic Detection | Three-dimensional (3D) reconstruction of head Computed Tomography (CT) images
elucidates the intricate spatial relationships of tissue structures, thereby
assisting in accurate diagnosis. Nonetheless, securing an optimal head CT scan
without deviation is challenging in clinical settings, owing to poor
positioning by technicians, patient's physical constraints, or CT scanner tilt
angle restrictions. Manual formatting and reconstruction not only introduce
subjectivity but also strain time and labor resources. To address these issues,
we propose an efficient automatic head CT images 3D reconstruction method,
improving accuracy and repeatability, as well as diminishing manual
intervention. Our approach employs a deep learning-based object detection
algorithm, identifying and evaluating orbitomeatal line landmarks to
automatically reformat the images prior to reconstruction. Given the dearth of
existing evaluations of object detection algorithms in the context of head CT
images, we compared ten methods from both theoretical and experimental
perspectives. By exploring their precision, efficiency, and robustness, we
singled out the lightweight YOLOv8 as the aptest algorithm for our task, with
an mAP of 92.77% and impressive robustness against class imbalance. Our
qualitative evaluation of standardized reconstruction results demonstrates the
clinical practicability and validity of our method. | Bowen Zheng, Chenxi Huang, Yuemei Luo | 2023-07-31T06:58:49Z | http://arxiv.org/abs/2307.16440v2 | Towards Head Computed Tomography Image Reconstruction Standardization with Deep Learning Assisted Automatic Detection
###### Abstract
Three-dimensional (3D) reconstruction of head Computed Tomography (CT) images elucidates the intricate spatial relationships of tissue structures, thereby assisting in accurate diagnosis. Nonetheless, securing an optimal head CT scan without deviation is challenging in clinical settings, owing to poor positioning by technicians, patient's physical constraints, or CT scanner tilt angle restrictions. Manual formatting and reconstruction not only introduce subjectivity but also strain time and labor resources. To address these issues, we propose an efficient automatic head CT images 3D reconstruction method, improving accuracy and repeatability, as well as diminishing manual intervention. Our approach employs a deep learning-based object detection algorithm, identifying and evaluating orbitomeatal line landmarks to automatically reformat the images prior to reconstruction. Given the dearth of existing evaluations of object detection algorithms in the context of head CT images, we compared 12 methods from both theoretical and experimental perspectives. By exploring their precision, efficiency, and robustness, we singled out the lightweight YOLOv8 as the aptest algorithm for our task, with an mAP of 92.77% and impressive robustness against class imbalance. Our qualitative evaluation of standardized reconstruction results demonstrates the clinical practicability and validity of our method.
head computed tomography images, three-dimensional reconstruction, object detection
## I Introduction
Three-dimensional (3D) reconstruction of computed tomography images has become an indispensable instrument in a broad spectrum of clinical tasks, particularly over the last few years [1][2][3]. Specifically, head CT scans hold a pivotal role, revealing key details about structures such as the brain, skull, sinuses, and various soft tissues. This provides crucial insights instrumental in diagnosing and treating a wide range of conditions, from traumas and tumors to strokes and sinusitis [4][5]. However, manual interpretation of these scans often proves time-consuming and error-prone. Historically, this process has been fraught with potential pitfalls due to human error and variability in diagnostic interpretation. This is primarily attributed to the intricate details and subtle variations that can signify different pathological conditions [6][7].
By employing computer 3D reconstruction software, it is possible to transform a sequence of CT axial images into a comprehensive three-dimensional model. This process transforms the series of two-dimensional scans into an intuitive representation of the spatial relationships among tissue structures. It's a transformative leap from the conventional methods but isn't devoid of challenges. The outcome facilitates precise surgical planning and patient-specific treatment strategies, a significant enhancement in medical procedures. The current quality control standards for head CT images take the orbitomeatal line as the baseline for scans. This line is drawn from the center of the external auditory canal (EAC) to the outer canthus of the ipsilateral eye [8].
Nonetheless, it is often challenging to achieve ideal head CT images free of lateral deviations and based on the orbitomeatal line. Traditional solutions have been more reactive than proactive in this respect. Factors such as poor positioning by technicians, patient's physical constraints, and restrictions in the tilt angle of the CT scanner contribute to this challenge. As a result, radiologic technologists frequently manually reformat axial images using thin-layer data post-CT examination. This makeshift solution, while practical, isn't optimal. Unfortunately, this process may precipitate deviations in the reconstruction baseline as a consequence of the operator's subjective judgment, thereby affecting the accuracy of reconstruction outcomes. Additionally, manual reformatting and the subsequent 3D reconstruction require a significant investment in terms of time and labor.
Deep learning has emerged as a vital player in fields such as object detection, medical image segmentation [9][10][11], diagnostics [12][13], and predictive analytics [14][15][16]. Yet, its full potential, especially in the domain of CT image reconstruction, remains largely untapped. Previous research proposed a semi-automatic multiplanar reconstruction method [17]. This method mandates manually setting five head landmarks on axial images to identify the orbitomeatal line, facilitating 3D and multiplanar reconstruction. However, it heightens the labor intensity given the necessity of manual landmark identification.
Another approach utilizes an object detection algorithm to automatically reformat head CT images based on the orbitomeatal line [18]. Despite its benefits, this method mainly focuses on the automatic reformatting of axial head CT images, with limited involvement in 3D reconstruction. Moreover, the You Only Look Once (YOLO) model used in this object detection algorithm exhibits an accuracy of only 0.68, suggesting substantial potential for improvement in detection accuracy.
Navigating these prevailing gaps, we introduce a robust and trustworthy automated 3D reconstruction method, a solution that could substantially enhance diagnostic precision. The primary contributions of this paper are as follows.
1. We have proposed an automatic 3D reconstruction method for head CT images, utilizing automatically
reformatted CT images. This method not only promises more accurate and repeatable results, mitigating potential inconsistencies arising from varying technicians' manual reconstruction judgments, but also curtails the need for manual input, yielding considerable savings in time and labor.
2. Capitalizing on a deep learning-based object detection algorithm, we have devised a streamlined process for 3D reconstruction. This algorithm identifies and evaluates orbitomeatal line landmarks, typically manually annotated by radiologists, to automate image reformatting. This process significantly reduces labor expenditure associated with image reformatting.
3. We conducted an exhaustive comparative analysis of 12 deep learning-based object detection methods, focusing on their accuracy, efficiency, and robustness in the context of automatic reformatting of CT images. Grounded in both theoretical and experimental insights, we discern the most effective and efficient algorithm for this particular task.
## II Materials and methods
### _Datasets_
The dataset we utilized comprises 140 consecutive non-contrast head CT scans gathered in January 2021. These scans were acquired utilizing a 128-detector row CT scanner (SOMATOM Definition AS+, Siemens, Erlangen, Germany), with specific scanning parameters: tube voltage set to 100kV, tube current at 447mA, a field-of-view of 222mm, and a reconstruction thickness of 1mm. The demographic distribution within these 140 cases was balanced, with approximately 49% (68 cases) being male. Age distribution within the dataset varied, ranging from 18 to 94 years.
On average, each case comprised 139 slices. Following the crucial steps of annotation and data preprocessing, a total of 619 annotated images were generated from all the slices. These images were then categorized into a training set, a validation set, and a test set following an 8:1:1 ratio. This distribution resulted in a training set consisting of 495 images, a validation set composed of 62 images, and a test set encompassing another 62 images. The distribution of four classes is 312, 286, 176, 114 instances for Right Eyes, Left Eyes, Right EAC, and Left EAC, respectively. The distinct difference in counts among these classes, especially between the Right Eyes and Left EAC, can pose challenges in model training. This dataset serves as the foundation for the proposed automated 3D reconstruction method, facilitating its validation and evaluation.

Fig. 1: Proposed approach
### _Proposed approach overview_
Our proposed approach hinges on standardizing the CT slice data via 3D rotation prior to reconstruction. This standardization process first calls for the identification of four landmarks that determine the orbitomeatal line: bilateral eyes and bilateral external auditory canals. As depicted in Fig. 1, we employ a deep learning-based object detection algorithm to automatically identify these four pivotal landmarks. In every slice of each sequence, the object detection algorithm is applied to spot all possible landmarks. These potential landmarks are then assessed to ascertain the most suitable ones for the respective sequence. Utilizing these landmarks, we calculate rotation angles, subsequently performing a 3D rotation for image reformatting. Ultimately, we execute the 3D reconstruction of the reformatted DICOM sequences.
To actualize automated landmark detection, we harness a supervised object detection model, which is trained on preprocessed data. Prior to training, we carry out preprocessing on the raw dataset. The preprocessed data are manually annotated by radiological technologists utilizing the annotation tool LabelImg. This labeled dataset is subsequently partitioned into a training set and a validation set, enabling the trained model to be deployed for potential landmark detection.
The exploration of the performance of object detection algorithms in the context of head CT images remains largely untapped. With the intent of identifying the most suitable algorithm for this task, we selected 12 deep learning-based object detection algorithms for a comparative analysis of both theoretical and experimental performance. As delineated in Table I, we provide a high-level overview evaluating various aspects: Algorithm Complexity, Model Architecture Complexity, Training Procedure Complexity, Inference Speed, and Robustness to Dataset Shift. This systematic examination paves the way for an informed choice of the best-suited algorithm for the task at hand.
### _Data preprocessing_
The data preprocessing involves converting the original DICOM data into JPEG format images while removing the overlays. This conversion maintains the original DICOM size of 512x512 pixels. The processed images possess a resolution of 96dpi, along with an 8-bit depth and a tri-channel color scheme.
### _Potential landmarks detection_
Our deep learning-based object detection algorithm identifies four pivotal landmarks: the bilateral eyes and bilateral external auditory canals. We explored 12 object detection models to aid in potential landmark detection. These models' architectures, which encompass their stages, backbone, neck, bounding box loss, and classification loss, are displayed in Table II. Notably, the YOLO series models, functioning as one-stage detectors, confer the benefit of high computational speed and low computational load, albeit with a trade-off in varying accuracy degrees.
The training generally encompasses two stages: the freezing stage and the unfreezing stage. In the freezing stage, the model's backbone is immobilized, the feature extraction network remains static, and the overall network is fine-tuned. This stage offers benefits stemming from its lower computational resource demand and faster training speed. In contrast, during the unfreezing stage, the model's backbone is not frozen, leading to changes in the feature extraction network, implying that all network parameters can undergo modifications. The process initiates with 50 epochs of freezing training, subsequently advancing to unfreezing training. It is noteworthy that the batch size during the unfreezing phase is halved compared to the freezing phase.

Fig. 2: Two-stage supervised training process. In the freezing phase, the model's backbone is static for fine-tuning, ensuring efficient computation and swift training. This lasts for 50 epochs before transitioning to the unfreezing stage, where the entire model is adaptable, with a batch size reduced by half. For detailed training parameters, see Table III. Notably, the SGD and AdamW optimizers require 300 epochs while the Adam optimizer requires 100 epochs in the unfreezing training. All models are fine-tuned using pre-trained weights from prominent datasets like COCO and PASCAL VOC 2007 to ensure optimal initialization.
The parameters used during training are outlined in Table III. Specifically for the DETR model, we employ only the AdamW optimizer to expedite training. In contrast, other models utilize both the SGD and Adam optimizers. Considering that SGD necessitates a more extended period to converge, a larger total number of epochs is set. Conversely, Adam can operate with a relatively smaller number of total epochs. Bearing in mind that training a network from scratch might yield unsatisfactory results owing to randomly initialized weights and poor feature extraction, we fine-tune each model using officially released weights trained on representative large-scale datasets in the field of image recognition, such as COCO [31], ImageNet [32], and PASCAL VOC 2007 [33].
### _Landmark identification_
In every case, the head CT image, based on its volumetric data, produces a series of potential orbitomeatal line landmark groups \(G_{i}\), where \(i\in\{1,2,\cdots,N\}\). Each of these groups consists of four types of landmarks: the bilateral eyes and the bilateral external auditory canals. Every landmark is equipped with a confidence score and a square bounding box, centered around the (x, y) coordinate.
To ascertain the relative \(z\) position for each of the four types of landmarks, we rely on the confidence score of the bounding boxes, as expressed in the following equation:
\[z=\arg\max_{i}{(C_{i})} \tag{1}\]
Where \(C_{i}\) stands for the bounding box's confidence score pertaining to the orbitomeatal line landmark group indexed as \(i\).
Fig. 3 illustrates that, for each landmark type, the bounding box with the highest confidence score is chosen among the candidate bounding boxes. Its index serves as the relative z-coordinate. This establishes the detection box with three-dimensional coordinates (x, y, z) as the orbitomeatal line landmark.
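A minimal sketch of this selection rule is given below, assuming the detector returns, for every slice, a list of (class, confidence, box centre) tuples; the exact output format of the detector is an assumption made only for illustration.

```python
def select_landmarks(per_slice_detections):
    """Keep, for each landmark class, the detection with the highest confidence over all slices.

    per_slice_detections[z] is assumed to be a list of (class_name, confidence, cx, cy)
    tuples produced by the detector on slice z; the slice index of the winning box
    becomes its z coordinate, as in Eq. (1).
    """
    best = {}
    for z, detections in enumerate(per_slice_detections):
        for class_name, confidence, cx, cy in detections:
            if class_name not in best or confidence > best[class_name][0]:
                best[class_name] = (confidence, cx, cy, z)
    return {name: (cx, cy, z) for name, (_, cx, cy, z) in best.items()}
```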
### _Head CTs reformatting_
To reformat CT images, we conduct a three-dimensional rotation using Euler angles. These angles comprise a set of three distinct parameters, determining the position of a rigid body rotating around a fixed point. We compute these Euler angles based on landmarks from the bilateral eyes and external auditory canals.
Specifically, we establish a reference coordinate system, with the x-axis perpendicular to the sagittal plane, the y-axis perpendicular to the coronal plane, and the z-axis perpendicular to the axial plane.
In this system, the roll angle signifies the rotation around the x-axis. It represents the angle between the rotation vector on the sagittal plane and the y-axis. The pitch angle indicates rotation around the y-axis, i.e., the angle between the rotation vector on the coronal plane and the x-axis. The yaw angle refers to the rotation around the z-axis, or the angle between the rotation vector on the axial plane and the x-axis, as Fig. 4 illustrates.
Let the left eye landmark coordinates be \((\mathbf{x}_{Left-eye},\mathbf{y}_{Left-eye},\mathbf{z}_{Left-eye})\) and the right eye landmark coordinates be \((\mathbf{x}_{Right-eye},\mathbf{y}_{Right-eye},\mathbf{z}_{Right-eye})\). The coordinates for the left external auditory canal landmark are \((\mathbf{x}_{Left-EAC},\mathbf{y}_{Left-EAC},\mathbf{z}_{Left-EAC})\), and those for the right external auditory canal landmark are \((\mathbf{x}_{Right-EAC},\mathbf{y}_{Right-EAC},\mathbf{z}_{Right-EAC})\). Based on these landmarks, we calculate the roll angle \(\mathbf{r}\), opting for the smaller angle value from both sides.
The roll angle is calculated as follows:
Fig. 3: Landmark identification example. This is a detection result of the orbitomeatal line landmarks (bilateral eyes and bilateral EAC) on consecutive CT slices in a single case. The green, red, purple, and blue dotted boxes respectively highlight the slices where the left eye, right eye, left EAC, and right EAC were detected with the highest confidence. The indices of these selected slices are then used as the z-coordinate values for the landmarks.
\[\mathbf{r}=\min\left(\arctan\left(\frac{\mathbf{z}_{Left-eye}-\mathbf{z}_{Left-EAC}}{\mathbf{y}_{Left-eye}-\mathbf{y}_{Left-EAC}}\right),\;\arctan\left(\frac{\mathbf{z}_{Right-eye}-\mathbf{z}_{Right-EAC}}{\mathbf{y}_{Right-eye}-\mathbf{y}_{Right-EAC}}\right)\right) \tag{2}\]
Subsequently, we compute the pitch angle \(\mathbf{p}\) based on the bilateral orbital and external auditory canal landmarks, again taking the smaller angle value.
The pitch angle is calculated as follows:
\[\mathbf{p}=\min\left(\arctan\left(\frac{\mathbf{z}_{Left-eye}-\mathbf{z}_{Right-eye}}{\mathbf{x}_{Left-eye}-\mathbf{x}_{Right-eye}}\right),\;\arctan\left(\frac{\mathbf{z}_{Left-EAC}-\mathbf{z}_{Right-EAC}}{\mathbf{x}_{Left-EAC}-\mathbf{x}_{Right-EAC}}\right)\right) \tag{3}\]
Finally, we calculate the yaw angle \(\mathbf{y}\) based on the bilateral orbital landmarks:
\[\mathbf{y}=\arctan\left(\frac{\mathbf{y}_{Left-eye}-\mathbf{y}_{Right-eye}}{\mathbf{x}_{Left-eye}-\mathbf{x}_{Right-eye}}\right) \tag{4}\]
From the above calculations, we derive three rotation angles, based on which we carry out a three-dimensional rotation to obtain the reformatted DICOM format head CT images. This rotation process can be implemented with the SimpleITK library in Python; alternatively, DICOM files can be read using the pydicom package, with the rotation applied through the scikit-image library.
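A compact sketch of Eqs. (2)-(4) and of the subsequent rigid rotation with SimpleITK is shown below; landmark coordinates are assumed to share a common (x, y, z) frame, and the axis-to-angle assignment and fill value are illustrative choices rather than a definitive implementation.

```python
import numpy as np
import SimpleITK as sitk

def euler_angles(left_eye, right_eye, left_eac, right_eac):
    """Roll, pitch and yaw (radians) following Eqs. (2)-(4); each landmark is an (x, y, z) triple."""
    roll = min(np.arctan((left_eye[2] - left_eac[2]) / (left_eye[1] - left_eac[1])),
               np.arctan((right_eye[2] - right_eac[2]) / (right_eye[1] - right_eac[1])))
    pitch = min(np.arctan((left_eye[2] - right_eye[2]) / (left_eye[0] - right_eye[0])),
                np.arctan((left_eac[2] - right_eac[2]) / (left_eac[0] - right_eac[0])))
    yaw = np.arctan((left_eye[1] - right_eye[1]) / (left_eye[0] - right_eye[0]))
    return roll, pitch, yaw

def rotate_volume(image, roll, pitch, yaw):
    """Resample the CT volume after a rigid Euler rotation about its centre."""
    centre = image.TransformContinuousIndexToPhysicalPoint(
        [(s - 1) / 2.0 for s in image.GetSize()])
    transform = sitk.Euler3DTransform()
    transform.SetCenter(centre)
    transform.SetRotation(roll, pitch, yaw)  # rotations about the x, y and z axes
    # Linear interpolation; exposed voxels are filled with an air-like HU value.
    return sitk.Resample(image, image, transform, sitk.sitkLinear, -1000.0)
```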
### _Standardized reconstruction_
We undertake the three-dimensional reconstruction of the reformatted DICOM sequence utilizing the Visualization Toolkit (VTK). This task can be carried out and directly visualized with a medical image analysis platform, 3D Slicer.
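A minimal VTK sketch of this surface-reconstruction step is given below; the directory name and iso-value are placeholders, and the actual pipeline (e.g. within 3D Slicer) may differ.

```python
import vtk

# Read the reformatted DICOM series and extract an iso-surface.
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("reformatted_case_001")  # placeholder path
reader.Update()

surface = vtk.vtkMarchingCubes()
surface.SetInputConnection(reader.GetOutputPort())
surface.SetValue(0, 300)  # Hounsfield iso-value (e.g. bone); depends on the target tissue

# Export the resulting mesh for visualization in any 3D viewer.
writer = vtk.vtkSTLWriter()
writer.SetInputConnection(surface.GetOutputPort())
writer.SetFileName("head_surface.stl")
writer.Write()
```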
## III Experiment and Results
### _Experiment details_
The experiments were conducted in an environment comprising an Intel i7-13700k 3.4GHz CPU, an Nvidia RTX 4090 GPU with 24GB memory, and 64GB system memory. The software setup included Python 3.7, PyTorch 1.9.3, CUDA 11.6, and CUDNN 8.3.0. We employed SimpleITK 2.2.1 for 3D rotations and VTK 9.1.0 for 3D reconstructions.
### _Evaluation metrics_
We assess model performance based on factors such as accuracy, convergence, computational efficiency, and memory footprint. For accuracy, we employ metrics like mean Average Precision (mAP), F1 score, Precision, and Recall. The mAP, a prevalent metric in object detection tasks, is the mean of precision scores at varying recall levels, offering a single-figure measure of quality across these levels. We also appraise the Average Precision (AP) for the four distinct landmark classes.
As for convergence, we compute the loss on both the validation and the test sets. Computational efficiency is measured using GFLOPs (Giga Floating Point Operations Per Second), denoting the number of billion floating-point operations a system performs each second. With regard to memory usage, we assess the total parameter count of each model. This count indicates the potential memory requirement of the model, as each parameter denotes a value that the model must store and update during training.
Solely relying on the total parameter count may not present the complete picture regarding the memory efficiency of a model. Even models with similar FLOPs can have different computational speeds and memory usages. For example, group convolution consumes a large MAC and seriously affects the speed, which is not considered in FLOPs. Therefore, we incorporate the Memory Access Cost (MAC) as an efficiency evaluation metric.
To holistically observe model performance concerning accuracy, efficiency, and memory usage, we propose two indexes: the Precision Efficiency Index (PEI) and the Computational Precision Efficiency Index (CPEI):
\[PEI=\frac{mAP}{Total\ parameters} \tag{5}\]
\[CPEI=\frac{mAP}{GFLOPs} \tag{6}\]
In relative terms, PEI conveys the mAP per parameter present in the model, while CPEI denotes the mAP achieved per unit of computational effort.
### _Detection model evaluation_
In our experiment, the Adam optimizer outperformed SGD in terms of convergence speed, requiring 100 epochs of training compared to SGD's 300, with training data recorded every 10 epochs.
Table IV reveals YOLOv8 as the most accurate among the 12 object detection models, recording an mAP of 0.9277, while EfficientDet trailed closely with 0.9034. YOLOv8 stood out for robustness against class imbalance, especially for specific classes like the right external auditory canal and both eyes. EfficientDet excelled in efficiency, as evidenced by its top PEI and CPEI scores in Table V. Yet, for a balanced blend of precision and resilience, YOLOv8 is the optimal choice, accepting its minor efficiency trade-off.

Fig. 4: Head CT 3D Rotation. Rotation is performed based on the Euler rotation angles (roll angle, pitch angle, and yaw angle) calculated from the orbitomeatal line landmark coordinates.

Fig. 5: Detection models mAP. (a) Adam. (b) SGD. EfficientDet's impressive accuracy relies heavily on extensive iterative epochs. EfficientDet demands 150 epochs with SGD for peak performance, compared to YOLOX's 100 epochs. EfficientDet's accuracy fell behind the other nine models, with an mAP below 0.3 after 100 epochs with Adam, yet thrives under SGD training.
EfficientDet, while precise, mandates extended training durations. Fig. 5(b) showcases it taking 150 epochs with SGD to peak, compared to YOLOX's 100 epochs. Moreover, using Adam as an optimizer for 100 epochs saw EfficientDet's mAP plummeting below 0.3 (Fig. 5(a)), yet it shined under SGD. Its slower convergence rate is evident in Fig. 6.
YOLOX, YOLOv7, and YOLOv3 also delivered commendable results, with their mAPs hovering around 0.85, underscoring the YOLO architecture's potential. In contrast, DETR and Faster R-CNN lacked the required precision for this task.
Fig. 8 highlights the impeccable precision of EfficientDet and YOLOv8 at various recall levels, more so with limited bilateral EAC landmarks. YOLOv8 consistently scored high in F1 scores for all four landmark detections, evident in Fig. 9. For bilateral eyes with ample samples, YOLOv3, YOLOX, and YOLOv8 showcased broad threshold range robustness, as reinforced by Fig. 9(c) and Fig. 9(d). Fig. 7(a) captures the performance nuances when the score threshold is at 0.5. Notably, YOLOv5 had significantly low F1 scores, hinting at possible model inefficiencies. Cases of F1 scores surpassing AP indicate potential performance inconsistencies across decision thresholds.
### _Qualitative evaluation of Reconstruction_
A trio of experienced radiology specialists individually scrutinized both the non-standardized and standardized reconstruction outcomes, aiming to evaluate the quality of the reconstruction images. The assessment was performed with a five-tier grading scheme (1 - Subpar, 2 - Mediocre, 3 - Average, 4 - Superior, 5 - Excellent). The evaluation criteria encompassed three key elements: structural fidelity, absence of distortion, and consistency in representation across diverse viewpoints. More specifically, the experts considered whether the holistic architecture and shape of the original head were retained in the reconstructed outcomes, whether the form, contours, and features remained identifiable without distortion, and whether the reconstituted outcomes depicted a uniform interpretation of the original head, regardless of the viewpoint. To quantify the distinction between the non-standardized and standardized reconstruction outcomes, a Wilcoxon signed-rank test was employed, with the threshold of significance defined as \(P<\) 0.05.

Fig. 6: Detection models loss. (a) loss with Adam. (b) loss with SGD. (c) validation loss with Adam. (d) validation loss with SGD. EfficientDet is the slowest to converge among the 12 models. This underscores the potential of simple models with multiple epochs in such object detection tasks. Except for EfficientDet, the other 11 models have similar convergence rates and are sufficiently learned after 10 iterations. All 12 models achieve a good fit: the training loss and validation loss have converged and the difference between them is very small.
Table VI shows the score distributions for both non-standardized and standardized reconstruction results. Observers 1, 2, and 3 yielded average scores with associated standard deviations for the non-standardized reconstruction results of 3.4 \(\pm\) 1.0, 3.5 \(\pm\) 1.0, and 3.5 \(\pm\) 1.0, respectively. Meanwhile, the standardized reconstruction results correspondingly elicited scores of 4.0 \(\pm\) 1.0, 4.1 \(\pm\) 1.0, and 4.1 \(\pm\) 1.0, each manifesting a statistically significant discrepancy at \(P<\) 0.001. Among the standardized reconstruction outcomes, the quantity of cases that were assigned scores of 3 or higher, thereby being classified as clinically viable, were 43 (82.7%), 46 (88.5%), and 48 (92.3%) for observers 1, 2, and 3, respectively.
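The paired comparison itself reduces to a one-line call in SciPy; the per-case scores below are illustrative stand-ins, not the actual ratings collected in our study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired scores for one observer (non-standardized vs. standardized).
non_standardized = np.array([3, 4, 2, 3, 4, 3, 4, 2, 3, 3])
standardized     = np.array([4, 5, 3, 4, 5, 5, 5, 3, 4, 5])

statistic, p_value = wilcoxon(non_standardized, standardized)
print(f"Wilcoxon statistic = {statistic}, p-value = {p_value:.4f}")
```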
## IV Discussion
The standardized 3D reconstruction of head CT scans has profound ramifications in clinical settings. By establishing a structured, consistent, and high-resolution representation of cranial anatomy, clinicians are afforded a panoramic view of intricate structures and potential anomalies that may elude traditional imaging techniques. Therefore, our standardized 3D reconstruction holds multifaceted significance:
Fig. 7: (a) Results of landmarks detection for different models in terms of average precision and F1-score (score threshold = 0.5). (b) Heatmap of AP for landmarks detection. (c) Heatmap of F1-score for landmarks detection. YOLOv5 exhibits very low F1 scores across all classes, indicating a discrepancy between the model’s precision and recall. This could suggest that while the model may be correctly identifying a reasonable number of objects, it could also be missing many objects or marking too many false positives, thereby leading to low F1 scores. Moreover, there are isolated instances where the F1 score exceeds the AP. This could imply that while the model’s precision and recall are balanced at the specific decision threshold used for the F1 score, the model’s precision may vary more across all recall levels. This suggests that the model’s performance might not be consistently good for different decision thresholds.
1. **Enhanced precision in segmentation.** The standardized 3D reconstructions facilitate sharper segmentation, especially for complex structures like the brain and skull [34]. By enabling clearer delineations, they empower clinicians to isolate specific anatomical regions with heightened accuracy, which is paramount in tasks ranging from tumor localization to post-operative assessments [35].
2. **Feature extraction & quantitative measurements.** The reconstruction aids in spotlighting specific landmarks or features with precision. This becomes crucial when identifying and tracking the progression of specific anatomical lesions or growths. Furthermore, quantitative measurements, such as determining the volume of a tumor or assessing the length and angle of certain structures, become more feasible and precise [36].
3. **Alignment through 3D image registration.** Another pivotal application is in the realm of 3D medical image registration [37]. By aligning multiple 3D images within a shared spatial domain, our reconstruction method paves the way for more holistic patient assessments, cross-referencing different imaging sessions for comprehensive insights.
4. **Extensibility to other anatomical regions & modalities.** We anticipate that our approach can be extrapolated to other anatomical regions like limbs and the chest, showcasing its versatility. While some degree of model adaptation or transfer learning may be warranted for optimal outcomes, the foundational methodology remains universally applicable. Beyond CT scans, modalities like Magnetic Resonance Imaging (MRI) might benefit from our standardized reconstruction technique, thus broadening its potential impact.
Despite these advancements, our work is subject to certain limitations:
Primarily, while our standardized reconstruction gained favorable subjective outcomes in the qualitative evaluation, indicative of its utility in a clinical setting, these results are somewhat empirical. It is advisable to implement quantitative metrics to assess the loss of detail in the standardized reconstruction induced by interpolation in the reconstruction process. By treating the non-standardized reconstruction performed on original CT images as the ground truth, metrics such as Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Dice Similarity Coefficient (DSC), or Jaccard Index could provide a more nuanced comparison between non-standardized and standardized reconstruction.

Fig. 8: Average precision for detection models. (a) Left external auditory canal. (b) Right external auditory canal. (c) Left eye. (d) Right eye. EfficientDet and YOLOv8 demonstrated superior precision across different levels of recall. Particularly when detecting samples with fewer bilateral EAC landmarks, both models managed to uphold precision while increasing recall.
Secondarily, our existing data, characterized by its nuances in age, pathology, data acquisition methods, and other variables, might not fully encapsulate the intricate heterogeneity of a broader patient demographic.
This constraint in our dataset's diversity might influence the generalizability of our model. For instance, performance variations might arise when interpreting CT scans sourced from older or different imaging equipment, or when applied across diverse patient populations with unique pathologies. Such factors accentuate the need to scrutinize our model's applicability more rigorously across diverse clinical landscapes.
Given these challenges, it's paramount to enhance our dataset with a more varied collection of data. This could involve sourcing from multiple clinical settings with different scanning protocols and a wider demographic range. Incorporating diverse data types, such as MRI scans, alongside CT scans, could offer richer and more comprehensive insights. Engaging in collaborative initiatives with clinics and hospitals globally can also pave the way for a more encompassing dataset, bolstering the model's adaptability.
For those seeking immediate clinical application of our model, we recommend fine-tuning it using a subset of local data, ensuring it remains attuned to specific clinical nuances. Regular performance evaluations in the light of evolving patient demographics or changes in imaging technologies are also essential to maintain consistent accuracy.

Fig. 9: F1 score for detection models. (a) Left external auditory canal. (b) Right external auditory canal. (c) Left eye. (d) Right eye. In terms of F1 scores, YOLOv8 consistently performed well across all four landmark detections. YOLOv8 achieved peak F1 scores at a lower threshold, denoting high F1 scores without a stringent threshold. None of the models exhibited consistent performance across thresholds for bilateral EACs, likely due to smaller sample sizes. Conversely, for bilateral eyes where the sample size was larger, YOLOv3, YOLOX, and YOLOv8 maintained high F1 scores across a wide threshold range, reflecting more robust models that perform well irrespective of the precise threshold.
In conclusion, to fully harness the potential of our approach across varied clinical contexts and amongst diverse patient populations, future efforts should focus on addressing these limitations and continuously refining our approach.
## V Conclusion
In this paper, we presented a robust and efficient automated method for standardized three-dimensional reconstruction of head CT images using a deep learning-based object detection algorithm. Our solution seamlessly identifies and assesses landmarks for image reformatting, reducing inconsistencies and time demands associated with manual processes. Through a detailed analysis of 12 object detection algorithms, we identified YOLOv8 as the most fitting choice for our task, based on reliability and efficiency. Standardized reconstruction results further confirmed our method's clinical relevance and validity. Our innovative fusion of deep learning and radiology illuminated through this work not only holds promise for boosted diagnostic efficiency but also underscores the transformative potential of AI-driven healthcare solutions.
|
2309.07787 | Optimal inexactness schedules for Tunable Oracle based Methods | Several recent works address the impact of inexact oracles in the convergence
analysis of modern first-order optimization techniques, e.g. Bregman Proximal
Gradient and Prox-Linear methods as well as their accelerated variants,
extending their field of applicability. In this paper, we consider situations
where the oracle's inexactness can be chosen upon demand, more precision coming
at a computational price counterpart. Our main motivations arise from oracles
requiring the solving of auxiliary subproblems or the inexact computation of
involved quantities, e.g. a mini-batch stochastic gradient as a full-gradient
estimate. We propose optimal inexactness schedules according to presumed oracle
cost models and patterns of worst-case guarantees, covering among others
convergence results of the aforementioned methods under the presence of
inexactness. Specifically, we detail how to choose the level of inexactness at
each iteration to obtain the best trade-off between convergence and
computational investments. Furthermore, we highlight the benefits one can
expect by tuning those oracles' quality instead of keeping it constant
throughout. Finally, we provide extensive numerical experiments that support
the practical interest of our approach, both in offline and online settings,
applied to the Fast Gradient algorithm. | Guillaume Van Dessel, François Glineur | 2023-09-14T15:23:25Z | http://arxiv.org/abs/2309.07787v1 | # Optimal inexactness schedules for Tunable Oracle based Methods
###### Abstract
Several recent works address the impact of inexact oracles in the convergence analysis of modern first-order optimization techniques, e.g. Bregman Proximal Gradient and Prox-Linear methods as well as their accelerated variants, extending their field of applicability. In this paper, we consider situations where the oracle's inexactness can be chosen upon demand, more precision coming at a computational price counterpart. Our main motivations arise from oracles requiring the solving of auxiliary subproblems or the inexact computation of involved quantities, e.g. a mini-batch stochastic gradient as a full-gradient estimate. We propose optimal inexactness schedules according to presumed oracle cost models and patterns of worst-case guarantees, covering among others convergence results of the aforementioned methods under the presence of inexactness. Specifically, we detail how to choose the level of inexactness at each iteration to obtain the best trade-off between convergence and computational investments. Furthermore, we highlight the benefits one can expect by tuning those oracles' quality instead of keeping it constant throughout. Finally, we provide extensive numerical experiments that support the practical interest of our approach, both in _offline_ and _online_ settings, applied to the Fast Gradient algorithm.
* CONTACT Guillaume Van Dessel. Email: [email protected]
inexact oracles; tunable accuracy; optimal schedules; first-order algorithm
## 1 Introduction
Typical iterative optimization schemes rely on key ingredients often referred to as _oracles_[6]. In what concerns continuous optimization, with respect to both convex and nonconvex realms, the vast majority of the papers use two distinctive types of oracles at each iteration, namely the _informative_ and _computational_ ones [21].
1. **Informative**: one assumes the possibility to obtain differential information about the objective (zero, first,..., higher-order) at successive query points.
2. **Computational**: one assumes the ability to update sequences of iterates following some rules, often involving the resolution of easy or, at least, not too complicated, subproblems.
By default, one implicitly considers that both oracles yield _error-free_ outputs. Nevertheless, there exist cases for which such a convenience appears unreasonable, e.g. when the objective value stands as the result of a non-trivial optimization problem (I) [27] or when one cannot solve exactly subproblems in (II) [13]. Unfortunately, more and more problems of practical interest exhibit a structure that does not allow for _exact_ oracles. Therefore, a lot of effort has been put over the past years into dealing with
inexactness in _informative_ and _computational_ oracles, in order for widely used algorithms (Bregman Proximal Gradient, Prox-Linear, their accelerated variants, etc.) [5, 20, 26] to remain applicable in such scenarios. As argued in the previous paragraph, a common source of inexactness stems from the necessity of numerically solving auxiliary non-trivial optimization problems to produce oracles' outputs.
**Example 1.1**.: (_saddle-point problems_) For instance, [3] analyzed the Gradient Method (GM) that allowed for what they defined as \((\delta,L,\mu)\) inexact oracles. Consider saddle-point problems of the type
\[F^{*}=\min_{x\in\mathbb{R}^{d}}\,\left\{F(x):=\max_{u\in\mathbb{R}^{n}}\,G(u) +\langle Au,\,x\rangle\right\}>-\infty \tag{1}\]
where \(G\) is \(L(G)\) smooth and \(\mu(G)>0\) strongly-concave and \(A\in\mathbb{R}^{d\times n}\) is a matrix. They showed that \(F\) is \(L(F)=\frac{\lambda_{\max}(AA^{T})}{\mu(G)}\) smooth, \(\mu(F)=\frac{\lambda_{\min}(AA^{T})}{L(G)}\) strongly-convex and its gradient is given as \(\nabla F(x)=Au^{*}(x)\) where \(u^{*}(x)\) is the exact minimizer of
\[\max_{u\in\mathbb{R}^{n}}\,G(u)+\langle Au,\,x\rangle \tag{2}\]
In the general case, (2) cannot be solved exactly. Nevertheless, it can be approximately solved up to \(\delta>0\) global accuracy quite efficiently, e.g. by dedicated accelerated first-order methods [12]. That is, instead of \(u^{*}(x)\), one can provide \(u_{x}\in\mathbb{R}^{n}\) such that
\[G(u^{*}(x))+\langle Au^{*}(x),x\rangle-\left(G(u_{x})+\langle Au_{x},\,x \rangle\right)\leq\delta \tag{3}\]
and use approximate information in GM: \(\tilde{F}(x)=G(u_{x})+\langle Au_{x},\,x\rangle-\delta\), \(\nabla\tilde{F}(x)=Au_{x}\).
Luckily enough, it thereby happens that one can _tune_ the quality of oracles by adequately choosing the amount of computational time spent on these problems. Looking back at the above example, it is well-known that the number \(\omega\) of iterations one must undertake to solve (2) up to \(\delta\) accuracy (3) scales as \(\kappa(G)^{-\frac{1}{2}}\,\log(\delta^{-1})\) where \(\kappa(G)=\mu(G)/L(G)\). After \(N\) steps of GM involving a sequence of such inexact gradients with parameters \(\{\delta_{k}\}_{k=0}^{N-1}\) at iteration \(k=0,\ldots,N-1\) while using a constant stepsize \(L^{-1}=(2L(F))^{-1}\), it is proven in [3] that for
\[\hat{x}_{N}=\frac{\sum_{k=0}^{N-1}\left(1-\kappa\right)^{N-1-k}x_{k}}{\sum_{k =0}^{N-1}\left(1-\kappa\right)^{N-1-k}}\]
the following guarantee holds
\[F(\hat{x}_{N})-F^{*}\leq\frac{M(N)+3\,\sum_{k=0}^{N-1}\left(1-\kappa\right)^{ N-1-k}\delta_{k}}{\sum_{k=0}^{N-1}\left(1-\kappa\right)^{N-1-k}} \tag{4}\]
with \(\kappa=4^{-1}\kappa(G)\), \(M(N)=2^{-1}\,LR^{2}\left(1-\kappa\right)^{N}\) and \(R\) the initial distance to a minimizer of (1). One observes the additive impact of such inexactness on GM's convergence.
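For concreteness, the right-hand side of (4) can be evaluated numerically for any candidate schedule \(\{\delta_{k}\}_{k=0}^{N-1}\); the constants used in the small sketch below are purely illustrative.

```python
import numpy as np

def gm_error_bound(deltas, L, R, kappa):
    """Right-hand side of the convergence bound (4) for a given inexactness schedule."""
    deltas = np.asarray(deltas, dtype=float)
    N = deltas.size
    weights = (1.0 - kappa) ** (N - 1 - np.arange(N))
    S = weights.sum()
    M_N = 0.5 * L * R ** 2 * (1.0 - kappa) ** N
    return (M_N + 3.0 * np.dot(weights, deltas)) / S

# Illustrative comparison of a constant and a geometrically decreasing schedule.
L, R, kappa, N = 10.0, 1.0, 0.05, 200
bound_constant  = gm_error_bound(np.full(N, 1e-3), L, R, kappa)
bound_geometric = gm_error_bound(1e-1 * (1.0 - kappa) ** np.arange(N), L, R, kappa)
```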
It is worth noticing that inexactness also naturally occurs in the context of stochastic gradient methods. Within such framework one can _tune_ stochastic gradient and/or
Hessian's bias by averaging more or less sample gradients and/or Hessians [7, 9].
Despite the fact that convergence guarantees under a different level of inexactness at each iteration are well-established in the literature, very few works devise specific inexactness schedules as in [4, 25]. This can provably constitute a missed opportunity. Akin to our introductory example, let us assume that we have at our disposal relations linking the quality of oracles (\(\delta\)) with the computational efforts invested in the creation of their output (\(\omega\)). It is then possible to retrieve optimal inexactness schedules taking into account the _trade-off_ between the computational price and the harm, in terms of convergence guarantees, e.g. (4), of the prescribed oracle precision. We detail our optimality criteria in Section 3. Informally, we aim in this work at answering the question:
_How can we make the most of a computational budget when using optimization methods dealing with oracle inexactness?_
### Related work
At first sight and as is often the case in mathematical optimization research, the goal of our work can be summed up simply as an improvement of a worst-case convergence bound. Indeed, under the models we motivate in Section 2 and the assumptions we introduce in Section 3, we solve a specific instance of _non-linear allocation_ problems to provide our enhanced inexactness schedules in terms of convergence guarantees per computational cost unit.
However straightforward it might sound, to the best of our knowledge there actually exist only a few previous works on that subject, i.e. _trade-off_ optimality for iterative algorithms dealing with controllable inexactness. We temper this last statement by noting that a number of recent papers propose criteria involving a fixed _relative_ inexactness parameter (see [17, 24] and references therein). In opposition with the framework of _absolute_ inexactness, the computational costs needed to achieve a given _relative_ inexactness are by nature prone to uncertainty, and the dependence of worst-case guarantees on this parameter often appears quite opaque. One usually fixes the relative inexactness parameter to a conservatively low target terminal accuracy. The authors of [12] studied a mix between _relative_ and _absolute_ inexactness, highlighting the positive impact of incorporating _absolute_ oracle accuracy.
With a similar approach to ours, [11] focuses on (Accelerated) Proximal Gradient algorithms as analyzed by [19] in their seminal paper. Unlike us, they do not show explicit closed-forms, i.e. analytical value for \(\delta_{k}\), method's level of inexactness at iteration \(k=0,\ldots,N-1\) as in (4), and their results remain mainly theoretical.
More recently, [25] comes up with specific schedules for the inexactness of the auxiliary subproblems they deal with in order to lower the overall computational load after \(N\in\mathbb{N}\) iterations. Our more general results would encompass theirs were we concerned with the same context of inexact Augmented Lagrangian. As a byproduct of an asymptotical analysis, i.e. \(N\to\infty\), [23], whose follow-up resides in [12], suggests using inexactness schedules that decrease sufficiently fast in order to maintain (up to a logarithmic factor) the rate of convergence of the _error-free_, i.e. _exact_, counterpart of the algorithms at scope. In [4], both _online_ (adaptive) and purely _offline_ schedules were tried for the accuracy of the inexact higher-order tensor steps. Both last works provide _non-constant_ inexactness schedules but none of them brings into balance the computational complexities of the (inexact) oracles.
### Contributions
Let us consider an iterative algorithm involving controllable inexactness, e.g. GM as depicted in the previous paragraph. We can state our main contributions as follows.
* Firstly, considering the number \(N\in\mathbb{N}\) of inexact oracle calls as fixed, we propose a systematic _offline_ procedure to devise the amount \(\delta_{k}\) of inexactness to adopt at each iteration \(k\in\{0,\ldots,N-1\}\), based on an oracle cost model and the algorithm's guarantees. To that end, we solve a rather general _non-linear assignment_ problem, a result of independent interest. Building upon this, we present closed-form results about the optimal inexactness schedules for a broad class of oracle cost models, directly inspired from practical scenarios.
* Secondly, we propose an _online_ heuristic extension in which neither \(N\) nor the overall computational budget allocated must be fixed beforehand.
* Thirdly, we conduct numerical experiments that support the validity of our approach, either _offline_ or _online_, by comparing it against (a) _constant_ inexactness schedules and (b) _non-constant_ inexactness schedules from the literature [12, 23]. We emphasize that our strategy is fully implementable.
### Outline
At the end of the present section, we clarify our notations and define the useful concept of _descending rank_. Section 2 motivates and then explains the concept of Tunable (Inexact) Oracles, i.e. we develop the models of costly (inexact) oracles and of convergence under an inexactness schedule, which we illustrate thanks to three examples serving as guidelines. We then proceed to Section 3, in which we present all the contributions announced above. This section represents the main content of this paper. We conclude by quantifying the computational savings of our approach on the aforementioned guideline examples in Section 4.
### Preliminaries
We introduce some handy notations and a definition extensively used in this paper.
#### 1.4.1 Sets
\(\mathbb{N}=\{1,\ldots,\infty\}\) will refer to the set of strictly positive integers. \(\mathbb{R}_{+}\) (respectively \(\mathbb{R}_{-}\)) contains all the non-negative (respectively non-positive) real numbers. We denote \(\mathbb{R}_{++}=\mathbb{R}_{+}\backslash\{0\}\) and \(\mathbb{R}_{--}=\mathbb{R}_{-}\backslash\{0\}\). Let \(n\in\mathbb{N}\), we define \([n]=\{0,\ldots,n-1\}\).
#### 1.4.2 Sequences
Our enumerating indices start at \(0\). We write column vectors \(v\in\mathbb{R}^{n}\) as \(v=(v_{0},\ldots,v_{n-1})^{T}\). Depending of the context, we will equivalently write \(v\equiv\{v_{k}\}_{k=0}^{n-1}\).
#### 1.4.3 Shorthands
\(\mathbf{e}\in\mathbb{R}^{n}\) stands as the vector of size \(n\) full of \(1\). For any \(\mathcal{S}\subseteq[n]\), \(\mathbf{e}_{\mathcal{S}}\) is defined as follows: \((\mathbf{e}_{\mathcal{S}})_{k}=1\) if \(k\in\mathcal{S}\) and \(0\) otherwise. On the other hand, \(\mathbf{0}\in\mathbb{R}^{n}\) stands as the vector of size \(n\) full of \(0\), i.e. \(\mathbf{0}=\mathbf{e}_{\emptyset}\). For any \(v\in\mathbb{R}^{n}\), we define the subvector \((v)_{\mathcal{S}}\) of size \(|\mathcal{S}|\leq n\) whose values are taken from \(v\) at indices in \(\mathcal{S}\).
**Operations on vectors** Let \(p\in\mathbb{R}\) and \(u,v\in\mathbb{R}^{n}\). We proceed to entry-wise operations like \(v^{p}=(v_{0}^{p},\ldots,v_{n-1}^{p})^{T}\), \(u\odot v=(u_{0}\cdot v_{0},\ldots,u_{n-1}\cdot v_{n-1})^{T}\), \(u^{T}v=\sum_{k=0}^{n-1}\,u_{k}\cdot v_{k}\) and \(|v|=(|v_{0}|,\ldots,|v_{n-1}|)^{T}\). More generally, applying any operator \(o:\mathbb{R}\to\mathbb{R}\) in an element-wise fashion on a vector \(v\) is authorized: \(o(v)=(o(v_{0}),\ldots,o(v_{n-1}))^{T}\).
**\(p\)-norms** For any \(p\geq 1\) and any \(v\in\mathbb{R}^{n}\), the \(p\)-norm of \(v\) reads \(||v||_{p}=(\mathbf{e}^{T}|v|^{p})^{\frac{1}{p}}\).
As usual, we extend the notation with \(p=\infty\) and set \(||v||_{\infty}\stackrel{{\Delta}}{{=}}\max\{|v_{k}|\,|\,k\in[n]\}\).
In what concerns matrices, the \(p\)-norm of any element \(V\in\mathbb{R}^{n\times d}\) translates to
\[||V||_{p}=\sup_{u\neq\mathbf{0}}\frac{||Vu||_{p}}{||u||_{p}}\]
Unless stated otherwise, we understand norms and distances as Euclidean ones (\(p=2\)) throughout this paper, i.e. \(||v||:=||v||_{2}\) and \(||V||:=||V||_{2}\).
**Lipschitz continuity** \(g:\mathbb{R}^{n}\to\mathbb{R}^{d}\) is \(L\) Lipschitz (continuous) with respect to a \(p\)-norm if for any \(y,x\in\operatorname{dom}(g)\), \(||g(y)-g(x)||_{p^{*}}\leq L\,||y-x||_{p}\) with \(p^{*}=p\cdot(p-1)^{-1}\).
**Inequalities** Let \(\mathcal{L}:\Xi\subseteq\mathbb{R}\to\mathbb{R}\cup\{\infty\}\), \(\mathcal{J}:\Xi\subseteq\mathbb{R}\to\mathbb{R}\cup\{\infty\}\) be two functions. Assuming \(\operatorname{dom}(\mathcal{L})=\operatorname{dom}(\mathcal{J})\), we consider the following:
\[\mathcal{L}\succeq\mathcal{J} \Leftrightarrow\mathcal{L}(\sigma)\geq\mathcal{J}(\sigma),\; \forall\sigma\in\Xi\] \[\mathcal{L}\succ\mathcal{J} \Leftrightarrow\mathcal{L}(\sigma)>\mathcal{J}(\sigma),\; \forall\sigma\in\operatorname{dom}(\mathcal{J})\]
For any \(u,v\in\mathbb{R}^{n}\), we also note:
\[u\succeq v\Leftrightarrow u_{k}\geq v_{k},\;\forall k\in[n]\] \[u\succ v\Leftrightarrow u_{k}>v_{k},\;\forall k\in[n]\]
**Definition 1.2**.: \((\)_descending rank\()\)_ Let \(\nu\in\mathbb{R}^{n}\) and let \(\rho:[n]\to[n]\) be a bijection such that the vector \(\hat{\nu}=(\nu_{\rho^{(-1)}(0)},\ldots,\nu_{\rho^{(-1)}(n-1)})^{T}\) is sorted in _descending_ mode.
For any \(k\in[n]\), we call \(\rho(k)\) the _descending rank_ of \(\nu_{k}\) (with respect to \(\rho\)).
**Remark 1**.: It follows that if \(\rho(k)=j\) then \(\nu_{k}\) is, according to the sorting induced by \(\rho\), the \((j+1)\)-th biggest element of \(\nu\).
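For instance, descending ranks can be computed with a single argsort; the small sketch below leaves tie-breaking to the sorting routine.

```python
import numpy as np

nu = np.array([3.0, 1.0, 2.0, 5.0])
order = np.argsort(-nu)            # order[j] = rho^{-1}(j): index of the (j+1)-th largest entry
rank = np.empty_like(order)
rank[order] = np.arange(nu.size)   # rank[k] = rho(k), the descending rank of nu_k
# Here nu = [3, 1, 2, 5] yields rank = [1, 3, 2, 0].
```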
## 2 Tunable (Inexact) Oracles
Here below, we aim at defining the class of Tunable Oracle Methods (TOM) from which one can benefit by adapting the amount of inexactness in the oracles involved at each iteration. Prior to this goal, we recall some findings about methods incorporating inexactness and we introduce our assumed oracle cost model. Finally, we substantiate the concept of Tunable Oracle Methods with three complete examples serving as a common thread.
### Impact of Inexact Oracles
The worst-case behaviour analysis of iterative methods relying on inexact oracles displays some favorable structure. Let \(N\in\mathbb{N}\) be the number of performed iterations, one defines the total amount of inexactness at iteration \(k\in\{0,\ldots,N-1\}\) as \(\delta_{k}\). Researchers come up with convergence models
\[\mathcal{C}(N)\leq\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1}) \tag{5}\]
where
* \(\mathcal{C}(N)\) acts as a positive _gauge_ one aims to minimize
* \(\mathcal{E}:\mathbb{R}_{+}^{N}\to\mathbb{R}_{+}\) informs about convergence under the impact of a sequence \(\{\delta_{k}\}_{k=0}^{N-1}\)
\(\mathcal{C}\) gauges include gradient mapping norms [5], functional gaps [3, 20].
As in Example 1.1 inequality (4), a possible instance for our model could show up as
\[\underbrace{F(\hat{x}_{N})-F^{*}}_{\mathcal{C}(N)}\leq\underbrace{\frac{M(N)+ 3\,\sum_{k=0}^{N-1}\,(1-\kappa)^{N-1-k}\,\delta_{k}}{\sum_{k=0}^{N-1}\,(1- \kappa)^{N-1-k}}}_{\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})}\]
**Remark 2**: Within the _error-free_ framework, i.e. \(\delta_{k}=0\) for every integer \(k\),
\[\lim_{N\to\infty}\,\mathcal{E}(\{0\}_{k=0}^{N-1})=0\]
When accounting for errors, one theoretically observes convergence only up to \(\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})\) accuracy. It can happen that the schedule \(\{\delta_{k}\}_{k=0}^{N-1}\) does not decrease sufficiently fast leading to a well-known phenomenon referred to as _error-accumulation_ in the literature [2]. This latter translates to
\[\lim_{N\to\infty}\,\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})=\infty\]
### Cost of Inexact Oracles
Without loss of generality, one can define a _reference_ inexactness \(\bar{\delta}>0\) together with constants \(0\leq m<1<M<\infty\) such that the inexactness of the oracles fall in the segment \(\Xi:=[m\,\bar{\delta},M\,\bar{\delta}]\). This means that at each iteration \(k\in\{0,\ldots,N-1\}\), one allows the user to freely pick up any \(\delta_{k}\in\Xi\). As previously unveiled, such request costs a computational tribute, namely \(\mathcal{B}_{k}(\delta_{k})\geq 0\). Suitable for a variety of applications [5, 7, 12], we suggest the oracle cost model
\[\mathcal{B}_{k}(\delta_{k})=b_{k}\,h(\delta_{k}) \tag{6}\]
where
* \(b_{k}>0\) denotes the cost distortion of iteration \(k\)
* \(h:\Xi\to\mathbb{R}_{+}\) dictates how the cost fluctuates with \(\delta_{k}\)
Recalling once again the introduction, one identifies the _a priori_ number \(\omega_{k}\) of inner-iterations to obtain \(\delta_{k}\) inexact information about \(F\) as a multiple of
\[\mathcal{B}_{k}(\delta_{k})=\underbrace{\sqrt{\kappa(G)^{-1}}}_{b_{k}}\, \underbrace{\log(\delta_{k}^{-1})}_{h(\delta_{k})}\]
**Remark 3**.: When unknown or when no argument justifies that any iteration turns out to be less expensive than another, we arbitrarily set \(b_{k}=1\) for any integer \(k\geq 0\). Let us also point out that, in the absence of exogenous indication, \(m=0\) and \(M=\infty\).
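As a small sketch, the total budget associated with a schedule under model (6), instantiated with \(h(\delta)=\log(\delta^{-1})\) and \(b_{k}=\kappa(G)^{-1/2}\) as in Example 1.1, can be evaluated as follows; the numerical values are illustrative.

```python
import numpy as np

def schedule_cost(deltas, b):
    """Total budget sum_k b_k * log(1/delta_k) of an inexactness schedule under model (6)."""
    deltas = np.asarray(deltas, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(b, np.log(1.0 / deltas)))

# Example 1.1 instantiation: b_k = sqrt(1/kappa(G)) for every k (illustrative values).
kappa_G, N = 1e-2, 50
b = np.full(N, np.sqrt(1.0 / kappa_G))
total_cost = schedule_cost(1e-2 * 0.9 ** np.arange(N), b)
```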
### Tunable Oracle Method
An instance of Tunable Oracle Methods (TOM) is an \(N\)-step _iterative_ algorithm \(\mathcal{A}\) whose iterations involve a notion of _controllable_ \(\delta\)-inexactness as in (6) and such that \(\mathcal{A}\) converges in the sense of (5). Furthermore, in order to truly exploit this _controllable_ feature, we require the explicit knowledge of \(\{b_{k}\}_{k=0}^{N-1}\), \(\mathcal{E}\) and \(h\), up to a multiplicative constant factor. This requirement emphasizes the _offline_ focus of the paper at this stage: we make use of _a priori_ information about the behaviour of \(\mathcal{A}\) and about the cost one should expect when requesting a schedule of \(\{\delta_{k}\}_{k=0}^{N-1}\) inexact oracles.
We explain later on how to take advantage of TOM in an _online_ setting.
### Guideline examples
Throughout the sequel, we make use of the following:
* \(\Psi:\mathbb{R}^{d}\to\mathbb{R}\cup\{\infty\}\) proper closed convex function, not necessarily smooth.
* \(\ell:\mathbb{R}^{q}\to\mathbb{R}\) closed convex function, \(L_{\ell}<\infty\) Lipschitz continuous and simple, i.e. its proximal operator can be computed at negligible cost1. Footnote 1: Existence of a closed-form or at the expense of an easy one dimensional segment-search.
* \(c:\mathbb{R}^{d}\to\mathbb{R}^{q}\) a smooth map with \(L_{\nabla c}<\infty\) Lipschitz continuous Jacobian \(\nabla c\).
Combining these ingredients, we finally introduce \(f=\ell\circ c\) and \(F=f+\Psi\), which are nonconvex and nonsmooth in the general case. We will assume that
\[F^{*}=\min_{x\in\mathbb{R}^{d}}\,F(x)>-\infty\]
**Example 2.1**.: (_composite convex optimization with inexact proximal operator_)
The inexact Accelerated Forward-Backward algorithm (iAFB) from [12] tackles so called convex composite problems, ubiquitous in image processing [1]. The specificities of this class read: \(q=1\), \(\ell=\operatorname{id}_{\mathbb{R}}\), convexity (respectively \(\mu\geq 0\) strong-convexity) of \(c\) (respectively \(\Psi\)) thus \(f=c\) and \(\nabla f\) is \(L_{f}=L_{\nabla c}\) Lipschitz continuous. Among other appealing features, iAFB presented therein allows for \(\Psi\) whose proximal mapping is not simple, e.g. in _sparse overlapping groups regularization_[18]. At \(z\in\mathbb{R}^{d}\), \(\lambda>0\), the primal-dual pair of problems related to the proximal step of a function \(\phi\) translate to
\[\min_{x\in\mathbb{R}^{d}}\left\{\Phi_{p}(x;\lambda,z,\phi):= \frac{1}{2}\,||x-z||^{2}+\lambda\,\phi(x)\right\} (P)\] \[\max_{v\in\mathbb{R}^{d}}\left\{\Phi_{d}(v;\lambda,z,\phi):= \frac{1}{2}\,(||z||^{2}-||z-\lambda v||^{2})-\lambda\,\phi^{*}(v)\right\} (D)\]
where \(\phi^{*}\) is the Fenchel conjugate of \(\phi\). iAFB produces iterates \(x_{k}\), \(y_{k}\), \(z_{k}\), \(v_{k}\) thanks to coefficients \(\lambda_{k}\in\mathcal{O}(L_{f})\) and \(A_{k}\in\Omega(k^{2})\) for any integer \(k\geq 0\). Either iAFB employs a predefined sequence of stepsizes \(\{\lambda_{k}^{-1}\}_{k\in\mathbb{N}}\), in which case \(A_{k}\), acting as a certificate sequence, can be computed in advance, i.e. _offline_, or it adopts an Armijo line-search to adapt \(\lambda_{k}\) to the local smoothness, in which case one only has access to \(A_{k}\) for iterations \(k^{\prime}\geq k\), in an _online_ fashion. Following the authors' notations, \((x_{k+1},v_{k+1})\) qualifies as a \(\delta_{k}\) inexact output of the proximal step oracle at iteration \(k\geq 0\) if
\[\mathrm{PD}_{\lambda\phi}(x,v;z):=\Phi_{p}(x;\lambda,z,\phi)-\Phi_{d}(v; \lambda,z,\phi)\leq\delta_{k} \tag{7}\]
with \(x=x_{k+1}\), \(v=v_{k+1}-\mu\,x_{k+1}\), \(\lambda=\frac{\lambda_{k}}{1+\lambda_{k}\mu}\), \(\phi=\Psi-\frac{\mu}{2}||\cdot||^{2}\) and \(z=\frac{y_{k}-\lambda_{k}\nabla f(y_{k})}{1+\lambda_{k}\mu}\).
Under this notion of inexactness, after \(N\in\mathbb{N}\) steps, the following guarantees hold
\[\underbrace{F(x_{N})-F^{*}}_{\mathcal{C}(N)}\leq\underbrace{\frac{4R^{2}+ \sum_{k=0}^{N-1}\left(A_{k+1}\cdot(1+\mu\lambda_{k})^{2}\cdot\lambda_{k}^{-1} \right)\delta_{k}}{A_{N}}}_{\mathcal{E}\left(\{\delta_{k}\}_{k=0}^{N-1}\right)} \tag{8}\]
where \(R<\infty\) denotes the distance from \(x_{0}\) to the set of global minimizers of \(F\).
The presumed oracle cost, i.e. the work \(\mathcal{B}_{k}(\delta_{k})\) needed to produce \((x_{k+1},v_{k+1})\) fulfilling (7), remains problem dependent. It is also highly influenced by the technique employed to solve the pair (P) / (D). In [12], the authors deal with a CUR factorization problem with _sparse overlapping groups regularization_. They simply use FISTA, a _fast_ first-order method, to solve (D), yielding a linked sequence of recovered primal iterates for (P). In the present setting, it is shown in [10] that the primal-dual gap decreases sublinearly as \(\omega_{k}^{-1}\), where \(\omega_{k}\) denotes the number of steps of FISTA applied to (D). Then, for an arbitrarily chosen \(b_{k}=1\) (see Remark 3), we model the cost as
\[\mathcal{B}_{k}(\delta_{k})=\underbrace{\delta_{k}^{-1}}_{h(\delta_{k})} \tag{9}\]
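To make the inexactness certificate (7) concrete, here is a minimal sketch for the simple case \(\phi=||\cdot||_{1}\), whose Fenchel conjugate is the indicator of the \(\ell_{\infty}\) unit ball; the variable names and test data are illustrative assumptions, not part of the method of [12].

```python
import numpy as np

def pd_gap_l1(x, v, z, lam):
    """Primal-dual gap of (P)/(D) for phi = ||.||_1, whose conjugate phi* is the
    indicator of the l_inf unit ball (so dual feasibility means |v_i| <= 1)."""
    assert np.max(np.abs(v)) <= 1.0 + 1e-12          # otherwise phi*(v) = +inf and the gap is infinite
    primal = 0.5 * np.sum((x - z) ** 2) + lam * np.sum(np.abs(x))
    dual = 0.5 * (np.sum(z ** 2) - np.sum((z - lam * v) ** 2))
    return primal - dual

rng = np.random.default_rng(1)
z, lam = rng.standard_normal(10), 0.3
x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)    # exact prox (soft-thresholding)
v = np.clip(z / lam, -1.0, 1.0)                      # matching dual optimum
print(pd_gap_l1(x, v, z, lam))                       # ~0 here; any pair with gap <= delta_k satisfies (7)
```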
**Example 2.2**.: (_composition of convex functions optimization_) The paper [5] considers the most general setting for which global optimality is, obviously, out of reach. Let us focus on a single method analyzed therein, the inexact Prox-Linear algorithm, iPL in short terms. Given \(z\in\mathrm{dom}(\Psi)\) and \(t>0\), one defines the functional
\[F_{t}(\cdot;z)=\Psi(\cdot)+\ell(c(z)+\nabla c(z)(\cdot-z))+\frac{t^{-1}}{2}|| \cdot-z||^{2}\]
We call \(\delta\) inexact solution \((x_{+},\xi)\) of the minimization of \(F_{t}(\cdot\,;\,z)\) a pair fulfilling the properties: \(||\xi||\leq\delta\) and
\[x_{+}\in\arg\min_{x\in\mathbb{R}^{d}}\,\Psi(x)+\ell(\xi+c(z)+\nabla c(z)(x-z))+\frac{t^{-1}}{2}||x-z||^{2}\]
In the case of an exact \((x_{+},0)\) (i.e. \(\delta=0\) inexact) solution, the proximal gradient mapping at \(z\) with stepsize \(t\leq(L_{\ell}\,L_{\nabla c})^{-1}\) stands as the vector \(\mathcal{G}_{t}(z)=t^{-1}(z-x_{+})\) and satisfies \(F(x_{+})\leq F_{t}(x_{+};z)\). Its norm constitutes a measure of stationarity [5] that generalizes the gradient norm of the smooth unconstrained optimization framework.
When calling up \(\{\delta_{k}\}_{k=0}^{N-1}\) inexact solutions at successive iterates \(z=x_{k}\in\operatorname{dom}(\Psi)\) and stepsizes \(t=t_{k}\in]0,(L_{\ell}\,L_{\nabla c})^{-1}]\), iPL produces iterates \(\{x_{k}\}_{k\geq 0}\) such that:
\[\underbrace{\min_{k^{\prime}\in\{1,\ldots,N\}}||\mathcal{G}_{t_{k}}(x_{k^{ \prime}})||^{2}}_{\mathcal{C}(N)}\leq\underbrace{\sum_{k=0}^{N-1}4\,t_{k}^{- 1}\Big{(}F(x_{k})-F(x_{k+1})+4\,L_{\ell}\,\delta_{k}\Big{)}}_{\mathcal{E}\big{(} \{\delta_{k}\}_{k=0}^{N-1}\big{)}} \tag{10}\]
Again, the oracle cost should be derived from the complexity of the inner method used to obtain the sequence of \(\{\delta_{k}\}_{k=0}^{N-1}\) inexact solutions at the pairs \(\{(x_{k},t_{k})\}_{k=0}^{N-1}\). When no further specific structure is granted, except for an easy proximal mapping of \(\Psi\) (unlike the previous example (8)), [5] suggests exploiting duality. At any iteration \(k\geq 0\), one can compute a \(\delta_{k}\) inexact pair \((x_{k+1},\xi_{k+1})\) as previously explained by finding a subgradient \(\xi_{k+1}\) of the (negated) Fenchel conjugate function of \(F_{t_{k}}(\cdot\,;\,x_{k})\) whose norm does not exceed \(\delta_{k}\). With the oracles at hand, and accounting for the fact that this conjugate boils down to a sum of a smooth convex term and a proximable nonsmooth convex function, A-HPE from [14] ensures a minimal subgradient norm of order \(\mathcal{O}(t_{k}||\nabla c(x_{k})||^{2}\,\omega_{k}^{-\frac{3}{2}})\) after \(\omega_{k}\) A-HPE iterations.
Hence, it follows that the number of A-HPE iterations required to get a \(\delta_{k}\) inexact pair scales as \(t_{k}^{\frac{2}{3}}\,||\nabla c(x_{k})||^{\frac{4}{3}}\,\delta_{k}^{-\frac{2}{3}}\). Obviously, one does not know in advance the value of \(||\nabla c(x_{k})||\) since it relies on \(x_{k}\), which only becomes available at iteration \(k\geq 0\). Remark 3 then suggests that any \(\gamma>0\), e.g. \(\gamma=1\), can serve as an artificial upper-bound, so that either iPL uses predefined stepsizes \(\{t_{k}\}_{k\in\mathbb{N}}\) (_offline_) or adjusted stepsizes (_online_).
\[\mathcal{B}_{k}(\delta_{k})=\underbrace{t_{k}^{\frac{2}{3}}}_{b_{k}}\, \underbrace{\delta_{k}^{-\frac{2}{3}}}_{h(\delta_{k})} \tag{11}\]
Note that if \(\operatorname{dom}(\Psi)\) is bounded with known diameter, one can use regularization and apply Nesterov fast gradient method FGM on strongly-convex composite objectives in order to obtain an enhanced \(t_{k}^{\frac{1}{2}}\,||\nabla c(x_{k})||\,\delta_{k}^{-\frac{1}{2}}\) complexity [16]. We emphasize that _stopping criteria_ are readily available to check whether \(\delta_{k}\) accuracy has been reached.
**Example 2.3**.: (_robust optimization on convex hull_) Closely related to [12], [20] extends the analysis of Fast Gradient Method (FGM) under the presence of inexactness by allowing \(f\) to be smooth and convex relative to some Legendre kernel function (see [10]), covering the so-called Bregman setting. Therefore, their results also encompass the Euclidean setting which we stick to for the sake of simplicity. We assume that \(\Psi\) represents an indicator function of a convex subset \(X\subseteq\mathbb{R}^{d}\), \(q=1\), \(\ell=\operatorname{id}_{\mathbb{R}}\) and \(c\) is \(\mu\geq 0\) strongly-convex. That is, \(f\) is \(\mu\geq 0\) strongly-convex and \(\nabla f=\nabla c\) is \(L_{f}=L_{\nabla c}\) Lipschitz continuous. Let \(\Theta=\{\theta_{i}\}_{i=1}^{n}\) be a collection of \(n\in\mathbb{N}\) vectors from \(\mathbb{R}^{d}\) dubbed as _scenarios_ and let \(\sigma>0\). In _robust optimization_, one might be interested in minimizing the function
\[f:X\to\mathbb{R},\,x\to f(x)=\frac{\mu}{2}\,||x||^{2}+\max_{\theta\in \operatorname{conv}(\Theta)}\langle\theta,x\rangle-\frac{\sigma}{2}||\theta- \bar{\theta}||^{2}\]
for some _anchor scenario_\(\bar{\theta}\), e.g. \(\bar{\theta}=n^{-1}\,\sum_{i=1}^{n}\,\theta_{i}\).
In other words, one would like to minimize a (regularized) linear objective taking into account that the cost vector could be any convex combination of previously encountered costs \(\{\theta_{i}\}_{i=1}^{n}\). Akin to Example 1.1, we deduce that the _exact_ gradient of \(f\) at any \(x\in X\) reads \(\nabla f(x)=\mu x+\theta^{*}(x)\) where \(\theta^{*}(x)=\arg\max_{\theta\in\mathbf{conv}(\Theta)}\langle\theta,x\rangle -\frac{\sigma}{2}||\theta-\bar{\theta}||^{2}\) and gradient's Lipschitz constant \(L_{f}=\sigma^{-1}\).
An approximate maximizer \(\theta_{x}\in\mathbf{conv}(\Theta)\) of the problem defining \(f\) at \(x\) that verifies
\[\langle(\theta^{*}(x)-\theta_{x}),x\rangle+\frac{\sigma}{2}\big{(}||\theta_{x }-\bar{\theta}||^{2}-||\theta^{*}(x)-\bar{\theta}||^{2}\big{)}\leq\delta \tag{12}\]
can be used to construct \(\tilde{f}(x)=\frac{\mu}{2}||x||^{2}+\langle\theta_{x},x\rangle-\frac{\sigma} {2}||\theta_{x}-\bar{\theta}||^{2}\simeq f(x)\) and \(\nabla\tilde{f}(x)=\mu x+\theta_{x}\simeq\nabla f(x)\), providing \((2\delta,2L_{f}+\mu,\mu)\) inexact information as originally understood in [2]. [20] show that FGM ([20], Algorithm 2) involving a sequence of \(\{\delta_{k}\}_{k=0}^{N-1}\) inexact information and fixed stepsizes produces a final iterate \(x_{N}\) such that
\[\underbrace{F(x_{N})-F^{*}}_{\mathcal{C}(N)}\leq\underbrace{\frac{R^{2}+2\, \sum_{k=0}^{N-1}\,A_{k+1}\,\delta_{k}}{A_{N}}}_{\varepsilon\big{(}\{\delta_{k }\}_{k=0}^{N-1}\big{)}} \tag{13}\]
where, once again, \(R<\infty\) denotes the distance from \(x_{0}\) to the set of global minimizers of \(F\). As for iAFB, the coefficients \(A_{k}\) for \(k\geq 0\) serve as convergence certificates and are involved in subtle convex combinations of iterates within FGM. In the present setting, \(A_{k}\)'s value can be determined beforehand, \(A_{k}\in O(\max\{k^{2},(1+\frac{1}{4}\sqrt{\mu/(\mu+2L_{f})})^{2k}\})\). Let \(O=[\theta_{1},\ldots,\theta_{n}]^{T}\) and let \(\hat{\kappa}:=\frac{\lambda_{\min}(OO^{T})}{\lambda_{\max}(OO^{T})}\). By using another version of FISTA described in [22], one can take advantage of possible strong-convexity, i.e. \(\hat{\kappa}>0\), of the usual reformulation of the inner-problem :
\[\max_{\theta\in\mathbf{conv}(\Theta)}\langle\theta,x\rangle-\frac{\sigma}{2} ||\theta-\bar{\theta}||^{2}=\max_{w\succeq\mathbf{0},\,||w||_{1}=1}\langle O ^{T}w,x\rangle-\frac{\sigma}{2}||O^{T}w-\bar{\theta}||^{2}\]
Assuming that \(\hat{\kappa}>0\), just as in Example 1.1, one can link the work \(\omega_{k}\) to obtain \(\delta_{k}\) accurate \(\theta_{x_{k}}\) at iteration \(k\geq 0\) by writing
\[\mathcal{B}_{k}(\delta_{k})=\underbrace{\sqrt{\hat{\kappa}^{-1}}}_{b_{k}} \underbrace{\log(\delta_{k}^{-1})}_{h(\delta_{k})} \tag{14}\]
whereas in the absence of strong-convexity, i.e. \(\hat{\kappa}=0\), one would rather set \(\mathcal{B}_{k}(\delta_{k})=\sqrt{\lambda_{\max}(OO^{T})}\,\delta_{k}^{-\frac {1}{2}}\) as common for smooth convex optimization [16]. We explain in Section 4 how one can easily monitor the quality of an approximate solution for (12).
**Remark 4**.: Usually, the workload \(\omega\propto\mathcal{B}(\delta)\) to obtain (the output) of \(\delta\) inexact oracles stays _by nature_ an integer quantity, e.g. a number of inner-iterations (Example 2.1, 2.2 and 2.3). For the sake of simplicity however, we will accept that it varies continuously as in the formulas (9), (11), (14).
## 3 Optimal Inexactness Schedules
Now that the impact and the cost of inexact oracles have been introduced, we can elaborate on the main objective of this paper. We assume for the time being that \(N\in\mathbb{N}\) is fixed. One would like to suffer the smallest possible effect from inexact oracles in order to ensure the best worst-case convergence upper-bound. Obviously, one would like each \(\delta_{k}\) to match its best value \(m\bar{\delta}\). However, one cannot always afford the schedule \(\{\delta_{k}=m\bar{\delta}\}_{k=0}^{N-1}\) under a limited overall computational budget. On the other hand, when not obliged to, it is not advisable to ask for the worst oracle accuracy at each iteration, i.e. \(M\bar{\delta}\), yielding the inexactness schedule \(\{\delta_{k}=M\bar{\delta}\}_{k=0}^{N-1}\).
Thereby, we propose to solve a master problem that devises the optimal _trade-off_ between the cost of the oracles and the harm their inexactness inflicts on the worst-case guarantee. We start by providing our blanket Assumptions A and B.
### Framework
From now on, we will make a slight abuse of notation regarding the _reference_ inexactness \(\bar{\delta}>0\). Depending on the context, the latter will either denote a real value, an \(N\)-step inexactness schedule \(\{\delta_{k}=\bar{\delta}\}_{k=0}^{N-1}\), or the corresponding vector in \(\mathbb{R}_{+}^{N}\), i.e. \(\bar{\delta}\,\mathbf{e}\).
**Assumption A**.: We have access to \(a\succ\mathbf{0}\) and there exists \(\tilde{\mathcal{E}}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) _strictly increasing_ with
\[\mathcal{E}(\delta):=\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})=\tilde{\mathcal{ E}}\Bigg{(}\sum_{k=0}^{N-1}\,a_{k}\,\delta_{k}\Bigg{)} \tag{15}\]
We now invoke Assumption B suggesting that one is able to predict the overall cost \(\mathcal{B}(\delta)\) of a schedule of inexactness \(\delta\). Furthermore, some technicalities about the structure of the function \(h\) from (6) are stated.
**Assumption B**.: We have access to \(b\succ\mathbf{0}\) and \(h:\Xi\to\mathbb{R}_{+}\)_differentiable_, _invertible_, _strictly decreasing_ with
\[\mathcal{B}(\delta):=\sum_{k=0}^{N-1}\,\mathcal{B}_{k}(\delta_{k})=\sum_{k=0 }^{N-1}\,b_{k}\,h(\delta_{k}) \tag{16}\]
\(h^{\prime}:\mathrm{dom}(h^{\prime})\supseteq\mathrm{int}(\Xi)\to\mathrm{Im}( h^{\prime})\subseteq\mathbb{R}_{-}\) must be _invertible_ and _strictly increasing_.
**Remark 5**.: Assumption A tells us that on any subset of \(\mathbb{R}_{+}^{N}\), minimizing \(\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})\) boils down to minimizing \(\sum_{k=0}^{N-1}\,a_{k}\,\delta_{k}\), i.e. a minimizer of \(a^{T}\,\delta\) stays optimal for \(\mathcal{E}(\delta)\).
**Remark 6**.: In what concerns our examples, one can identify problem dependent constants \(C_{1},C_{2}>0\) such that \(\tilde{\mathcal{E}}\) admits a shared structure \(s\to\tilde{\mathcal{E}}(s)=C_{1}+C_{2}\,s\).
In their adaptive _online_ versions, sometimes more attractive in practice, the parameters \(\lambda_{k}\) (iAFB) and \(t_{k}\) (iPL) are not known in advance since they result from line-searches at iteration \(k\geq 0\). Unfortunately, they influence the values of the \(a_{k}/b_{k}\)'s. In that case, Examples 2.1 and 2.2 would fail to satisfy Assumption A and/or B. Nevertheless, as already argued, one can fix \(\lambda_{k}\leq L_{\nabla c}^{-1}\) (iAFB), \(t_{k}^{-1}\geq L_{\ell}\cdot L_{\nabla c}\) (iPL) for every \(k\geq 0\) and avoid the line-searches. In such _offline_ circumstances, both \(\{a_{k}\}_{k\geq 0}\) and \(\{b_{k}\}_{k\geq 0}\) become accessible and the assumptions are fulfilled. If taken constant, i.e. for every \(k\geq 0\), \(\lambda_{k}=\lambda\), \(t_{k}=t\) for well-chosen \(\lambda,t\in\mathbb{R}_{+}\), then \(A_{k}\in\mathcal{O}(\max\{k^{2},(1-\sqrt{\mu/L_{f}})^{-k}\})\) for iAFB, \(A_{k}\in\mathcal{O}(\max\{k^{2},(1+\frac{1}{4}\sqrt{\mu/(\mu+2L_{f})})^{2k}\})\) for FGM and \(t_{k}\in\mathcal{O}(1)\) for iPL.
### Master Problems
**Accuracy controlled** In its most standard version, we design a master problem \(\delta^{*}(N,\bar{\delta},m,M)\) for which the degrees of freedom reside in the accuracies of the \(N\) iterations of our TOM, i.e. \(\delta\in\mathbb{R}_{+}^{N}\). Given a _reference_ precision \(\bar{\delta}\in\mathbb{R}_{+}\), we implicitly deduce the total allocated computational budget as \(\sum_{k=0}^{N-1}\,\mathcal{B}_{k}(\bar{\delta})=\big{(}\sum_{k=0}^{N-1}\,b_{k}\big{)}\,h(\bar{\delta})\). Controlling \(\delta\in[m\bar{\delta},\,M\bar{\delta}]^{N}\), we minimize the bound \(\mathcal{E}(\delta)\) under the budget constraint \(\mathcal{B}(\delta)=\mathcal{B}(\bar{\delta})\). Taking into account Assumptions A, B and Remark 5, this translates into
\[\delta^{*}(N,\bar{\delta},m,M)\in\arg\min_{m\,\bar{\delta}\,\preceq\,\delta\, \preceq\,M\,\bar{\delta}}\sum_{k=0}^{N-1}\,a_{k}\,\delta_{k}\,\text{ s.t. }\sum_{k=0}^{N-1}\,b_{k}\,h(\delta_{k})=\mathcal{B}(\bar{\delta}) \tag{17}\]
#### 3.2.1 General solutions
We present a first theorem which will prove useful to guarantee some consistency in the optimal schedules. Its scope encompasses more general structured _nonlinear_ resource allocation problems. Related results are deeply rooted in the literature; the reader may refer to [8] and the references therein.
**Theorem 3.1**.: _Let \(N\in\mathbb{N}\), \(\Xi=[l,u]\subseteq\mathbb{R}\) and \(D>0\). Let \(\{n_{k}\}_{k=0}^{N-1}\) and \(\{d_{k}\}_{k=0}^{N-1}\) be two sequences of continuously differentiable functions on \(\Xi\) and \(\{n_{k}\}_{k=0}^{N-1}\) elements are furthermore strictly increasing on \(\Xi\). If for every \(k\in[N]\), the ratio \(\mathcal{I}_{k}=n_{k}^{\prime}/d_{k}^{\prime}\) is strictly increasing and strictly negative on \(\Xi\) then any solution \(\sigma^{*}\) of_
\[\min_{\sigma\in\Xi^{N}}\sum_{k=0}^{N-1}\,n_{k}(\sigma_{k})\,\text{ s.t. }\sum_{k=0}^{N-1}\,d_{k}(\sigma_{k})=D \tag{18}\]
_admits as a property that for all pairs of indices \((k_{1},k_{2})\in[N]^{2}\),_
\[\mathcal{I}_{k_{1}}\succeq\mathcal{I}_{k_{2}}\Rightarrow\sigma_{k_{1}}^{*} \geq\sigma_{k_{2}}^{*} \tag{19}\]
_Moreover,_

\[\mathcal{I}_{k_{1}}\succ\,\mathcal{I}_{k_{2}}\;\wedge\;\sigma^{*}_{k_{1}},\sigma^{*}_{k_{2}}\,\in\,]l,u[\,\Rightarrow\sigma^{*}_{k_{1}}>\sigma^{*}_{k_{2}} \tag{20}\]

Proof.: We prove Theorem 3.1 in Appendix A.1.

| | \(h(\delta_{k})\) | \(\propto a_{k}\) | \(\propto b_{k}\) |
| --- | --- | --- | --- |
| Example 2.1 | \(\delta_{k}^{-1}\) | \(A_{k+1}\cdot(1+\mu\lambda_{k})^{2}\cdot\lambda_{k}^{-1}\) | \(1\) |
| Example 2.2 | \(\delta_{k}^{-\frac{2}{3}}\) | \(t_{k}^{-1}\) | \(t_{k}^{\frac{2}{3}}\) |
| Example 2.3 | \(\log(\delta_{k}^{-1})\) / \(\delta_{k}^{-\frac{1}{2}}\) | \(A_{k+1}\) | \(1\) |

Table 1: Illustration of Assumptions A and B.
**Example 3.2**.: The reader can get more insight thanks to the following example.
For every \(k\in[N]\), let \(\sigma\to n_{k}(\sigma)=(k+1)^{3}\,\sigma\), \(\sigma\to d_{k}(\sigma)=-\log(\sigma)\), \(l=1\) and \(u=10^{5}\). The functional ratios \(\mathcal{I}_{k}\) are given by
\[\mathcal{I}_{k}\,:\,]l,u[\rightarrow\mathbb{R},\,\sigma\rightarrow\mathcal{I }_{k}(\sigma)=\frac{n^{{}^{\prime}}_{k}(\sigma)}{d^{{}^{\prime}}_{k}(\sigma)} =\frac{(k+1)^{3}}{-\sigma^{-1}}=-(k+1)^{3}\,\sigma^{-1}\]
On \(\Xi=[l,u]\), \(n_{k}\)'s and \(d_{k}\)'s are continuously differentiable and \(n_{k}\)'s are increasing while \(\mathcal{I}_{k}\)'s are strictly increasing and strictly negative. In addition, if \(k_{1}\leq k_{2}\),
\[\mathcal{I}_{k_{1}}\succeq\mathcal{I}_{k_{2}}\]
since for every \(\sigma\in\Xi\), \(\mathcal{I}_{k_{1}}(\sigma)=-(k_{1}+1)^{3}\,\sigma^{-1}\geq-(k_{2}+1)^{3}\, \sigma^{-1}=\mathcal{I}_{k_{2}}(\sigma)\).
#### Reordering
In the case where all the ratios \(\{\mathcal{I}_{k}\}_{k=0}^{N-1}\) can be ordered, i.e. there exists a bijective mapping \(\tau:[N]\rightarrow[N]\) such that for every pair of indices \((k_{1},k_{2})\in[N]^{2}\),
\[\tau(k_{1})<\tau(k_{2})\Rightarrow\mathcal{I}_{k_{1}}\succeq\mathcal{I}_{k_{ 2}} \tag{21}\]
Theorem 3.1 suggests that the entries \(\{\sigma^{*}_{k}\}_{k=0}^{N-1}\) of an optimal solution \(\sigma^{*}\) of (18) can be _ranked_ as well. Provided a suitable comparison vector \(\nu\in\mathbb{R}^{N}\), we advocate the use of \(\rho\), the _descending rank_ function based on \(\nu\), to act as a \(\tau\) function above, i.e. for any \(k\in[N]\), we would have
\[\rho(k)=\tau(k) \tag{22}\]
As a reminder from Definition 1.2, we write \(\rho(k)=j\) if \(\nu_{k}\) is the \((j+1)\)-th largest element of \(\nu\). It follows from Theorem 3.1 that \(\sigma^{*}_{k}\) must correspond to the \((j+1)\)-th largest element of \(\sigma^{*}\). As displayed in Theorem 3.3, Theorem 3.1 applies verbatim to problem (17) by picking for every \(k\in[N]\), \(n_{k}(\sigma_{k})=a_{k}\,\sigma_{k}\), \(d_{k}(\sigma_{k})=b_{k}h(\sigma_{k})\) and \(D=\mathcal{B}(\bar{\delta})\). One valid comparison vector that fulfills (21) and (22) would be \(\nu_{k}=b_{k}/a_{k}\).
Let \(\delta^{*}\) be an optimal solution for (17). We summarize the key content of the last paragraph by underlining the fact that the larger the value of \(b_{k}/a_{k}\), the larger the optimal inexactness at iteration \(k\in[N]\) prescribed by our master problem.
\[\rho(k_{1})<\rho(k_{2})\Rightarrow\nu_{k_{1}}=\frac{b_{k_{1}}}{a_{k_{1}}} \geq\frac{b_{k_{2}}}{a_{k_{2}}}=\nu_{k_{2}}\Rightarrow\delta^{*}_{k_{1}} \geq\delta^{*}_{k_{2}}\]
We are now ready to state a general theorem about master problem (17).
**Theorem 3.3**.: _Let Assumptions A and B hold with \((a,b)\in\mathbb{R}^{N\times 2}_{++}\), \(m<1<M\) and \(h:[m\bar{\delta},M\bar{\delta}]\to\mathbb{R}_{+}\) being convex. \(\exists N_{\oplus},N_{\ominus}\in\{0,\dots,N-1\}\), \(\lambda^{*}\in\mathbb{R}\) such that \(\forall k\in\{0,\dots,N-1\}\),_
\[\delta_{k}^{*}=\begin{cases}M\,\bar{\delta}&k\in\oplus:=\{\tilde{k}\,|\,\rho( \tilde{k})<N_{\oplus}\}\\ (h^{{}^{\prime}})^{(-1)}\Big{(}\frac{a_{k}\,\lambda^{*}}{b_{k}}\Big{)}&k\in \mathcal{T}:=\{\tilde{k}\,|\,N_{\oplus}\leq\rho(\tilde{k})\leq N-1-N_{\ominus} \}\\ m\,\bar{\delta}&k\in\ominus:=\{\tilde{k}\,|\,\rho(\tilde{k})>N-1-N_{\ominus} \}\end{cases} \tag{23}\]
_where \(\rho(k)\) depicts the descending rank of \(\nu_{k}=b_{k}/a_{k}\), \(\lambda^{*}\) satisfies the equality_
\[\sum_{k=0}^{N-1}\,b_{k}\,h(\bar{\delta})-\Bigg{[}h(M\bar{\delta})\Bigg{(}\sum _{k\in\oplus}b_{k}\Bigg{)}+h(m\bar{\delta})\Bigg{(}\sum_{k\in\ominus}b_{k} \Bigg{)}\Bigg{]}=\sum_{k\in\mathcal{T}}\,b_{k}\,h\bigg{(}(h^{{}^{\prime}})^{(- 1)}\Big{(}\frac{a_{k}\,\lambda^{*}}{b_{k}}\Big{)}\bigg{)} \tag{24}\]
_and \(\delta^{*}\) stands as a solution of (17)._
Proof.: The proof of Theorem 3.3 is given in Appendix A.2.
**Remark 7**.: If all the entries of \(\nu\in\mathbb{R}^{N}_{++}\) differ, their ordering is unique, and so is \(\delta^{*}\), as shown in Appendix A.3.
**Remark 8**.: One only needs to know \(a_{k}\) (respectively \(b_{k}\)) up to a common factor \(K_{a}>0\) (respectively \(K_{b}>0\)). That is, it is sufficient to know \(\tilde{a}_{k}\) (respectively \(\tilde{b}_{k}\)) such that for any \(k\in[N]\), \(a_{k}=K_{a}\,\tilde{a}_{k}\) (respectively \(b_{k}=K_{b}\,\tilde{b}_{k}\)). In Theorem 3.3, instead of looking for \(\lambda^{*}\), one then searches for another constant \(\tilde{\lambda}^{*}=\lambda^{*}\,K_{a}/K_{b}\) acting such that \(\tilde{\lambda}^{*}\,\tilde{a}_{k}/\tilde{b}_{k}=\lambda^{*}\,a_{k}/b_{k}\).
**Example 3.4**.: (_toy example_) We want here to give a first glimpse of the upcoming optimal schedules that apply to (among others) the situations reflected in Examples 2.1, 2.2 and 2.3. To this purpose, we clarify the above notations by writing explicitly what they entail on an academic toy example meeting Assumptions A and B. We consider that the _impact coefficients_ and _relative costs_ of oracles are given for any \(k\in\{0,\dots,N-1=79\}\) by
\[a_{k}=(k+1)\qquad b_{k}=\begin{cases}\frac{3}{420}&0\leq k<20\\ \frac{2}{420}&20\leq k<40\\ \frac{8}{420}&40\leq k<80\end{cases}\]
The _reference_ inexactness parameter is chosen as \(\bar{\delta}=10^{-4}\) and \(m=0<1<M=2\).
The oracle cost \(h:[0,1]\to[0,\infty)\) is convex, differentiable and fluctuates poly-logarithmically with \(0\leq\delta\leq 1\) as \(\delta\to h(\delta)=\log^{2}(\delta^{-1})\). \(h^{\prime}\) is negative and strictly increasing from \([0,1]\) to \((-\infty,0]\); its inverse \((h^{\prime})^{(-1)}:(-\infty,0]\to[0,1]\) exists and reads, for any \(-\omega<0\),
\[-\omega\to(h^{\prime})^{(-1)}(-\omega)=2\,\omega^{-1}\,\mathcal{W}_{0}(\omega/2)\]
where \(\mathcal{W}_{0}\) depicts the Lambert \(\mathcal{W}\) function on its \(0\)-branch. Here, Theorem 3.3 applies. Obviously, \(\lim_{\delta\to 0}\,h(\delta)=\infty\) translates to \(\ominus=\emptyset\Leftrightarrow N_{\ominus}=0\). From that point, we can efficiently solve KKT conditions. They inform that for our present problem instance, \(N_{\oplus}=10\). Fortunately, the indices \(k\in[80]\) for which \(\nu_{k}\) values are the \(N_{\oplus}\) biggest fall in \(\oplus=\{0,\dots,9\}\). We can conclude that the set \(\mathcal{T}\) contains the indices \(\{10,\dots,79\}\).
We summarize the calculated optimal schedules for (17) in (25) bearing in mind that \(\lambda=-\lambda^{*}\simeq 27.5757\). Figure 1 graphs the optimal inexactness schedule in (25).
\[\delta_{k}^{*}=\begin{cases}2\cdot 10^{-4}&k\in\{0,\ldots,9\}\\ \big{(}\frac{2\,(3/420)}{(k+1)\,\lambda}\big{)}\mathcal{W}_{0}\big{(}\frac{(k +1)\,\lambda}{2\,(3/420)}\big{)}&k\in\{10,\ldots,19\}\\ \big{(}\frac{2\,(2/420)}{(k+1)\,\lambda}\big{)}\mathcal{W}_{0}\big{(}\frac{(k +1)\,\lambda}{2\,(2/420)}\big{)}&k\in\{20,\ldots,39\}\\ \big{(}\frac{2\,(8/420)}{(k+1)\,\lambda}\big{)}\mathcal{W}_{0}\big{(}\frac{(k +1)\,\lambda}{2\,(8/420)}\big{)}&k\in\{40,\ldots,79\}\end{cases} \tag{25}\]
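For completeness, the following is a minimal numerical sketch of how a schedule like (25) can be computed in practice: one bisects on \(\lambda=-\lambda^{*}\) until the budget constraint of (17) is tight, clipping the KKT candidates at \(M\bar{\delta}\). It relies on SciPy's Lambert \(\mathcal{W}\) function and bisection routine; the bracketing interval and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

# Toy instance of Example 3.4 (illustrative reimplementation)
N = 80
a = np.arange(1, N + 1, dtype=float)                        # impact coefficients a_k = k + 1
b = np.concatenate([np.full(20, 3/420), np.full(20, 2/420), np.full(40, 8/420)])
delta_bar, M = 1e-4, 2.0

h = lambda d: np.log(1.0 / d) ** 2                          # oracle cost h(delta) = log^2(1/delta)
def h_prime_inv(omega):                                     # (h')^{-1} evaluated at -omega, omega > 0
    return 2.0 * np.real(lambertw(omega / 2.0)) / omega

budget = b.sum() * h(delta_bar)                             # budget of the constant reference schedule

def schedule(lam):                                          # KKT candidates, clipped at M * delta_bar
    return np.minimum(h_prime_inv(a * lam / b), M * delta_bar)

# Bisection on lam = -lambda* until the budget constraint of (17) is tight
lam = brentq(lambda l: (b * h(schedule(l))).sum() - budget, 1e-6, 1e6)
delta_star = schedule(lam)                                  # lam should recover the value reported above
```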
#### 3.2.2 Closed-form solutions
In this section we focus on the analytical closed-form solutions one can obtain from the previous theorems when specifying a certain type of \(h\) function, related to the practical examples motivating our oracle cost model (6). Yet, it usually remains to determine the correct values of \(N_{\oplus}\) (number of iterations either performed at the worst oracle accuracy or involving the least computational effort) and \(N_{\ominus}\) (number of iterations either achieved with the best oracle accuracy or demanding the heaviest computational effort).
To circumvent a cautious search for the right pair \((N_{\oplus},N_{\ominus})\in\{0,\ldots,N-1\}^{2}\), one can immediately detect whether \(N_{\oplus}=0=N_{\ominus}\) with a simple trial in constant time. This is essentially what we achieve in Corollaries 3.5 and 3.6. In such circumstances, \(\oplus=\emptyset=\ominus\) (Theorem 3.3), i.e. the _transient set_ \(\mathcal{T}\), which normally collects the \(k\)-indices of iterations associated with \(\nu_{k}\) values smaller than the \(N_{\oplus}\) largest and larger than the \(N_{\ominus}\) smallest, involves here all the iterations, i.e. \(\mathcal{T}=\{0,\ldots,N-1\}\).
**Functional family** Let us formally declare a meaningful functional family of \(h\) functions that captures our applications of interest. For any \(r>0\), we define the convex function \(h_{r}:\mathbb{R}_{+}\to\mathbb{R}_{++}\) and the inverse of its derivative \((h_{r}^{\prime})^{(-1)}:\mathbb{R}_{--}\to\mathbb{R}_{+}\)
\[h_{r}(\delta)=\delta^{-r}\ \Rightarrow\ (h_{r}^{\prime})^{(-1)}\big{(}-\omega \big{)}=\left(\frac{\omega}{r}\right)^{-\frac{1}{r+1}} \tag{26}\]
**Corollary 3.5**.: _Let Assumptions A and B hold with \((a,b)\in\mathbb{R}_{++}^{N\times 2}\), \(m<1<M\) and \(h=h_{r}:[m\,\bar{\delta},M\,\bar{\delta}]\to\mathbb{R}_{+}\). If \(m\,\bar{\delta}\preceq\mathring{\delta}\preceq M\,\bar{\delta}\) with_
\[\mathring{\delta}=\bar{\delta}\cdot\left(\frac{\sum_{k=0}^{N-1}\,(b_{k}\,a_{k }^{r})^{\frac{1}{(r+1)}}}{\sum_{k=0}^{N-1}\,b_{k}}\right)^{\frac{1}{r}}\cdot \big{(}b\odot a^{-1}\big{)}^{\frac{1}{r+1}} \tag{27}\]
_then \(\mathring{\delta}\) is optimal for (17)._
**Remark 9**.: Corollary 3.5 simply tells that if \(\mathring{\delta}\) from (27) is feasible for our master problem (17) under the choice \(h=h_{r}\) then it must be optimal. For the sake of completeness, we also provide in Appendix A.4 closed-form schedules in what concerns an extended family of \(h_{r}\) functions. It includes the logarithmic model \(h_{0}(\delta)=\log(\delta^{-1})\) from our introductory Example 1.1 as a pathological case \(r\to 0\) and \(N_{\oplus},N_{\ominus}\) are not necessarily zero. As a drawback consequence, the employed notations become heavier.
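As a sanity check of (27), the snippet below evaluates the closed-form candidate and verifies that it spends exactly the budget of the constant schedule; the chosen \(a\), \(b\), \(r\) and \(\bar{\delta}\) are arbitrary illustrative values.

```python
import numpy as np

def closed_form_schedule(a, b, delta_bar, r):
    """Candidate schedule from (27); optimal for (17) with h = h_r whenever it is feasible."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    scale = (np.sum((b * a ** r) ** (1.0 / (r + 1))) / np.sum(b)) ** (1.0 / r)
    return delta_bar * scale * (b / a) ** (1.0 / (r + 1))

# Illustrative data: quadratically growing impact coefficients, unit relative costs, h(d) = 1/d (r = 1)
a = (np.arange(100) + 1.0) ** 2
b = np.ones(100)
delta = closed_form_schedule(a, b, delta_bar=1e-3, r=1.0)
assert np.isclose(np.sum(b / delta), np.sum(b) / 1e-3)      # same budget as the constant schedule
```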
#### Interpretation
Rather intuitively, an iteration whose _impact coefficient_ \(a_{k}\) is larger harms the upper-bound \(\mathcal{E}(\{\delta_{k}\}_{k=0}^{N-1})\) on the objective gauge \(\mathcal{C}(N)\) (cf. (5)) more and thus requires more precision, i.e. a small \(\delta_{k}\). However, its associated cost model \(\mathcal{B}_{k}(\cdot)=b_{k}\,h(\cdot)\) (cf. (6)) eventually reweights the accuracy of the oracle through \(b_{k}\), according to its relative computational burden with respect to the other iterations. Hence, the \(\delta_{k}^{*}\) are governed by the _compound impacts_ \(\nu_{k}=b_{k}/a_{k}\),
\[\delta_{k}^{*}\propto\left(\frac{b_{k}}{a_{k}}\right)^{\frac{1}{r+1}} \tag{28}\]
At fixed \(a,b\), as \(r\) approaches \(0\), the oracles appear cheap and one can gain a lot by allowing large deviations from \(\bar{\delta}\). Conversely, if \(r\) tends to infinity then the cost of the oracles spikes, so that the constant schedule at inexactness level \(\bar{\delta}\), which is feasible, becomes optimal. Indeed, at any iteration, requesting an oracle accuracy even slightly better than \(\bar{\delta}\) turns out to be extremely expensive and not affordable. Thus, we must observe \(\mathcal{T}=[N]\) if \(r\) is large enough. Another situation in which one can easily predict that \(\mathcal{T}=[N]\) or, equivalently, \(N_{\ominus}=0=N_{\oplus}\), arises when \(m=0\) and \(M=\infty\). Indeed, akin to Example 3.4, the oracle cost blows up as \(\delta\to m\bar{\delta}=0\).
In addition, if not obliged to, one has no advantage in choosing an arbitrarily large inexactness parameter \(\delta\to M\bar{\delta}=\infty\).
**Remark 10**.: Sticking to our main thread, let us express a possible asymptotic trend of the optimal schedules of inexactness for Example 2.1 (\(\mu=0\), constant \(\lambda_{k}\)) based on (28)
\[\delta_{k}^{*}\propto A_{k+1}^{-\frac{1}{2}}\in\mathcal{O}(k^{-1})\]
#### Illustration
We illustrate now the application of our previous theorems on an instance closely related to Example 2.1. Indeed, when \(\mu=0\) and \(\lambda_{k}\in\mathcal{O}(1)\) for any integer \(k\geq 0\), it is known that \(a_{k}\in\Theta(k^{2})\), see e.g. [20]. We display the evolution of quantities \(N_{\oplus}\) (Figure 2) and \(\delta_{k}^{*}\) (Figure 3) with oracle's complexity parameter \(r\) and the factor of maximal tolerated inaccuracy \(M\). Let \(m=0\) and let the _reference_ inexactness tolerated be \(\bar{\delta}=10^{-4}\). In what follows, \(N_{\ominus}=0\) in any situation since the oracle cost model \(h_{r}(\delta)=\delta^{-r}\) explodes as \(\delta\to 0=m\bar{\delta}\). Accordingly, we assume for any \(k\geq 0\) that \(b_{k}=1\) and \(a_{k}=(k+1)^{2}\). We highlight two observations of interest.
* Figure 2: \(N_{\oplus}>0\) if \(N\) itself is big enough and \(M\bar{\delta}\) constrains our master problem (17), i.e. \(M\to 1\). In this latter case, the computational savings one intends to invest in late iterations requiring more care must be spread out over more early iterations, since the biggest \(\delta_{k}\) of any optimal schedule \(\{\delta_{k}^{*}\}_{k=0}^{N-1}\) cannot take a value that falls far above the reference \(\bar{\delta}\). In other words, \(M\bar{\delta}\) does not allow one to save a lot of effort in the iterations linked with the smallest _impact factors_.
* Figure 3: The variability in the optimal schedules for inexactness \(\{\delta_{k}^{*}\}_{k=0}^{N-1}\) heavily depends on the oracle cost parameter \(r\), as emphasized by the power \((r+1)^{-1}\) in the relationship (28). Our findings discussed in the previous paragraph are graphically confirmed, e.g. when \(r=50\), \(\delta_{k}^{*}\simeq\bar{\delta}\) for every \(k\in\{0,\ldots,N-1\}\).
Figure 2: Evolution of \(N_{\oplus}\) with \(N\), \(M\) and the oracle cost parameter \(r\).
### Practical extensions
This part of our work is dedicated to direct extensions of the results unveiled so far.
We adapt them to practical scenarios beyond the initial scope of TOM. Firstly, we investigate the modifications one should undertake to apply the concept of tunable oracles when, instead of the oracles' accuracies, one would like to monitor the computational work invested in producing their outputs. Secondly, and as previously hinted, we propose an _online_ strategy which preserves the structure of optimal inexactness schedules without the knowledge of \(N\).
#### 3.3.1 Work controlled
In some cases, one would like to manually specify the time spent at each iteration. In other words, instead of deciding on \(\delta_{k}\), which would _a priori_ lead to a cost of \(\mathcal{B}_{k}(\delta_{k})\), we proceed the other way around. We choose the amount of computations \(\omega_{k}\in[\omega_{M},\omega_{m}]\) and we expect to incur a level of inexactness \(\delta_{k}\sim\mathcal{B}_{k}^{(-1)}(\omega_{k})\). Therefore, we can equivalently fix a _reference_ total work \(\bar{\omega}\) that acts as a surrogate for the budget term \(\sum_{k=0}^{N-1}\,\mathcal{B}_{k}(\bar{\delta})\). We rewrite the objective from master problem (17) as
\[\sum_{k=0}^{N-1}\,a_{k}\,\delta_{k}\sim\sum_{k=0}^{N-1}\,a_{k}\,\mathcal{B}_{k }^{(-1)}(\omega_{k})=\sum_{k=0}^{N-1}\,a_{k}\,h_{r}^{(-1)}\big{(}\omega_{k}/b_ {k}\big{)}\]
Notice that the homogeneity of \(h_{r}\) allows one to write, for any \(\eta>0\), \(\beta\geq 0\), \(r>0\) and \(\Omega\subseteq\mathbb{R}_{+}^{N}\),
\[\arg\min_{\omega\in\Omega}\sum_{k=0}^{N-1}\,h_{r}^{(-1)}\big{(}\omega_{k}\cdot \eta/b_{k}\big{)}=\arg\min_{\omega\in\Omega}\sum_{k=0}^{N-1}\,h_{r}^{(-1)} \big{(}\omega_{k}/b_{k}\big{)} \tag{29}\]
Figure 3: \(N\)_fixed_, \(r\to 0\) means cheaper oracles and thus more aggressive schedules (see \(r=50^{-1}\)).
This ensures that \(b_{k}\) must only be known up to a common constant multiplicative factor, as previously assumed. We formulate the work controlled counterpart of (17) in (30), assuming that \(h=h_{r}\)
\[\omega^{*}(N,\bar{\omega},\omega_{m},\omega_{M})\in\arg\min_{\omega_{M}\,\preceq\,\omega\,\preceq\,\omega_{m}}\sum_{k=0}^{N-1}\,a_{k}\,h^{(-1)}\Big{(}\frac{\omega_{k}}{b_{k}}\Big{)}\,\,\,\text{s.t.}\,\,\,\sum_{k=0}^{N-1}\,\omega_{k}=\bar{\omega} \tag{30}\]
Let us state the analogous version of Corollary 3.5 regarding the work controlled framework. Again, the interested reader can look at Appendix A.4 that displays a full version with \(N_{\oplus}\) and \(N_{\ominus}\) not necessarily zero.
**Corollary 3.6**.: _Let Assumptions \(A\) and \(B\) hold with \((a,b)\in\mathbb{R}_{++}^{N\times 2}\), \(\omega_{M}<\bar{\omega}/N<\omega_{m}\) and \(h=h_{r}:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\). If \(\omega_{M}\preceq\hat{\omega}\preceq\omega_{m}\) with_
\[\hat{\omega}=\bar{\omega}\cdot\frac{\big{(}b\odot a^{r}\big{)}^{\frac{1}{r+1 }}}{\sum_{k=0}^{N-1}\big{(}b_{k}\,a_{k}^{r}\big{)}^{\frac{1}{(r+1)}}} \tag{31}\]
_then \(\hat{\omega}\) is optimal for (30)._
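A minimal sketch of the closed-form allocation (31); as for (27), the inputs only need to be known up to common multiplicative factors, and the chosen values are illustrative.

```python
import numpy as np

def closed_form_work(a, b, omega_bar, r):
    """Candidate work allocation from (31); optimal for (30) with h = h_r whenever it is feasible."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    weights = (b * a ** r) ** (1.0 / (r + 1))
    return omega_bar * weights / weights.sum()

# Example: a_k = (k+1)^2, unit relative costs, total budget of 10^5 inner-iterations, r = 1/2
omega = closed_form_work((np.arange(50) + 1.0) ** 2, np.ones(50), 1e5, 0.5)
assert np.isclose(omega.sum(), 1e5)                 # the budget constraint of (30) is tight
```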
**Remark 11**.: Despite being very similar, problems (17) and (30) are not perfectly equivalent in general. They do share the common goal of minimizing \(\mathcal{E}\) subject to a budget constraint \(\mathcal{B}\). Nevertheless, one can see (30) as a version of (17) where the bounds on the achievable \(\delta_{k}\) at iteration \(k\in\{0,\ldots,N-1\}\) vary. Indeed, we have the following bounds:
\[\delta_{k}\in\Big{[}\mathcal{B}_{k}^{(-1)}(\omega_{m}),\,\mathcal{B}_{k}^{(-1 )}(\omega_{M})\Big{]} \tag{32}\]
However, in the neutral case where \(b\propto\mathbf{e}\), these bounds are constant and can be written as \([m\bar{\delta},\,M\bar{\delta}]\) for any \(\bar{\delta}>0\) and well-chosen \(0<m\leq M\) parameters.
#### Interpretation
Just as in (28), it is not difficult to interpret the results of Corollary 3.6 by noting that
\[\omega_{k}^{*}\propto\bigg{(}b_{k}\,a_{k}^{r}\bigg{)}^{\frac{1}{r+1}} \tag{33}\]
We have already seen that when \(a_{k}\) grows, the associated oracle accuracy \(\delta_{k}\) should shrink accordingly. As a consequence, the computational cost \(\omega_{k}\) rises. One should pay attention to the role of \(b_{k}\) in (33). It seems that a bigger \(b_{k}\) implies a bigger \(\omega_{k}\), which turns out to be true. Meanwhile, a bigger \(b_{k}\) also curbs the demand for highly accurate oracles in (28), hence intuitively reducing the oracle's work \(\omega_{k}\). Yet, there is no contradiction. Combining (28) and Assumption B, we can capture the overall marginal effect of \(b_{k}\) in
\[\omega_{k}=b_{k}h(\delta_{k})\propto b_{k}\,\big{(}(b_{k}/a_{k})^{\frac{1}{r+ 1}}\big{)}^{-r}=b_{k}^{\frac{1}{r+1}}a_{k}^{\frac{r}{r+1}}\]
Finally, let us point out that \(b_{k}\)'s _weighting impact_ diminishes as \(r\to\infty\), \(\omega_{k}^{*}\) becoming asymptotically proportional to the _impact coefficient_ \(a_{k}\). Conversely, when \(r\to 0\), the suggested oracle cost flattens: it ultimately depends only on \(b_{k}\), constant in the absence of further knowledge (see Examples 2.1 and 2.3).
#### 3.3.2 Online version
So far, we have assumed that the choice of \(N\) was exogenous and well thought out. Unfortunately, there is no _one size fits all_ approach to adequately fix \(N\). Usually, one runs an optimization algorithm and stops it as soon as a relative tolerance, tracked along the iterations, is met. Let \(\delta^{*}\) (respectively \(\omega^{*}\)) be an optimal solution of (17) (respectively (30)) for a specified \(N\in\mathbb{N}\) and \(h=h_{r}\). Hidden behind equation (27) (respectively (31)) and emphasized by (28) (respectively (33)), one can retrieve a recursion rule linking the optimal inexactness parameters of two distinct iterations, say \(\hat{k}\) and \(k\). Let \(\delta_{k}^{*}\in[m\bar{\delta},M\bar{\delta}]\); then for any \(\hat{k}\in[N]\), one can recover
\[\delta_{\hat{k}}^{*}=\max\left\{m\bar{\delta},\min\left\{M\bar{\delta},\left( \frac{b_{\hat{k}}}{a_{\hat{k}}}\cdot\frac{a_{k}}{b_{k}}\right)^{\frac{1}{(r+1 )}}\delta_{k}^{*}\right\}\right\} \tag{34}\]
Following the same logic, a recursion exists for \(\omega_{k}^{*}\in[\omega_{M},\omega_{m}]\),
\[\omega_{\hat{k}}^{*}=\max\left\{\omega_{M},\min\left\{\omega_{m},\left(\frac{b_{\hat{k}}\,a_{\hat{k}}^{r}}{b_{k}\,a_{k}^{r}}\right)^{\frac{1}{(r+1)}}\omega_{k}^{*}\right\}\right\} \tag{35}\]
Therefore, in practice, one can adopt a _reference_ situation accounting for a lower-bound of \(N_{r}\) iterations and then compute its inherent _offline_ optimal schedule. For any iteration \(\hat{k}\geq N_{r}\), one can extrapolate using the recursion rules explained right above, as sketched below.
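A minimal sketch of the extrapolation rule (34) (the work-controlled analogue (35) is identical up to the exponents); the function name and arguments are illustrative.

```python
import numpy as np

def online_delta(delta_k, a_k, b_k, a_hat, b_hat, r, delta_bar, m, M):
    """Extrapolate the optimal inexactness of a new iteration hat_k from a reference iteration k, cf. (34)."""
    ratio = ((b_hat / a_hat) * (a_k / b_k)) ** (1.0 / (r + 1))
    return float(np.clip(ratio * delta_k, m * delta_bar, M * delta_bar))

# Example: the compound impact b/a halves between iterations k and hat_k, so hat_k demands more accuracy
print(online_delta(1e-3, a_k=4.0, b_k=1.0, a_hat=8.0, b_hat=1.0, r=1.0, delta_bar=1e-3, m=0.0, M=100.0))
```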
**Rationale** The extrapolated schedules obtained with (34), (35) have the advantage of preserving the right ratios \(\delta_{\hat{k}}^{*}/\delta_{k}^{*}\) and \(\omega_{\hat{k}}^{*}/\omega_{k}^{*}\), as if we knew the number of iterations performed by our Tunable Oracle Method (TOM) at termination. In addition, this does not break our practical assumption of relative knowledge of the \(a_{k}\)'s and \(b_{k}\)'s since we only involve ratios, wiping out any common multiplicative constant. Finally, such an _online_ schedule allows for \(b_{k}/a_{k}\)'s that are not necessarily pre-computed. As mentioned earlier, in various adaptive methods such coefficients are defined by the final result of local line-search techniques [5, 12, 20]. This feature of our _online_ strategy dramatically extends the applicability of our approach, endowing it with local information exploitation and allowing for possible _non-monotonicity_ in the parameters \(\delta_{k}\) or \(\omega_{k}\) used.
## 4 Numerical Experiments
We present three experiments that showcase both our elaborated _offline_ and _online_ techniques for Tunable Oracle Methods (TOM). We emphasize that we are primarily concerned with showing that our approach effectively improves the efficiency of known existing methods such as FGM [20].
The first experiment serves to validate our theoretical optimal _offline_ schedules for various levels of parameters, i.e. \(r\) (oracle cost parameter), \(\bar{\delta}\) (_reference_ inexactness) and \(N\) (number of performed iterations). In order to stick to our theoretical framework
as much as possible, we generate inexact oracle outputs by adding artificial noise to the gradients used in FGM. The noise is chosen according to a simulated oracle cost one would invest to control the level of inexactness. Confirming that our optimal schedules also perform better in a practical setting is important, as numerical optimization methods (with or without inexactness) typically perform better than their worst-case guarantees in practice. Within this setting, we show the superiority of our schedules compared to a _constant_ schedule approach with matching overall computational cost.
In the second and third experiments, we investigate how the _offline_ and our heuristic _online_ approaches behave in a real case where inexactness naturally emerges. Experiments 1, 2 and 3 rely on the problem motivated in Example 2.3, but Experiment 3 allows for line-search within FGM, hence turning it into an _online_ method.
Our code is freely available on GitHub so that one can observe that the reported results are representative of the usual performances.
#### Test Problem & Data Generation
We describe a _robust optimization_ task involving a regularization parameter \(\mu\geq 0\) and data vectors \(\Theta=\{\theta_{i}\}_{i=1}^{n}\),
\[\theta_{i}\sim\mathcal{N}\big{(}\mathbf{0},I_{d}/p\big{)}\hskip 28.452756pt \forall i\in\{1,\ldots,n\}\]
a collection of \(n\in\mathbb{N}\)_scenarios_ from \(\mathbb{R}^{d}\) (\(p>0\)). We consider the classical problem where one minimizes the worst outcome of the regularized linear objective only over the previously seen _scenarios_, i.e. one wishes to minimize a regularized objective \(\frac{\mu}{2}||x||^{2}+\langle\theta,x\rangle\) over the unit simplex, i.e. \(\Delta:=\{x\in\mathbb{R}^{d}\,|\,x\succeq\mathbf{0},\,||x||_{1}=1\}\) for a meaningful cost \(\theta\) based on historical data about its value, stored in \(O=[\theta_{1},\ldots,\theta_{n}]^{T}\).
\[\min_{x\in\Delta}\,\max_{\theta\in\Theta}\,\langle\theta,x\rangle+\frac{\mu}{2 }||x||^{2}=\min_{x\in\Delta}\,\max_{\theta\in\mathbf{conv}(\Theta)}\langle \theta,x\rangle+\frac{\mu}{2}||x||^{2} \tag{36}\]
### Experiment 1 -- Softmax Optimization under Synthetic Noise
In a large scale setting, solving (36) using the standard epigraph reformulation might be prohibitive; it is sometimes replaced by a smoothed version of the robust objective, leaving the feasible set intact, i.e. no extra constraint. Let \(\upsilon>0\); the problem becomes2
Footnote 2: Note that the value of (37) differs at most by \(\epsilon=\mu+n/\upsilon\) from (36).
\[F^{*}=\min_{x\in\mathbb{R}^{d}}\,\overbrace{\underbrace{\upsilon^{-1}\,\log \Big{(}n^{-1}\,\sum_{i=1}^{n}\,\exp(\upsilon\cdot\langle\theta_{i},x\rangle) \Big{)}+\frac{\mu}{2}||x||^{2}}_{f(x)}+\underbrace{\chi_{\Delta}(x)}_{\Psi(x)}}^ {F(x)} \tag{37}\]
Solving (37) using FGM requires the ability to compute (in-)exact information about \(f\) at any query point \(x\in X:=\Delta\), i.e. \((\tilde{f}(x),\nabla\tilde{f}(x))\simeq(f(x),\nabla f(x))\). Here, the value of \(f(x)\) and its gradient \(\nabla f(x)\) are available in closed-form for (37) hence discarding
the need for schedules of inexactness. However, one could think about a game where one has to pay \(\delta^{-r}\) (\(r>0\)) or \(-\log(\delta)\) (\(r=0\)) to obtain \(\nabla f(x)\) corrupted by a noise of limited radius \(\alpha\,\delta\) depending on parameter \(r\geq 0\). I.e. one gets \(\nabla\tilde{f}(x)\) such that
\[\nabla\tilde{f}(x)=\nabla f(x)+\alpha\,\delta\cdot u \tag{38}\]
with \(u\sim\mathcal{U}\big{(}\{u\in\mathbb{R}^{d}\,|\,||u||=1\}\big{)}\). From [3], one can deduce that \((f(x),\nabla\tilde{f}(x))\) as defined above provides \((4\,\alpha\,\delta,\upsilon\cdot||O||^{2}+\mu,\mu)\) inexact information.
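The following sketch shows one way to implement the noisy oracle (38) for the smoothed objective (37); the scenario generation mirrors the dimensions used below, while the value of \(\upsilon\) and the other parameters are illustrative assumptions.

```python
import numpy as np

def inexact_grad(x, O, mu, upsilon, delta, alpha, rng):
    """Exact gradient of f in (37), corrupted by a noise of radius alpha * delta as in (38)."""
    s = upsilon * (O @ x)
    w = np.exp(s - s.max())
    w /= w.sum()                                   # softmax weights over the n scenarios
    grad = O.T @ w + mu * x                        # exact gradient of the smoothed objective
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                         # direction drawn uniformly on the unit sphere
    return grad + alpha * delta * u

rng = np.random.default_rng(0)
d, n, p = 100, 500, 10
O = rng.standard_normal((n, d)) / np.sqrt(p)       # scenarios theta_i ~ N(0, I_d / p)
x0 = np.full(d, 1.0 / d)                           # a feasible point of the unit simplex
g = inexact_grad(x0, O, mu=0.1, upsilon=50.0, delta=1e-4, alpha=100.0, rng=rng)
```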
#### Test instances
As suggested by Example 2.3, let us now try out FGM on problem (37) with fixed stepsize \(L^{-1}=(\upsilon\cdot||O||^{2}+\mu)^{-1}\) and different values of \(\mu\), \(r\), \(N\), \(\bar{\delta}\) while using sequences of inexact information tuples parametrized by schedules \(\{\delta_{k}\}_{k=0}^{N-1}\). That is, for each instance \((\mu,r,N,\bar{\delta})\in\{0,10^{-1}\}\times\{1\}\times\{10,500,1000,5000,1000 \}\times\{10^{-1},10^{-3},10^{-5}\}\) or \((\mu,r,N,\bar{\delta})\in\{0,10^{-1}\}\times\{0\}\times\{10,500,1000,5000,10000 \}\times\{9\cdot 10^{-3},10^{-4},10^{-6}\}\), we compare our _offline_ optimized approach \(\{\delta_{k}=\delta_{k}^{*}\}_{k=0}^{N-1}\) where \(\delta^{*}\) solves (17) with vectors \((a,b)\) specified in Table 1, \(m=0\) and \(M=100\) (we allow the inexactness from our _tunable_ approach to be a hundred times worse than the _reference_\(\bar{\delta}\)) with the _constant_ schedule \(\{\delta_{k}=\bar{\delta}\}_{k=0}^{N-1}\). We define the simulated cost of inexact oracles as \(N/\bar{\delta}=\sum_{k=0}^{N-1}\,(\delta_{k}^{*})^{-1}\) (\(r=1\)) and \(-N\log(\bar{\delta})=-\sum_{k=0}^{N-1}\,\log(\delta_{k}^{*})\) (\(r=0\))3 and match it to the total cost of the constant \(\bar{\delta}\)_reference_ schedule. Finally, in order to tame the effects of the randomness inherent to our synthetic inexact information production, we aggregate the obtained results from FGM over 5 runs with 5 different starting points drawn as \(x_{0}\sim\mathcal{U}\big{(}\Delta\big{)}\). Here below, the dimensions read \((d,n,\alpha,p)=(100,500,100,10)\).
Footnote 3: \(M\,\bar{\delta}<1\) in order to ensure that any iteration costs a strictly positive amount of work.
#### Estimation of \(F^{*}\)
Regarding this simple example, \(F^{*}\) is estimated by the value of \(F\) at the output from a noise-free FGM applied for \(3\cdot 10^{5}\) iterations.
#### Results
Figure 4 and 5 show the evolution of the terminal primal optimality gap \(F(x_{N})-F^{*}=f(x_{N})-F^{*}\) after \(N\) iterations of FGM (output iterate \(x_{N}\) is in \(\Delta\)) using the variants of inexactness schedules described above. Each dot plotted on these graphs represents the averaged performance over the \(25=5\cdot 5\) independent runs of either our _tunable_ or the _constant_ approach for a single instance \((\mu,r,N,\bar{\delta})\).
As predicted by the theory, the _tunable_ inexactness schedule reached a better terminal primal optimality gap for every setting tried. Not surprisingly, the gains were dramatically more impressive when the discrepancy within _impact coefficients_\(\{a_{k}\}_{k=0}^{N-1}\) was high. Intuitively, when such coefficients do not vary at all, our optimized _tunable_ approach boils down to the _constant_ schedule and no gain is to expect at all. Table 1 recalls that \(a_{k}=A_{k+1}\) for any \(k\in[N]\) and Remark 6 suggests that \(\{A_{k}\}_{k\in\mathbb{N}}\) grows sublinearly (respectively linearly) when \(\mu=0\) (respectively \(\mu>0\)). Therefore, settings in which \(\mu>0\) are the most promising in terms of gains in favor of the _tunable_ approach, thanks to the inherent high variance of _impact coefficients_. Figure 5 undoubtedly confirmed this hope with substantial gains for our _tunable_ approach although problem's conditioning stays relatively small, i.e. \(\mu/L\simeq 10^{-5}\). Both Figure 4 and 5 show that the benefits of high discrepancy in _impact coefficients_ (hence in optimal inexactness schedules, see (28)) are enhanced by cheap oracles, i.e. \(r\to 0\). We point out as a generic comment that for a small number of iterations, the additive effect of inexactness does not dominate the _error-free_ convergence bound and one can
barely observe any difference between our optimized approach and other inexactness schedules. Finally, one can observe that some of the results linked to more accurate _reference_ inexactness of \(\bar{\delta}\leq 10^{-5}\) led to poorer primal optimality gaps for \(N=10^{2}\). This unexpected phenomenon can be explained by the fact that for bigger values of \(\delta\), (38) might actually yield more aggressive and successful descent directions, compared to plain accurate gradients, again in a regime where the additive impact of inexactness is of secondary importance.
### Experiment 2 & 3 -- Robust Optimization over Convex Hull
As depicted in Example 2.3, one may choose an _anchor scenario_\(\bar{\theta}\in\mathbf{conv}(\Theta)\) and a parameter \(\sigma>0\) in order to regularize the inner maximization problem in (36) (right-hand side) yielding a smoothed outer level objective [15]. Then, a challenging task consists of minimizing the worst possible outcome of the inner-regularized objective over the convex hull of all the _scenarios_, i.e.
\[\min_{x\in\Delta}\;\max_{\theta\in\mathbf{conv}(\Theta)}\left\langle\theta,x \right\rangle-\frac{\sigma}{2}||\theta-\bar{\theta}||^{2}+\frac{\mu}{2}||x||^{2} \tag{39}\]
Figure 4: (no regularization, \(\mu=0\)) — Softmax Optimization with Synthetic Noise
Figure 5: (regularization, \(\mu>0\)) — Softmax Optimization with Synthetic Noise
To that end, we adopt the _anchor_\(\bar{\theta}=n^{-1}\)\(\sum_{i=1}^{n}\)\(\theta_{i}\) and construct the problem
\[F^{*}=\min_{x\in\mathbb{R}^{d}}\overbrace{\underbrace{\frac{\mu}{2}||x||^{2}+\max_{w\succeq\mathbf{0},\,||w||_{1}=1}\underbrace{\langle O^{T}w,x\rangle-\frac{\sigma}{2}||O^{T}w-\bar{\theta}||^{2}}_{q(w;x)}}_{f(x)}+\underbrace{\chi_{\Delta}(x)}_{\Psi(x)}}^{F(x)} \tag{40}\]
Again, as motivated in Example 2.3, we use FISTA from [22] with a momentum depending on the number \(\hat{\kappa}=\frac{\lambda_{\min}(OO^{T})}{\lambda_{\max}(OO^{T})}\) in order to get \(w_{x}\succeq\mathbf{0}\), \(||w_{x}||_{1}=1\) ensuring
\[f(x)-\left(\frac{\mu}{2}||x||^{2}+q(w_{x};x)\right)\leq\delta \tag{41}\]
Then \((\tilde{f}(x),\nabla\tilde{f}(x))=(\frac{\mu}{2}||x||^{2}+q(w_{x};x),\mu x+O^{T}w_{x})\) provides \((2\delta,2\cdot\sigma^{-1}+\mu,\mu)\) inexact information about \(f\), a tuple used within FGM to solve problem (39).
We monitor the quality of a candidate \(w_{x}^{(\omega)}\) at any point \(x\in\Delta\) after \(\omega\) inner-iterations thanks to the Frank-Wolfe gap (42). As soon as \(\text{FW}(\omega)\leq\delta\), the candidate \(w_{x}^{(\omega)}\) fulfills (41).
\[\text{FW}(\omega)=\max_{j\in[\omega]}\max_{w\succeq\mathbf{0},\,||w||_{1}=1} \,q(w_{x}^{(j)};x)+\langle\nabla q(w_{x}^{(j)};x),w-w_{x}^{(j)}\rangle-q(w_{x }^{(\omega)};x) \tag{42}\]
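A minimal sketch of the certificate behind (41)–(42): since \(q(\cdot\,;x)\) is concave in \(w\), its linearization at the current candidate upper-bounds the inner maximum, and maximizing the linear term over the simplex amounts to picking the largest gradient entry. Names are illustrative.

```python
import numpy as np

def fw_certificate(w, x, O, theta_bar, sigma):
    """Upper bound on max_v q(v; x) - q(w; x) over the unit simplex, from the linearization at w."""
    grad = O @ x - sigma * (O @ (O.T @ w - theta_bar))   # gradient in w of the concave function q(.; x)
    # q(v; x) <= q(w; x) + <grad, v - w>, and the linear term is maximized at a simplex vertex
    return grad.max() - grad @ w
```

Stopping the inner solver once this quantity drops below \(\delta\) is thus a sufficient condition for (41).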
In order to speed up the computation of approximate solutions of the inner problem, we warm-start FISTA with the solution obtained at the last oracle call.
**Estimation of \(F^{*}\)** Let \(\hat{x}^{*}\in\Delta\) be the best known solution to (40). Then, by convexity of the objective, we estimate a lower-bound for \(F^{*}\) as follows
\[F^{*}=\min_{x\in\Delta}f(x)\geq\min_{x\in\Delta}\underbrace{\frac{\mu}{2}|| \hat{x}^{*}||^{2}+q(w_{\hat{x}^{*}};\hat{x}^{*})}_{f(\hat{x}^{*})}+\langle \underbrace{\mu\hat{x}^{*}+O^{T}w_{\hat{x}^{*}}}_{\nabla\tilde{f}(\hat{x}^{*} )},x-\hat{x}^{*}\rangle+\frac{\mu}{2}||x-\hat{x}^{*}||^{2} \tag{43}\]
where \(w_{\hat{x}^{*}}\) represents the \(w\) candidate that achieved \(10^{-10}\) precision when computing an approximate pair \((\tilde{f}(\hat{x}^{*}),\nabla\tilde{f}(\hat{x}^{*}))\simeq(f(\hat{x}^{*}), \nabla f(\hat{x}^{*}))\) with respect to criterion (41).
#### Offline Schedules (Experiment 2)

**Test instances** Here, we run FGM with stepsize \(L^{-1}=(2\sigma^{-1}+\mu)^{-1}\) for a predefined number of iterations \(N\). We have tested the following two settings in terms of scaling and dimension: \((d,n,p,\sigma)=(100,500,5^{-1},10^{-3})\) and \((d,n,p,\sigma)=(1000,500,5^{-1},10^{-3})\). The first setting led to \(\hat{\kappa}=0\), hence implying that the oracle cost parameter \(r\) equals \(\frac{1}{2}\), whereas under the second setting \(\hat{\kappa}>0\) and \(r\) is taken as \(0\), recalling the arguments of Example 2.3. For each instance \((\mu,r,N,\bar{\delta})\in\{0,10^{-1}\}\times\{\frac{1}{2}\}\times\{10,500,1000,5000\}\times\{10^{-1},10^{-3},10^{-5}\}\) or \((\mu,r,N,\bar{\delta})\in\{0,10^{-1}\}\times\{0\}\times\{10,500,1000,5000\}\times\{9\cdot 10^{-3},10^{-4},10^{-6}\}\), we obtain our _tunable_ \(\delta^{*}\) by solving problem (17) in the very same fashion as in Experiment 1. Although the overall computational cost of our _tunable_ schedule \(\{\delta_{k}=\delta_{k}^{*}\}_{k=0}^{N-1}\) is meant to coincide with the computational cost of the _constant_ schedule \(\{\delta_{k}=\bar{\delta}\}_{k=0}^{N-1}\),
the oracle cost model does not always perfectly fit the true computational costs required to produce the inexact information tuples. Therefore, we record the total number of inner-iterations \(\sum_{k=0}^{N-1}\,\omega_{k}\) as well as the final primal accuracy on (40) for fair comparisons. The latter entails the computation of \(f(x_{N})=F(x_{N})\), also performed at \(10^{-10}\) precision with respect to criterion (41) (see above). Here, \(\hat{x}^{*}\) is taken as the output of FGM after \(8\cdot 10^{3}\) iterations with a _constant_ inexactness level of \(10^{-8}\) when \(r=0\). Because of much more expensive oracles, we execute only \(5\cdot 10^{3}\) iterations at inexactness level \(10^{-7}\) to produce \(\hat{x}^{*}\) when \(r=\frac{1}{2}\). To reduce the potential initialization bias \(x_{0}\sim\mathcal{U}(\Delta)\), we averaged the results of \(10\) repetitions.
#### Results
Here, we have plotted the terminal primal optimality gaps against the overall oracle workloads \(\sum_{k=0}^{N-1}\,\omega_{k}\) after \(N\) (outer-)iterations of FGM. Since the wall-clock times strongly correlate with the workloads, we have chosen not to display duplicate figures other than Figures 6 and 7. Again, each dot symbolizes the averaged performances tracked for one specific inexactness schedule on one specific instance defined by the vector \((\mu,r,N,\bar{\delta})\). One can observe that most of the trends highlighted in the discussion of our synthetic first experiment still hold for this real second experiment. Notably, it appears that regularization plays a key role in the substantial superiority of our optimized _tunable_ approach. Again, this is explained by a variability of possibly several orders of magnitude in the value of the _impact coefficients_ \(\{a_{k}\}_{k=0}^{N-1}\) when \(\mu>0\). In every case, our _tunable_ approach was more efficient, i.e. it dominated the _constant_ approach in the Pareto sense in the plane of reached primal optimality versus units of workload spent in the inner problems. On Figures 6 (a) and 7 (a), the _tunable_ schedule based on a _reference_ inexactness \(\bar{\delta}\) fixed to \(10^{-6}\) (yellow) did take more inner-iterations for \(N=500\) than for \(N=1000\). Although not intuitive, this behaviour can be explained: the oracles cost much less than expected, especially for low requested accuracies. In what concerns Experiment 2,
it could even happen that FISTA's starting iterate at outer-iteration \(k\), \(w_{x_{k}}^{(0)}=w_{x_{k-1}}^{(\omega_{k-1})}\), already fulfills the \(\delta_{k}\) inexactness for free, i.e. \(\omega_{k}=0\). It turns out that our _tunable_ approach for \(N=1000\) benefited more from such _cheap-meal_ phenomena since its associated schedule demands less accurate oracles in early iterations than the one with \(N=500\). Overall, a _reference_ inexactness \(\bar{\delta}\in[10^{-3},10^{-2}]\) provided the best _trade-off_ to reach a given primal accuracy target within this _offline_ setting.
Figure 6: (no regularization, \(\mu=0\)) — R.O. over Convex Hull, _offline_ with multiple \(\bar{\delta}\)
#### Online Schedules (Experiment 3)
**Test instances** Finally, we run FGM with adaptive stepsizes \(L_{k}^{-1}>0\) for any \(k\geq 0\). A stepsize is validated at iteration \(k\) as soon as the inequality
\[\tilde{f}(x_{k+1})\leq\tilde{f}(y_{k})+\langle\nabla\tilde{f}(y_{k}),x_{k+1}-y_{ k}\rangle+\frac{L_{k+1}}{2}||x_{k+1}-y_{k}||^{2}+2\,\delta_{k} \tag{44}\]
holds for \((2\delta_{k},\sigma^{-1}+\mu,\mu)\) inexact information tuples at the feasible points \(x_{k+1}\) and \(y_{k}\) respectively. FGM increases the stepsize by a factor \(3/2\) when an iteration succeeds, i.e. when (44) passes, and decreases it by a factor \(2\) otherwise. Note that (44) is always satisfied for \(L_{k+1}\geq\sigma^{-1}+\mu\). Convergence guarantees like (13) are preserved [20], but now the _impact coefficients_ \(a_{k}=A_{k+1}\) depend on \(L_{k+1}\) via the recursion (\(A_{0}=0\))
\[A_{k+1}(1+\mu A_{k})=L_{k+1}(A_{k+1}-A_{k})^{2} \tag{45}\]
and are therefore not predictable in advance. This paves the way for the application of our _online_ approach (34) with \(N_{r}=50\). \(N_{r}\) is kept small in practice, its sole purpose being to specify the very first inexactness parameters before the _online_ approach takes over. Yet, one must simulate the first \(N_{r}\) coefficients \(\{a_{k}\}_{k=0}^{N_{r}-1}\) required to devise the _offline_ optimized schedule \(\{\delta_{k}^{*}\}_{k=0}^{N_{r}-1}\) from (17). To that end, one approximates the value of \(L_{k+1}\) by fixing it to \(L=\sigma^{-1}+\mu\) within (45) and uses the resulting \(\{a_{k}=A_{k+1}\}_{k=0}^{N_{r}-1}\). For each level of _reference_ inexactness \(\bar{\delta}\in\{9\cdot 10^{-3},10^{-4},10^{-6}\}\), we tried \(4\) different inexactness schedules: for any iteration index \(k\geq 0\), the _constant_ one \(\delta_{k}=\bar{\delta}\), the heuristic _online tunable_ one (as described above) with \(a_{k}=A_{k+1}\), the fully sublinear monotonically decreasing one tuned as in [23] (poly-3), \(\delta_{k}=\bar{\delta}\,(k+1)^{-3}\), and the fully linear one [12] (linear) with \(\delta_{k}=\bar{\delta}\,(1-\sqrt{\mu/L})^{-k}\). For this experiment, \(\hat{x}^{*}\) (see (43)) is taken as the output of adaptive FGM after \(3\cdot 10^{3}\) iterations with _constant_ inexactness of \(10^{-12}\) for \(r=0\) and \(10^{-8}\) for \(r=\frac{1}{2}\). We have conducted \(5\) random initializations of the settings \((d,n,p,\sigma)=(50,100,5^{-1},10^{-3})\), which, akin to Experiment 2, led to \(\hat{\kappa}=0\) and thus \(r=\frac{1}{2}\), and \((d,n,p,\sigma)=(200,100,5^{-1},10^{-3})\), which gave \(\hat{\kappa}>0\) and, accordingly, \(r=0\). Finally, we set the regularization to \(\mu=10^{-1}\).
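For illustration, the two ingredients above can be sketched in a few lines: the largest root of recursion (45) yields the next main-sequence value \(A_{k+1}\) (and hence the _impact coefficient_ \(a_{k}=A_{k+1}\)), and the test (44) decides whether a trial \(L_{k+1}\) is accepted. This is only a minimal sketch under stated assumptions (the inexact oracle values are supplied externally), not the code used for the experiments.

```python
# Minimal sketch (not the experiment code): recursion (45) and acceptance test (44).
import numpy as np

def next_A(A_k, L_next, mu):
    """Largest root of L_next*(A - A_k)**2 = A*(1 + mu*A_k), i.e. recursion (45)."""
    b = 2.0 * L_next * A_k + 1.0 + mu * A_k
    disc = b * b - 4.0 * (L_next ** 2) * (A_k ** 2)
    return (b + np.sqrt(disc)) / (2.0 * L_next)

def accepts(f_y, g_y, f_x_next, x_next, y, L_next, delta_k):
    """Inexact descent test (44); f_y, g_y, f_x_next come from a delta_k-inexact oracle."""
    quad = f_y + g_y @ (x_next - y) + 0.5 * L_next * np.dot(x_next - y, x_next - y)
    return f_x_next <= quad + 2.0 * delta_k

# On success the stepsize 1/L is multiplied by 3/2; on failure it is divided by 2.
# Two of the baseline inexactness schedules compared in this experiment:
def constant(k, d_bar):
    return d_bar                      # constant schedule

def poly3(k, d_bar):
    return d_bar / (k + 1) ** 3       # (poly-3) schedule from [23]
```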
Figure 7: (regularization, \(\mu>0\)) — R.O. over Convex Hull, _offline_ with multiple \(\bar{\delta}\)
**Results** We kept track of the primal objective value every 10 outer-iterations, i.e. whenever \(k\,\mathrm{mod}\,10=0\) we computed \(\hat{f}(x_{k})\simeq f(x_{k})\) with precision \(10^{-10}\). After the runs, we turned these primal objective values into primal optimality gaps thanks to our estimation of \(F^{*}\). We can observe in Figures 8, 9 and 10 that our _online_ heuristic _tunable_ inexactness schedule adapts well to the _error-free_ speed of convergence of FGM, symbolized by the value of the main sequence \(\{A_{k}\}_{k\in\mathbb{N}}\).
We recall that its initial growth is of order \(\mathcal{O}(k^{2})\) but that eventually, since \(\mu>0\) in the present experiment, a linear growth rate takes over [12, 20]. It is worth noticing that the (linear) schedule was the best in the low _reference_ inexactness regime. Nevertheless, overall, neither (poly-3) nor (linear) adapts as well as our schedule. Indeed, although comparable to _tunable_ when \(r=\frac{1}{2}\) on Figures 8 (b), 9 (b) and 10 (b), poly-3 does not take advantage of the regularization, whereas linear appears far too conservative on Figures 9 and 10, perhaps in order to preserve as much as possible the asymptotic rate of convergence of the _error-free_ counterpart of FGM. The reference _constant_ schedule for inexactness turns out to be competitive until _error-accumulation_ (see Remark 2) undermines further improvements.
## 5 Conclusion
In this paper we considered a class of iterative algorithms, namely Tunable Oracle Methods (TOM), for which one can take advantage of the combined knowledge of both the computational cost associated with the oracle calls and the impact of the associated inexactness on the convergence. We have shown how to optimally choose the level of inexactness that should be requested at each iteration. Our numerical experiments confirm the superiority of these optimal schedules over the use of constant inexactness, for a given total computational budget, and show that they also compare favorably with existing baselines from the literature. Future work may include a _tight analysis_ of more iterative methods that involve controllable inexactness, e.g. bilevel learning, for which we can also hope that optimal inexactness schedules enhance practical performance. Another direction for future research would be to gain insight into _random inexactness_ rather than the (possibly) adversarial model considered in this paper. In this context, the goal could be to choose the appropriate amount of work so as to drive the distribution of inexactness towards the local needs of the iterative algorithm.
## Acknowledgement(s)
The authors are grateful to Yurii Nesterov and Pierre-Antoine Absil for their advice both in terms of content and presentation of the results.
## Disclosure statement
No potential conflict of interest was reported by the author(s).
## Funding
Guillaume Van Dessel is funded by the UCLouvain university as a teaching assistant.
Figure 10: (regularization, \(\mu>0\)) — R.O. over Convex Hull, _online_ with high accuracy \(\bar{\delta}\) |
2310.00160 | Self-Specialization: Uncovering Latent Expertise within Large Language
Models | Recent works have demonstrated the effectiveness of self-alignment in which a
large language model is aligned to follow general instructions using
instructional data generated from the model itself starting from a handful of
human-written seeds. Instead of general alignment, in this work, we focus on
self-alignment for expert domain specialization (e.g., biomedicine, finance).
As a preliminary, we quantitively show the marginal effect that generic
instruction-following training has on downstream expert domains' performance.
To remedy this, we propose self-specialization - allowing for effective model
specialization while achieving cross-task generalization by leveraging only a
few labeled seeds. Self-specialization offers a data- and parameter-efficient
way of "carving out" an expert model out of a generalist pre-trained LLM.
Exploring a variety of popular open large models as a base for specialization,
our experimental results in both biomedical and financial domains show that our
self-specialized models outperform their base models by a large margin, and
even larger models that are generally instruction-tuned or that have been
adapted to the target domain by other means. | Junmo Kang, Hongyin Luo, Yada Zhu, Jacob Hansen, James Glass, David Cox, Alan Ritter, Rogerio Feris, Leonid Karlinsky | 2023-09-29T21:53:46Z | http://arxiv.org/abs/2310.00160v2 | # Self-Specialization: Uncovering Latent Expertise within Large Language Models
###### Abstract
Recent works have demonstrated the effectiveness of self-alignment in which a large language model is, by itself, aligned to follow general instructions through the automatic generation of instructional data using a handful of human-written seeds. Instead of general alignment, in this work, we focus on self-alignment for expert domain specialization (e.g., biomedicine), discovering it to be very effective for improving zero-shot and few-shot performance in target domains of interest. As a preliminary, we first present the benchmark results of existing aligned models within a specialized domain, which reveals the marginal effect that "generic" instruction-following training has on downstream expert domains' performance. To remedy this, we explore **self-specialization** that leverages domain-specific unlabelled data and a few labeled seeds for the self-alignment process. When augmented with retrieval to reduce hallucination and enhance concurrency of the alignment, self-specialization offers an effective (and efficient) way of "carving out" an expert model out of a "generalist" pre-trained LLM where different domains of expertise are originally combined in a form of "superposition". Our experimental results on a biomedical domain show that our self-specialized model (30B) outperforms its base model, MPT-30B, by a large margin and even surpasses larger popular models based on LLaMA-65B, highlighting its potential and practicality for specialization, especially considering its efficiency in terms of data and parameters.
## 1 Introduction
Instruction-tuning (Ouyang et al., 2022; Wei et al., 2022; Mishra et al., 2022; Su et al., 2022) of large language models (LLMs) offers a mechanism to adeptly guide models using specific directives, thereby enhancing their versatility across diverse tasks. However, as promising as this concept might seem, it poses an inherent challenge: the substantial need for quality data (Chung et al., 2022; Wan et al., 2023; Kopf et al., 2023). The very premise of instruction-tuning hinges on the availability of well-crafted, human-annotated data, a resource that is both time-consuming and challenging to scale efficiently (Honovich et al., 2022; Kang et al., 2023).
Furthermore, acquiring domain-specific data is even more demanding as it requires the involvement of domain experts, which is often more expensive (Bai et al., 2021; Wang et al., 2023).
Emerging as a promising solution to this data-intensive challenge is the approach of self-alignment (Wang et al., 2022; Sun et al., 2023). By allowing LLMs to automatically generate instructional
Figure 1: Self-specialization concept. Expertise in various domains is mixed and latent within base LLMs. Target domain expertise is carved out through self-specialization.
data from a handful of human-authored seeds, self-alignment presents a means to harness the internal general knowledge of these models (which results from extensive pre-training on the internet corpora (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020)) without extensive human annotations.
However, some pertinent questions remain: (i) How effective are the original or the self-aligned models when applied to more niche domains, such as biomedicine? (ii) Given that neither the initial pre-training nor subsequent self-alignment tuning is domain-specific, we hypothesize that the model expertise in different domains resides in "superposition" in the model's parameters and hidden states. In other words, parametric knowledge in LLMs represents a mixture of semantics and knowledge of various domains. May this hinder each individual expert-domain performance? These inquiries become even more practical when considering the ever-growing demands for specialized models that can cater to domain-specific nuances. In our preliminary study, we find that existing models such as Alpaca (Taori et al., 2023) and Dromedary (Sun et al., 2023), although aligned, exhibit only a modest degree of improvement (compared to source models before alignment) within the specialized domains. These observations underline the need for innovative approaches that can harness the depth of domain-specific knowledge existing in the base models, to ensure the self-generated instruction-tuning data remains both contextually appropriate and accurate.
In this work, we introduce and explore the new concept of **self-specialization** (Fig. 1). Drawing inspiration from the foundational principles of self-alignment, self-specialization goes a step further by incorporating domain-specific seeds and external knowledge. Our approach integrates specialized seed instructions and is further bolstered by a retrieval component. Our goal is to guide models beyond generic alignment, directing them to generate data that are not just contextually fitting for a specialized domain but also maintain high degrees of accuracy.
Through rigorous experiments, we evaluate our self-specialized models within the biomedical domain. Surprisingly, despite the apparent simplicity of our approach, our results present a compelling case for self-specialization. The experimental results demonstrate that our self-specialized model (30B) using this approach outperforms its base model, MPT-30B (Team, 2023), by a large margin and even surpasses larger models (based on LLaMA-65B (Touvron et al., 2023)), including the ones improved through self-alignment by leading methods (Taori et al., 2023; Sun et al., 2023). Moreover, we show that effective self-alignment is possible with parameter-efficient finetuning techniques (PEFT (Mangrulkar et al., 2022), QLoRA (Dettmers et al., 2023) in our case). This opens an exciting opportunity for memory-efficient multi-specialized models, where we envision that the base LLM is loaded once and is "surrounded" by a set of low-memory-footprint LoRA modules, delivering all the specializations of interest (and potentially their mix (Huang et al., 2023)) at once and in-memory without re-loading (Fig. 1).
Consequently, the contributions of our work encompass:
* We conduct comprehensive benchmarking of general-purpose aligned models within a specialized domain, underscoring the intrinsic challenge of encoding vast general knowledge into a finite set of parameters, motivating the need for specialization.
* This work explores a lightweight solution, self-specialization that enables us to uncover latent expertise within LLMs with minimal supervision.
* Our experiments in a biomedical domain demonstrate the remarkable potential of self-specialization, showcasing its efficiency and practicality. The promising results, achieved with this simple scheme, open new avenues for future work in this realm.
## 2 Preliminaries: Benchmarking Existing Aligned Models
To motivate our exploration of self-specialization, we first begin by addressing a fundamental question: How well do generally aligned models perform on specialized domains? While existing popular ones such as Alpaca and Dromedary have demonstrated the generalizability of following instructions in a general scenario, it remains unclear whether general alignment can also elicit expertise for a certain domain.
Investigating this, we assess the capabilities of Alpaca (Taori et al., 2023) and Dromedary (Sun et al., 2023) against a base model, LLaMA (Touvron et al., 2023), on a collection of benchmarks within the biomedical domain. Ensuring an unbiased comparison, all models are equally fixed with 65B parameters and share the same architecture (LLaMA). We select 10 different biomedical NLP
datasets, covering a diverse set of tasks to ensure a comprehensive mix of content and also to look at cross-task generalization, the core of instruction tuning. Few-shot (k=5) settings are examined where demonstrations are tailored to each task. Note that, in fact, Alpaca is not "self"-aligned in that it uses datasets generated by GPT-3.5 (Ouyang et al., 2022) following the self-instruct process (Wang et al., 2022), unlike Dromedary which uses the same base model. Nonetheless, we also benchmark it to serve as a sort of upper bound. The results are shown in Figure 2. Details of the setups are described in Section 4.1.
When benchmarked on the expert (biomedical) domain, we find that both Alpaca and Dromedary have only a slight (1.1 - 2.5%) advantage over LLaMA. While they are aligned to handle a broad set of instructions, they do not seem to effectively improve their specialized domain expertise; intuitively trading their expertise for generality given finite parameters. In light of these findings, it becomes evident that for cases where we are only interested in target domains for all our downstream tasks, there exists a substantial potential for enhancement within domain-specific tasks. This underscores the need for a model or approach, like self-specialization, that could potentially uncover specialization while maintaining cross-task generalizability with minimal supervision. Moreover, if the approach is parameter efficient, it constitutes a considerable advantage, as many specialized models can be efficiently served in memory, sharing their deployment on the same machines / GPUs.
## 3 Self-Specialization
In this section, we describe our method called self-specialization illustrated in Figure 3. Starting with a select set of human-crafted, domain-specific seed instructions, the base model evolves to generate synthetic instructions and corresponding input contexts intrinsic to the domain. As we progress to the response generation phase, we enhance responses with domain-centric knowledge accessed via a retrieval mechanism, retrieving instruction-related knowledge from a domain-specific, and yet unlabeled, source. Upon generation, the specialization triggering stage follows where the base model is aligned, calibrating its expertise to resonate with the target domain. Optionally, this process can be reiterated with the aligned model as a better generator.
### Seed instructions
At the outset, we harness a curated set of seed instructions \(S\), consisting of triplets \((i,c,y)\), each comprising an instruction \(i\), a context \(c\) (e.g., a passage), and a response \(y\). Considering real-world scenarios where domain-specific data are relatively harder to acquire (Bai et al., 2021), we aim to have a very minimal number of seed instructions. For example, we use only 80 seeds for the biomedical domain, sampled from existing biomedical NLP benchmarks (Parmar et al., 2022) (detailed in Section 4.1). These seeds encapsulate the "spirit" of the fundamental concepts and intricacies of the targeted domain and yet are clearly insufficient to encompass the entirety of domain knowledge. We conjecture (and surprisingly demonstrate) that the domain knowledge already resides inside sufficiently large, generally pre-trained models, yet in a state of superposition, which does not, however, prevent this knowledge from being uncovered, e.g. by means of the proposed approach. The instruction seeds are pivotal, acting as the launching pad for the model's journey into specialization.
### Domain-Specific Instruction Generation
With the seed instructions in place, we move to generating domain-specific instructions. While these new instructions are grounded in the initial seeds, they grow to cover a comprehensive scope of the domain. Specifically, a base model \(M_{base}\), such as MPT-30B (Team, 2023) which is "large enough", is prompted to produce new combinations of \((i,c)\) given a handful of seed demonstrations which are
Figure 2: Benchmarking results of a base LLaMA-65B and its aligned variants, Alpaca-65B and Dromedary-65B, on a biomedical domain across 10 datasets, covering various NLP tasks such as question answering, information extraction, classification, etc. 5-shot results are presented.
randomly sampled from the initial seeds pool. The newly formed instructions \(i\), coupled with their corresponding input contexts \(c\), shape a blueprint that the model utilizes in the following stages.
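A schematic sketch of this generation loop is shown below. It only illustrates the mechanics described above; the prompt wording, the number of demonstrations, and the `sample_completion` routine (standing in for decoding with \(M_{base}\)) are illustrative assumptions rather than the exact setup.

```python
# Schematic sketch of domain-specific instruction generation (Sec. 3.2).
# `sample_completion(prompt)` is a placeholder for decoding with the base model M_base.
import random

def generate_instruction_context(seeds, sample_completion, num_demos=3):
    demos = random.sample(seeds, num_demos)        # seed triplets (instruction, context, response)
    prompt = "Create a new biomedical task as an instruction paired with an input context.\n\n"
    for inst, ctx, _ in demos:
        prompt += f"Instruction: {inst}\nInput: {ctx}\n\n"
    prompt += "Instruction:"                       # let M_base continue the pattern
    completion = sample_completion(prompt)
    # naive parse: text before "Input:" -> instruction, the rest -> context
    inst_text, _, ctx_text = completion.partition("Input:")
    return inst_text.strip(), ctx_text.strip()
```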
### Domain-Specific Response Generation
Once the synthetic domain-specific instructions \(\{i\}\) and their corresponding contexts \(\{c\}\) are in hand, our approach navigates to the response generation phase. It is certainly imperative for the response not only to be correct but also to be well-aligned with the target domain. We posit that incorporating external domain-relevant knowledge would be beneficial for this case, inspired by Frisoni et al. (2022). Therefore, we let the model \(M_{base}\) also leverage a retrieval component, to infuse its responses with external, domain-relevant knowledge retrieved from an unlabeled domain-specific collection of documents. Specifically, forming the query \(x\) as a concatenation of \(i\) and \(c\), a retriever \(M_{ret}\) fetches top-\(k\) relevant documents \(d_{1:k}\).
\[d_{1:k}=M_{ret}(x=i\oplus c) \tag{1}\]
Then, each document \(d_{j}\) is independently paired with the query \(x\) to form a prompt to \(M_{base}\), and the final domain-specific responses \(y\) are produced from the final distribution computed by marginalizing over the probabilities of each of these \(k\)-combinations at each generation step. The objective of this generation can be thus formally represented as:
\[p(y|x)=\prod_{i}^{t}\sum_{j}^{k}p_{ret}(d_{j}|x;M_{ret})\;p_{lm}(y_{i}|x,d_{j},y_{1:i-1};M_{base}) \tag{2}\]
where \(p_{ret}\) is a relevance score (similarity) from a retriever module. By integrating such domain-specific information, this step encourages the generated target responses to be more nuanced and domain-specific leading to improvements (Section 4.3).
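The decoding procedure implied by Eqs. (1)-(2) can be sketched as follows. The retriever (`retrieve`, e.g. BM25 as in Section 4.1) and the base model's next-token distribution (`next_token_probs`) are placeholders, and greedy decoding is used purely for illustration.

```python
# Sketch of retrieval-augmented, marginalized response generation (Eqs. (1)-(2)).
import numpy as np

def generate_response(query, retrieve, next_token_probs, k=5, max_tokens=256, eos_id=0):
    """query = instruction concatenated with its input context (Eq. (1))."""
    docs, scores = zip(*retrieve(query, k))         # top-k documents d_1, ..., d_k
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()                                  # normalized relevance, i.e. p_ret(d_j | x)
    generated = []
    for _ in range(max_tokens):
        # Eq. (2): marginalize the next-token distribution over the k retrieved documents
        mixed = sum(wj * next_token_probs(dj, query, generated)
                    for wj, dj in zip(w, docs))
        tok = int(np.argmax(mixed))                  # greedy choice for simplicity
        if tok == eos_id:
            break
        generated.append(tok)
    return generated
```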
### Triggering Specialization
Upon establishing a robust set of domain-specific responses, the model enters the specialization phase. Here, it undergoes tuning using the synthetic instructional data generated by itself, adjusting
Figure 3: An overview of **Self-Specialization**. (a) We start with a small set of human-authored domain-specific seed instructions. The base model is harnessed to craft synthetic instructions and corresponding input contexts tailored to that particular domain. Subsequently, during the response generation phase, responses are curated given the generated instruction and input pairs, enhanced by infusing domain-relevant knowledge obtained via a retrieval component. The culmination of the process is the specialization phase, where the base model undergoes specialization through tuning (w/ QLoRA) to enhance its expertise in the target domain. (b) Conceptually speaking, this process can be described as uncovering latent expertise within LLMs.
its internal parameters (i.e., QLoRA (Dettmers et al., 2023)) to cater specifically to the domain's nuances. This step is crucial, marking the model's transformation from being generally competent to being domain-specialized while preserving cross-task generalizability, thus resulting in the final self-aligned domain-specialized model: \(M_{aligned}\).
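A minimal sketch of this specialization step with the Hugging Face `transformers`/`peft` stack is given below; the LoRA rank, dropout and target-module names are illustrative assumptions, not the exact configuration reported here.

```python
# Minimal QLoRA sketch (illustrative hyperparameters, not the reported configuration).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained("mosaicml/mpt-30b",
                                             quantization_config=bnb,
                                             device_map="auto",
                                             trust_remote_code=True)
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["Wqkv", "out_proj"],  # illustrative; names are model-dependent
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)  # only the small LoRA adapters are trainable
# ...then fine-tune on the self-generated (instruction, context, response) examples.
```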
### Iterative Self-Specialization
In the spirit of continuous improvement, our approach optionally undergoes iterative self-specialization. It revisits the generation process of instructions and responses with the better-aligned model \(M_{aligned}\). Here, to maximize the effectiveness of this process, we re-visit the idea of a contrastive decoding scheme (Li et al., 2023). The original idea of Li et al. (2023) is that a larger model can be contrasted with a smaller model in output distributions to improve the generation quality of the larger model. In our case, \(M_{aligned}\) is adopted for the stronger model, whereas \(M_{base}\) is considered as the weaker model, which can be represented as:
\[p(y|x)=\prod_{i}^{t}p_{lm}(y_{i}|x,y_{1:i-1};M_{aligned})-p_{lm}(y_{i}|x,y_ {1:i-1};M_{base}) \tag{3}\]
This process has the potential to refine the model's domain expertise with each iteration (taking the \(M_{aligned}\) of the previous iteration as the base each time), progressively improving its responses.
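Read literally at the level of next-token distributions, the contrast in Eq. (3) amounts to scoring candidate tokens by the gap between the specialized and the base model. The sketch below is only illustrative; Li et al. (2023) formulate contrastive decoding with log-probabilities and an additional plausibility constraint.

```python
# Illustrative token-level contrast in the spirit of Eq. (3).
import numpy as np

def contrastive_next_token(p_aligned, p_base):
    """Pick the token whose probability gap between M_aligned and M_base is largest."""
    return int(np.argmax(p_aligned - p_base))
```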
## 4 Experiments
### Experimental Setups
**Datasets.** For evaluation, we employ various biomedical NLP datasets, most of which are curated in BigBio (Fries et al., 2022). A total of 10 different datasets are adopted to encompass a wide range of NLP tasks: Question Answering (QA), Named Entity Recognition (NER), Relation Extraction (RE), Sentiment Analysis (SA), and Document Classification (DC). Following prior work (Parmar et al., 2022), all datasets are transformed into instructional data. Specifically, we employ datasets including BioASQ-8b (Factoid, List, Yesno) (Nenitidis et al., 2020), PubMedQA-Long (Jin et al., 2019), AnatEM (Pyysalo & Ananiadou, 2013), BioNLP13CG (Pyysalo et al., 2013), NCBI (Dogan et al., 2014), DDI (Herrero-Zazo et al., 2013), Medical Drugs (Khan, 2019), and HoC (Baker et al., 2015). Further details on each of these datasets are elaborated in Appendix A.
**Models.** We use the base MPT model (Team, 2023), a powerful open-source foundation model, especially due to its 8k context length. Inspired by the success of previous work (Sun et al., 2023) showing that a large model size has a significant effect, we specifically adopt the 30B variant for our main experiments. For the retriever, we use the simple yet effective BM25 (Robertson et al., 1994), in order to support a practical scenario where sufficient human-labeled data for training a more sophisticated retriever is not available. In addition to MPT-30B, we evaluate Falcon-40B (Almazrouei et al., 2023), another strong open-source model, to further validate the general applicability of self-specialization. For benchmarking general-purpose aligned models, we evaluate Alpaca-65B (Taori et al., 2023) and Dromedary-65B (Sun et al., 2023), which are both built on LLaMA (Touvron et al., 2023).
**Metrics.** In our study, all tasks are approached as a unified text generation problem, aiming to assess the capabilities of generative models. In alignment with an established convention (Parmar et al., 2022), we adopt \(F_{1}\)-Score and also Rouge-L (Lin, 2004) as our evaluation metrics.
**Implementation Details.** For domain-specific seed data, we use data sampled from BoX (Parmar et al., 2022), which encompasses 32 tasks, up to 5 instances for each dataset, resulting in a compact yet representative seed data of 80 samples in total. These seeds are also used as demonstrations in a prompt for inference. For domain-specific external corpus, we leverage PubMed\({}^{1}\) preprocessed in (Phan et al., 2021), which contains \(\approx\)30M abstracts. We generate 5K synthetic instructional data through the self-specialization process. Being equipped with QLoRA (Dettmers et al., 2023) and 4-bit quantization, the model is trained using a simple Alpaca-style template (Taori et al., 2023) on a single A100, taking only a few hours for 3 epochs, resulting in a light-weight (parameter efficient) specialization module that can be attached to the base model inducing its specialization upon request.
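For reference, an Alpaca-style prompt template (Taori et al., 2023) has the shape sketched below; minor wording differences from the exact template used are possible.

```python
# Standard Alpaca-style template; the exact wording used in our runs may differ slightly.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides "
    "further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n### Response:\n{response}"
)

def format_example(instruction, context, response=""):
    return ALPACA_TEMPLATE.format(instruction=instruction, context=context, response=response)
```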
### Results
In Table 1, we present the comparative results of our self-specialized MPT-30B model against its base counterpart across 10 distinct biomedical NLP tasks. The evaluation is conducted using various _k_-shot prompting to analyze the impact of different numbers of in-context examples on model performance with/without specialization.
Our findings reveal that the self-specialized MPT-30B model exhibits remarkable progress in the majority of tasks across all configurations, yielding a surprisingly substantial (up to 18 points) improvement in average scores. Specifically, the scores (\(F_{1}\)) witness a rise from 25.15 to 36.63 in a zero-shot setting, from 26.65 to 42.29 in a 1-shot setting, and from 30.18 to 48.41 in a 5-shot setting, respectively. Importantly, the effectiveness of self-specialization becomes evident as it uncovers the latent expertise encoded within the "generalist" base model, showcasing the potential of leveraging inherent knowledge for enhanced domain-specific performance. These advancements underscore the self-specialized model's versatility and adaptability in addressing a wide array of tasks present in a specialized domain.
**How does it compare against larger/generally aligned models?** In Figure 4, we compare our self-specialized MPT-30B model with 65B models, including LLaMA-65B and its variants aligned on general instructions. Interestingly, the results reveal that our model, despite its \(\approx\)2.2x smaller size, surpasses all 65B models. This not only highlights the expert-domain performance that "generalist" models trade away when encoding a vast array of general knowledge into a finite set of
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \multicolumn{2}{c}{\(F_{1}\)-Score} & \multicolumn{2}{c}{_k=0_} & \multicolumn{2}{c}{_k=1_} & \multicolumn{2}{c}{_k=5_} \\ \cline{3-8}
**Task** & **Dataset** & **Base** & **Self-Specialized** & **Base** & **Self-Specialized** & **Base** & **Self-Specialized** \\ \hline \hline \multirow{4}{*}{QA} & BioASQ-Factoid & 30.90 & **37.35** & 47.56 & **55.04** & 51.96 & **57.61** \\ & BioASQ-List & 46.06 & **46.99** & **47.57** & 44.55 & 35.09 & **42.17** \\ & BioASQ-Yesno & 21.20 & **85.27** & 10.80 & **94.00** & 8.80 & **95.20** \\ & PubMedQA & 11.98 & **24.16** & **28.89** & 24.87 & **31.69** & 31.31 \\ \hline \multirow{4}{*}{NER} & Average & 27.54 & **48.44** & 33.71 & **54.62** & 31.89 & **56.57** \\ & AnatEM & 9.63 & **11.99** & 7.57 & **15.76** & 6.59 & **21.25** \\ & BioNLP13CG & 24.79 & **24.93** & 21.76 & **31.80** & 26.03 & **41.16** \\ & NCBI & **18.46** & 14.35 & 27.88 & **43.11** & 17.99 & **46.54** \\ \cline{2-8} & Average & **17.63** & 17.09 & 19.07 & **30.22** & 16.87 & **36.32** \\ \hline RE & DDI & **51.00** & 49.40 & 49.20 & **51.60** & 49.38 & **53.40** \\ \hline SA & Medical Drugs & 35.00 & **65.80** & 11.40 & **54.60** & 11.40 & **32.80** \\ \hline DC & HoC & 2.44 & **6.01** & **13.91** & 7.61 & **62.84** & 62.65 \\ \hline \multicolumn{2}{c}{Average} & 25.15 & **36.63** & 26.65 & **42.29** & 30.18 & **48.41** \\ \hline \hline \multicolumn{2}{c}{
\begin{tabular}{c} Rouge-L \\ **Task** \\ \end{tabular} } & \multicolumn{2}{c}{_k=0_} & \multicolumn{2}{c}{_k=1_} & \multicolumn{2}{c}{_k=5_} \\ \cline{2-8} & **Dataset** & **Base** & **Self-Specialized** & **Base** & **Self-Specialized** & **Base** & **Self-Specialized** \\ \hline \hline \multirow{4}{*}{QA} & BioASQ-Factoid & 30.70 & **37.31** & 47.35 & **54.71** & 51.81 & **57.48** \\ & BioASQ-List & 41.07 & **40.65** & **42.38** & 38.50 & 30.40 & **36.24** \\ & BioASQ-Yesno & 21.20 & **85.27** & 10.80 & **94.00** & 8.80 & **95.20** \\ & PubMedQA & 9.15 & **18.88** & **22.78** & 18.52 & 24.56 & **24.77** \\ \cline{2-8} & Average & 25.53 & **45.53** & 30.83 & **51.43** & 28.89 & **53.42** \\ \hline \multirow{4}{*}{NER} & AnatEM & 8.65 & **10.69** & 6.67 & **13.83** & 6.07 & **19.24** \\ & BioNLP13CG & **20.41** & 20.34 & 19.02 & **27.54** & 22.53 & **35.07** \\ & NCBI & **17.94** & 13.75 & 25.22 & **39.27** & 16.60 & **41.55** \\ \cline{1-1} \cline{2-8} & Average & **15.86** & 14.93 & 16.97 & **26.88** & 15.07 & **31.95** \\ \hline RE & DDI & **51.00** & 49.40 & 49.20 & **51.60** & 49.38 & **53.40** \\ \hline SA & Medical Drugs & 35.00 & **65.80** & 11.40 & **54.60** & 11.40 & **32.80** \\ \hline DC & HoC & 2.42 & **5.83** & **13.88** & 7.61 & **62.84** & 62.61 \\ \hline \multicolumn{2}{c}{Average} & 23.75 & **34.79** & 24.87 & **40.02** & 28.44 & **45.84** \\ \hline \end{tabular}
\end{table}
Table 1: Comparative results of the base LM (MPT-30B) and self-specialized one (30B) on a biomedical domain. Performances are reported using \(F_{1}\)-Score on the top and Rouge-L on the bottom. \(k\) indicates the number of demonstrations in a prompt.
parameters but also underscores the effectiveness of our (data- and parameter-efficient) approach to model specialization. Moreover, the efficiency and practicality of our simple self-specialization are further reinforced by the fact that the model is trained using only 5K\({}^{2}\) self-produced instruction examples from a minimal set of (only 80) seeds. This training process, facilitated by the incorporation of QLoRA, which adds only 0.14% trainable parameters to an otherwise frozen model, takes only a few hours on a single (A100 80GB) GPU.
Footnote 2: 52K for Alpaca and 360K for Dromedary
### Ablations & Analyses
**Effect of external knowledge.** We investigate the influence of incorporating a domain-specific corpus like PubMed in the response generation phase, which enriches the model with pertinent biomedical information. As observed in the "1st Iter." section of Table 2, there is a notable variation in performance depending on the number of documents incorporated. Our findings indicate that the use of the top-5 documents yields the best results. Interestingly, incorporating only the top-1 document appears to degrade the performance, a phenomenon we conjecture is due to the noise originating from an imperfect retriever. Conversely, employing the top-5 documents with probability marginalization (eq. 2) seems to mitigate this issue, enabling the model to exploit informative knowledge.
**Effect of iterative self-specialization.** In Section 3.5, we discussed the potential of employing an iterative process by leveraging the self-specialized model instead of the base model throughout the generation process. As evidenced in Table 2, initiating a "2nd Iter." of self-specialization results in further performance enhancement. Additionally, we consider two scenarios differentiated by whether the same instruction set is used to train the self-specialized model in the first and the second iterations. Our findings show that employing a distinct set of instructions for the second iteration of response generation is more effective. This could potentially be attributed to reusing a confined generated instruction set, which might limit the model's generalization capabilities.
**Can self-specialization also be applied to a different model (or model size)?** To demonstrate the applicability of self-specialization beyond the MPT model, we apply it to another open-source model, Falcon-40B. Figure 5 shows the results on five different datasets. Notably, we observe a trend of improvement analogous to that of MPT-30B when self-specialization is applied to Falcon-40B, thereby substantiating that the technique is not exclusive to the MPT model. Surprisingly, despite
\begin{table}
\begin{tabular}{l l c c} \hline \hline & **Model** & \(F_{1}\)-Score & Rouge-L \\ \hline \hline
2nd Iter. & Self-Specialized MPT-30B & **36.63** & **34.79** \\ & w/ Same Instruction Set & 35.82 & 34.20 \\ \hline \multirow{3}{*}{1st Iter.} & Top-5 Docs & 34.57 & 32.88 \\ & Top-1 Docs & 29.65 & 27.90 \\ \cline{1-1} & No Docs & 33.72 & 32.14 \\ \hline \multicolumn{3}{c}{Base MPT-30B} & 25.15 & 23.75 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation results on iterative self-specialization and on the contribution of retrieval from unlabeled domain-specific sources during self-specialization. Zero-shot (\(k=0\)) prompting is used for inference. Average performance over 10 tasks is reported.
Figure 4: Results of our self-specialized model based on MPT-30B compared to 65B models. 5-shot results using \(F_{1}\)-Score are presented.
its larger parameter size, the Self-Specialized Falcon-40B underperforms its MPT-30B counterpart, while at the same time significantly improving upon the base Falcon-40B.
**How is the quality of the synthetic self-specialization data?** To quantitatively assess the quality of the data generated through self-specialization, we train a model using 3.7K instances of available human-labeled data in a multi-task learning setting and compare its performance to that of a model trained on 5K instances of generated synthetic self-specialization data, as depicted in Figure 5. Although the model trained on supervised data exhibits higher performance as expected, the performance gap between the two models is not large, further underscoring the effectiveness of the proposed self-specialization. In Figure 6, we showcase a qualitative visualization that analyzes the synthetic data generated through self-specialization. Additionally, some examples are provided in Tables 4 & 5 in the Appendix, offering insights into the quality of the self-generated specialization data.
## 5 Related Work
The goal of instruction-tuning and alignment of large language models (LLMs) is to achieve cross-task generalization or to align with human preferences. This can be accomplished by either training LLMs directly with human-labeled data (Ouyang et al., 2022; Wei et al., 2022; Mishra et al., 2022; Wang et al., 2022b) or data generated by larger models (i.e., distillation) (Taori et al., 2023; Chiang et al., 2023). Recent studies have shown that LLMs are self-instructors. Wang et al. (2022a) showed that with in-context prompts, GPT-3 (Brown et al., 2020) can generate high-quality instruction-response pairs for its own alignment. Sun et al. (2023) further suggests that using principles can minimize human supervision while covering a broad spectrum of scenarios with the open-source model, LLaMA-65B (Touvron et al., 2023). While enhancing general alignment, according to our presented evidence, these approaches are unlikely to induce specialization in expert domains, leaving different domain expertise in "superposition" inside the model. To the best of our knowledge, we are
Figure 5: 5-shot results based on Falcon-40B and MPT-30B, showcasing the self-specialization gains. “Multi-Task Supervised” is a model trained on a large amount of human-labeled data in a multi-task setting and is provided _for reference_ as a (non-data-efficient, expensive) _upper bound_.
Figure 6: Statistics for instructions (left) and input context (right) generated through self-specialization. On the left, the inner circle illustrates prevalent verbs in the instructions, with the outer ring revealing associated entities. Conversely, the right side showcases the input context, highlighting the incorporation of diverse biomedical keywords. Best viewed in zoom and color.
the first to show the potential of such techniques for expert domain specialization through self-alignment, effectively "uncovering" a domain expert out of the model in a parameter- and data-efficient manner.
Recent studies highlight the benefits of employing instructions in different adaptation scenarios (Parmar et al., 2022). InstructOR (Su et al., 2022) illustrated the adaptability of instruction-based text embeddings to various tasks and domains, while InstrucTE (Bai et al., 2023) demonstrated that incorporating instructions with a schema can yield robust results for table extraction across diverse domains. However, these require the use of costly human labels or extensively tuned large models (e.g., 175B). Self-training has also been explored for different adaptation scenarios. For domain knowledge adaptation, Shakeri et al. (2020) and Luo et al. (2022) proposed constructing synthetic data by generating in-domain question-answering data, but these data generators are trained with more than 80k human-curated QA pairs and do not involve instructional ones that have the potential for cross-task generalization. Instruction-tuning has been shown to adapt pre-trained LLMs to different modalities, including vision (Liu et al., 2023), audio (Gong et al., 2023), and programs (Roziere et al., 2023), and enables the use of APIs (Schick et al., 2023) and search engines (Luo et al., 2023). Unlike these works, our work focuses on uncovering target domain expertise latent within LLMs while promoting cross-task generalization with minimal supervision.
## 6 Discussion
While our study provides encouraging insights into the capabilities of self-specialization, this is an initial step in opening up new opportunities. We recognize that there is much to learn and explore in this exciting direction as discussed below. The promising results, achieved even with the proposed simple scheme, suggest that further refinement of this approach and exploration across diverse specialized domains could be pivotal, contributing to the ongoing efforts to uncover the embedded expertise of (generalist) large language models. In what follows, we discuss a few noteworthy considerations and potential directions.
**On the scale of a base model and data.** The study focuses on employing a large base model size (i.e., 30B) for the initial exploration, motivated by the success of preceding research (Wang et al., 2022; Sun et al., 2023) that employed general self-alignment with even larger models (e.g., 65B, 175B). Nonetheless, we believe exploring the potential of smaller-scale models (e.g., 7B) as a base for self-specialization presents an additional intriguing avenue for future research, possibly enhancing the practical feasibility. Regarding the quantity of synthetic data generated through self-specialization, we constrained it to 5K for the sake of simplicity and efficiency (Zhou et al., 2023). While this led us to highlight the data efficiency of our model, future studies could investigate the extent to which increasing the data could further enhance the expertise.
**Additional domains.** While our study primarily focuses on the biomedical domain, the applicability and effectiveness of self-specialization in other specialized domains, such as sports, remain an open avenue for exploration. As an initial effort, we present a case study of a self-specialized model on sports in Table 6 & 7, along with the visualization of generated data in Figure 7. We hope that this could offer insights into the versatility of self-specialization, although the model is not yet perfect, and thorough evaluations are required in future work. Different domains inherently pose unique requirements and nuances, and understanding how self-specialization adapts to these variations is a valuable direction for future work.
**Mixture-of-self-aligned-experts.** The potential demonstrated by self-specialization in our study paves the way for another fascinating research direction: the combination of distinct self-specialized models. This would involve integrating the expertise of various self-specialized models, aiming to create a more comprehensive and adaptable solution. Investigating the synergies and challenges in combining different specialized models (e.g., lightweight specialized LoRA layers) can lead to the development of a model with enhanced concurrent proficiency across a broader spectrum of specialized domains (at once), offering a holistic approach to maximizing the extraction and utilization of the latent expertise of LLMs.
## 7 Conclusion
Our exploration into self-specialization, drawing inspiration from the recent achievements in general-purpose self-alignment (Wang et al., 2022; Sun et al., 2023), aimed to elucidate the latent expertise within large language models (LLMs) with very limited human supervision. This scheme,
which incorporates a few domain-specific seeds and external knowledge into the synthetic data generation process, demonstrated promising results in a specialized domain. The self-specialized model (30B) exhibited remarkable performance, outshining its base model, MPT-30B, and even surpassing larger existing generally aligned models (65B). This illuminates the intrinsic challenges of encoding vast general knowledge into limited parameters and underscores the efficiency of self-specialization. Remarkably, the model's efficient training, marked by minimal data usage and the integration of QLoRA (Dettmers et al., 2023), adds another layer to its practicality in terms of parameter and data efficiency. These findings signify a promising advancement in the field, suggesting a pathway for leveraging inherent domain-specific expertise in LLMs and offering a large variety of exciting opportunities for future work in self-specialization.
|
2309.09999 | A renewal approach to prove the Four Color Theorem unplugged, Part II:
R/G/B Kempe chains in an extremum non-4-colorable MPG | This is the second part of three episodes to demonstrate a renewal approach
for proving the Four Color Theorem without checking by a computer. The first
and the third episodes have subtitles: ``RGB-tilings on maximal planar graphs''
and ``Diamond routes, canal lines and $\Sigma$-adjustments,'' where R/G/B stand
for red, green and blue colors to paint on edges and an MPG stands for a
maximal planar graph. We focus on an extremum non-4-colorable MPG $EP$ in the
whole paper. In this second part, we refresh the false proof on $EP$ by Kempe
for the Four Color Theorem. And then using single color tilings or RGB-tilings
on $EP$, we offer a renewal point of view through R/G/B Kempe chains to enhance
our coloring skill, either in vertex-colorings or in edge-colorings. We
discover many fundamental theorems associated with R-/RGB-tilings and
4-colorability; an adventure study on One Piece, which is either an MPG or an
$n$-semi-MPG; many if-and-only-if statements for $EP-\{e\}$ by using Type A or
Type B $e$-diamond and Kempe chains. This work started on May 31, 2018 and was
first announced by the author~\cite{Liu2020} on Jan.\ 22, 2020, when the
pandemic just occurred. | Shu-Chung Liu | 2023-09-17T05:20:05Z | http://arxiv.org/abs/2309.09999v1 | # A renewal approach to prove
###### Abstract.
This is the second part of three episodes to demonstrate a renewal approach for proving the Four Color Theorem without checking by a computer. The first and the third episodes have subtitles: "RGB-tilings on maximal planar graphs" and "Diamond routes, canal lines and \(\Sigma\)-adjustments," where R/G/B stand for red, green and blue colors to paint on edges and an MPG stands for a maximal planar graph. We focus on an extremum non-4-colorable MPG \(EP\) in the whole paper. In this second part, we refresh the false proof on \(EP\) by Kempe for the Four Color Theorem. And then using single color tilings or RGB-tilings on \(EP\), we offer a renewal point of view through R/G/B Kempe chains to enhance our coloring skill, either in vertex-colorings or in edge-colorings. We discover many fundamental theorems associated with R-/RGB-tilings and 4-colorability; an adventure study on One Piece, which is either an MPG or an \(n\)-semi-MPG; many if-and-only-if statements for \(EP-\{e\}\) by using Type A or Type B \(e\)-diamond and Kempe chains. This work started on May 31, 2018 and was first announced by the author [1] on Jan. 22, 2020, when the pandemic just occurred.
Key words and phrases:Four Color Theorem; Kempe chain; triangulation; edge-coloring; RGB-tiling; \(e\)-diamond 2020 Mathematics Subject Classification: Primary 05C10; 05C15
## 9. R/G/B Kempe chains, a renewal point of view
Given \(EP\in e\mathcal{MPGN}4\), there are at least 12 vertices of degree 5. Let \(v_{0}\in V(EP)\) with \(\deg(v_{0})=5\). Kempe's classical proof used this fixed vertex \(v_{0}\) and its five neighbors \(v_{1},v_{2},\ldots,v_{5}\) to perform vertex-color-switching for vertices in two
sets, \(rC\) and \(gC\), where \(rC/gC\) is a red-/green-connected component linked by 2-4/2-3 edges\({}^{1}\). Here we use R/G/B _Kempe chains_ to review the old proof by Kempe and discover some important things that were missed before.
Footnote 1: Precisely \(rC\) and \(gC\) will be denoted by \(rC(v_{2})\) and \(gC(v_{5})\) in Subsection 10.1, because they contain vertex \(v_{2}\) and \(v_{5}\).
Two vertices \(u,v\in rC\) (a red-connected component) must have a red chain (or path) connecting them. A red chain or a red-connected component consists of all 1-3 edges or all 2-4 edges, never mixed. Also, there is usually not just one \(u-v\) red chain but a _cluster_ of red chains; however, we choose the rightmost chain or the leftmost one to demonstrate the main structure of our target graphs.
To represent \(EP\) with \(\deg(v_{0})=5\) on a flat surface in Figure 15, we show the major part of \(EP\) including the five neighbors of \(v_{0}\). Because \(EP\) is extremum, \(EP-\{v_{0}\}\) is 4-colorable (please see Theorem 4.3(b) in Part I of this paper) and \(EP-\{v_{0}\}\) has a 4-coloring function \(f\); but \(f(v_{0})=5\) is inevitable, i.e., a fifth color is forced at \(v_{0}\). That means the five neighbors of \(v_{0}\) must use all 4 different colors. Without loss of generality, we draw the colors on the neighbors as in Figure 15.
In addition, the two graphs in Figure 15 show a 2-4 red path \(K_{r}|_{v_{2}}^{v_{4}}\) and a 2-3 green path \(K_{g}|_{v_{3}}^{v_{5}}\) respectively. Because of \(K_{r}|_{v_{2}}^{v_{4}}\) or \(K_{g}|_{v_{3}}^{v_{5}}\), these two graphs are definitely not \(EP\). In the left graph, the red path \(K_{r}|_{v_{2}}^{v_{4}}\) blocks any 1-3 red line from \(v_{1}\) to
\(v_{3}\) through \(EP-\{v_{0}\}\). Due to the 1-3 disconnection between \(v_{1}\) and \(v_{3}\), we can perform _vertex-color-switching_ on the 1-3 red-connected component containing \(v_{1}\), so that \(f(v_{1})=1\) turns into \(f(v_{1})=3\) without changing the colors of the other four neighbors of \(v_{0}\). Then color 1 is no longer used on the neighbors, so we can set \(f(v_{0})=1\) and a 4-coloring function on \(EP\) comes out; thus \(EP\) would be 4-colorable, a contradiction. Therefore, \(EP\) cannot have \(K_{r}|_{v_{2}}^{v_{4}}\). The argument is the same for \(K_{g}|_{v_{3}}^{v_{5}}\) in the right graph.
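For readers who prefer an algorithmic view, the vertex-color-switching just described is a breadth-first search over the two-colored component; the sketch below is an illustration (not part of the original argument) that swaps a pair of colors on the Kempe chain containing a chosen vertex.

```python
# Kempe-chain vertex-color-switching (VCS): swap colors a and b on the (a,b)-component.
from collections import deque

def kempe_switch(graph, coloring, start, a, b):
    """graph: dict mapping each vertex to its neighbors; coloring: dict vertex -> color."""
    if coloring[start] not in (a, b):
        return coloring
    new_col = dict(coloring)
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        new_col[v] = b if coloring[v] == a else a    # swap a <-> b on the chain
        for u in graph[v]:
            if u not in seen and coloring[u] in (a, b):
                seen.add(u)
                queue.append(u)
    return new_col
```

For instance, switching the 1-3 component containing \(v_{1}\) corresponds to `kempe_switch(G, f, v1, 1, 3)`, after which color 1 becomes free for \(v_{0}\).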
The argument above and Figure 15 show two forbidden red and green chains for any extremum \(EP\). Hence there must be a red _Kempe chain_ connecting \(v_{1}\) and \(v_{3}\), denoted by \(K_{r}|_{v_{1}}^{v_{3}}\) (or \(K_{r}\) for short), and a green _Kempe chain_ connecting \(v_{1}\) and \(v_{4}\), denoted by \(K_{g}|_{v_{1}}^{v_{4}}\) (or \(K_{g}\) for short). See the left graph in Figure 16.
The existence of \(K_{r}|_{v_{1}}^{v_{3}}\) and \(K_{r}|_{v_{2}}^{v_{4}}\) is exclusive, i.e., exactly one of them exists, and so is the existence of \(K_{g}|_{v_{1}}^{v_{4}}\) and \(K_{g}|_{v_{3}}^{v_{5}}\). There are many known reasons and one was given by Kempe. Our reason is due to Lemma 7.2(b) and (c). In particular, the red canal system on \(EP-\{v_{0}\}\) creates a non-crossing matching among the black (green/blue) edges along the outer facet \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\).
Also, there are probably many (a _cluster_ of) such red chains \(K_{r}|_{v_{1}}^{v_{3}}\) and green chains \(K_{g}|_{v_{1}}^{v_{4}}\). We shall choose the red chain \(K_{r}\) closest to \(v_{2}\) (the rightmost) and the green chain \(K_{g}\) closest to \(v_{5}\) (the leftmost). This mandatory but temporary choice concerns the error in Kempe's original proof. Following this choice, or idea, we will claim the _tangling property_ that is important to our renewal approach.
By the same technique that was applied to the two graphs in Figure 15, Kempe used the vertex-color-switching method to get a new coloring with \(f(v_{2})=4\) and
Figure 16. Kempe’s proof and the bug in Kempe’s paper.
\(f(v_{5})=3\) without changing the colors of the other three neighbors of \(v_{0}\), namely \(f(v_{1})=1\), \(f(v_{3})=3\) and \(f(v_{4})=4\). Then Kempe finished the proof by setting \(f(v_{0})=2\). This proof seems perfect when one refers to the left graph in Figure 16; otherwise the referees at that time would not have passed it for publication. Unfortunately, Percy Heawood found the bug in Kempe's paper after 11 years. Briefly, we cannot carry out two vertex-color-switching processes w.r.t. a red-connected component and a green-connected component at the same time. The right two graphs in Figure 16 show what really happens in \(EP\); otherwise it is not a real \(EP\): the two Kempe chains \(K_{r}|_{v_{1}}^{v_{3}}\) and \(K_{g}|_{v_{1}}^{v_{4}}\) cross each other, as shown in the middle graph of Figure 16. Beyond this crossing, once we perform the first vertex-color-switching process on the red-connected component containing \(v_{2}\), which is associated with the red Kempe chain \(K_{r}|_{v_{1}}^{v_{3}}\), to get \(f(v_{2})=4\), immediately the green Kempe chain \(K_{g}|_{v_{1}}^{v_{4}}\) is destroyed and a new green Kempe chain \(K_{g}|_{v_{3}}^{v_{5}}\) shows up; then the second vertex-color-switching process claimed by Kempe cannot be fulfilled. See the right two graphs in Figure 16. We use two double-lines and a double-circle to highlight the change on the pentagon.
Symmetrically, given the middle graph in Figure 16, if we first perform the vertex-color-switching process on the green-connected component containing \(v_{5}\), then the red Kempe chain \(K_{r}|_{v_{1}}^{v_{3}}\) of the middle graph in Figure 16 is destroyed and a new red Kempe chain \(K_{r}|_{v_{2}}^{v_{4}}\) turns up. Again, the second vertex-color-switching process claimed by Kempe cannot be done. We do not provide the corresponding before-and-after graphs for this case.
_Remark 9.1_.: Percy Heawood used the same idea from Kempe's paper but performed only one vertex-color-switching process to prove the Five Color Theorem.
## 10. The tangling property w.r.t. a degree 5 vertex in \(EP\)
In this section we always set \(v_{0}\in V(EP)\) with \(\deg(v_{0})=5\). The existence of the dual Kempe chains and the tangling property w.r.t. \((EP;v_{0})\) are the starting point for transforming Kempe's method into our renewal approach.
**Definition 10.1**.: Let \(EP\in e\mathcal{MPGN}4\) and \(v_{0}\in EP\) with \(\deg(v_{0})=5\). The pair \((K_{r},K_{g})\) of two crucial chains demonstrated in Figure 17, i.e., the two solid red and green curves, is called the _dual Kempe chains_ w.r.t. \((EP;v_{0})\) provided \(\deg(v_{0})=5\). Precisely, we see the dual Kempe chains \((K_{r}|_{v_{1}}^{v_{3}},K_{g}|_{v_{1}}^{v_{4}})\) w.r.t. \((EP;v_{0})\). (Please ignore all dashed lines at this moment.) The subgraph \(EP-\{v_{0}\}\) is a 5-semi-MPG with its pentagon outer facet \(\Omega:=v_{1}\)-\(v_{2}\)-\(\ldots\)-\(v_{5}\)-\(v_{1}\). By \(\Omega\), this \(EP\) is partitioned into two regions: \(\Sigma\) (inside) and \(\Sigma^{\prime}\) (outside) with \(\Sigma\cap\Sigma^{\prime}=\Omega\).
**Definition 10.2**.: Please continue with the setting of Definition 10.1, which is demonstrated by Figure 17 and the right two graphs in Figure 16. Now we define the _tangling property_ that takes effect when we perform vertex-color-switching on \(v_{2}\) (or \(v_{5}\)) _along_ the current \(K_{r}|_{v_{1}}^{v_{3}}\) (or \(K_{g}|_{v_{1}}^{v_{4}}\)). The main content of the tangling property is: after vertex-color-switching, (A) \(K_{g}|_{v_{1}}^{v_{4}}\) (or \(K_{r}|_{v_{1}}^{v_{3}}\) respectively) will be destroyed and (B) a new Kempe chain \(K_{g}|_{v_{3}}^{v_{5}}\) (or \(K_{r}|_{v_{2}}^{v_{4}}\)) will be created. The property guarantees that new dual Kempe chains \((K_{r},K_{g})\) w.r.t. \((EP;v_{0})\) still exist. Please refer to the right two graphs, which show before and after the switching, in Figure 16.
The general setting for \((EP;v_{0})\) and the five neighbors of \(v_{0}\) in Figure 17 is mandatory, where "general" means symmetry of vertex colors \(1/2/3/4\) as well as
Figure 17. The dual Kempe chains; ignoring all dashed lines first.
edge colors R/G/B and any rotation of the pentagon outer facet \(\Omega\). Here we list the key properties for \((EP;v_{0})\) that we observed in the last section:
1. \(EP-\{v_{0}\}\) is 4-colorable; so any 4-coloring function on \(EP-\{v_{0}\}\) must assign four different colors to \(v_{1},\ldots,v_{5}\). (By Theorem 4.3(b))
2. Due to (1), the edge coloring on \(\Omega\) must be \(T_{rgb}|_{\Omega}:=[\)red-blue-blue-green-blue\(]\) or one of its symmetric forms. Notice that \([\)red-blue-blue-green\(]\) would make \(EP\) 4-colorable, which is a contradiction and hence impossible.
3. There must exist the dual Kempe chains \((K_{r},K_{g})\).
4. The tangling property holds w.r.t. \((EP;v_{0})\) and \((K_{r},K_{g})\).
5. The most important thing is that \(T_{rgb}|_{\Omega}\) and \((K_{r},K_{g})\) must match each other.
With the help of \(K_{r}\) or \(K_{g}\), one can perform _vertex-color-switching_ according to Kempe's method. We are going to transform Kempe's method into our new method: _edge-color-switching_.
**Definition 10.3**.: Given an MPG or semi-MPG \(M\) with an RGB-tiling \(T_{rgb}=(T_{r},T_{g},T_{b})\) (coexisting triple), the process of _edge-color-switching_ on a red canal line \(rCL\) of \(T_{r}\) (or _along_ the left/right canal bank \(rCL^{l}/rCL^{r}\)) is to exchange edge-colors green and blue in between \(rCL^{l}\) and \(rCL^{r}\). After this process, we obtain a new and legal RGB-tiling \(T^{\prime}_{rgb}\) without changing \(T_{r}\) and they are still coexisting.
Let us use the acronyms VCS and ECS to stand for "vertex-color-switching" and "edge-color-switching" respectively. In some circumstances, one VCS is equivalent to a combination of multiple ECS, and vice versa. We will explain this equivalence later.
In the last section, we claimed to choose the red chain \(K_{r}|_{v_{1}}^{v_{3}}\) closest to \(v_{2}\), and the green chain \(K_{g}|_{v_{1}}^{v_{4}}\) closest to \(v_{5}\). The ones we choose are drawn as the two solid red/green lines and they intersect each other. Due to "closest", the thin red dashed-line connecting \(y\) and \(v_{1}\) does not exist; in particular it has no intersection with
\(K_{g}|_{v_{1}}^{v_{5}}\) and no intersection will force \(EP\) 4-colorable by Kempe's proof. We have the second meaning for "closest": Once these two closest dual chains intersect each other, any two red and green chains of same end-points shall intersect. Intersection is the minimum requirement to obey the tangling property, especially (A). The detail proof will be offered later.
The author leaves an important open question: besides degree 5, is there any other situation that has the tangling property?
### Vertex-color-switching vs edge-color-switching
Basically, Kempe used the red-connected component containing \(v_{2}\) to perform VCS. Actually, any red Kempe chain from \(v_{1}\) to \(v_{3}\) separates the two _major_ red-connected components that contain \(v_{2}\) and \(v_{4}/v_{5}\) respectively. Let us denote these two components by \(rC(v_{2})\) and \(rC(v_{4};v_{5})\). Any different \(K_{r}\) from \(v_{1}\) to \(v_{3}\) in its own cluster can serve as the boundary of a working zone for VCS/ECS, and such a \(K_{r}\) surrounds \(rC(v_{2})\) tightly or loosely. For instance, the original \(K_{r}|_{v_{1}}^{v_{3}}\) or \(K_{r}^{\prime}:=v_{1}\)-(dashed red line)-\(x\)-\(y\)-(solid red line)-\(v_{3}\) surrounds \(rC(v_{2})\) tightly or loosely respectively.
**Lemma 10.4**.: _Let \((K_{r},K_{g})\) be any dual Kempe chains w.r.t. \((EP;v_{0})\) provided \(\deg(v_{0})=5\). Then \(K_{r}\) and \(K_{g}\) must intersect each other. (We shall ignore their common endpoint \(v_{1}\).)_
Proof.: Suppose there are \(K_{r}|_{v_{1}}^{v_{3}}\) and \(K_{g}|_{v_{1}}^{v_{4}}\) without intersection. Because \(K_{r}\) and \(K_{g}\) are boundaries of \(rC(v_{2})\) and \(gC(v_{4})\) respectively, no intersection means \(V(rC(v_{2}))\cap V(gC(v_{4}))=\emptyset\). Then Kempe's proof works and \(EP\) is 4-colorable; hence \(EP\notin e\mathcal{MPGN}4\), a contradiction.
The non-empty overlapping area \(V(rC(v_{2}))\cap V(gC(v_{4}))\) is complicated and hard to study, or even to draw, due to the tangling property.
Let us focus on \(Q:=EP-\{v_{0}\}\) with R-tiling \(T_{r}(Q)\). By \(T_{r}\), we find three _major_ red-connected components: \(rC(v_{2})\), \(rC(v_{1};v_{3})\) and \(rC(v_{4};v_{5})\), and also two _major_ red canal lines \(rCL(v_{1}v_{2})\) and \(rCL(v_{3}v_{4})\). Please refer to Figure 18 for the definition of these major parts. The pattern in Figure 18 for \((Q;T_{r})\) is mandatory
and offers a new point of view to see four-coloring problems through the method of edge-color-switching. Besides these three major red-connected components, there are also some minor red-connected components \(rC_{1i}\), \(rC_{2j}\) and \(rC_{3k}\) inside \(rC(v_{2})\), \(rC(v_{1};v_{3})\) and \(rC(v_{4};v_{5})\) respectively.
Furthermore, let us build a new graph called the _red block graph_ from \(Q:=EP-\{v_{0}\}\) and \(T_{rgb}\), denoted by \(rBG(Q;T_{rgb})\) or \(rBG(Q)\) for short, where the block (or vertex) set \(V(rBG(Q))\) consists of all red-connected components \(rC(*)\) and \(rC_{ij}\) of \(T_{r}\), and the link (or edge) set \(E(rBG(Q))\) consists of all red canal lines, such that each red canal line \(rCL(*)\) links the two red-connected components that contain \(rCL^{l}(*)\) and \(rCL^{r}(*)\) respectively. Please see Figure 19 for an example. In Figure 18, we draw some details inside \(rC(v_{1};v_{3})\), where \(rC_{2j}\) for \(j=1,2,3\) are inside \(rC(v_{1};v_{3})\). We have \(rC_{21}\) and \(rC_{22}\) inside and near the boundary of \(rC(v_{1};v_{3})\); also \(rC_{23}\) is inside \(rC_{22}\). Please refer to Figure 19 for the arrangement of these four blocks in \(rBG(Q;T_{rgb})\).
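On the computational side, the blocks of the red block graph are simply the connected components of the subgraph formed by the red edges. The following sketch computes only these blocks; identifying the canal-line links would additionally require the planar embedding, which is omitted here, and all names are illustrative rather than notation from this paper.

```python
# Sketch: the blocks (vertices) of rBG are the connected components of the red
# subgraph.  Edges are frozensets; the tiling maps each edge to 'r'/'g'/'b'.
from collections import defaultdict

def red_components(vertices, tiling):
    adj = defaultdict(set)
    for e, c in tiling.items():
        if c == 'r':
            u, v = tuple(e)
            adj[u].add(v)
            adj[v].add(u)
    seen, components = set(), []
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        components.append(comp)
    return components

tiling = {frozenset({'v1', 'x'}): 'r', frozenset({'x', 'v3'}): 'r',
          frozenset({'v2', 'v3'}): 'b', frozenset({'v4', 'v5'}): 'r'}
print(red_components(['v1', 'v2', 'v3', 'v4', 'v5', 'x'], tiling))
# e.g. {'v1','x','v3'} is one block, {'v4','v5'} another, {'v2'} a singleton
```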
We claim first that \(rBG(Q;T_{rgb})\) is a tree. The tree property is very important, but we postpone its proof for a while. Comparing with the original \(rBG(Q;T_{rgb})\) in Figure 19, we use the following two graphs to demonstrate VCS on \(rC(v_{2})\) and ECS on \(rCL(v_{1}v_{2})\) respectively, where we use doublelines to indicate the switching
between \(1/3\) (or \(2/4\)) for vertices in \(Q\) as well as switching between green/blue for edges in \(Q\). With help from Figure 20, we have two important observations as follows:
(V) When we perform VCS on a single red-connected component \(rC\), not only does \(rC\) have to change, but also the links \(rCL_{i}\) incident to this block \(rC\) need to change by ECS. This observation tells us: a VCS operation can be replaced by ECS operations.
(E) When we perform ECS on a single red canal line \(rCL\), not only does \(rCL\) have to change, but also all blocks \(rC_{j}\) that are on one side of \(rCL\) need to change
Figure 20. Top line: VCS on \(rC(v_{2})\); Bottom line: ECS on \(rCL(v_{1}v_{2})\)
by VCS. Notice that in a graph without cycles, the "two sides" of any single link or edge are well defined. Therefore, when \(rBG(Q)\) is a tree, an ECS operation can be replaced by VCS operations.
These observations are exactly what we meant by "one VCS is equivalent to a combination of multiple ECS's, and vice versa" when \(rBG(Q)\) is a tree.
**Lemma 10.5**.: _Let \(Q\) be an MPG or an \(n\)-semi-MPG (not just \(Q:=EP-\{v\}\)) with an R-tiling \(T_{r}\). The red block graph \(rBG(Q;T_{rgb})\) must be a tree._
Proof.: If every edge of a connected graph disconnects it, then the graph must be a tree. Let \(rCL\) be any edge of \(rBG(Q;T_{rgb})\). This \(rCL\) divides \(Q\) into two disconnected regions, because \(Q\) is a planar graph and \(rCL\) is either a red canal ring or a red canal line which starts and ends at the same outer facet; hence removing the link \(rCL\) disconnects \(rBG(Q;T_{rgb})\).
For counterexamples in an \((n_{1},n_{2})\)-semi-MPG, please refer to Figures 8(B1), 9(B2) and 9(B3). The observations (V) and (E) tell us: VCS and ECS are interchangeable if \(rBG(Q;T_{rgb})\) is a tree. Since we focus on One Piece, we tacitly assume that the block graphs in our discussion are all trees. In the following discussion and examples, we will demonstrate that ECS is much more convenient.
In view of chip-firing games on a tree-structured \(rBG(Q;T_{rgb})\):
1. If a block is selected to chip-fire, then this block and its incident links shall switch between singleline and doubleline.
2. If a link is selected to chip-fire, then this link and all blocks on one side of this link shall switch between singleline and doubleline; a small sketch of this rule follows. We will use Example 10.7 to explain that choosing a different side just yields another equivalent RGB-tiling.
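Here is a minimal sketch of the second rule, assuming the block graph is given as an adjacency dictionary and the currently marked (doublelined) parts are kept in a set; the representation and names are illustrative only.

```python
# Sketch of the chip-firing view on a tree-shaped block graph: firing a link
# toggles the link itself and every block on one chosen side of it.

def fire_link(tree, link, side_root, toggled):
    """`link` is a frozenset of its two end blocks; `side_root` picks the side."""
    side, stack = set(), [side_root]
    while stack:
        x = stack.pop()
        if x in side:
            continue
        side.add(x)
        for y in tree[x]:
            if frozenset({x, y}) != link:   # do not cross the fired link
                stack.append(y)
    return toggled ^ ({link} | side)        # symmetric difference = toggle

# Path-shaped block graph: rC(v2) -- rCL(v1v2) -- rC(v1;v3) -- rCL(v3v4) -- rC(v4;v5)
tree = {'rC(v2)': {'rC(v1;v3)'},
        'rC(v1;v3)': {'rC(v2)', 'rC(v4;v5)'},
        'rC(v4;v5)': {'rC(v1;v3)'}}
link = frozenset({'rC(v2)', 'rC(v1;v3)'})
print(fire_link(tree, link, 'rC(v2)', set()))
# Firing toward the rC(v2) side toggles only that block and the link itself;
# firing toward the other side toggles the link and the two remaining blocks.
```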
### Synonym, equivalence and congruence
The three different red block graphs \(rBG(Q;T_{rgb})\), \(rBG(Q;T^{\prime}_{rgb})\) and \(rBG(Q;T^{\prime\prime}_{rgb})\) in the last subsection have no difference in their structure, because they share the same \(T_{r}\). But they do differ in their G-/B-tilings. Comparing with the original \(rBG(Q;T_{rgb})\), we use doublelines to indicate the changes made by VCS on some \(rC_{i}\) and by ECS on some \(rCL_{j}\) for \(rBG(Q;T^{\prime}_{rgb})\) and \(rBG(Q;T^{\prime\prime}_{rgb})\). Even though the two operations given in Figure 20 have different effects on some blocks \(rC_{i}\) and links \(rCL_{j}\), the two results of the change are the same in some sense, i.e., they both transform \((EP-\{v_{0}\},T_{rgb})\) shown in Figure 19 into the two right graphs in Figure 20. Let us use \(T^{\prime}_{rgb}\) and \((K_{r},K^{\prime}_{g})\) to denote the new objects created by this VCS operation, and \(T^{\prime\prime}_{rgb}\) and \((K_{r},K^{\prime\prime}_{g})\) those created by this ECS operation. These two corresponding objects differ in details; however, both \(K^{\prime}_{g}\) and \(K^{\prime\prime}_{g}\) are definitely from \(v_{3}\) to \(v_{5}\). In other words, \(gBG(Q;T^{\prime}_{rgb})\) and \(gBG(Q;T^{\prime\prime}_{rgb})\) are the same in some sense; but they are totally different from the original green block graph \(gBG(Q;T_{rgb})\) due to the tangling property.
We now make precise "the same in some sense" and "the difference at certain levels", which involves three general definitions. Let \(M\) be an MPG or a semi-MPG, and let \(\mathcal{RGBT}(M)\) be the set of all RGB-tilings on \(M\).
* Synonym: Any \(T_{rgb}\in\mathcal{RGBT}(M)\) has six _synonyms_, including itself, obtained by interchanging R/G/B over the whole graph \(M\) (see the code sketch after this list). This relation of synonym, denoted by \(\overset{\mathrm{syn}}{=}\), is the most basic idea and it is too trivial to mention most of the time. In addition, any kind of synonym caused by permutations of R/G/B shall also be denoted by \(\overset{\mathrm{syn}}{=}\). We also use \(\langle T_{rgb}\rangle\) to denote the set of six synonyms of \(T_{rgb}\). But sometimes we will even skip \(\langle\cdot\rangle\).
* Equivalence: First, this relation is built on top of the synonym relation. The most important parts of \((EP;v_{0})\) with \(\deg(v_{0})=5\) are \((K_{r},K_{g})\) and \(T_{rgb}|_{\Omega}\). These two parts are the major _skeleton_ of any kind of \(T_{rgb}(EP;v_{0})\). In general, any two \(T^{A}_{rgb},T^{B}_{rgb}\in\mathcal{RGBT}(M)\) are _equivalent_, denoted by \(T^{A}_{rgb}\equiv T^{B}_{rgb}\), if they share the same skeleton, such as the graph in Figure 17 (ignoring all dashed lines), i.e., the same sketch for \((K_{r},K_{g})\) and \(T_{rgb}|_{\Omega}\). The two right graphs in Figure 20 do have the same skeleton and they provide another example: \(T^{\prime}_{rgb}\equiv T^{\prime\prime}_{rgb}\). **It is important that this equivalence relation involves a given \(\Omega\) and Kempe chains in \(\Sigma^{\prime}\). Different \(\Omega\)'s establish different equivalence relations.** We will talk about this "difference" later. We will use \([T_{rgb}]\) to denote the equivalence class that \(\langle T_{rgb}\rangle\) belongs to. A supplementary definition of equivalence is given later in Remark 10.13.
* Congruence: First, this new relation is based on accepting \(\equiv\) and \(\overset{\mathrm{syn}}{=}\). We have seen an example: \(T_{rgb}\) and \(T^{\prime}_{rgb}\) (or \(T_{rgb}\) and \(T^{\prime\prime}_{rgb}\)) in the last paragraph and also in the last subsection. The congruence relation has an operational definition: in the working domain \(\mathcal{RGBT}(M)\), two RGB-tilings \(T^{A}_{rgb}\) and \(T^{B}_{rgb}\) are _congruent_, denoted by \(T^{A}_{rgb}\cong T^{B}_{rgb}\), if \(T^{B}_{rgb}\) can be obtained from \(T^{A}_{rgb}\) by performing a sequence of VCS's and ECS's. How do we make VCS and ECS executable and closed in \(\mathcal{RGBT}(M)\)? For instance, we need to require any \(rC\) (or \(gC\), \(bC\)) to have no odd-cycles in order to perform VCS. Also, after an operation the result should still be an element of the working domain \(\mathcal{RGBT}(M)\). We need to set up a stronger requirement on \(\mathcal{RGBT}(M)\), and choosing a proper \(M\) is what we need to do. 1. Let \(M\) be One Piece, i.e., an MPG or an \(n\)-semi-MPG. This is a good choice due to Theorem 6.7, which is our First Fundamental Theorem v1. 2. Let \(M\) be an MPG or a semi-MPG. This choice is due to Theorem 7.12, which is our First Fundamental Theorem v2. For this setting, we also need to restrict to a new domain set \(\mathcal{RGBT}^{+}(M)\) that consists of all \(T_{rgb}\) such that along every \(n_{i}\)-gon outer facet the numbers of red, green and blue edges are all even if \(n_{i}\) is even, and all odd if \(n_{i}\) is odd (Theorem 7.12(c)). 3. Let \(M=EP-\{*\}\), where \(\{*\}\) is dynamic and consists of several edges that are variable. Precisely, we have a fixed cycle \(\Omega\) in \(EP\), and \(\Sigma\) and \(\Sigma^{\prime}\) are the two regions of \(EP\) partitioned by \(\Omega\) with \(\Sigma\cap\Sigma^{\prime}=\Omega\). In addition, the variable edge set \(\{*\}\) is always inside \(\Sigma\). Since \(\Sigma^{\prime}\) is One Piece, all rules shall follow item (1); since \(\Sigma\) has multiple outer facets, all rules shall follow item (2); of course, some special rules arise from the combination of (1) and (2). We will talk about them later.
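The synonym relation mentioned in the first item is completely mechanical; the following sketch simply generates the six synonyms of a given edge coloring by permuting the three colors. The representation is the same illustrative one used in the earlier sketches.

```python
# Sketch: the six synonyms of an RGB-tiling are obtained by permuting the
# three edge colors over the whole graph.
from itertools import permutations

def synonyms(tiling):
    colors = ('r', 'g', 'b')
    result = []
    for perm in permutations(colors):
        relabel = dict(zip(colors, perm))
        result.append({e: relabel[c] for e, c in tiling.items()})
    return result

tiling = {frozenset({'a', 'b'}): 'r', frozenset({'b', 'c'}): 'g', frozenset({'a', 'c'}): 'b'}
for t in synonyms(tiling):
    print(sorted((sorted(e), c) for e, c in t.items()))
# Six tilings are printed; the first one is the original.
```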
_Remark 10.6_.: (Very important) The foundation of the synonym relation can be any 4-colorable graph \(M\) and \(\mathcal{RGBT}(M)\). The foundation of the equivalence relation needs a certain skeleton to be established; here we use \((K_{r},K_{g})\) on the planar graph \(M=EP-\{v_{0}\}\) and \(T_{rgb}(M)|_{\Omega}\). The foundation of the congruence relation is a 4-colorable _planar_ graph \(M\) and \(\mathcal{RGBT}(M)\); here we still use \(M=EP-\{v_{0}\}\). Both the synonym and congruence relations are defined by certain ECS operations, but the equivalence relation is defined by the way (skeleton) we draw \((K_{r},K_{g})\) and \(T_{rgb}|_{\Omega}\). The crucial question comes out as follows:
\[\begin{array}{llll}T^{A}_{rgb}&&\equiv&&T^{B}_{rgb}\\ \Downarrow,\,\cong&&\text{corresponding ECS}&&\Downarrow,\,\cong\\ T^{A^{\prime}}_{rgb}&&\equiv?&&T^{B^{\prime}}_{rgb}\end{array}\]
The answer is yes, and we use the next subsection to persuade the reader. Notice that the corresponding ECS's fall into three different groups: (1) ECS on \(rCL(v_{1}v_{2})\), (2) ECS on \(rCL(v_{3}v_{4})\), and (3) any ECS on an \(rCL_{k}\) which is inside \(rC(v_{2})\), \(rC(v_{1};v_{3})\) or \(rC(v_{4};v_{5})\). It is interesting that after performing (3) the new RGB-tiling is equivalent to the old one, not just congruent.
### Let us learn ECS by examples
The right two graphs in Figure 16 are closer to reality, but it is not easy (actually there is no way) to draw the real graph for arbitrary dual Kempe chains \((K_{r},K_{g})\) w.r.t. \((EP;v_{0})\). In the rest of this paper, we will simply draw \((K_{r},K_{g})\) like the left graph in Figure 16 to show the red-/green-connectedness. However, we always pretend that the real \(K_{r}\) and \(K_{g}\) behind the picture do intersect each other and that the tangling property is always in effect.
For the following two examples, we start with \((Q;T_{rgb})\) in Figure 18 as well as \(rBG(Q;T_{rgb})\) in Figure 19 and perform ECS in several different ways.
**Example 10.7**.: Observation (E) shows: performing ECS on \(rCL(v_{1}v_{2})\) simultaneously performs VCS on all blocks \(rC_{j}\) on one side of \(rCL(v_{1}v_{2})\). In Figure 20 we choose the left side on which to simultaneously perform VCS. What if we choose the right side? No problem: the result \(T^{1}_{rgb}\) shown in Figure 21 tells us \(T^{1}_{rgb}\equiv T^{\prime}_{rgb}\equiv T^{\prime\prime}_{rgb}\).
We said that the blocks \(rC(v_{2})\), \(rC(v_{1};v_{3})\), \(rC(v_{4};v_{5})\) and the links \(rCL(v_{1}v_{2})\), \(rCL(v_{3}v_{4})\) are the five major parts of \(rBG(EP-\{v_{0}\})\), and they form a path of length \(2\). Besides these five major parts, the \(rC_{ij}\) and \(rCL_{k}\) are inside \(rC(v_{2})\), \(rC(v_{1};v_{3})\) or \(rC(v_{4};v_{5})\).
**Lemma 10.8**.: _Given \(rBG(EP-\{v_{0}\};T_{rgb})\), if we perform any combination of VCS on the \(rC_{ij}\) together with any combination of ECS on the \(rCL_{k}\) and obtain a new \(T^{\prime}_{rgb}\) on \(EP-\{v_{0}\}\), then the original \(T_{rgb}|_{\Omega}\) and \(T^{\prime}_{rgb}|_{\Omega}\) are the same, and \((K_{r},K_{g})\) and \((K^{\prime}_{r},K^{\prime}_{g})\) are of the same kind. Precisely, \(T_{rgb}\equiv T^{\prime}_{rgb}\)._
This lemma is the main reason that we only draw \(K_{r}|_{v_{1}}^{v_{3}}\) for red-connectivity, instead of \(rC(v_{1};v_{3})\) as a component. We also relax the previous mandatory but temporary choice, because now we know that what we really need is red-/green-connectivity, and all kinds of \((K_{r},K_{g})\)'s follow the tangling property for \((EP;v_{0})\).
**Example 10.9**.: We are curious about ECS on \(rCL(v_{1}v_{2})\) or \(rCL(v_{3}v_{4})\), or on both together. Performing ECS on \(rCL(v_{1}v_{2})\) was given in Subsection 10.1, and we obtained \(T^{\prime\prime}_{rgb}\) in Figure 20. There are two demonstrations in Figure 22, where we perform ECS on \(rCL(v_{3}v_{4})\), and then ECS on both \(rCL(v_{1}v_{2})\) and \(rCL(v_{3}v_{4})\). The second operation is equivalent to performing VCS on \(rC(v_{1};v_{3})\) by Lemma 10.8. Clearly, \(T^{2}_{rgb}\equiv T^{1}_{rgb}\equiv T^{\prime}_{rgb}\equiv T^{\prime\prime}_{rgb}\), even though we have \(v_{3}\) and \(v_{5}\) blue-connected
Figure 21. ECS on \(rCL(v_{1}v_{2})\) again, but affecting the other side
rather than green-connected. (Please refer to the synonym relation in the last subsection.) As for the second operation, we obtain \(T^{3}_{rgb}\) and clearly \(T^{3}_{rgb}\equiv T_{rgb}\).
**Example 10.10**.: Notice that \(T^{3}_{rgb}\) and \(T_{rgb}\) in the last example are not synonyms, because the real synonym of \(T_{rgb}\) with \(T_{r}\) fixed needs ECS to be performed on all \(rCL(*)\) and \(rCL_{*}\). In Figure 23 we do offer \(T^{4}\) as such a synonym of \(T_{rgb}\).
With these examples, we can make a conclusion on any provided RGB-tiling \(T_{rgb}\) on \(EP-\{v_{0}\}\) with \(\deg(v_{0})=5\) and \(rBG(EP-\{v_{0}\};T_{rgb})\) shown in Figure 18 as follows:
Figure 22. The red block graphs for VCS on \(rC(v_{2})\) and ECS on \(rCL(v_{1}v_{2})\)
1. Since VCS and ECS can be substituted for each other, we may focus only on ECS. If we fix a \(T_{r}\) without red odd-cycles, then there are \(2^{N}\) different coexisting RGB-tilings induced by this R-tiling, where \(N\) is the total number of red canal lines \(rCL_{i}\), including both rings and paths. For \(EP-\{v_{0}\}\), we have \(N\geq 2\).
2. Among these \(2^{N}\) different coexisting RGB-tilings w.r.t. our fixed \(T_{r}\), which is generated by the original \(T_{rgb}\), we are interested in congruence classes. For congruence "\(\cong\)", if we fix \(T_{r}\), then besides \([T_{rgb}]\) there is only one other congruence class, represented by \(T^{\prime}_{rgb}\). All the tilings we have met, such as \(T^{\prime\prime}_{rgb},T^{1}_{rgb},\ldots,T^{4}_{rgb}\), are either synonyms of \(T_{rgb}\) or of \(T^{\prime}_{rgb}\), or related to one of them by the equivalence "\(\equiv\)".
3. By performing ECS on exactly one of \(rCL(v_{1}v_{2})\) or \(rCL(v_{3}v_{4})\), we can exchange \([T_{rgb}]\) and \([T^{\prime}_{rgb}]\). However, performing ECS on both \(rCL(v_{1}v_{2})\) and \(rCL(v_{3}v_{4})\) exchanges nothing between \([T_{rgb}]\) and \([T^{\prime}_{rgb}]\). This is why we only have two congruence classes if \(T_{r}\) is fixed. This result provides the final answer to Remark 10.6.
4. Given the RGB-tiling \(T_{rgb}\), we can also draw \(gBG(EP-\{v_{0}\};T_{rgb})\), which is the graph symmetric to \(rBG(EP-\{v_{0}\};T_{rgb})\) in Figures 17 and 18. So (1), (2) and (3) hold for the green version.
5. Given Figure 17, there is no corresponding blue version, because the edge coloring \(T_{rgb}|_{\Omega}\) shows that blue plays a role different from red and green.
### Our next step: \(EP-\{e\}\) vs \(EP-\{v_{0}\}\)
Now we use three graphs in Figure 24 to extend the idea of R/G/B Kempe chains. These three graphs are special enough to demonstrate the benefit obtained from the new concept using RGB-tilings.
The original Kempe point of view focuses on \(EP-\{v_{0}\}\), which is a 5-semi-MPG with a pentagon outer facet \(\Omega:=v_{1}\)-\(v_{2}\)-\(v_{3}\)-\(v_{4}\)-\(v_{5}\)-\(v_{1}\), shown as the right graph in Figure 16. By \(\Omega\), this \(EP\) is partitioned into two regions: \(\Sigma\) (inside) and \(\Sigma^{\prime}\) (outside) with \(\Sigma\cap\Sigma^{\prime}=\Omega\). Both \(\Sigma\) and \(\Sigma^{\prime}\) are 5-semi-MPG's. By the previous general setting Co[\(v_{1}\):1, \(v_{2}\):2, \(v_{3}\):3, \(v_{4}\):4, \(v_{5}\):2] (Co and \(f\) are the same thing), there is no color among \(1,2,3,4\) left for vertex \(v_{0}\). If we follow the rule of map-coloring, then we must color \(v_{0}\) by the unwelcome color \(5\). However, this time we choose to obey the rule of only four colors by ignoring a particular edge inside \(\Sigma\), while everything in \(\Sigma^{\prime}\) is unchanged. We set the first graph in Figure 24 with Co\([v_{0}{:}1]\) and then obtain an RGB-tiling on \(EP-\{v_{0}v_{1}\}\), where the yellow double-line2, namely \(v_{0}v_{1}\), is the _abandoned edge_ at this moment. Notice that the four edges surrounding \(v_{0}v_{1}\) are all blue; we then name this \(v_{0}v_{1}\)-diamond _Type A_. Here we demonstrate a new way to realize the dual Kempe chains, namely \((K_{r}|_{v_{0}}^{v_{1}},K_{g}|_{v_{0}}^{v_{1}})\), which are a little bit longer than the corresponding pairs described in Definition 10.1. By assigning the red, green or blue color to that yellow double-line, we will create at least one odd-cycle of the same color, namely \(K_{r}\cup\{v_{0}v_{1}\}\), \(K_{g}\cup\{v_{0}v_{1}\}\), or two triangles of blue color. Triangles are trivial odd-cycles, so we ignore them most of the time and only focus on non-trivial odd-cycles.
Footnote 2: This double-line is actually orange because yellow is not easy to see in print.
The remaining two graphs in Figure 24 are obtained by treating \(v_{0}v_{3}\) and \(v_{0}v_{4}\) as _abandoned edges_ respectively. A slight difference is that the four edges surrounding \(v_{0}v_{3}\) (or \(v_{0}v_{4}\)) are two blue and two green (two blue and two red, respectively). We name this kind of \(v_{0}v_{3}\)-diamond, as well as the \(v_{0}v_{4}\)-diamond, _Type B_. The crucial point is that the middle graph has one Kempe chain \(K_{r}|_{v_{0}}^{v_{3}}\) and the right graph has one Kempe chain \(K_{g}|_{v_{0}}^{v_{4}}\).
**Definition 10.11**.: Let \(EP\in e\mathcal{MPGN}4\) with \(\deg(v_{0})=5\). Referring to the first graph in Figure 24, we define \((K_{r}|_{v_{0}}^{v_{1}},K_{g}|_{v_{0}}^{v_{1}})\) to be the _dual Kempe chains_ w.r.t.
Figure 24. Three renewal graphs for R/G/B Kempe chains and \((EP;v_{0})\)
\((EP;v_{0}v_{1})\) in _Type A_. Without changing the edge coloring in \(\Sigma^{\prime}\), and referring to the second graph in Figure 24, we define \(K_{r}|_{v_{0}}^{v_{3}}\) to be the _Kempe chain_ w.r.t. \((EP;v_{0}v_{3})\) in _Type B_. For each of the three graphs, we call the diamond with the yellow double-line the _\(e\)-diamond_ in \(EP\).
_Remark 10.12_.: The four surrounding edges of the \(e\)-diamond having the same color is the main characteristic of Type A. Type B has two different colors among the four surrounding edges of the \(e\)-diamond: the two edges in the north-\(\wedge\) share one color and the two edges in the south-\(\vee\) share the other.
_Remark 10.13_.: (Important) Because the 4-semi-MPG \(Q:=EP-\{e\}\) is 4-colorable for any \(e\in E(EP)\), there exists at least one RGB-tiling on \(Q\). In Figure 24, we see one Type A and two Type B RGB-tilings on \(Q:=EP-\{*\}\), where \(\{*\}\) consists of only one edge as the variable \(e\). We see that \(e\) can be \(v_{0}v_{1}\), \(v_{0}v_{3}\) or \(v_{0}v_{4}\). These three graphs coexist; so, are they three synonyms? Are they equivalent? They definitely do not involve congruence. For a fixed \(T_{rgb}|_{\Omega}\) and a fixed edge-color skeleton \((K_{r},K_{g})\), we shall say these three graphs are equivalent. Now we use this supplement to state our standard operating procedure for building the relation of equivalence:
1. Let \(EP\in e\mathcal{MPGN}4\), let \(\Omega\) be a cycle in \(EP\), and let \(\Sigma\), \(\Sigma^{\prime}\) be defined as usual. First we pick any \(e_{0}\) inside \(E(\Sigma)\) as well as an \(e_{0}\)-diamond, and then develop a Type A RGB-tiling \(T_{rgb}\) on \(EP-\{e_{0}\}\). It is better if one of the two end vertices of \(e_{0}\) has degree 5 or 6. This is exactly the left graph in Figure 24. Now we have at least one pair of dual Kempe chains \((K_{r},K_{g})\), and \(T_{rgb}|_{\Omega}\) is now fixed.
2. According to this fixed \(T_{rgb}|_{\Omega}\), we can develop new RGB-tilings \(T_{rgb}^{i}\) on \(\Sigma-\{*\}_{i}\), where \(\{*\}_{i}\) consists of a single edge \(e_{i}\) or even more edges from \(E(\Sigma)\). This is exactly what the right two graphs in Figure 24, and Figure 25 below, show.
3. Notice that we must have \(T_{rgb}^{i}|_{\Omega}=T_{rgb}|_{\Omega}\). For \(T_{rgb}^{i}(\Sigma-\{*\}_{i})\), we might develop some new R/G/B Kempe chains in \(\Sigma^{\prime}\), which should not contradict each other, and especially not the original \((K_{r},K_{g})\). Sorry! No new R/G/B Kempe chains appear in Figures 24 and 25.
4. All together, we have the _skeleton_: \(T_{rgb}|_{\Omega}\), all feasible R/G/B Kempe chains in \(\Sigma^{\prime}\), and \(T_{rgb}|_{\Sigma}\), \(T_{rgb}^{i}|_{\Sigma}\); these form an equivalence class, denoted by \([T_{rgb}]\).
Why do we need the equivalence relation? Because all properties and proofs in this paper depend only on the skeleton of \(T_{rgb}\). If a statement is right for \(T_{rgb}\), then it is right for \([T_{rgb}]\).
_Remark 10.14_.: Without changing the edge coloring in \(\Sigma^{\prime}\), let us set Co[\(v_{0}\):2]. Please see Figure 25. This creates two abandoned edges, namely \(v_{0}v_{2}\) and \(v_{0}v_{5}\).
Notice that the \(v_{0}v_{2}\)- and \(v_{0}v_{5}\)-diamonds are both Type B. However, the surrounding edges involve 3 different edge-colors, and this time no non-trivial odd-cycle comes out by assigning both abandoned edges red, green or blue, or even mixing them with \(v_{0}v_{2}\) red and \(v_{0}v_{5}\) green. Thus, we obtain no benefit by setting Co[\(v_{0}\):2] in this case. However, this does not mean that having more abandoned edges at the same time is worthless. What we really care about is any Kempe chain that crosses an odd number of abandoned edges.
_Remark 10.15_.: When we have a Type A \(e\)-diamond as in the left graph of Figure 24, we might want to replace the yellow double-line by the red color (or green). We actually treat the provided RGB-tiling on \(Q:=EP-\{e\}\) as an R-tiling \(T_{r}(EP-\{e\})\), which definitely has no odd-cycles. At this moment, the green and blue colors are treated as black. Replacing yellow by the red color will create a new red odd-cycle, because now
Figure 25. Two yellow double-lines in \(\Sigma\)
\(T_{r}\) is well defined on \(EP\) as an MPG; then let us refer to Theorem 6.7(c). As for this Type A \(e\)-diamond, we would not replace the yellow double-line by blue. Even though doing this yields two trivial blue triangles, they reveal no extra information. As for the middle (right) graph in Figure 24, we have a Type B \(e\)-diamond; this time we might want to replace the yellow double-line only by the red color (or only by green). The following sketch makes the odd-cycle test behind these replacements concrete.
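The sketch below asks, for a single color class, whether giving the abandoned edge that color would close a monochromatic odd cycle. It assumes (as is the case for a tiling without odd cycles of that color) that the color class is bipartite, so the parity of a path between two vertices of one component is well defined; the data layout and names are illustrative only.

```python
# Sketch: would coloring the abandoned edge e = (u, v) with color c close an
# odd cycle of color c?  Assuming the current c-subgraph has no odd cycle
# (it comes from a tiling*), it is bipartite, so every c-path between two
# vertices of one component has the same parity; an odd cycle through e appears
# exactly when u and v are c-connected by a path of even length.
from collections import deque

def closes_odd_cycle(tiling, u, v, c):
    adj = {}
    for e, col in tiling.items():
        if col == c:
            x, y = tuple(e)
            adj.setdefault(x, set()).add(y)
            adj.setdefault(y, set()).add(x)
    if u not in adj or v not in adj:
        return False
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return v in dist and dist[v] % 2 == 0

# Red path a - p - b of length 2: adding e = ab in red would close a red 3-cycle.
tiling = {frozenset({'a', 'p'}): 'r', frozenset({'p', 'b'}): 'r',
          frozenset({'a', 'N'}): 'b', frozenset({'b', 'N'}): 'b'}
print(closes_odd_cycle(tiling, 'a', 'b', 'r'))   # True
print(closes_odd_cycle(tiling, 'a', 'b', 'g'))   # False (no green path at all)
```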
The original Kempe chains are w.r.t. \((EP;v)\) for a vertex \(v\) with \(\deg(v)=5\), and our renewed Kempe chains are w.r.t. \((EP;e)\) for any edge \(e\) in \(EP\), while the two end vertices of \(e\) need no extra requirement due to Theorem 4.3(b). This subsection, and indeed this whole section, has focused on the connection between Kempe's method and our renewed approach. In the next section and in the rest of our study, we will examine in more detail and give more properties of R/G/B Kempe chains as well as Type A and Type B \(e\)-diamonds.
## 11. \(e\)-diamond everywhere in \(EP\)
In this section we investigate a general \(e\)-diamond with an arbitrary fixed \(e\in E(EP)\). Theorem 4.3 and Theorem 6.7 are the top guidelines of this section. As the author reviewed and re-wrote this article \(n\) times, the tune of "Everybody wants to rule the world" by Tears for Fears, a pop rock band from England, kept resonating. Yes, the main theme of this section is "Every \(e\)-diamond can rule its world: \(EP\)."
Figure 26. \(e\)-diamond, and RGB-tilings of Types C, D on \(EP-\{e\}\)
Let us denote the four vertices surrounding the \(e\)-diamond by \(a,b,N,S\) with \(e:=ab\). Around this \(e\)-diamond, the main structure of \(EP\) looks like the first graph in Figure 26. Now we try to arrange an RGB-tiling on \(Q:=EP-\{e\}\), which definitely exists by Theorem 4.3(b). According to Lemma 6.2(b), an RGB-tiling on this 4-semi-MPG \(Q\) shall present only one edge-color, or two different edge-colors in pairs, along the outer facet \(\Omega:=N\)-\(a\)-\(S\)-\(b\)-\(N\). By symmetry or the synonym relation, it does not matter which one or which two colors are presented.
_Remark 11.1_.: Every claim, property or theorem must be considered together with all its synonyms, i.e., red, green and blue are symmetric and exchangeable. When it comes to the synonym relation, we shall also examine the new equivalence relation for this new and general situation. Here we have \(\Omega:=N\)-\(a\)-\(S\)-\(b\)-\(N\) and \(\Sigma\) is exactly the \(e\)-diamond. Please refer to Remark 10.13 for more details about building up the equivalence relation.
First things first, we exclude the two types of RGB-tilings shown as the right two graphs in Figure 26 from \(\mathcal{RGBT}(EP-\{e\})\), because they are impossible for \(EP\) as an extremum. We simply assign the green color to replace the yellow double-line, and then we get an RGB-tiling on \(EP\): assigning \(e\) the green color causes no green odd-cycle. The reason comes from Lemma 6.2(b) applied to this RGB-tiling of the 4-semi-MPG \(Q\) (not of \(EP\)). Suppose there is a green path \(P_{g}|_{a}^{b}\). This path together with the 2-path \(a\)-\(N\)-\(b\) forms a cycle in \(Q\). The numbers of red, green and blue edges along this cycle are all odd; so the length of \(P_{g}|_{a}^{b}\) is odd and \(P_{g}|_{a}^{b}\cup\{e\}\) is an even-cycle. Actually, we have another simple way to prove it: just consider these two graphs, with \(e\) colored green, as R-tilings on \(EP\) (not just on \(Q\)), because the two red edges together with their two red triangles (red half-tiles) exactly tile the \(e\)-diamond, and then follow Theorem 6.7(a) and (c).
After ruling out the above two types, two types of RGB-tilings for the 4-semi-MPG \(Q\) remain. We call the two remaining ones _Types A and B_ (see Figure 27), and we call the ones ruled out _Types C and D_ (see Figure 26). All these types have their own synonyms and equivalence classes, while the four graphs in the two figures are just representatives. Now we shall investigate Types A and B.
Is there any blue non-trivial odd-cycle? We do not know, and most of the time we do not need to know. In Type A, the red path and the green path of even length are the so-called _dual Kempe chains_ \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\) w.r.t. \((EP;e)\). Do \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\) have the tangling property? They do if \(\deg(a)=5\) or \(\deg(b)=5\). We are unsure about this question if \(\deg(a)>5\) and \(\deg(b)>5\). As a representative of Type B, the right graph has only one _Kempe chain_ \(K_{r}|_{a}^{b}\) guaranteed.
**Theorem 11.2** (The primitive Theorem: \(e\)-diamond of Type A or Type B).: _Given \(EP\in e\mathcal{MPGN}4\), each of the following properties is a necessary condition for \((EP;e)\), where \(e:=ab\) is any edge in \(EP\) and \(Q:=EP-\{e\}\)._
* (a) _All RGB-tilings_3 _on the 4-semi-MPG_ \(Q\) _can be sorted into two types (or equivalence classes): Type A and Type B, shown in Figure 27._ Footnote 3: Theorem 4.3(b) guarantees this set is non-empty.
* (b) _The main characteristics of Type A include: (b1) all four edges surrounding_ \(e\) _are the same color, say blue; (b2) there are the dual Kempe chains_ \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\) _w.r.t._ \((EP;e)\)_; (b3) the lengths of_ \(K_{r}|_{a}^{b}\) _and_ \(K_{g}|_{a}^{b}\) _are both even._
* (c) _The main characteristics of Type B include: (c1) the four edges surrounding_ \(e\) _have two colors, say green and blue, with one color on the two north-\(\wedge\) edges and the other on the two south-\(\vee\) edges; (c2) there is a single Kempe chain_ \(K_{r}|_{a}^{b}\) _w.r.t._ \((EP;e)\)_; (c3) the length of_ \(K_{r}|_{a}^{b}\) _is even._
Proof.: (a): Since \(Q:=EP-\{e\}\) is 4-colorable, an RGB-tiling \(T_{rgb}(Q)\) must exist. We have already ruled out Types C and D; so only Types A and B remain under the synonym relation. Also, the claims (b1) and (c1) are true.
(c2) and (c3): For Type B, we can replace the yellow double-line with a red edge and then obtain an extended R-tiling on the whole \(EP\). Now this new R-tiling cannot induce a 4-coloring function on \(EP\), so there must be a red odd-cycle passing through \(e\). Therefore, (c2) and (c3) are true.
(b2) and (b3): The same argument applies to these two. Additionally, we can replace the yellow double-line with a green edge.
_Remark 11.3_.: Let us refer to the two graphs in Figure 27. A Kempe chain \(K_{*}\) is not as simple as a single path. It is possible that \(K_{r}|_{\alpha}^{\beta}\) represents a bunch of red paths from \(\alpha\) to \(\beta\), with many red canal rings \(rCL\) lying inside \(K_{r}|_{\alpha}^{\beta}\). For instance, both \(K_{r}|_{a}^{b}\) in the Type A and Type B graphs actually represent a red-connected component that contains both vertices \(a\) and \(b\). In view of components, we shall denote it by \(rC(a;b)\), and there are also components \(rC(N)\), \(rC(S)\) and \(rC(S)_{ij}\). The two major red canal lines are \(rCL(aN)\) and \(rCL(aS)\). All these blocks and links make up the red block graph. Of course, there is a green block graph for Type A, but not for Type B, according to the two graphs in Figure 27.
**Theorem 11.4**.: _Given \((EP;e)\) as in Theorem 11.2, both RGB-tilings of Types A and B exist. Furthermore, any Type A RGB-tiling is congruent to a Type B one, and vice versa. Therefore,_
\[\#\{T_{rgb}(EP-\{e\})\text{ of Type A}\}\ =\ \#\{T_{rgb}(EP-\{e\})\text{ of Type B}\}.\]
Proof.: To explain this, let us make the two RGB-tilings a little more precise, as in the following two graphs. We start with Type A, which is the left graph in Figure 28.
Without loss of generality, let us focus on the red Kempe chain \(K_{r}\) from \(a\) to \(b\). The bounded region enclosed by \(K_{r}\cup\{e\}\) is a good place to perform ECS between green and blue. Then we obtain a Type B tiling, shown as the right graph. Notice that the original green Kempe chain from \(a\) to \(b\) of the left graph is now destroyed.
The last paragraph is just one direction. To prove the other direction, we cannot use the right graph in Figure 28, which has an additional green-blue path crossing the red path \(K_{r}|_{a}^{b}\) in a particular way; the initial Type B has no information about this green-blue path. The correct way is to use the right graph in Figure 27, which is the general Type B for \(EP-\{e\}\) and has only one Kempe chain \(K_{r}|_{a}^{b}\). But the proving process is still the reverse: we perform ECS in the bounded region enclosed by \(K_{r}\cup\{e\}\) for this right graph, and then the green color of the edges \(aN\) and \(bN\) turns blue, i.e., all four edges along the outer facet of \(Q\) are of the same color, which is the main characteristic (b1) of Type A given in Theorem 11.2. Thus the necessary conditions (b1), (b2) and (b3) come all together, because we assume \(EP\in e\mathcal{MPGN}4\). Now there must be two Kempe chains \(K_{r}\) and \(K_{g}\), as in characteristic (b2) of Type A.
Figure 28. Congruent partnership between Type A and Type B
_Remark 11.5_.: The existence of a Type A \(e\)-diamond for every \(e\in E(EP)\), together with the picture of Type A, provides a new proof of Corollary 4.4(b).
_Remark 11.6_.: There is another way to prove this theorem by using the concept of block graphs. Given the right graph in Figure 28, we can build the red block graph from \(EP-\{e\}\) as in the first line of Figure 20. Using what we just learned in Subsection 10.3, we have
\[T^{a}_{rgb}\quad\text{(Type A)} \overset{\text{ECS on $rCL(aN)$}}{\Longleftrightarrow} T^{b}_{rgb}\quad\text{(Type B)}\] \[\overset{\text{ECS on $rCL(aS)$}}{\Longleftrightarrow} T^{c}_{rgb}\ \equiv\ T^{a}_{rgb}\quad\text{(Type A)},\]
where \(T^{c}_{rgb}\) has the four surrounding edges of \(e\) all green. In the last line of Figure 20, we skip \(rBG(EP-\{e\};T^{c}_{rgb})\), but we show more details about the exchange between \(T^{b}_{rgb}\) and \(T^{c}_{rgb}\).
Theorem 11.2 nearly offers sufficient conditions for \(EP\in e\mathcal{MPGN}4\). Completing these into if-and-only-if conditions is our final goal.
Here is a direct consequence of Theorem 11.4.
**Theorem 11.7** (Important).: _Let \((EP;e)\) and the \(e\)-diamond be set generally as in Theorem 11.2._
* (a) _The vertices_ \(N\) _and_ \(S\) _are not adjacent in_ \(EP\)_._
* (b) _Not only_ \(EP-\{e\}\) _but also_ \(EP-\{e\}\cup\{NS\}\) _is 4-colorable._
Proof.: (a): If \(N\) and \(S\) are adjacent in \(EP\), then either \(EP=K_{4}\) or the triangle \(N\)-\(a\)-\(S\)-\(N\) forms a non-trivial 3-cycle in \(EP\). Both are impossible. Please, refer to Lemma 4.5. Thus, \(N\) and \(S\) are not adjacent in \(EP\).
(b): Just look at Type B. Let us assign the edge \(NS\) red. Because the pre-existing red Kempe chain prevents a new red cycle passing through \(NS\), this new R-tiling on \(EP-\{e\}\cup\{NS\}\) has no red odd-cycle. Therefore, \(EP-\{e\}\cup\{NS\}\) is 4-colorable.
It can be easily proved by induction that any MPG, say \(G\), has \(3|G|-6\) edges. Let \(\omega:=|EP|\). The next corollary is just for fun.
**Corollary 11.8**.: _Through the modification \(EP-\{e\}\cup\{NS\}\), there are \(3\omega-6\) MPG's which are 4-colorable and differ from \(EP\) in only a single edge._
## 12. Necessary and sufficient conditions for \(EP\)
In addition to our discussion in the last two sections, Types A and B still have some other new characteristics of \(EP\) to explore. We plan to write much more precise statements of these characteristics. The most important thing is to accomplish what the title of this section promises.
Given a \(2n\)-semi-MPG \(M\), an R-tiling (or G-/B-tiling) is _perfect_ if no edge of its color (red, say) lies along the outer facet \(2n\)-gon. We use the word _perfect_ because no red half-tile is
Figure 29. Top line: VCS on \(rC(v_{2})\); Bottom line: ECS on \(rCL(v_{1}v_{2})\)
used, i.e., the tiling is made up entirely of red diamonds. Briefly, we use "R-tiling\({}^{*}\)" as the abbreviation of "R-tiling without any red odd-cycle." In particular, we would like to discuss a 4-semi-MPG \(Q\) with its outer facet \(\Omega:=N\)-\(a\)-\(S\)-\(b\)-\(N\). Notice that most of the time we set \(Q:=EP-\{ab\}\) for a fixed \(e:=ab\in E(EP)\), but now we assume \(Q\) is a general 4-semi-MPG with \(|Q|\leq\omega\).
Let us recall the notation \(T_{r}(Q)\) and \(T_{rgb}(Q)\) for an R-tiling and an RGB-tiling on \(Q\), where \(T_{rgb}(Q)\) means the coexistence of R-, G- and B-tilings. If we obtain an R-tiling\({}^{*}\) \(T_{r}(Q)\) first, then we can extend it to a \(T_{rgb}(Q)\). We can also use \(T_{rg}(Q)\), \(T_{rb}(Q)\) and \(T_{gb}(Q)\); however, they are no different from \(T_{rgb}(Q)\), because once two tilings coexist, a tiling of the third color is immediately ready. Because \(Q\) is a 4-semi-MPG and an R-tiling\({}^{*}\) \(T_{r}(Q)\) on One Piece is always grand, a coexisting \(T_{rgb}(Q)\) extended from \(T_{r}(Q)\) must induce a 4-coloring function on \(Q\). For the details, refer to Theorem 6.7 and Theorem 7.12.
Let the variables \(\mathcal{X}\) and \(y\) denote brief names of one edge-color from red, green and blue; most of the time \(\mathcal{X}\) and \(y\) are the same color. We define the following collections of tilings on \(Q\) (not on \(EP\)). These collections have the general notation \(\mathcal{XT}_{ky}(Q)\), or simply \(\mathcal{XT}_{ky}\) with \(Q\) already assigned, where \(k\in\{0,2,4\}\). Clearly, if \(\{\mathcal{X},y\}\overset{\mathrm{syn}}{=}\{\mathcal{X}^{\prime},y^{\prime}\}\) (either both one color or both two colors) then \(\mathcal{XT}_{ky}\overset{\mathrm{syn}}{=}\mathcal{X}^{\prime}\mathcal{T}_{ky^{\prime}}\), where \(\overset{\mathrm{syn}}{=}\) is the equivalence relation of synonym.
\[\mathcal{RT}_{0r} = \{T_{r}(Q):\text{a perfect R-tiling}^{*},\text{ i.e., all edges of }\Omega\text{ are black}\};\] \[\mathcal{GT}_{2g} = \{T_{g}(Q):\text{a G-tiling}^{*}\text{ s.t. }\Omega\text{ has two green and two black edges}\};\] \[\mathcal{BT}_{4b} = \{T_{b}(Q):\text{a B-tiling}^{*}\text{ with all four edges along }\Omega\text{ blue}\}.\]
We can define the corresponding collections that are extended from the last three:
\[\mathcal{RGBT}_{0r} = \{T_{rgb}(Q):\text{an RGB-tiling with no red along }\Omega\};\] \[\mathcal{RGBT}_{2g} = \{T_{rgb}(Q):\text{an RGB-tiling s.t. }\Omega\text{ has two green}\};\] \[\mathcal{RGBT}_{4b} = \{T_{rgb}(Q):\text{an RGB-tiling with all four edges along }\Omega\text{ blue}\}.\]
Also recall the definition of the north-\(\wedge\) edges and the south-\(\vee\) edges of \(\Omega\). Additionally, we define the east-\(<\) to be \(\{aN,aS\}\), the west-\(>\) to be \(\{bN,bS\}\), the double-slash-\(//\) to be \(\{aN,bS\}\), and the double-backslash-\(\backslash\backslash\) to be \(\{aS,bN\}\). According to these six different pairs of edges, we can divide \(\mathcal{GT}_{2g}\) into six sub-collections. In the following we just pick three of them to write the definitions precisely.
\[\mathcal{GT}_{2g}^{\wedge} = \{T_{g}(Q)\in\mathcal{GT}_{2g}:\text{only the north-$\wedge$ edges are green}\};\] \[\mathcal{GT}_{2g}^{<} = \{T_{g}(Q)\in\mathcal{GT}_{2g}:\text{only the east-$<$ edges are green}\};\] \[\mathcal{GT}_{2g}^{//} = \{T_{g}(Q)\in\mathcal{GT}_{2g}:\text{only the double-slash-// edges are green}\}\]
Clearly, \(\mathcal{GT}_{2g}=\bigcup_{x\in D}\mathcal{GT}_{2g}^{\ x}\) where \(D=\{\wedge,\vee,<,>,//,\backslash\backslash\}\). Let us use \(\langle\cdot\rangle\) to denote the group of synonyms; for instance \(\langle\mathcal{RT}_{0r}\rangle=\mathcal{RT}_{0r}\cup\mathcal{GT}_{0g}\cup\mathcal{BT}_{0b}\) and \(\langle\mathcal{GT}_{2r}\rangle=\mathcal{RT}_{2g}\cup\mathcal{RT}_{2b}\cup\mathcal{GT}_{2r}\cup\mathcal{GT}_{2b}\cup\mathcal{BT}_{2r}\cup\mathcal{BT}_{2g}\). According to the discussion in the last two sections, Type A is associated with \(\langle\mathcal{BT}_{4b}\rangle\), Type B is associated with \(\langle\mathcal{GT}_{2g}^{\wedge}\rangle=\langle\mathcal{GT}_{2g}^{\vee}\rangle\), and the union of Types A and B is associated with \(\langle\mathcal{RT}_{0r}\rangle\), i.e., \(\langle\mathcal{RT}_{0r}\rangle=\langle\mathcal{BT}_{4b}\rangle\cup\langle\mathcal{GT}_{2g}^{\wedge}\rangle\). The impossible Type C for \(EP-\{ab\}\) is \(\langle\mathcal{GT}_{2g}^{<}\rangle=\langle\mathcal{GT}_{2g}^{>}\rangle\), and the also impossible Type D is \(\langle\mathcal{GT}_{2g}^{//}\rangle=\langle\mathcal{GT}_{2g}^{\backslash\backslash}\rangle\). We also let \(\mathcal{RGBT}_{2g}^{x}\) be the extension of \(\mathcal{GT}_{2g}^{x}\) for \(x\in\{\wedge,\vee,<,>,//,\backslash\backslash\}\).
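The pair-patterns just introduced are easy to test mechanically. The sketch below classifies the color pattern of the four edges around the \(e\)-diamond into the patterns corresponding to Types A, B, C and D; the function and its inputs are illustrative, not notation from this paper.

```python
# Sketch: classify the color pattern of the four edges around the e-diamond
# (e = ab with neighbours N, S).  Type A: all four the same; Type B: north pair
# {aN,bN} one color, south pair {aS,bS} the other; Type C: east pair {aN,aS} /
# west pair {bN,bS}; Type D: the two "slash" pairs {aN,bS} / {aS,bN}.

def diamond_type(aN, bN, aS, bS):
    if aN == bN == aS == bS:
        return 'A'
    if aN == bN and aS == bS:
        return 'B'
    if aN == aS and bN == bS:
        return 'C'
    if aN == bS and aS == bN:
        return 'D'
    return 'other (three colors involved)'

print(diamond_type('b', 'b', 'b', 'b'))   # A
print(diamond_type('g', 'g', 'b', 'b'))   # B
print(diamond_type('g', 'b', 'g', 'b'))   # C
print(diamond_type('g', 'b', 'b', 'g'))   # D
```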
**Theorem 12.1** (The Second Fundamental Theorem v1).: _Let \(M\) be an MPG and \(e=ab\in E(M)\); also let \(Q:=M-\{e\}\). The graph \(M\) is 4-colorable if and only if \(\mathcal{GT}^{<}_{2g}\cup\mathcal{GT}^{//}_{2g}\) is non-empty._
Proof.: Without loss of generality, a 4-coloring function of \(M\), if one exists, is Co[\(a\):1, \(N\):4, \(b\):3, \(S\):2 or 4]. We have \(\mathcal{GT}^{//}_{2g}\) non-empty if and only if Co[\(S\):2]; also we have \(\mathcal{GT}^{<}_{2g}\) non-empty if and only if Co[\(S\):4]. The proof is complete.
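The two cases in this proof can be reproduced mechanically, assuming the color pairing red joins \(\{1,3\}\) or \(\{2,4\}\), green joins \(\{1,4\}\) or \(\{2,3\}\), blue joins \(\{1,2\}\) or \(\{3,4\}\); this pairing is an assumption of the sketch, chosen so that the output matches the patterns named in the proof.

```python
# Sketch of the edge-color rule induced by a 4-coloring, under the assumed
# pairing red <-> {1,3},{2,4}, green <-> {1,4},{2,3}, blue <-> {1,2},{3,4}.

def edge_colour(cu, cv):
    pair = frozenset({cu, cv})
    if pair in ({1, 3}, {2, 4}):
        return 'r'
    if pair in ({1, 4}, {2, 3}):
        return 'g'
    return 'b'   # {1,2} or {3,4}

def omega_pattern(col):
    # Omega = N - a - S - b - N around the e-diamond, edge ab removed.
    return {'aN': edge_colour(col['a'], col['N']),
            'bN': edge_colour(col['b'], col['N']),
            'aS': edge_colour(col['a'], col['S']),
            'bS': edge_colour(col['b'], col['S'])}

# The two cases of the proof: Co[a:1, N:4, b:3, S:2 or 4].
print(omega_pattern({'a': 1, 'N': 4, 'b': 3, 'S': 2}))  # green on aN, bS: the // pattern
print(omega_pattern({'a': 1, 'N': 4, 'b': 3, 'S': 4}))  # green on aN, aS: the east-< pattern
```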
This theorem is very simple. However, in practice this condition appears too rarely to be encountered and checked. Still, we have great respect for this theorem as the background of the coming new properties.
**Corollary 12.2**.: _Let \(M\) be an MPG with \(|M|\leq\omega\) and \(e=ab\in E(M)\); also let \(Q:=M-\{e\}\). The graph \(M\in e\mathcal{MPGN}4\) if and only if \(\mathcal{GT}^{<}_{2g}\cup\mathcal{GT}^{//}_{2g}\) is empty._
Proof.: We need \(|M|\leq\omega\) because we need \(Q:=M-\{e\}\) to be 4-colorable, so that the domain \(\mathcal{RGBT}(Q)\) we check is always non-empty. The equivalence itself then follows from Theorem 12.1, because an MPG \(M\) with \(|M|\leq\omega\) belongs to \(e\mathcal{MPGN}4\) exactly when it is not 4-colorable.
Let us temporarily assume \(Q:=EP-\{e\}\). Referring to the right graph in Figure 30, we see all four edges along \(\Omega\) blue. That means the diamonds of \(aN\) and \(bN\) overlap on \(\triangle abN\), and likewise the diamonds of \(aS\) and \(bS\) overlap on \(\triangle abS\). If we extend this \(T_{b}\) to a \(T_{rgb}\), then we shall have Kempe chains \(K_{r}|_{a}^{b}\) and \(K_{g}|_{a}^{b}\) due to Theorem 11.2(b). Referring to the left graph in Figure 30, we see all four edges along \(\Omega\) black. If we extend this \(T_{r}\) to a \(T_{rgb}\), then there are two possible colorings along \(\Omega\): either Type A or Type B. By Theorem 11.2(b) and (c), a Kempe chain \(K_{r}|_{a}^{b}\) is guaranteed. Finally, let us refer to the middle graph in Figure 30, which is an element of \(\mathcal{GT}^{\wedge}_{2g}\). If we extend this \(T_{g}\) to a \(T_{rgb}\), then there are two possible colorings of the edges \(aS\) and \(bS\): either both red or both blue. Both red implies a Kempe chain \(K_{b}|_{a}^{b}\), and both blue implies a Kempe chain \(K_{r}|_{a}^{b}\).
**Theorem 12.3** (The Second Fundamental Theorem v2: the surrounding four edges of the \(e\)-diamond; some necessary conditions).: _Let \(EP\in e\mathcal{MPGN}4\) and let \(e=ab\in E(EP)\) be any edge, where the \(e\)-diamond has its 4-cycle \(\Omega:N\)-\(a\)-\(S\)-\(b\)-\(N\) around it. Let \(Q=EP-\{ab\}\), which is a 4-semi-MPG with its outer facet \(\Omega\). All the following statements are true:_
* (a) _The sets_ \(\mathcal{RT}_{0r}(Q)\)_,_ \(\mathcal{GT}_{2g}^{\wedge}(Q)\) _and_ \(\mathcal{BT}_{4b}(Q)\) _are all non-empty. Also we have_ \(\langle\mathcal{GT}_{2g}^{\wedge}(Q)\rangle=\langle\mathcal{GT}_{2g}(Q)\rangle\)_, i.e.,_ \(\langle\mathcal{GT}_{2g}^{<}(Q)\rangle\) _and_ \(\langle\mathcal{GT}_{2g}^{//}(Q)\rangle\) _are empty._
* (b) _For every_ \(T_{r}\in\mathcal{RT}_{0r}(Q)\)_, there exists a red path of even length from_ \(a\) _to_ \(b\)_._
* (c) _For every_ \(T_{g}\in\mathcal{GT}_{2g}^{\wedge}(Q)\)_, there must exist an extension_ \(T_{rgb}\) _of_ \(T_{g}\) _with the south-\(\vee\) edges blue (or red by symmetry), and then there must exist a red (blue) path of even length from_ \(a\) _to_ \(b\)_._
* (d) _For every_ \(T_{b}\in\mathcal{BT}_{4b}(Q)\)_, there must exist an extension_ \(T_{rgb}\) _of_ \(T_{b}\)_, and then there must exist a red path and a green path from_ \(a\) _to_ \(b\) _(both of even length)._
Proof.: The background of these four claims is Theorem 11.2.
(a): Notice that \(\langle\mathcal{BT}_{4b}\rangle=\mathcal{RT}_{4r}\cup\mathcal{GT}_{4g}\cup\mathcal{BT}_{4b}\). By Theorem 11.2(a), \(\langle\mathcal{BT}_{4b}\rangle\) is non-empty, and so is \(\mathcal{BT}_{4b}\). The same argument works for \(\mathcal{RT}_{2r}^{\wedge}\), \(\mathcal{GT}_{2g}^{\wedge}\), \(\mathcal{BT}_{2b}^{\wedge}\), \(\mathcal{RT}_{2r}^{\vee}\), \(\mathcal{GT}_{2g}^{\vee}\), and \(\mathcal{BT}_{2b}^{\vee}\), because together they form \(\langle\mathcal{GT}_{2g}^{\wedge}\rangle\), which corresponds to all RGB-tilings of Type B w.r.t. \((EP;e)\).
Clearly, tilings in \(\mathcal{GT}_{2g}^{<}\) and \(\mathcal{GT}_{2g}^{>}\) are Type C, and tilings in \(\mathcal{GT}_{2g}^{//}\) and \(\mathcal{GT}_{2g}^{\backslash\backslash}\) are Type D. All these four sets, like Types C and D themselves, are empty. Therefore, \(\langle\mathcal{GT}_{2g}^{<}(Q)\rangle\) and \(\langle\mathcal{GT}_{2g}^{//}(Q)\rangle\) are empty.
(b), (c), (d): By Lemma 7.9, the tilings \(T_{r}\) in (b), \(T_{g}\) in (c) and \(T_{b}\) in (d) must be grand. Additionally, having no odd-cycles makes each of these three single-color tilings induce a 4-coloring function by Lemma 7.8. Thus, \(T_{r}\) in (b), \(T_{g}\) in (c) and \(T_{b}\) in (d) can be extended to their own RGB-tilings.
At this moment, to ensure \(EP\) is an extremum we need a proper Kempe chain \(K_{*}\) such that \(K_{*}\cup\{e\}\) is a non-trivial odd-cycle. Part (b) only needs a red \(K_{r}\), so it is not necessary to extend to a coexisting RGB-tiling. Part (c) really needs a coexisting RGB-tiling \(T_{rgb}\); by symmetry we assume the south-\(\vee\) edges are blue, so that we must have \(K_{r}|_{a}^{b}\) of even length, and then \(K_{r}|_{a}^{b}\cup\{e\}\) is a red odd-cycle. Part (d) also needs a coexisting RGB-tiling, and then the dual Kempe chains \((K_{r},K_{g})\) exist.
_Remark 12.4_.: (1) What is the difference between Theorem 12.3 and Theorem 11.2? Both of them provide necessary conditions for \(EP\in e\mathcal{MPGN}4\), but Theorem 12.3 starts with an R-, G- or B-tiling rather than an RGB-tiling on \(EP-\{e\}\). Also, Theorem 12.3(b), which combines Types A and B, provides a new situation. (2) In order to claim sufficient conditions for \(EP\in e\mathcal{MPGN}4\), we show the following lemma first. This lemma also drops a hint: \(\mathcal{BT}_{4b}(Q)\) might be empty for a general 4-semi-MPG \(Q\) with \(|Q|\leq\omega\). We shall be careful when checking a condition over an empty set, because nothing to check means everything is vacuously true. Therefore, we leave this particular \(\mathcal{BT}_{4b}\) to the next section to discuss. (3) Our next goal is: given an MPG \(M\) with \(|M|\leq\omega\), how do we recognize that \(M\) is non-4-colorable when offered only \(\mathcal{XT}_{ky}(M-\{e\})\)?
For item (d), why did we put "both of even length" in parentheses? The next lemma is the answer.
**Lemma 12.5**.: _In One Piece with \(T_{rgb}\) provided, the existence of the chains \((K_{r}|_{\alpha}^{\beta},K_{g}|_{\alpha}^{\beta})\) implies that both chains have even length._
Proof.: We are in One Piece, so the R-tiling is grand. In a grand R-tiling, a red path \(K_{r}|_{\alpha}^{\beta}\) guarantees that either \(\alpha,\beta\in V_{13}\) or \(\alpha,\beta\in V_{24}\). Then refer to Lemma 7.5 to show that \(K_{g}|_{\alpha}^{\beta}\) has even length. Similarly we can show that \(K_{r}|_{\alpha}^{\beta}\) has even length. That the given R-/G-tiling is grand is the key point of this lemma; the \(n\)-gon outer facet has only a minor impact.
**Lemma 12.6**.: _Let \(Q\) be a general 4-semi-MPG with \(|Q|\leq\omega\) and its outer facet \(\Omega:=N\)-\(a\)-\(S\)-\(b\)-\(N\). The sets \(\mathcal{RT}_{0r}(Q)\) and \(\mathcal{GT}_{2g}(Q)\) are both non-empty._
Proof.: Let us consider three cases: I. \(Q\cup\{ab\}\in e\mathcal{MPGN}4\); II. \(Q\cup\{NS\}\in e\mathcal{MPGN}4\); III. neither I nor II.
[I and II]: See Theorem 12.3(a).
[III]: Along the outer facet \(\Omega\) (a 4-cycle), at least one pair of opposite vertices is non-adjacent in \(Q\). Without loss of generality, say \(N\) and \(S\) are non-adjacent;
then \(Q^{\prime}:=Q\cup\{NS\}\) is an MPG and \(Q\cup\{NS\}\notin e\mathcal{MPGN}4\) by the hypothesis. Since \(|Q^{\prime}|=|Q|\leq\omega\), we know \(Q^{\prime}\) is 4-colorable. Without loss of generality we have a 4-coloring function \(f:V(Q^{\prime})\rightarrow\{1,2,3,4\}\) with \(f(N)=1\) and \(f(S)=3\), i.e., the edge \(NS\) is red and \(\Omega\cup\{NS\}\) is a red diamond. The edge-coloring induced by \(f\) is an RGB-tiling on \(Q^{\prime}\), and hence an R-tiling\({}^{*}\) on \(Q^{\prime}\) with \(NS\) red; in particular, no edge of \(\Omega\) is red. So \(\mathcal{RT}_{0r}\) is non-empty. An RGB-tiling on \(Q^{\prime}\) with \(NS\) red also induces a \(T_{g}\in\mathcal{GT}_{2g}\), since exactly two of the four edges of \(\Omega\) are green under \(f\). So \(\mathcal{GT}_{2g}\) is non-empty.
_Remark 12.7_.: We benefit from \(Q\) being One Piece: any R-/G-/B-tiling\({}^{*}\) on \(Q\) can be extended to an RGB-tiling on \(Q\) and then to a 4-coloring function. Once \(\mathcal{GT}_{2g}(Q)\) is non-empty, so is \(\mathcal{RGBT}_{2g2b}(Q)\). Notice that \(\mathcal{RGBT}_{0r}(Q)=\mathcal{RGBT}_{4g}(Q)\cup\mathcal{RGBT}_{4b}(Q)\cup\mathcal{RGBT}_{2g2b}(Q)\); so \(\mathcal{RGBT}_{0r}(Q)\) is non-empty. Finally, \(\mathcal{RT}_{0r}(Q)\) is non-empty. Now we can say that \(\mathcal{GT}_{2g}(Q)\) plays the key role in Lemma 12.6.
**Example 12.8**.: Consider \(K_{4}\) with vertex set \(\{a,b,N,S\}\) and let \(Q:=K_{4}-\{ab\}\). Clearly \(\mathcal{RT}_{0r}\) and \(\mathcal{GT}_{2g}\) are both non-empty. But \(\mathcal{BT}_{4b}\) is empty: with all four edges of \(\Omega\) blue, the triangle \(a\)-\(N\)-\(S\) would already contain two blue edges.
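On the coloring side, the same claims can be checked by brute force. The sketch below, under the same assumed color pairing as in the earlier sketches, enumerates all proper 4-colorings of \(Q=K_{4}-\{ab\}\) and records the color patterns induced on \(\Omega\); it is only an illustration of the coloring side of the example, not a replacement for the tiling argument.

```python
# Sketch supporting Example 12.8: enumerate all proper 4-colorings of
# Q = K4 - {ab} (vertices a, b, N, S; edges aN, aS, bN, bS, NS) and record the
# colors induced on the outer 4-cycle Omega = N-a-S-b-N.
from itertools import product

def edge_colour(cu, cv):
    pair = {cu, cv}
    if pair in ({1, 3}, {2, 4}):
        return 'r'
    if pair in ({1, 4}, {2, 3}):
        return 'g'
    return 'b'

edges = [('a', 'N'), ('a', 'S'), ('b', 'N'), ('b', 'S'), ('N', 'S')]
omega = edges[:4]
patterns = set()
for colours in product(range(1, 5), repeat=4):
    col = dict(zip(('a', 'b', 'N', 'S'), colours))
    if all(col[u] != col[v] for u, v in edges):          # proper coloring of Q
        patterns.add(tuple(edge_colour(col[u], col[v]) for u, v in omega))

print(('b', 'b', 'b', 'b') in patterns)                  # False: no all-blue Omega
print(any('r' not in p for p in patterns))               # True: no red on Omega occurs
print(any(p.count('g') == 2 for p in patterns))          # True: two-green patterns occur
```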
### Types B, C and D: two greens and two blues
Having two green and two blue edges among the four surrounding edges of the \(e\)-diamond is the common characteristic of Types B, C and D. Thus the next theorem comes up naturally.
**Theorem 12.9** (The Second Fundamental Theorem v3: necessary and sufficient conditions).: _Given any MPG, denoted by \(M\), with \(|M|\leq\omega\), here is a necessary and sufficient condition for \(M\in e\mathcal{MPGN}4\):_ **{There exists an edge (for the sufficient condition) / it is true for every edge (for the necessary condition)}** _\(e=ab\) in \(M\) such that the 4-semi-MPG \(Q:=M-\{e\}\) satisfies one of the following items:_
* (i) _For every_ \(T_{r}\in\mathcal{RT}_{0r}(Q)\)_, there is a red_ \(a\)_-_\(b\) _path_ \(K_{r}\) _of even length in_ \(T_{r}\)_._
* (ii) _For every_ \(T_{g}\in\mathcal{GT}_{2g}(Q)\)_, let us first extend it to an RGB-tiling_ \(T_{rgb}(Q)\)_. By symmetry or the synonym relation, we assume that the four surrounding edges of_ \(e\) _are two green and two blue; then there exists a non-trivial red_ \(a\)_-_\(b\) _path_ \(K_{r}\) _of even length in_ \(T_{rgb}\)_._
* _For every_ \(T_{g}\in\mathcal{GT}_{2g}(Q)\)_, the_ \(e\)_-diamond must be Type B after we extend this_ \(T_{g}\) _to be any possible_ \(T_{rgb}\)_. In other words, we have_ \(\langle\mathcal{GT}_{2g}(Q)\rangle=\langle\mathcal{GT}_{2g2b}^{\wedge}(Q)\rangle\) _(i.e.,_ \(\langle\mathcal{GT}_{2g}^{<}(Q)\rangle=\langle\mathcal{GT}_{2g}^{\vee}(Q) \rangle=\emptyset\)_) and we obtain_ \(K_{r}|_{a}^{b}\)_._
_Remark 12.10_.: First of all, the statement of Theorem 12.9(ii') does not mention that \(K_{r}|_{a}^{b}\) has even length. This fact is automatically true for \(a\)-\(N\)-\(b\) being the same color and by Lemma 6.2(b). The situation and the reason this time are different from Lemma 12.5. Second, Theorem 12.3(a) and (b) (as well as (a) and (c); (a) and (d), respectively) offer a necessary condition for \(EP\). Clearly (a) and (b) associate with item (i) here; (a) and (c) associate with items (ii) and (ii'); but (a) and (d) have no corresponding item here, because such a corresponding and powerful item is a big project that needs a whole new section to study and explain.
_Remark 12.11_.: We write **{There exists an edge (sufficient)/ It is true for every edge (necessary)}** in this theorem. It looks weird that a sufficient condition is weaker than a necessary condition. How does this theorem come to be an if-and-only-if condition? The reason for this phenomenon is a kind of "one (diamond) for all and all for one", as follows:
A particular \(e\)-diamond satisfies any item of Theorem 12.9.
\[\Rightarrow EP\in e\mathcal{MPGN}4.\] \[\Rightarrow \text{Any edge in $EP$ plays the same role as this $e$-diamond by Theorem 12.3(a)}.\] \[\Rightarrow \text{It is true for every edge in $EP$ such that... (all necessary conditions)}.\]
_Remark 12.12_.: The hypothesis \(|M|\leq\omega\) is important, because we need \(Q:=M-\{e\}\) to be 4-colorable, so that the domain \(\mathcal{RGBT}(M)\) to check is always non-empty.
Proof.: First of all, this MPG \(M\) with \(|M|\leq\omega\) implies that \(Q:=M-\{e\}\) is 4-colorable. By Lemma 12.6, the sets \(\mathcal{RT}_{0r}(Q)\) and \(\mathcal{GT}_{2g}(Q)\) are both non-empty. Also, by extending single-color tilings to RGB-tilings, we have \(\langle\mathcal{RT}_{0r}(Q)\rangle=\langle\mathcal{GT}_{2g}(Q)\rangle\cup \langle\mathcal{BT}_{4b}(Q)\rangle\), even though we do not know whether \(\mathcal{BT}_{4b}(Q)\) is empty or not.
(i): By Theorem 12.3(a) and (b), this item is a necessary condition. Now let us prove that it is a sufficient condition. By Lemma 12.6, \(\mathcal{RT}_{0r}(Q)\) is non-empty, so it is logically reasonable to discuss a \(T_{r}\in\mathcal{RT}_{0r}(Q)\). To possibly 4-color \(M\) we shall try every R-tiling in \(\mathcal{RT}_{0r}(M-\{e\})\) first, in order to try the last red \(e\)-diamond. Yes, the last \(e\)-diamond is the final judge: does the only possible red odd-cycle turn out? It does if and only if we see a \(K_{r}|_{a}^{b}\) of even length for every \(T_{r}\in\mathcal{RT}_{0r}(Q)\).
(ii'): By Theorem 12.3(a) and (c), this item is a necessary condition. Now let us prove that it is a sufficient condition. Actually, Corollary 12.2 is a good reference for the if-and-only-if. Item (ii') provides \(\langle\mathcal{GT}_{2g}(Q)\rangle=\langle\mathcal{GT}_{2g2b}^{\wedge}(Q)\rangle\) (i.e., \(\langle\mathcal{GT}_{2g}^{<}(Q)\rangle=\langle\mathcal{GT}_{2g}^{//}(Q) \rangle=\emptyset\)), which is enough for the sufficient condition. But the absence of \(K_{r}|_{a}^{b}\) means that there exists \(K_{r}|_{S}^{N}\), which would transform \(T_{g}^{\wedge}\) into \(T_{g}^{<}\).
(ii') \(\Leftrightarrow\) (ii): The direction (ii') \(\Rightarrow\) (ii) is trivial. Now let us show (ii') \(\Leftarrow\) (ii). We need only determine the positions of the two green and two blue edges along \(\Omega\). The reason is the even length of \(K_{r}|_{a}^{b}\). For even length, the two green edges must be either the north-\(\wedge\) edges or the south-\(\vee\) edges. Therefore, the \(e\)-diamond must be Type B. \(\Box\)
_Remark 12.13_.: If we assume \(e\mathcal{MPGN}4\) is non-empty, then \(\omega\) is well-defined and \(|M|<\omega\) implies that \(M\) is 4-colorable. So all these sufficient conditions are good for checking (distinguishing) the two kinds of \(M\) with \(|M|=\omega\). However, we still consider it a good description to include every \(M\) with \(|M|<\omega\) in this theorem; because once such an \(M\) satisfies any item of Theorem 12.9, we can conclude that either this \(M\) should not exist or the set \(\mathcal{XT}_{ky}(Q)\) of single-color tilings should be empty. Any contradiction is always what we hope for. What if we remove the requirement \(|M|\leq\omega\) and set no limit on \(|M|\)? The problem is that we have no idea about a general MPG, which is not an \(EP\), in \(\mathcal{N}4\). We believe that those necessary conditions in Theorem 12.3 and Theorem 12.9 would not work for this general MPG in \(\mathcal{N}4\). So \(|M|\leq\omega\) is important and cannot be relaxed.
_Remark 12.14_.: Here is an interesting question: for item (ii'), what happens if we have \(\langle\mathcal{GT}^{<}_{2g2b}(Q)\rangle=\langle\mathcal{GT}^{//}_{2g2b}(Q)\rangle=\emptyset\) but know nothing about the existence of \(K_{r}|_{a}^{b}\)? The argument is easy. We claim that if \(\langle\mathcal{GT}^{<}_{2g2b}(Q)\rangle=\langle\mathcal{GT}^{//}_{2g2b}(Q) \rangle=\emptyset\), then \(K_{r}|_{S}^{N}\) cannot appear in any RGB-tiling on \(EP-\{e\}\). Because given \(T^{\wedge}_{rgb}(Q)_{2g2b}\) and \(K_{r}|_{S}^{N}\), we would then have \(T^{//}_{rgb}(Q)_{2g2b}\); given \(T_{rgb}(Q)_{4b}\) and \(K_{r}|_{S}^{N}\), we would then have \(T^{<}_{rgb}(Q)_{2g2b}\). Therefore, \(K_{r}|_{a}^{b}\) must exist for any \(T_{rgb}(Q)\). However, the hypothesis that **given any \(T_{rgb}(Q)_{4b}\) we always see \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\)** is just a necessary condition and not enough to be a sufficient condition for \(Q\) being non-4-colorable. We will show two counterexamples in the next section.
### If-and-only-if condition by odd-cycle and some conjectures
It is nice to achieve several if-and-only-if conditions for \(EP\in e\mathcal{MPGN}4\) by Theorems 12.2 and 12.9. However, the first one is easy to check but rare to encounter, and the second one is hard to check ("every... must be") but we see it many times. The following corollary claims three conditions involving odd-cycles.
**Corollary 12.15**.: _Let \(M\) be an MPG with at least an R-tiling. (We exclude the case that \(M\) has no R-tiling; in that case \(M\) is definitely non-4-colorable.)_
1. _A sufficient and necessary condition for_ \(M\) _to be non-4-colorable is that every R-tiling on_ \(M\) _has at least one red odd-cycle._
2. _Based on (a),_ \(M\in e\mathcal{MPGN}4\) _if and only if there is an R-tiling on_ \(M\) _that has exactly one red odd-cycle (or_ \(oc(T_{r})=1\)_)._
3. _Based on (b), if we fix any edge_ \(e=ab\in E(M)\)_, then there exists an R-tiling on_ \(M\) _whose single red odd-cycle passes through_ \(e\)_._
Proof.: Briefly, (a) \(\Leftrightarrow\) Theorem 6.7, and (c) \(\Leftrightarrow\) Theorem 4.3 together with Theorem 6.7.
As for part (b), we use an unclear concept. We would rather say that the following discussion is not a proof but a definition of "exactly one red odd-cycle" or the definition of \(oc(T_{r})\).
An independent cycle \(C\) means a collection \(E(C)\) of edges such that \(C\) is 2-connected and \(C-\{e\}\) is 1-connected for any \(e\in E(C)\). For example, we see two independent cycles in the first graph of Figure 31. They are one independent odd-cycle and one independent even cycle.
A combination of cycles looks like the second graph of Figure 31. We use this graph to explain. Obviously there are three cycles, namely
\[C_{1} := v_{0}\text{-}v_{1}\text{-}v_{2}\text{-}v_{c}\text{-}v_{7}\text{-}v_{ 0};\] \[C_{2} := v_{5}\text{-}v_{6}\text{-}v_{7}\text{-}v_{c}\text{-}v_{a}\text{-}v _{5};\] \[C_{3} := v_{0}\text{-}v_{1}\text{-}v_{2}\text{-}v_{c}\text{-}v_{a}\text{-}v _{5}\text{-}v_{6}\text{-}v_{7}\text{-}v_{0}.\]
Notice that \(C_{1}\) and \(C_{2}\) are odd-cycles, and \(C_{3}\) is an even-cycle. However, we would not say this R-tiling has two odd-cycles. Actually this R-tiling has only one odd-cycle. We will explain the reason below.
Let \(EC\) be the edge set of this combination; \(EC\) consists of 9 edges. Also define the pair \((\#o,\#e)_{EC}\) to be the numbers of odd-cycles and even-cycles made by \(EC\). Our example has \((\#o,\#e)_{EC}=(2,1)\).
Among the nine edges in our \(EC\), \(v_{7}v_{c}\) is different from the other eight; because \((\#o,\#e)_{EC-\{v_{7}v_{c}\}}=(0,1)\) and \((\#o,\#e)_{EC-\{v_{x}v_{y}\}}=(1,0)\) when \(v_{x}v_{y}\) is one of
Figure 31. Left: two independent cycles; Right: a combination of two cycles which has \(oc(T_{r})=1\).
the other eight. The pairs \((0,1)\) and \((1,0)\) indicate that an independent cycle remains. Therefore, this combination is actually combined from two cycles. Also, due to \((0,1)\) and \((1,0)\), it must be combined from one odd-cycle and one even-cycle. Thus, we would say this combination is \(C_{1}\oplus C_{3}\) or \(C_{2}\oplus C_{3}\); but we would not say it is \(C_{1}\oplus C_{2}\). Now we can conclude that the combination in the second graph of Figure 31 has exactly one odd-cycle.
Please refer to the formal definition below. Choosing any \(e\in EP\), we can have an RGB-tiling \(T_{rgb}\) on \(EP-\{e\}\), and \(T_{r}\cup\{e\}\) has exactly one red odd-cycle, where \(T_{r}\) is the R-tiling induced by \(T_{rgb}\).
**Definition 12.16**.: Let \(T_{r}\) be an R-tiling on an MPG \(M\). This \(T_{r}\) is also the set of all red edges. The number of odd cycles of \(T_{r}\) is defined as
\[oc(T_{r}):=\min\{|S|\mid S\subseteq T_{r}\text{ such that }(\#o,\#e)_{T_{r}-S}=(0, *)\}.\]
Similarly we can define the number of even cycles \(ec(T_{r})\).
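The minimum in Definition 12.16 asks for the fewest red edges whose removal leaves no odd cycle, i.e., leaves a bipartite red subgraph. The following brute-force sketch is not part of the original text; it only illustrates the definition for small edge sets (the helper names and vertex labels are our own) and reproduces the count for the combination on the right of Figure 31.

```python
from itertools import combinations

def is_bipartite(edges):
    """2-colour the graph greedily; returns True iff it contains no odd cycle."""
    adj = {}
    for u, w in edges:
        adj.setdefault(u, []).append(w)
        adj.setdefault(w, []).append(u)
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in colour:
                    colour[w] = 1 - colour[u]
                    stack.append(w)
                elif colour[w] == colour[u]:
                    return False          # an odd cycle has been found
    return True

def oc(red_edges):
    """Definition 12.16: the least number of red edges whose removal
    leaves no odd cycle, i.e. leaves a bipartite red subgraph."""
    edges = sorted({tuple(sorted(e)) for e in red_edges})
    for k in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in set(removed)]
            if is_bipartite(kept):
                return k

# The combination on the right of Figure 31: two 5-cycles sharing the edge v7-vc.
C1 = [("v0", "v1"), ("v1", "v2"), ("v2", "vc"), ("vc", "v7"), ("v7", "v0")]
C2 = [("v5", "v6"), ("v6", "v7"), ("v7", "vc"), ("vc", "va"), ("va", "v5")]
print(oc(C1 + C2))   # 1 -- exactly one red odd-cycle, as argued above
```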
Let us choose an \(e\)-diamond of \(EP\) and a fixed RGB-tiling \(T_{rgb}(EP-e)\), whether \(T_{rgb}\) is Type A or Type B. Also let \(EP\) and this \(e\)-diamond look like either of the graphs in Figure 27. There are a north and a south red canal ring, denoted by \(NrCL_{e}\) and \(SrCL_{e}\) respectively. Indeed, \(NrCL_{e}:=rCL(aN)\) and \(SrCL_{e}:=rCL(aS)\). Without loss of generality, we assume \(Co(a,b)=1\) and then we have a fixed direction of current along \(NrCL_{e}\) and \(SrCL_{e}\). We also use \(NrCL_{e}^{r}\) and \(SrCL_{e}^{r}\) to denote the right canal banks of these two currents. Please refer to Definition 7.1.
The union \(NrCL_{e}^{r}\cup SrCL_{e}^{r}\) has three parts. (1): Let \(rNS(EP;e;T):=NrCL_{e}^{r}\cap SrCL_{e}^{r}\). Clearly \(e\in rNS(EP;e;T)\). (2): Let \(rDJV(EP;e;T)\) consist of those _deja-vu_ edges for \(NrCL_{e}^{r}\) or for \(SrCL_{e}^{r}\). (3): Besides \(rNS(EP;e;T)\) and \(rDJV(EP;e;T)\), the rest of \(NrCL_{e}^{r}\cup SrCL_{e}^{r}\) forms some **red even-cycles**. Furthermore, these cycles are classified into three sub-parts. (3a): Cycles made by edges from both \(NrCL_{e}^{r}\) and \(SrCL_{e}^{r}\) (none of them from \(rNS(EP;e;T)\)); (3b): Cycles made only by edges from \(NrCL_{e}^{r}\); (3c): Cycles made only by edges from \(SrCL_{e}^{r}\).
_Remark 12.17_.: For (1), every edge \(e^{\prime}\) in \(rNS(EP;e;T)\) can play the same role as \(e\) for this fixed \(T_{rgb}\). In other words, we can turn \(e\) from yellow to red and at the same time turn \(e^{\prime}\) from red to yellow. Of course, we need to perform ECS along \(NrCL_{e}\) and \(SrCL_{e}\) (not all, but parts of them). Finally we have a new \(T_{rgb}^{\prime}\) with a Type A or Type B \(e^{\prime}\)-diamond. Usually Type A is good, because it determines at least one new \(K_{g}^{\prime}\) such that \(K_{g}^{\prime}\cup\{e^{\prime}\}\) is an odd-cycle.
_Remark 12.18_.: The cycles in (3), and any other red even-cycle in this \(T_{rgb}(EP-e)\), form their own normal red canal rings. Performing ECS on any of these red canal rings, or even on a combination of them, will of course do nothing to \(K_{r}\) in \(\Sigma^{\prime}\). However, amazingly, these ECS might change the real shapes of \(K_{g}\) and \(K_{r}\), while having nothing to do with the original green/blue connection in view of the skeleton in \(\Sigma^{\prime}\).
_Remark 12.19_.: For (2), a _deja-vu_ edge in \(NrCL_{e}^{r}\) (or \(SrCL_{e}^{r}\)) is a kind of magic, because it can be a shortcut of the current along \(NrCL_{e}^{r}\). If we cross a red deja-vu edge when we perform the ECS designed in Remark 12.17, then we will get two new \(e\)-diamonds. Two \(e\)-diamonds at the same time might offer some interesting results.
Fix a Type A \(e\)-diamond of \(EP\); we are concerned with all kinds of RGB-tilings \(T_{i}(EP-e)\) that have exactly one red (green) odd-cycle if we replace the yellow double-line \(e\) by red (green).
Let us modify \(EP\) by merging \(a=b\) as well as merging \(aN=bN\) and \(aS=bS\), and we obtain a new MPG, denoted by \(EP^{a=b}\) (no more vertex \(b\)). Clearly \(\deg(a)\geq 6\) and \(T_{rgb}\)
such that once \(aN\) and \(aS\)
or Type B
These if-and-only-if conditions for \(EP\) will definitely contribute to further studies. Theorem 12.3 and Theorem 12.9 offer a new approach to proving the Four Color Theorem without checking by a computer. According to the discussion from Section 9 up to here, we summarize the main idea as follows.
Compared with the classical Kempe proof that considered the 4-colorability of
\(EP-\{v\}\) with \(\deg(v)=5\), our new approach studies the 4-colorability of
\(EP-\{e\}\) for any edge \(e\) in \(EP\).
## 13. Type A is just a syndrome
From the successful if-and-only-if conditions provided in the last section, the following property naturally suggests itself.
**False Conjecture 13.1**.: _Given any MPG, denoted by \(M\), with \(|M|\leq\omega\), and provided \(\mathcal{BT}_{4b}(Q)\) is_ **non-empty**_, here is a necessary and sufficient condition for \(M\in e\mathcal{MPGN}4\): There exists an edge \(e=ab\) in \(M\) such that, for the 4-semi-MPG \(Q:=M-\{e\}\), every \(T_{b}\in\mathcal{BT}_{4b}\) with any extended \(T_{rgb}\) admits the dual Kempe chains \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\), both of even length._
Unfortunately, we are going to demonstrate two simple but critical counterexamples as follows.
**Example 13.2**.: The left graphs in Figures 32 and 33 show two MPGs \(M_{1}\) and \(M_{2}\) that fulfill the hypothesis of Conjecture 13.1. Both graphs show an \(e\)-diamond of Type A. Notice that we draw \(\Sigma^{\prime}\) inside and \(\Sigma\) outside \(\Omega:=N\)-\(a\)-\(S\)-\(b\)-\(N\) this particular time. Given that the four edges along \(\Omega\) are all blue, the edges that link any two inner vertices of \(\Sigma^{\prime}\) must all be blue, in order to fulfill the unique blue canal line. The two left graphs show the only possible RGB-tiling on \(Q_{1}/Q_{2}\) under synonyms w.r.t. red and green. So \(|\mathcal{BT}_{4b}(Q_{1})|=|\mathcal{BT}_{4b}(Q_{2})|=1\), both sets are non-empty, and we do find \((K_{r}|_{a}^{b},K_{g}|_{a}^{b})\) in these two RGB-tilings. Unfortunately, the right graphs in Figures 32 and 33 show that \(M_{1},M_{2}\) are 4-colorable if we color edge \(e\) red.
As the title of this section says, Type A is just a syndrome; this does not mean the syndrome is useless. Something might lie behind a syndrome. How do we remedy or diagnose this particular syndrome? We suggest checking by Theorem 12.1 directly. Actually, the Type A syndrome still offers good reason to simplify the checking method of Theorem 12.1.
_Remark 13.3_ (To be or not to be; the limit of Type A).: Conjecture 13.1 is incorrect, but it still reveals some information for our further study. At the end of Part III of this paper (Section 21), we will provide a false proof that concerns two adjacent vertices of degree 5. That proof is incorrect because it uses Conjecture 13.1.
Due to that false proof, we learn more about the limit of Type A. The author has launched a new line of research to find a possible approach to conquer this dilemma of to-be-or-not-to-be.
## 14. 4-cycles in \(EP\)
This is an independent section, but the result is very important. As an extremum MPG, \(EP\in e\mathcal{MPGN}4\) has many properties that other MPG's do not, i.e., some properties that neither 4-colorable graphs nor non-extremum non-4-colorable ones have.
For instance, add an extra vertex \(v\) into the middle of any triangle, say \(w\)-\(x\)-\(y\)-\(w\), of \(EP\) and link new edges from the three surrounding vertices to \(v\) to get a new MPG. This new MPG, denoted \(G:=EP+v\), is of course non-4-colorable and its order is \(\omega+1\). Interestingly \(\deg(v)=3\). Please refer to Theorem 3.1 and Theorem 8.1 as for \(EP\). Also \(w\)-\(x\)-\(y\)-\(w\), a 3-facet in \(EP\), is now a non-trivial triangle of \(EP+v\). Please refer to Lemma 4.5. Does \(G-\{vw\}\) follow those necessary conditions given in Theorem 12.3? The answer is no, because \(Q:=G-\{vw\}\) is non-4-colorable and then the sets \(\mathcal{RT}_{0r}\), \(\mathcal{GT}_{2g}\), \(\mathcal{GT}_{2g}^{\wedge}\) and \(\mathcal{BT}_{4b}\) are all empty. Thus, Theorem 12.3(a) fails for \(G-\{vw\}\), not to mention Theorem 12.3(b), (c) and (d).
Rather than adding just an extra vertex \(v\) to \(EP\), we can glue any new MPG \(M\) with \(|M|\geq 5\) onto \(EP\): just let them share a common triangle \(w\)-\(x\)-\(y\)-\(w\). Notice that the case in the last paragraph has \(V(M)=\{v,w,x,y\}\). This new non-4-colorable MPG, denoted by \(EP\oplus M\), has a non-trivial triangle \(w\)-\(x\)-\(y\)-\(w\). However, Lemma 4.5 says that every triangle in \(EP\) must be trivial. So that lemma is only for \(EP\) as an extremum, not for \(EP\oplus M\). In \(EP\oplus M\), it is also easy to find three non-trivial 4-cycles, where we define a _trivial 4-cycle_ as one associated with the four surrounding edges of a diamond.
In the following, let us focus on 4-cycles in \(EP\). A trivial 3-cycle means a 3-facet, and a trivial 4-cycle forms a diamond; both ideas of "trivial" indicate the kinds of cycles that have no vertex inside.
**Theorem 14.1** (Important).: _If \(\Omega:=a\)-\(b\)-\(c\)-\(d\)-\(a\) is a 4-cycle in \(EP\), then either \(ac\) or \(bd\) is an edge of \(EP\), and \(a,b,c,d\) induce a single diamond in \(EP\)._
_Remark 14.2_.: This is again a special property of \(EP\), not of a non-4-colorable MPG with order greater than \(\omega=|EP|\). The 4-cycle \(\Omega\) separates \(EP\) into two regions: \(\Sigma\) (inside) and \(\Sigma^{\prime}\) (outside) with \(\Sigma\cap\Sigma^{\prime}=\Omega\). Both \(\Sigma\) and \(\Sigma^{\prime}\) are 4-semi-MPG's, and \(|\Sigma|\), \(|\Sigma^{\prime}|\) are less than \(|EP|\) if \(\Omega\) is non-trivial. Otherwise, a trivial 4-cycle \(\Omega\) leaves the inside of either \(\Sigma\) or \(\Sigma^{\prime}\) with no vertex; as a part of the MPG \(EP\), this empty inside must contain either the edge \(ac\) or \(bd\). On the other hand, if \(ac\) exists inside \(\Sigma\), crossing \(\Omega\), then \(\Sigma\) must be a diamond by Lemma 4.5. If both \(ac\) and \(bd\) exist in \(EP\), then by Lemma 4.5 we must have \(EP=K_{4}\), which is a contradiction. This is also the reason that "either \(ac\) or \(bd\)" is an edge of \(EP\).
Proof.: Let us adopt the notation in the remark above. We will prove the statement by contradiction, assuming both sides of \(\Omega\) contain some vertices. By this assumption, neither the edge \(ac\) nor \(bd\) exists in \(\Sigma\) or in \(\Sigma^{\prime}\), because the existence of \(ac\) inside \(\Sigma\) sets up the triangles \(a\)-\(b\)-\(c\)-\(a\) and \(a\)-\(d\)-\(c\)-\(a\), and then forces \(\Sigma\) to be a diamond by Lemma 4.5. Because \(\Sigma\cup\{ac\}\) (draw the edge \(ac\) outside \(\Omega\)) and \(\Sigma^{\prime}\cup\{ac\}\) (draw the edge \(ac\) inside \(\Omega\)) are MPGs with \(|\Sigma|\) and \(|\Sigma^{\prime}|\) less than \(|EP|\), they are 4-colorable, and then \(\Sigma\) and \(\Sigma^{\prime}\) are also 4-colorable.
If every 4-coloring function on \(\Sigma\) gives \(a,b,c,d\) exactly four different colors, then the new graph \(G:=\Sigma\cup\{au,bu,cu,du\}\), obtained by adding a new vertex \(u\) adjacent to \(a,b,c,d\), forms a non-4-colorable MPG. The result either contradicts \(EP\) being an extremum, or gives \(G=EP\), which contradicts Theorem 8.1 since \(\deg(u)=4\). Therefore, there is at least one 4-coloring of \(\Sigma\) that gives \(a,b,c,d\) at most three different colors. This argument also works for \(\Sigma^{\prime}\), so there is at least one 4-coloring of \(\Sigma^{\prime}\) that gives \(a,b,c,d\) at most three different colors.
Suppose both \(\Sigma\) and \(\Sigma^{\prime}\) have a 4-coloring giving \(a,b,c,d\) only two colors. Then \(EP=\Sigma\cup\Sigma^{\prime}\) is 4-colorable, which is a contradiction. So at least one of \(\Sigma\) and \(\Sigma^{\prime}\) has no 4-coloring giving \(a,b,c,d\) only two colors.
Without loss of generality, we assume that no 4-coloring of \(\Sigma\) gives \(a,b,c,d\) only two colors. However, we also know that there is at least one 4-coloring
of \(\Sigma\) that gives \(a,b,c,d\) at most three different colors. So there is a 4-coloring function of \(\Sigma\), say \(f\), with \(f(a)=1\), \(f(b)=2\), \(f(c)=3\), \(f(d)=2\). Notice that there exists either a 1-3 Kempe chain connecting \(a\) and \(c\), or a 2-4 Kempe chain connecting \(b\) and \(d\). However, the existence of that 2-4 Kempe chain would allow us to make \(f(c)=1\) by vertex-coloring-switching applied to the 1-3 connected component containing \(c\); then we would get a new 4-coloring of \(\Sigma\) giving \(a,b,c,d\) only two colors, a contradiction! Thus, the only possibility is a 1-3 Kempe chain connecting \(a\) and \(c\), while \(b\) and \(d\) belong to two different 2-4 connected components. Now we go back to the 4-colorable MPG \(\Sigma^{\prime}\cup\{ac\}\). Without loss of generality, we have a 4-coloring function \(f^{\prime}:V(\Sigma^{\prime})\rightarrow\{1,2,3,4\}\) with \(f^{\prime}(a)=1\), \(f^{\prime}(b)=2\), \(f^{\prime}(c)=3\), \(f^{\prime}(d)=x\) where \(x=2\) or \(4\). If \(x=2\) then once again \(EP=\Sigma\cup\Sigma^{\prime}\) is 4-colorable; a contradiction! If \(x=4\), we go back to \(\Sigma\) and perform vertex-coloring-switching on the 2-4 connected component containing \(d\) to make \(f(d)=4\), i.e., we give \(a,b,c,d\) the colors \(1,2,3,4\) respectively. Once again \(EP=\Sigma\cup\Sigma^{\prime}\) is 4-colorable; a contradiction. From all these contradictions we conclude that \(a,b,c,d\) induce a single diamond in \(EP\), which is equivalent to the fact that either \(ac\) or \(bd\) is an edge of \(EP\).
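The switching step used repeatedly above is mechanical. The following short sketch is not taken from the paper; it shows one straightforward way to implement vertex-coloring-switching on a Kempe chain, with the adjacency-dict representation and the function name being our own choices.

```python
def kempe_switch(adj, colouring, start, c1, c2):
    """Swap the colours c1 <-> c2 on the Kempe chain (the connected component of
    the subgraph induced by the vertices coloured c1 or c2) containing `start`.
    Every neighbour of the chain outside the chain uses neither c1 nor c2, so the
    returned map is again a proper vertex colouring."""
    if colouring[start] not in (c1, c2):
        return dict(colouring)               # start is not on a c1-c2 chain
    new = dict(colouring)
    stack, seen = [start], {start}
    while stack:
        u = stack.pop()
        new[u] = c1 if colouring[u] == c2 else c2
        for w in adj[u]:
            if w not in seen and colouring[w] in (c1, c2):
                seen.add(w)
                stack.append(w)
    return new
```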
At the end of Part II, we leave a question: What are the possible shapes for a 5-cycle in an \(EP\)?
|
2309.10210 | ProtoKD: Learning from Extremely Scarce Data for Parasite Ova
Recognition | Developing reliable computational frameworks for early parasite detection,
particularly at the ova (or egg) stage is crucial for advancing healthcare and
effectively managing potential public health crises. While deep learning has
significantly assisted human workers in various tasks, its application and
diagnostics has been constrained by the need for extensive datasets. The
ability to learn from an extremely scarce training dataset, i.e., when fewer
than 5 examples per class are present, is essential for scaling deep learning
models in biomedical applications where large-scale data collection and
annotation can be expensive or not possible (in case of novel or unknown
infectious agents). In this study, we introduce ProtoKD, one of the first
approaches to tackle the problem of multi-class parasitic ova recognition using
extremely scarce data. Combining the principles of prototypical networks and
self-distillation, we can learn robust representations from only one sample per
class. Furthermore, we establish a new benchmark to drive research in this
critical direction and validate that the proposed ProtoKD framework achieves
state-of-the-art performance. Additionally, we evaluate the framework's
generalizability to other downstream tasks by assessing its performance on a
large-scale taxonomic profiling task based on metagenomes sequenced from
real-world clinical data. | Shubham Trehan, Udhav Ramachandran, Ruth Scimeca, Sathyanarayanan N. Aakur | 2023-09-18T23:49:04Z | http://arxiv.org/abs/2309.10210v1 | # ProtoKD: Learning from Extremely Scarce Data for Parasite Ova Recognition
###### Abstract
Developing reliable computational frameworks for early parasite detection, particularly at the ova (or egg) stage, is crucial for advancing healthcare and effectively managing potential public health crises. While deep learning has significantly assisted human workers in various tasks, its application in diagnostics has been constrained by the need for extensive datasets. The ability to learn from an extremely scarce training dataset, i.e., when fewer than \(5\) examples per class are present, is essential for scaling deep learning models in biomedical applications where large-scale data collection and annotation can be expensive or not possible (in case of novel or unknown infectious agents). In this study, we introduce ProtoKD, one of the first approaches to tackle the problem of multi-class parasitic ova recognition using extremely scarce data. Combining the principles of prototypical networks and self-distillation, we can learn robust representations from only one sample per class. Furthermore, we establish a new benchmark to drive research in this critical direction and validate that the proposed ProtoKD framework achieves state-of-the-art performance. Additionally, we evaluate the framework's generalizability to other downstream tasks by assessing its performance on a large-scale taxonomic profiling task based on metagenomes sequenced from real-world clinical data.
Learning from Extremely Scarce Data, Ova Detection, Microscope Image Analysis
## I Introduction
Parasitic infections pose a significant threat to human and animal health, often leading to severe illness and even fatalities. These infections can be transmitted through various means, including contaminated food and water sources and common disease vectors such as mosquitoes. For instance, certain zoonotic diseases can be transmitted to humans through the consumption of infected livestock, such as cows and pigs. Many infections, particularly gastrointestinal, can cross the species barrier, and can further amplify the public health risks associated with them. Detecting these parasites early, especially at the ova stage, is crucial for preventing outbreaks of parasitic diseases. Parasitic ova (eggs or cysts) have unique characteristics that allow them to be a distinguishing factor between different kinds of parasitic infections. The typical identification process requires the isolation of the egg from fecal samples collected from the infected host which are subsequently analyzed by a parasitologist under a microscope. Genus-level identification can enable the development of treatments to prevent severe infections and large-scale outbreaks.
Deep learning has enabled the development of "smart" systems that have made immense progress in many fields. However, enormous amounts of historical data (hundreds if not thousands of samples per class) have been necessary to derive insights for downstream applications. Advances in disease diagnostics have been sparse due to the need to learn associations from highly limited, constrained data. Acquiring clinical data to train such deep learning models can be expensive, both in terms of the data acquisition cost and the time and effort of highly skilled human users to provide high-quality, large-scale annotations, which curbs the applications of deep learning models trained under a traditional supervised setting. Transfer learning [1] and few-shot learning [2, 3, 4] have somewhat alleviated this dependency but still require significant amounts of quality data. Generative models such as diffusion models [5] and generative adversarial networks (GANs) [6] show potential in generating synthetic samples for training data augmentation but have been prone to memorizing and replicating training data [7], and prone to hallucinating artifacts [8]. Hence, their application in biomedical settings is inhibited due to privacy concerns [9], where patient identity and data integrity could be compromised.
In this work, we present _ProtoKD_, a framework designed to work with extremely scarce data, e.g., where fewer than \(5\) training samples are present per class. Such settings are common in biomedical and diagnostic applications where labeled training data can be expensive. There is a need for rapid learning from a few samples for novel or unseen classes of interest. An illustration of the approach is shown in Figure 1. Our approach is based on the idea of learning robust representations from limited training data and using _domain-specific_ augmentations to create an auxiliary dataset that, together, capture intra-class variation. Through a cyclical, two-phase process, we aim to align representations from original and augmented images to capture variations in the decision boundary across closely related classes. First, a matching loss is introduced to learn robust representations to distinguish _between_ classes using a prototypical network. Second, a self-distillation loss is introduced to help capture the intra-class variations in the data by presenting the network with heavily augmented data and training with pseudo-labels generated by the prototypical network. This step has a two-fold effect: (i) it adds a level of regularization that prevents the networks from overfitting, and (ii) it allows us to introduce other learning losses that help discriminate between fine-grained representations. By extending the idea of prototypical networks and self-distillation, we learn robust representations from extremely sparse data that are capable of capturing the intra-class variations through
domain-specific data augmentation.
The **contributions** of our work are three-fold: (i) we present one of the first works to address the problem of multi-class parasitic ova recognition from microscopic images, (ii) we develop a framework to learn from extremely scarce data (from a single example per class) in a multi-class classification setting, and finally (iii) we demonstrate its generalization to other biomedical tasks by evaluating on metagenome profiling.
### _Related Work_
There have been very few **automatic parasitic ova detection** frameworks explored in the literature. Supervised transfer learning has been explored in detecting and classifying seven species of _Eimeria_ spp. in chickens [10] through the analysis of curated large-scale microscopic imagery [11, 12]. Data augmentation techniques, such as image flipping, adding Gaussian noise and histogram normalization, and transfer learning from ImageNet [13] pre-trained models have enabled the training of large deep learning models for ova detection. However, the dependency on large-scale training data was not alleviated, which limits their generalization to other biomedical applications. Weakly supervised approaches such as those based on Multiple Objects Feature Fusion (MOFF) [14] and traditional image processing techniques [15] have been used to reduce the dependency on densely annotated data, yet still assume access to large-scale datasets for learning associations between the input and target classifications. Advances in generative models such as GANs provided a viable mechanism to generate additional training data for **learning from limited data.** DADA [16] explores training with small samples from CIFAR-10 and SVHN. They use GANs to augment the original dataset for more diversity with the same labels. Barz _et al._[17] showed that the Cosine Loss is better than categorical cross-entropy whenever only a few samples are present per class. Ishikawa _et al._[18] use a conditional GAN for augmentation to improve efficiency in generating training samples but increase the computational cost. Brigato _et al._[19] use Auxiliary-Classifier GANs for image synthesis and classification in low data settings. Meta-learning approaches such as MAML [2] and Prototypical Networks [4, 20] have enabled _few-shot_ learning where only a few samples per class are required for making inferences. However, such or similar approaches [20, 21, 22] assume a reasonably large training corpus exists to create "meta-tasks" for learning robust representations for downstream classification.
## II ProtoKD: Learning from Scarce Data
**Problem Statement.** In this work, we consider the task of learning from extremely scarce samples \(n\leq 5\) per class. During training, the model can access a set of \(N\) training examples \(X\)=\(\{x_{1},x_{2},x_{3},\dots x_{N}\}\) drawn from \(C\) classes with \(n\) samples each. The model is presented with samples from any of the \(C\) classes at test time. In contrast, meta-learning approaches [4, 3, 2] consider the training and evaluation phases to consist of \(c\)-way classification tasks, where \(c\ll C\). Our setup is more challenging since we have a \(C\)-way classification task and have access to extremely scarce data.
#### II-1 Prototypical Networks for Scarce Data
Prototypical networks aim to construct a \(D\)-dimensional vector representation of each class, called the _prototype_, that captures its underlying characteristics. Ideally, each prototype \(p_{j}\) represents the typical example from that class and captures the intra-class variations through an embedding function \(f_{\theta}:x_{i}\longrightarrow F_{i}\) that projects a sample \(x_{i}\) to a \(D\)-dimensional vector \(F_{i}\). The prototype is computed as the mean representation of all \(n\) training samples in the class \(c_{j}\in C\). In a typical meta
Fig. 1: The **overall architecture** of the proposed ProtoKD is illustrated. Based on prototypical networks and self-distillation, a two-stage process captures the intra-class variations that can occur in biological data in a robust representation.
learning setup, a _support_ set \(S\) consisting of \(k\) examples from \(c\) classes is sampled to construct these prototypes. A function \(\phi\) provides the distance between each example in the _query_ set \(Q\) and each prototype. A softmax function over these distances provides a probability distribution to label each example \(x_{i}\in Q\). This setup assumes that (i) each sample in the query comes from the \(c\) classes sampled in the episode and (ii) there exists a reasonably large dataset to sample the support and query sets during training to learn good representations for \(c\)-way classification. However, in biomedical applications, the number of training samples can be sparse due to the high data acquisition cost or the need to adapt rapidly to a changing scenario. Similarly, assuming a smaller subset of classes at inference is unrealistic since this requires prior knowledge about each test sample.
We extend the prototypical network formulation to the extremely scarce data regime to overcome these limitations by proposing to learn the embedding function \(f_{\theta}\) through domain-specific data augmentation. Given a support set \(S_{i}\), each example \(x_{i}\) is augmented through a controlled, pre-determined scheme to create \(k\) samples \(\{x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3},\dots x^{\prime}_{k}\}\) that become part of the query set. Hence, each minibatch consists of examples from _all_\(C\) classes, with the prototypes constructed from original, uncorrupted samples and the query samples providing samples that cover the plausible intra-class variations for each class. The training objective is to minimize the negative log probability of each query sample \(x^{\prime}_{i}\) belonging to its true class \(c_{j}\) represented by its prototype \(p_{j}\). Hence, we minimize the matching loss (\(\mathcal{L}_{m}\)) given by
\[-logp(y=j|x^{\prime}_{i},\theta)=-log\left(\frac{exp(-\phi(f_{\theta}(x^{\prime}_{i}),p_{j}))}{\sum_{k}^{C}exp(-\phi(f_{\theta}(x^{\prime}_{i}),p_{k}))}\right) \tag{1}\]
where \(x^{\prime}_{i}\) is an example from the query set \(Q\); \(\phi(\cdot)\) is a distance function that provides the distance (in the range \([0,+\infty)\)) between each sample \(x^{\prime}_{i}\) and a prototype \(p_{j}\); and \(f_{\theta}\) is an embedding function, defined as a Wide ResNet [23], and \(\phi(\cdot)\) is the Euclidean distance. The data augmentation scheme is specific to each use case and must be designed based on prior, domain-specific knowledge. In our experiments with the parasite ova data, we apply augmentation mechanisms such as random zoom, rotation, contrast change, flip, shear, and solarization. However, these are not plausible augmentation schemes for the genome data to capture the intra-class variations. Hence, we design genome-specific augmentation schemes based on base flipping to simulate observation error [24]. We add noise drawn from a normal distribution (with \(0\) mean and variance of \(1\)) to \(5\%\) of the pixels randomly selected symmetrically along the diagonal. This augmentation mechanism mimics the observation errors commonly found in genome sequencing [24, 25] and provides a natural augmentation scheme to capture the intra-class variation and additionally helps preserve the symmetry of the pseudo-image-based k-mer representations. We refer the reader to [26] for more details on the pseudo-imaging for genomics.
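As a concrete illustration, the following is a minimal PyTorch sketch of Eq. (1); it is not taken from the authors' code. The `encoder` stands in for the Wide-ResNet backbone that maps a batch of images to \(D\)-dimensional embeddings, and the tensor-shape conventions are our own assumptions.

```python
import torch
import torch.nn.functional as F

def matching_loss(encoder, support, queries, query_labels):
    """Eq. (1): softmax over negative distances to the class prototypes.

    support:      (C, n, ...) original (uncorrupted) samples, n per class (n = 1 here)
    queries:      (M, ...)    augmented samples x'_i
    query_labels: (M,)        true class index of each query
    """
    C, n = support.shape[:2]
    # Prototype p_j = mean embedding of the support samples of class j.
    proto = encoder(support.flatten(0, 1)).view(C, n, -1).mean(dim=1)   # (C, D)
    feat = encoder(queries)                                             # (M, D)
    dist = torch.cdist(feat, proto)       # Euclidean distance phi       (M, C)
    # -log p(y = j | x'_i) is exactly cross-entropy over the logits -phi.
    return F.cross_entropy(-dist, query_labels)
```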
#### II-2 Learning Intra-class Variations with Self-Distillation
The second step is to enhance the prototypes \(P{=}\{p_{1},p_{2},p_{3},\dots p_{C}\}\) to capture the intra-class variations. First, we generate _pseudo-labels_ for each query sample \(x^{\prime}_{i}\in Q\) using the probability distribution defined in Equation 1 to create labels for the augmented samples. This _self-labeled_ data is then used to fine-tune the encoder network using a distillation loss given by
\[\mathcal{L}_{s}=D_{KL}\big(p_{t}(y=j|x^{\prime}_{i},\theta_{t};\tau),\,p_{s}(y=j|x^{\prime}_{i},\theta_{s};\tau)\big) \tag{2}\]
where \(D_{KL}(\cdot)\) is the Kullback-Leibler divergence [27] between the probability distributions of the prototypical network defined in Section II-1 and a linear layer trained on top of representations \(F_{i}\) from \(f_{\theta}\); \(\tau\) is a temperature parameter that controls how closely the linear layer's output should match that of the prototypical network; and \(\theta_{t}\) and \(\theta_{s}\) are the trainable parameters of the prototypical network (i.e., the embedding function \(f_{\theta}\)) and the linear layer, respectively. We set \(\tau\) to 5, chosen based on a grid search between \(0\) and \(10\). This formulation could be extended to leverage unlabeled data into a semi-supervised learning setting in future works. We leave
Fig. 2: **Dataset Characteristics.** Our parasite ova dataset has \(1573\) instances across \(14\) ova classes, imaged from \(594\) clinical samples. An exemplar image from each class and the number of samples in each class are presented. It is highly imbalanced, with high inter- and intra-class variation.
that exploration to future work and focus only on the extremely scarce data setting.
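A short sketch of Eq. (2) is given below; it reflects our own reading rather than the authors' code. In particular, applying the temperature \(\tau\) to both sets of logits and detaching the prototype-based teacher are standard knowledge-distillation choices that we assume here; `proto_logits` are the negative distances of Eq. (1).

```python
import torch.nn.functional as F

def distillation_loss(proto_logits, student_logits, tau=5.0):
    """Eq. (2): KL divergence between the temperature-softened distribution of
    the prototypical network (teacher) and the linear head trained on top of
    the same features (student)."""
    teacher = F.softmax(proto_logits.detach() / tau, dim=-1)
    student = F.log_softmax(student_logits / tau, dim=-1)
    # F.kl_div takes the student log-probabilities first, target probabilities second.
    return F.kl_div(student, teacher, reduction="batchmean")
```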
In addition to the distillation loss described above, we introduce a simple discriminative loss to help increase the separability of the decision boundary between the classes. To this end, we employ a similarity-based contrastive loss that reduces the difference between the features of each query sample \(x^{\prime}_{i}\) and its corresponding prototype \(p_{j}\) while increasing the distance to other prototypes. The intuition behind this loss is that increasing the distance between the query sample and other prototypes can capture the variability within each class since we widen its decision boundary based on its prototype \(p_{j}\). We define this to be a discriminative loss given by
\[\mathcal{L}_{d}=-log\left(\frac{exp(p_{j}^{T}\cdot F_{i})}{\sum_{k=1}^{C} \openone_{j\neq k}exp(p_{k}^{T}\cdot F_{i})}\right) \tag{3}\]
where \(F_{i}\) is the feature representation of a query sample \(x^{\prime}_{i}\); \(\openone_{j\neq k}\in\{0,1\}\) indicates whether the prototype \(p_{k}\) belongs to a class other than the sample's true class \(c_{j}\); and both \(p_{j}\) and \(F_{i}\) are \(\ell_{2}\) normalized vectors. This formulation allows us to leverage our proposed domain-specific augmentation setup for a contrastive learning mechanism without having to sample new positive and anchor examples or perform triplet mining, as with other contrastive learning mechanisms such as SimCLR [28] or triplet losses [29].
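A direct transcription of Eq. (3), again as our own sketch rather than the authors' implementation, is shown below; note that the denominator runs over \(k\neq j\) only, as in the equation.

```python
import torch
import torch.nn.functional as F

def discriminative_loss(feat, prototypes, labels):
    """Eq. (3): pull each query embedding towards its own prototype and push it
    away from every other prototype; all vectors are l2-normalised."""
    feat = F.normalize(feat, dim=-1)                  # F_i            (M, D)
    proto = F.normalize(prototypes, dim=-1)           # p_j            (C, D)
    sim = feat @ proto.t()                            # p_k^T . F_i    (M, C)
    pos = sim.gather(1, labels.view(-1, 1)).squeeze(1)
    # Mask out k = j so the log-sum-exp covers the other prototypes only.
    mask = F.one_hot(labels, sim.size(1)).bool()
    neg = sim.masked_fill(mask, float("-inf"))
    return (-pos + torch.logsumexp(neg, dim=1)).mean()
```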
**Implementation Details.** The framework is trained end-to-end in two alternating phases. Every epoch consists of \(10\) iterations of training only with the matching loss, followed by \(10\) iterations of training using both distillation and discriminative losses. The matching loss cycle allows us to build prototypes of each class, while the self-distillation phase allows us to refine the prototypes by capturing the intra-class variation explicitly. We use a Wide-ResNet [23], trained from scratch, as the backbone for our mapping function, with a depth of \(28\) layers, a width of \(2\), and a dropout rate of \(0.3\). The features are projected to a \(128\)-dimension vector using a linear layer. The images are resized to \(128\times 128\) and pre-processed as done in ResNet [30]. All networks are trained for 100 epochs with a support set size of \(1\) and a query set of \(5\) and converge in 90 minutes. All experiments were conducted on a workstation server with an AMD ThreadRipper CPU with 64 cores, 128 GB RAM, and an NVIDIA Titan RTX (24GB).
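Putting the pieces together, one epoch of the alternating two-phase cycle described above could look like the sketch below. Here `sampler`, `encoder`, `head` and `opt` are hypothetical stand-ins for the episode sampler, the Wide-ResNet backbone, the linear classifier and the optimizer, and the three loss functions are the sketches given earlier; this is an illustrative outline under those assumptions, not the authors' training code.

```python
import torch

def train_epoch(encoder, head, sampler, opt, tau=5.0):
    # Phase 1: ten iterations with the matching loss only (builds the prototypes).
    for _ in range(10):
        support, queries, labels = sampler()          # one support sample per class
        loss = matching_loss(encoder, support, queries, labels)
        opt.zero_grad(); loss.backward(); opt.step()

    # Phase 2: ten iterations with self-distillation + discriminative losses on the
    # heavily augmented queries, supervised by the prototype-based "teacher".
    for _ in range(10):
        support, queries, labels = sampler()
        C, n = support.shape[:2]
        proto = encoder(support.flatten(0, 1)).view(C, n, -1).mean(dim=1)
        feat = encoder(queries)
        proto_logits = -torch.cdist(feat, proto)      # teacher logits, as in Eq. (1)
        loss = (distillation_loss(proto_logits, head(feat), tau)
                + discriminative_loss(feat, proto, labels))
        opt.zero_grad(); loss.backward(); opt.step()
```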
## III Experimental Evaluation
In this section, we present the experimental setup and evaluation results for the proposed ProtoKD approach. We begin by describing the data collection process to curate the parasitic ova recognition dataset, one of the first attempts at a comprehensive benchmark for the task. We discuss the metrics and baselines used for evaluation and present the quantitative results. We conclude by demonstrating the generalization capabilities of the proposed ProtoKD framework to other biomedical applications, such as genome classification.
**Data Collection.** We collected 1573 ova examples, curated from 594 clinical samples at a local [redacted for anonymity] diagnostics laboratory. Each type of egg was collected by performing a centrifugal fecal flotation on samples from various hosts infected with different parasites; the eggs were subsequently observed under a microscope by a certified parasitologist. Following identification, images were captured using the Olympus BX43 microscope (Tokyo, Japan) and the Olympus Cell Sens Entry software v1.18. Based on their frequency of occurrence in samples received at both local and national diagnostics labs, the following 14 parasitic ova were considered: Capillarids, _Cystoisospora_ spp., _Dipylidium caninum_, _Eimeria_ spp., _Giardia_ spp., _Moniezia_ spp., _Nematodirus_ spp., _Parascaris_ spp., Strongyles, _Taeniid eggs_, _Toxascaris leonina_, _Toxocara_ spp., Trichostrongyles, and _Trichuris_ spp. Figure 2 provides examples and statistics. Each parasite and its relative size determined whether the magnification would be the 40x, 20x, or 10x objective. Most of the images captured are at 10x objective magnification, which is available on a large majority of microscopes. Clinical samples were collected until each class had at least 25 examples to provide a comprehensive benchmark for evaluating machine learning frameworks for ova recognition with extremely scarce data.
**Metrics and Baselines.** Due to the highly imbalanced nature of the data, we choose precision and recall per class as our evaluation metrics since accuracy can be highly skewed towards classes with more examples. We evaluate the performance of each algorithm with a training set with \(1\) example per class and report the mean results from \(10\) random trials to avoid conflating the results due to the choice of the example from each class. We use a Wide-ResNet as the backbone network for all baselines, including a fully
supervised one, the modified ProtoNet from Section II-1, and ProtoKD. All hyperparameters for each baseline are kept constant for each trial and trained for \(100\) epochs with early stopping based on a validation set of \(5\) samples per class.
**Performance on Parasite Ova Data.** We first evaluate and compare our approach against baselines on the parasite ova dataset. Table I summarizes the results when training with only one example per class. The proposed ProtoKD approach performs well across classes, with an average precision of \(0.523\) and recall of \(0.544\), outperforming the supervised and ProtoNet baselines. Of particular interest is the performance of the baselines on the two classes with the least intra-class variation (_Giardia_ spp.) and the highest intra-class variation (_Nematodirus_ spp.). With _Nematodirus_ spp., ProtoKD's recall (\(0.790\)) was almost \(1.5\) times that of ProtoNet (\(0.464\)) and the supervised (\(0.414\)) baselines, which indicates that the self-distillation and discriminative losses played their role in learning robust prototypes. All three baselines performed well on Giardia, with ProtoKD achieving \(100\%\) recall with a high precision of \(0.945\). On average, we find that ProtoKD achieves high recall at the cost of precision, particularly in the case of classes with a large number of samples, such as _Parascaris_ spp. and _Trichuris_ spp. We hypothesize that this is an effect of the contrastive, discriminative loss intended to expand each class's decision boundary. Interestingly, ProtoKD severely fails on _Dipylidium caninum_. Upon close inspection, _Dipylidium caninum_ was highly confused with Trichostrongyle eggs, which, though visually similar (see Figure 2), are functionally different. The strong augmentation scheme resulted in overlapping decision boundaries due to the extreme variation in Trichostrongyle examples. We anticipate that using super-resolution mechanisms [31, 32] to enhance the images will help find better representations.
**Ablation Studies.** We perform ablation studies to systematically evaluate the impact of the different components of the approach on its performance. The three loss functions, defined in Equations 1, 2, and 3, are the major components of the approach. Hence, we ablate over the impact of using \(\mathcal{L}_{s}\) and \(\mathcal{L}_{d}\) in combination with \(\mathcal{L}_{m}\) and report results in Table II. Using only the matching loss (Equation 1), the approach degenerates to a standard ProtoNet, one of our baselines (Row 1). When \(\tau\) in \(\mathcal{L}_{s}\) (defined in Equation 2) is set to 1, it becomes the standard cross-entropy loss and hence is equivalent to our fully supervised baseline (Row 2). It can be seen that using the knowledge distillation loss (\(\mathcal{L}_{s}\)) alone or the matching loss (\(\mathcal{L}_{m}\)) alone performs reasonably well, although not as well as the proposed ProtoKD. Adding the discriminative loss (\(\mathcal{L}_{d}\)) with the matching loss (\(\mathcal{L}_{m}\)) provides a higher increase in performance than combining the matching loss (\(\mathcal{L}_{m}\)) with self-distillation (\(\mathcal{L}_{s}\)). Combining all three provides a higher increase overall, indicating the subtle balance between learning inter-class and intra-class variations provided by the alternating training methodology proposed in ProtoKD. Note that this formulation can naturally be extended to semi-supervised learning where the self-distillation loss (\(\mathcal{L}_{s}\)) can be used to train on unlabeled data. We leave that to future work since our focus is on tackling the problem of learning from scarcely available (\(<5\) samples per class) training data.
**Extension to Other Biomedical Applications.** In addition to our experiments on parasite ova recognition, we evaluate the generalizability of the proposed ProtoKD formalism to other biomedical applications by evaluating its ability to learn representations from an entirely separate application: metagenome sequences. We evaluate the ProtoKD framework on the data provided by MG-NET, which has \(31,580\) sequence reads across seven classes - Bovine (host), _B. trehalosi_, _H. somni_, _M. bovis_, _M. haemolytica_, _P. multocida_ and _T. pyogenes_. Specifically, we use the pseudo-images generated by the MG-NET framework as input and evaluate it by training with varying samples per class. The test set was fixed with 8192 samples for a fair comparison with MG-NET in all the scenarios. Average performance from 10 trials is reported. Table III summarizes the result. We can see that the proposed ProtoKD framework and ProtoNet outperform the supervised MG-NET with very few samples, i.e., fewer than 25 samples per class. It takes MG-NET at least 500 samples per class to outperform ProtoKD with 25 samples, achieving an overall host F1-score of \(0.906\) and an average pathogen F1-score of \(0.319\). Interestingly, the performance initially reduces as the number of samples per class is increased. We attribute this phenomenon to the fact that fine-grained recognition requires _highly distinct_ samples for learning robust features. Closely related species have been shown to have similar genome sequences [33]; hence, larger amounts of data do not necessarily translate into better performance. For example, ProtoNet achieves a higher pathogen F1-score (\(0.200\)) than ProtoKD (\(0.193\)) at 25 samples. It has \(100\%\) precision and \(0\%\) recall on two pathogen classes, indicating that the model does not make balanced predictions and fails on edge cases. We anticipate that including structural information [26, 33] and other metadata will improve the performance.
## IV Conclusion and Future Work
In this work, we presented _ProtoKD_, one of the first works to tackle the problem of learning from extremely scarce training samples. Using a benchmark dataset of parasitic ova, we demonstrate its strong ability to learn robust representations from just one example per class. Experiments on large-scale metagenome-based taxonomic profiling data demonstrated its generalizability to other downstream applications. We anticipate using super-resolution to enhance the images will help find better representations. We aim to extend this framework
for scaling deep learning frameworks to work with highly constrained data typical in biomedical applications such as disease diagnostics and the Internet of Medical Things.
**Acknowledgement.** This work was partially supported by the US National Science Foundation (NSF) grant IIS 1955230.
|
2305.19687 | Free Energy of Anisotropic Strangeon Stars | Can pulsar-like compact objects release further huge free energy besides the
kinematic energy of rotation? This is actually relevant to the equation of
state of cold supra-nuclear matter, which is still under hot debate. Enormous
energy is surely needed to understand various observations, such as
$\gamma-$ray bursts, fast radio bursts and soft $\gamma-$ray repeaters. In this
paper, the elastic/gravitational free energy of solid strangeon star is
revisited for strangeon stars, with two anisotropic models to calculate in
general relativity. It is found that huge free energy (> $10^{46}$ erg) could
be released via starquakes, given an extremely small anisotropy ($(p_{\rm
t}-p_{\rm r})/p_{\rm r} \sim 10^{-4}$, with $p_{\rm t}$/$p_{\rm r}$ the
tangential/radial pressure), implying pulsar-like stars could have great
potential of free energy release without extremely strong magnetic fields in
solid strangeon star model. | Shichuan Chen, Yong Gao, Enping Zhou, Renxin Xu | 2023-05-31T09:30:01Z | http://arxiv.org/abs/2305.19687v3 | # Free Energy of Anisotropic Strangeon Stars
###### Abstract
Can pulsar-like compact objects release further huge free energy besides the kinematic energy of rotation? This is actually relevant to the equation of states of cold supra-nuclear matter, which is still under hot debate. Enormous energy is surely needed to understand various observations, such as \(\gamma-\)ray bursts, fast radio bursts and soft \(\gamma-\)ray repeaters. The elastic/gravitational-free energy of solid strangeon star is revisited, with two approaches to calculate in general relativity. It is found that huge free energy (\(>10^{46}\) erg) could be released via starquakes, given an extremely small anisotropy (\((p_{\rm t}-p_{\rm r})/p_{\rm r}\sim 10^{-4}\), with \(p_{\rm t}/p_{\rm r}\) the tangential/radial pressures).
keywords: pulsars: general -- methods: numerical
## 1 Introduction
A compact object composed of dense matter at supra-nuclear density forms after the release of nuclear free energy in massive stars stops; such an object was initially termed a "gigantic nucleus" by Landau (1932). Can this kind of compact star release further huge free energy besides the rotational energy? This is an issue with a long history, relevant to the equation of state of cold supra-nuclear matter, which is challenging in both physics and astronomy nowadays (Xu, 2023).
Observationally, an evolution of a post-burst relativistic fireball with free energy injection from the compact star through magnetic dipole radiation may provide a natural explanation for the plateau of \(\gamma\)-ray bursts (GRBs) (Dai & Lu, 1998; Zhang & Meszaros, 2001; Mei et al., 2022). As the companion piece of GRBs, fast radio bursts (FRBs), especially the repeating ones with high burst rate, are calling enormous free energy of compact central engines, which are most likely pulsar-like objects (Wang et al., 2018, 2022; Luo et al., 2020; Li et al., 2021; Xu et al., 2022). In addition, tremendous free energy is shown in the detections of the flares of galactic even extragalactic sources, so-called soft \(\gamma\)-ray repeaters, especially for the giant ones (Hurley et al., 2005; CHIME/FRB Collaboration et al., 2020; Fermi-LAT Collaboration et al., 2021), with extremely bright giant flares with energy of \(10^{44-47}\) erg (Hurley et al., 1999; Palmer et al., 2005).
Theoretically, though the possibility of a solid core (Ruderman, 1972; Canuto & Chitre, 1973) cannot yet be ruled out, a conventional neutron star (NS) is fluid-like except for a solid crust (i.e., similar to a raw egg), the free energy of which could be negligible but might be significant in the case of a strongly magnetized state (Duncan & Thompson, 1992; Usov, 1992; Thompson & Duncan, 1993), the so-called magnetars (Thompson & Duncan, 1995; Kouveliotou et al., 1998) with extremely strong magnetic fields (\(\sim 10^{13-15}\)G). Nevertheless, nucleon-like units with strangeness, called _strangeons_, may form in bulk supra-nuclear matter produced during core-collapse supernovae, and a strangeon star (SS) (Xu, 2003; Lai & Xu, 2009; Lai et al., 2023) should be in a globally solid state (i.e., similar to a cooked egg) due to the large masses of and the strong coupling between strangeons. A calculation of the free energy for anisotropic SSs was presented in Newtonian gravity, showing a huge amount of energy released via starquakes when stellar stresses reach a critical value (Xu et al., 2006), and an updated version with Einstein's gravity will be given in the present work.
The free energy of a pulsar-like compact object certainly depends on the equation of state of bulk matter at supra-nuclear density, and it is generally thought that strangeness would play an important role in understanding this puzzling state, probably to be the first big problem solved in the era of gravitational-wave astronomy (Bodmer, 1971; Witten, 1984); see also Xu (2018) for a brief introduction. It has thus been suggested that pulsars could be strange quark stars (QSs), having similar mass and radius to those of normal NSs (Haensel et al., 1986; Alcock et al., 1986), which makes the QS a possible candidate model for this kind of compact object. The basic units of a strange star would be quarks for a QS, but could be strangeons if three-flavored quarks are localized in strangeons as for nucleons in the two-flavored case (Xu, 2003). The SS model has been successful in explaining many phenomena of pulsar-like stars, including the subpulse-drifting (Xu et al., 1999; Lu et al., 2019), the glitches interpreted with starquakes (Zhou et al., 2004, 2014; Lai et al., 2018; Wang et al., 2021; Lu et al., 2023), the Optical/UV excess in X-ray dim isolated NSs (Wang et al., 2017), as well as massive pulsars (\(\sim~{}2\rm M_{\odot}\)) proposed before discoveries (Lai & Xu, 2009). The SS model is also consistent with the results of tidal deformability (Lai et al., 2019) of and the light curve (Lai et al., 2018) from GW170817. In addition, a photon-driven mechanism might alleviate the current difficulty in core-collapse
supernovae by forming a strange star inside the collapsing core (Chen et al., 2007), producing more free energy injected into the explosive shock wave than that of conventional neutrino-driven ones (Melson et al., 2015). The model could also be tested in the future by detecting gravitational-wave echoes associated with strangeon stars (Zhang et al., 2023).
This paper focuses on the free energy of solid SSs, with numerical calculations of the strain energy released during a starquake within general relativity in a spherically symmetric spacetime.
## 2 The model
### TOV equations in the anisotropic case
For a spherically symmetric star modelled as a perfect fluid in static equilibrium, the Tolman-Oppenheimer-Volkoff (TOV) equations determine the structure of the star. However, isotropy is only a common assumption; it is natural to expect that strongly interacting matter, such as that in NSs, should be described by a locally anisotropic equation of state (EOS) (e.g. Ruderman, 1972; Bowers & Liang, 1974).
For simplicity, consider a static distribution of anisotropic matter in spherically symmetric spacetime. In Schwarzschild-like coordinates, the metric can be written as:
\[ds^{2}=-e^{2\alpha(r)}dt^{2}+e^{2\beta(r)}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}) \tag{1}\]
The spherically symmetric spacetime also implies that the stress-energy tensor \(T_{\mu\nu}\) can be written as
\[T_{\mu\nu}=\mathrm{diag}(\rho,p_{\mathrm{r}},p_{\mathrm{t}},p_{\mathrm{t}}) \tag{2}\]
where \(\rho\) is the energy density, \(p_{\mathrm{r}}\) is the radial pressure and \(p_{\mathrm{t}}\) is the tangential pressure.
Combining with the Einstein equations, we have
\[e^{2\beta(r)}=(1-\frac{2m(r)}{r})^{-1} \tag{3}\]
\[\frac{d\alpha}{dr}=e^{2\beta(r)}\left(\frac{m(r)}{r^{2}}+4\pi rp_{\mathrm{r}}\right) \tag{4}\]
\[\frac{dp_{\mathrm{r}}}{dr}=-(p_{\mathrm{r}}+\rho)\frac{d\alpha}{dr}+\frac{2\Pi}{r} \tag{5}\]
where \(\Pi=p_{\mathrm{t}}-p_{\mathrm{r}}\) measures the local anisotropy and \(m(r)=\int_{0}^{r}4\pi r^{2}\rho dr\) is the mass within the radius \(r\).
Equations (3), (4), and (5) are the generalized TOV equations for the anisotropic case. Compared to the standard TOV equations, equation (5) shows that the difference comes from the new variable \(\Pi=p_{\mathrm{t}}-p_{\mathrm{r}}\), which must be fixed by an additional assumed relation, as explained in § 2.2.
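For readers who want to integrate these equations numerically, a minimal Python sketch of the right-hand side of (3)–(5) is given below. It is our own illustration rather than the code used for the results in this paper: the EOS and the anisotropy prescription enter only through the placeholder callables `rho_of_p` and `Pi_model` (the concrete choices are discussed in § 2.2 and § 2.3), and geometric units \(G=c=1\) are assumed.

```python
import numpy as np

def tov_rhs(r, y, rho_of_p, Pi_model, eta):
    """Right-hand side of the anisotropic TOV system, eqs. (3)-(5).

    y = (m, alpha, p_r) in geometric units (G = c = 1).
    rho_of_p : callable giving the energy density rho(p_r) from the EOS.
    Pi_model : callable Pi(r, p_r, dpdr_iso, eta) returning p_t - p_r.
    """
    m, alpha, p_r = y
    rho = rho_of_p(p_r)

    e2beta = 1.0 / (1.0 - 2.0 * m / r)          # eq. (3)

    dm_dr = 4.0 * np.pi * r**2 * rho            # from m(r) = int 4 pi r^2 rho dr
    dalpha_dr = e2beta * (m / r**2 + 4.0 * np.pi * r * p_r)   # eq. (4)

    # isotropic part of the pressure gradient, plus the anisotropic term 2 Pi / r
    # of eq. (5); for the small eta considered here, evaluating Pi with the
    # isotropic gradient is an adequate approximation.
    dpdr_iso = -(rho + p_r) * dalpha_dr
    Pi = Pi_model(r, p_r, dpdr_iso, eta)
    dp_dr = dpdr_iso + 2.0 * Pi / r

    return np.array([dm_dr, dalpha_dr, dp_dr])
```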
### Two approaches in the anisotropic models
One of the most important issues is how to choose the model that determines \(\Pi\), the difference between \(p_{\mathrm{t}}\) and \(p_{\mathrm{r}}\). Since it is very difficult to obtain \(\Pi\) on physical grounds from first principles, one can only adopt heuristic models. We assume that the anisotropy is small, so that it does not change the stellar structure significantly, and that the EOS depends only on \(p_{\mathrm{r}}\), not on \(p_{\mathrm{t}}\).
There are some minimal conditions that \(\Pi\) needs to meet for the solutions to be physically acceptable (Estevez-Delgado & Estevez-Delgado, 2018). In a nutshell, these conditions include: the interior solution should match continuously to the exterior Schwarzschild solution; the metric functions must be finite and nonzero within the star; the density and pressures must be non-negative and finite everywhere, and must decrease monotonically with radius; the radial and tangential pressures must be equal at the origin; the energy conditions should be satisfied; and the causality condition must hold within the star, i.e. the speed of sound must be lower than the speed of light.
For simplicity, we choose two models of \(\Pi\) in our calculation. The first model is \(\Pi=-\eta_{1}R_{1}\frac{dp_{\mathrm{r}}}{dr}\), where \(\eta_{1}\) is a dimensionless constant and \(R_{1}\) is a constant with the dimension of length, taken to be \(R_{1}=10\) km, the typical radius of pulsars. The second one is the HB model, \(\Pi=-\eta_{2}r\frac{dp_{\mathrm{r}}}{dr}\) (Herrera & Barreto, 2013), where \(\eta_{2}\) is also a dimensionless constant. The constants \(\eta_{1}\) and \(\eta_{2}\) measure the anisotropy of the star; \(\eta_{1,2}=0\) implies that the star is isotropic and has no strain energy. Both models satisfy the conditions above and are physically acceptable.
### Equation of state of strangeon matter
We choose the phenomenological Lennard-Jones model of SSs (Lai & Xu, 2009), which assumes an interaction potential between two strangeons of
\[u(r)=4\epsilon[(\frac{\sigma}{r})^{12}-(\frac{\sigma}{r})^{6}] \tag{6}\]
where \(\epsilon\) is the depth of the potential, \(\sigma\) is the distance where \(u(r)\) is zero.
The Lennard-Jones potential is usually used to describe the interaction between molecules, with the property of long-range attraction and short-range repulsion. Lattice QCD calculations show that there is a strong repulsive core of a few hundred MeV at short distances (\(r\leq 0.5\) fm) surrounded by an attractive well at medium and long distances (Ishii et al., 2007; Wilczek, 2007). This kind of potential helps quark matter crystallize and form solid strange stars.
Taking a simple cubic lattice structure and ignoring the surface tension and vibration energy (both small compared to the potential energy and the rest energy), the total energy density and pressure are
\[\rho=2\epsilon(A_{12}\sigma^{12}n^{5}-A_{6}\sigma^{6}n^{3})+nN_{q}m_{q}c^{2} \tag{7}\]
\[p=n^{2}\frac{d(\rho/n)}{dn}=4\epsilon(2A_{12}\sigma^{12}n^{5}-A_{6}\sigma^{6} n^{3}) \tag{8}\]
where \(A_{12}=6.2\), \(A_{6}=8.4\), \(N_{q}\) is the number of quarks in a strangeon, and \(n\) is the number density of strangeons.
We use three parameter sets of this Lennard-Jones SS model, named after their maximum gravitational masses. These parameters are listed in Table 1, where \(n_{s}=(A_{6}/2A_{12})^{1/2}N_{q}/3\sigma^{3}\) is the surface number density of baryons.
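A short sketch of eqs. (7)–(8) in code form is given below, again as our own illustration: \(\sigma\) is recovered from the surface baryon density \(n_{s}\) of Table 1, the strangeon rest energy is taken as \(930\,(N_{q}/3)\) MeV per strangeon (i.e. roughly 930 MeV per baryon), which is our assumption since \(m_{q}\) itself is not quoted above, and the numerical inversion \(n(p)\) is what a TOV integration would need.

```python
import numpy as np
from scipy.optimize import brentq

A12, A6 = 6.2, 8.4

def lj_eos(n_s_fm3, N_q, eps_MeV):
    """rho(n) and p(n) in MeV fm^-3 for the Lennard-Jones strangeon EOS,
    eqs. (7)-(8); n is the strangeon number density in fm^-3."""
    # p = 0 at the surface gives n_surf = (A6/2A12)^(1/2)/sigma^3, and the
    # surface baryon density is n_s = n_surf * N_q/3, which fixes sigma^3.
    sigma3 = np.sqrt(A6 / (2.0 * A12)) * N_q / (3.0 * n_s_fm3)
    m_rest = 930.0 * N_q / 3.0      # rest energy per strangeon [MeV] (assumption)

    def rho(n):
        return 2.0 * eps_MeV * (A12 * sigma3**4 * n**5 - A6 * sigma3**2 * n**3) \
               + n * m_rest

    def p(n):
        return 4.0 * eps_MeV * (2.0 * A12 * sigma3**4 * n**5 - A6 * sigma3**2 * n**3)

    return rho, p

def n_of_p(p_target, p, n_lo=1e-4, n_hi=5.0):
    """Invert p(n) numerically for the strangeon number density."""
    return brentq(lambda n: p(n) - p_target, n_lo, n_hi)

rho, p = lj_eos(n_s_fm3=0.48, N_q=18, eps_MeV=20.0)   # the LJ25 parameter set
```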
### Calculating the free energy
\begin{table}
\begin{tabular}{l c c c c} \hline name & \(n_{\mathrm{s}}\) [fm\({}^{-3}\)] & \(N_{\mathrm{q}}\) & \(\epsilon\) [MeV] & \(M_{\mathrm{max}}\) \\ \hline LJ25 & 0.48 & 18 & 20 & \(2.5M_{\odot}\) \\ LJ30 & 0.36 & 18 & 30 & \(3.0M_{\odot}\) \\ LJ35 & 0.30 & 18 & 40 & \(3.5M_{\odot}\) \\ \hline \end{tabular}
\end{table}
Table 1: The parameters of the Lennard-Jones SS model used in the calculation.
With the generalized TOV equations (3), (4), and (5) in the anisotropic case, the Lennard-Jones SS EOS (7), (8), and the choice of anisotropic model, either \(\Pi=-\eta_{1}R_{1}\frac{dp_{\rm r}}{dr}\) or \(\Pi=-\eta_{2}r\frac{dp_{\rm r}}{dr}\), we have a complete set of equations for the whole system. Given the central density \(\rho_{c}\), one can integrate the generalized TOV equations from the center to the surface and obtain the radius, the gravitational mass \(M_{\rm g}\) and the baryon mass \(M_{\rm b}\) of the SS, which can be calculated as
\[M_{\rm g}=\int_{0}^{R}4\pi\rho r^{2}dr \tag{9}\]
\[M_{\rm b}=930\,{\rm MeV}/c^{2}\int_{0}^{R}4\pi nr^{2}e^{\beta(r)}dr \tag{10}\]
The binding energy of the star can be calculated as \(E_{\rm b}=(M_{\rm b}-M_{\rm g})c^{2}\). Starquakes may cause a sudden change of \(\Pi\), with a release of gravitational energy as well as strain energy. The difference in binding energy, \(\Delta E_{\rm b}=E_{\rm b}(\eta_{1,2})-E_{\rm b}(\eta_{1,2}=0)\), between the stars with \(\eta_{1,2}\neq 0\) and \(\eta_{1,2}=0\) may then quantify the free energy the star can release during starquakes.
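Schematically, the whole procedure can be assembled as in the sketch below, which reuses the `tov_rhs`, `lj_eos` and `n_of_p` helpers from the earlier sketches. It is an outline only: the unit-conversion constants are standard but quoted approximately, the stars with \(\eta\neq 0\) and \(\eta=0\) are compared here at fixed central density for simplicity, and very tight integration tolerances are needed in practice because \(\Delta E_{\rm b}\) is a tiny difference of large numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

MEVFM3_TO_KM2 = 1.3234e-6       # 1 MeV fm^-3 in geometric units of km^-2 (G = c = 1)
KM2_TO_MEVFM3 = 1.0 / MEVFM3_TO_KM2
MSUN_KM, MSUN_ERG = 1.4766, 1.787e54   # GM_sun/c^2 in km, M_sun c^2 in erg

def Pi_R1(r, p_r, dpdr_iso, eta, R1=10.0):      # Pi = -eta_1 R_1 dp_r/dr, R_1 = 10 km
    return -eta * R1 * dpdr_iso

def build_star(n_c, eta, rho, p, N_q=18):
    """Integrate eqs. (3)-(5) from a central strangeon density n_c [fm^-3];
    return radius [km], M_g and M_b [M_sun]."""
    def rho_of_p_geom(p_geom):
        n = n_of_p(max(p_geom * KM2_TO_MEVFM3, 0.0), p)
        return rho(n) * MEVFM3_TO_KM2

    def rhs(r, y):
        dm, dalpha, dp = tov_rhs(r, y[:3], rho_of_p_geom, Pi_R1, eta)
        # baryon mass, eq. (10): 930 MeV per baryon, n_baryon = n * N_q / 3
        n = n_of_p(max(y[2] * KM2_TO_MEVFM3, 0.0), p)
        ebeta = 1.0 / np.sqrt(1.0 - 2.0 * y[0] / r)
        dmb = 4.0 * np.pi * r**2 * ebeta * (930.0 * n * N_q / 3.0) * MEVFM3_TO_KM2
        return [dm, dalpha, dp, dmb]

    surface = lambda r, y: y[2] - 1e-14          # stop where p_r reaches ~0
    surface.terminal, surface.direction = True, -1.0
    y0 = [1e-12, 0.0, p(n_c) * MEVFM3_TO_KM2, 1e-12]
    sol = solve_ivp(rhs, (1e-6, 30.0), y0, events=surface, rtol=1e-9, atol=1e-14)
    return sol.t[-1], sol.y[0, -1] / MSUN_KM, sol.y[3, -1] / MSUN_KM

def delta_Eb_erg(n_c, eta, rho, p):
    """Free energy estimate Delta E_b = E_b(eta) - E_b(0), E_b = (M_b - M_g)c^2."""
    _, Mg_a, Mb_a = build_star(n_c, eta, rho, p)
    _, Mg_0, Mb_0 = build_star(n_c, 0.0, rho, p)
    return ((Mb_a - Mg_a) - (Mb_0 - Mg_0)) * MSUN_ERG
```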
### Results
The main results of our calculations are shown in Figures 1, 2, 3, and 4. Figures 1 and 2 show the difference in binding energy as a function of gravitational mass, implying the possible free energy the SSs may release via starquakes for different values of \(M_{\rm g}\), \(\eta_{1,2}\) and different equations of state. Figures 3 and 4 show the value of \(\Pi/p_{\rm r}\) as a function of radius, which measures the local anisotropy within the stars for different values of \(M_{\rm g}\), \(\eta_{1,2}\) and different equations of state.
From Figures 1 and 2, it is shown that for the model \(\Pi=-\eta_{1}R_{1}\frac{dp_{\rm r}}{dr}\) with \(\eta_{1}=10^{-4}-10^{-3}\), or for the model \(\Pi=-\eta_{2}rdp_{\rm r}/dr\) with \(\eta_{2}=10^{-4}-10^{-3}\), the difference in binding energy \(\Delta E_{\rm b}\) is comparable to the typical energy of giant flares, \(\sim 10^{44-47}\) erg. From Figures 3 and 4, we can see that in these situations the absolute value of the ratio of \(\Pi=p_{\rm t}-p_{\rm r}\) to \(p_{\rm r}\) is approximately \(10^{-5}-10^{-3}\).
## 3 Discussions and Conclusions
The free energy of a SS would come from the release of gravitational energy and strain energy during starquakes, and extremely strong magnetic fields might then not be necessary in the case of SSs in order to understand various bursting events in astrophysics. The value of this free energy can be estimated as the difference in binding energy between the stars with \(\eta\neq 0\) and \(\eta=0\), where \(\eta\) is a constant that measures the strength of the local anisotropy, and \(\eta=0\) means the star is isotropic and has no strain energy. In this paper, we calculate this kind of free energy of SSs in general relativity, and find that a small degree of anisotropy (\(\Pi/p_{\rm r}\sim 10^{-4}\)) can account for a large amount of free energy, comparable to the typical energy of giant flares (\(\sim 10^{44-47}\) erg), as has already been illustrated in Newtonian gravity (Xu et al., 2006).
Since we cannot determine the anisotropic model on physical grounds from first principles, we choose two heuristic models in this paper, \(\Pi=-\eta_{1}R_{1}dp_{\rm r}/dr\) and \(\Pi=-\eta_{2}rdp_{\rm r}/dr\). Though we do not know the true form of the anisotropic model, these two toy models can at least show qualitatively how the anisotropy influences the free energy. The influence of the anisotropy on the modified TOV equations enters through \(\Pi=p_{\rm t}-p_{\rm r}\), which only appears in equation (5). So, if the values of \(\Pi\) within the star are similar for two anisotropic models, the values of the free energy should also be close. Take the two models in this paper as an example.
Figure 1: The difference in binding energy as a function of gravitational mass for the anisotropic model \(\Pi=-\eta_{1}R_{1}dp_{\rm r}/dr\). Three different line styles (or colors) correspond to the three choices of EOS listed in Table 1. The lines with the same line style (or color), from top to bottom, have different \(\eta_{1}\), as labelled in the figure.
Figure 2: Same as Figure 1, except for the anisotropic model \(\Pi=-\eta_{2}rdp_{\rm r}/dr\).
In the model \(\Pi=-\eta_{2}rdp_{\rm r}/dr\), there is a dimensionless constant \(\eta_{2}\) which measures the intensity of the anisotropy. In the model \(\Pi=-\eta_{1}R_{1}dp_{\rm r}/dr\), we use the typical radius of pulsars, \(R_{1}=10\) km, to define a dimensionless constant \(\eta_{1}\). Since \(\eta_{1}\) and \(\eta_{2}\) are both dimensionless constants measuring the anisotropy, we can compare them. From Figures 3 and 4, we can see that when \(\eta_{1}\) and \(\eta_{2}\) have the same order of magnitude, \(\Pi/p_{\rm r}\) also has the same order of magnitude, and so does the value of the free energy \(\Delta E_{\rm b}\). For \(\eta_{1}\sim 10^{-3}\) and \(\eta_{2}\sim 10^{-3}\), \(\Pi/p_{\rm r}\) is around \(10^{-4}-10^{-3}\) in most of the star except near the center and the surface. Furthermore, as long as the anisotropic model yields \(\Pi/p_{\rm r}\) above \(10^{-4}-10^{-3}\) in most of the star, the free energy the star could release via starquakes can exceed \(10^{46}\) erg, comparable to that of the giant flares. And from Figures 1, 2, 3, and 4, we can roughly estimate that an increase of one order of magnitude in \(\Pi/p_{\rm r}\) could make the free energy \(\Delta E_{\rm b}\) increase by two orders of magnitude.
## Acknowledgements
The authors would like to thank those involved in the continuous discussions in the pulsar group at Peking University. This work is supported by the National SKA Program of China (2020SKA0120100). E. Zhou is supported by NSFC Grant NO. 12203017.
|
2309.06472 | Flows for Flows: Morphing one Dataset into another with Maximum
Likelihood Estimation | Many components of data analysis in high energy physics and beyond require
morphing one dataset into another. This is commonly solved via reweighting, but
there are many advantages of preserving weights and shifting the data points
instead. Normalizing flows are machine learning models with impressive
precision on a variety of particle physics tasks. Naively, normalizing flows
cannot be used for morphing because they require knowledge of the probability
density of the starting dataset. In most cases in particle physics, we can
generate more examples, but we do not know densities explicitly. We propose a
protocol called flows for flows for training normalizing flows to morph one
dataset into another even if the underlying probability density of neither
dataset is known explicitly. This enables a morphing strategy trained with
maximum likelihood estimation, a setup that has been shown to be highly
effective in related tasks. We study variations on this protocol to explore how
far the data points are moved to statistically match the two datasets.
Furthermore, we show how to condition the learned flows on particular features
in order to create a morphing function for every value of the conditioning
feature. For illustration, we demonstrate flows for flows for toy examples as
well as a collider physics example involving dijet events | Tobias Golling, Samuel Klein, Radha Mastandrea, Benjamin Nachman, John Andrew Raine | 2023-09-12T18:00:01Z | http://arxiv.org/abs/2309.06472v1 | # Flows for Flows: Morphing one Dataset into another
###### Abstract
Many components of data analysis in high energy physics and beyond require morphing one dataset into another. This is commonly solved via reweighting, but there are many advantages of preserving weights and shifting the data points instead. Normalizing flows are machine learning models with impressive precision on a variety of particle physics tasks. Naively, normalizing flows cannot be used for morphing because they require knowledge of the probability density of the starting dataset. In most cases in particle physics, we can generate more examples, but we do not know densities explicitly. We propose a protocol called **flows for flows** for training normalizing flows to morph one dataset into another even if the underlying probability density of neither dataset is known explicitly. This enables a morphing strategy trained with maximum likelihood estimation, a setup that has been shown to be highly effective in related tasks. We study variations on this protocol to explore how far the data points are moved to statistically match the two datasets. Furthermore, we show how to condition the learned flows on particular features in order to create a morphing function for every value of the conditioning feature. For illustration, we demonstrate flows for flows for toy examples as well as a collider physics example involving dijet events.
## I Introduction
One common data analysis task in high energy physics and beyond is to take a reference set of examples \(R\) and modify them to be statistically identical to a target set of examples \(T\). In this setting, we do not have access to the probability density of \(x\in\mathbb{R}^{N}\) responsible for \(R\) or \(T\) (i.e. \(p_{T}\) and \(p_{R}\)), but we can sample from both by running an experiment or simulator. Examples of this task include shifting simulation to match data for detector calibrations, morphing experimental or simulated calibration data to match backgrounds in signal-sensitive regions of phase space for background estimation or anomaly detection, and tweaking simulated examples with one set of parameters to match another set for parameter inference.
A well-studied way to achieve dataset morphing is to assign importance weights \(w\) so that \(w(x)\approx p_{T}(x)/p_{R}(x)\). This likelihood ratio can be constructed using machine learning-based classifiers (see e.g. [1; 2]) to readily accommodate \(N\gg 1\) without ever needing to estimate \(p_{T}\) or \(p_{R}\) directly. While highly effective, likelihood-ratio methods also have a number of fundamental challenges. With non-unity weights, the statistical power of a dataset is diluted. Furthermore, even small regions of non-overlapping support between \(p_{T}\) and \(p_{R}\) can cause estimation strategies for \(w\) to fail.
A complementary strategy to importance weights is direct feature morphing. In this case, the goal is to find a map \(f:\mathbb{R}^{N}\to\mathbb{R}^{N}\) from the reference to the target space such that the probability density of \(f(x\sim p_{R})\) matches \(p_{T}\). Unlike the importance sampling scenario, \(f\) is not unique. The goal of this paper is to study how to construct \(f\) as a _normalizing flow_[3; 4] - a type of invertible deep neural network most often used for density estimation or sample generation. Normalizing flows have proven to be highly effective generative models, which motivates their use as morphing functions. Traditionally, normalizing flows are trained in the setting where \(p_{R}\) is known explicitly (e.g. a Gaussian distribution). Here we explore how to use flows when neither \(p_{R}\) or \(p_{T}\) are known explicitly. We call our method **flows for flows**. This approach naturally allows for the morphing to be conditional on some feature, such as a mass variable [5; 6; 7]. Approaches similar to flows for flows have been performed for variational autoencoders [8] and, recently, diffusion models [9].
In many cases in physics, \(p_{R}\) is close to \(p_{T}\), and so \(f\) should not be far from the identity map. For example, \(R\) might be a simulation of data \(T\), or \(R\) might be close to \(T\) in phase space. In order to assess how well suited normalizing flows are for this case, we also study how much \(x\) is moved via the morphing. An effective morphing map need not move the features minimally, but models that include this inductive bias may be more robust than those that do not. There is also a connection with optimal transport, which would be exciting to study in the future.
This paper is organized as follows. Section II briefly reviews normalizing flows and introduces all of the flows for flows variations we study. Next, Sec. III presents a simple application of the flows for flows variations on two-dimensional synthetic datasets. Sec. IV gives a more realistic application of the transport variations to sets of simulated particle collision data. We summarize the results and conclude in Sec. V.
## II Methods
### Normalizing flows as transfer functions
Normalizing flows are classically defined by a parametric diffeomorphism \(f_{\phi}\) and a base density \(p_{\theta}\) for which the density is known. Using the change of variables formula, the log likelihood (parameterized by both \(\theta\) and \(\phi\)) of a data point \(x\sim p_{D}\) under a normalizing flow is given by
\[\log p_{\theta,\phi}(x)=\log p_{\theta}(f_{\phi}^{-1}(x))-\log\left|\det\!\left( J_{f_{\phi}^{-1}(x)}\right)\right|, \tag{1}\]
where \(J\) is the Jacobian of \(f_{\phi}\). Training the model to maximise the likelihood of data samples results in a map \(f_{\phi}^{-1}\) between the data distribution \(p_{D}(x)\) and the base density \(p_{\theta}\). As the base density should have a known distribution, it is usually taken to be a normal distribution of the same dimensionality as the data (which motivates the name "normalizing" flow).
At this point, we can introduce the first transfer method from a reference distribution \(p_{R}\) to a target distribution \(p_{T}\), the **base transfer**. For this method, we train two normalizing flows with two different maps from the same base density. If \(f_{\phi_{1}}\) constitutes a map to the reference density \(p_{R}\) and \(f_{\phi_{2}}\) is a map to the target density \(p_{T}\), then the composition \(f_{\phi_{2}}\circ f_{\phi_{1}}^{-1}\) is a transfer map \(f:R\to T\). In other words, the transfer method routes from reference to target via some base density intermediary.
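A minimal sketch of the base transfer in code is shown below. It uses the nflows package that is employed later in Sec. II.2; the specific class names, layer counts, and widths here reflect our reading of that library and are illustrative only, and the maximum-likelihood training loops for the two flows are omitted.

```python
import torch
from nflows.flows import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms import CompositeTransform
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform

dim = 2
base = StandardNormal(shape=[dim])          # shared base density N(0, 1)

def make_transform(n_layers=4):
    # in the nflows convention the transform maps data -> noise
    return CompositeTransform([
        MaskedAffineAutoregressiveTransform(features=dim, hidden_features=128)
        for _ in range(n_layers)
    ])

transform_ref, transform_tgt = make_transform(), make_transform()
flow_ref = Flow(transform_ref, base)   # train with -log_prob on reference samples
flow_tgt = Flow(transform_tgt, base)   # train with -log_prob on target samples

@torch.no_grad()
def base_transfer(x_ref):
    """Route reference points through the shared base and out through the
    target flow: z = f_ref^{-1}(x), then x' = f_tgt(z)."""
    z, _ = transform_ref(x_ref)               # reference -> N(0, 1)
    x_morphed, _ = transform_tgt.inverse(z)   # N(0, 1) -> target
    return x_morphed
```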
It is also possible to use a learned base density, such as another normalizing flow, instead of some known base distribution. This is our second method, **unidirectional transfer**. Given samples from two data distributions \(p_{R}\) and \(p_{T}\) of the same dimensionality, a map \(f_{\gamma}:R\to T\) between these distributions can be found by estimating a density \(p_{\phi,R}\) for \(R\) to use as the base density in the construction of another normalizing flow. In practice, this involves first training a normalizing flow to learn the density \(p_{R}\) by constructing the map \(f_{\phi}^{-1}\) from a base density \(p_{\theta}\) to \(p_{R}\).
Training of the two normalizing flows (the first for the base density, the second for the transport) is done by maximising the log likelihood of the data under the densities defined by the change of variables formula and given by
\[\max_{\gamma}\mathop{\mathbb{E}}_{y\sim p_{T}}\left[\log p_{ \theta,\phi,\gamma}(y)\right]\] \[=\max_{\gamma}\mathop{\mathbb{E}}_{y\sim p_{T}}\left[\log p_{ \theta,\phi}(f_{\gamma}^{-1}(y))-\log\left|\det\!\left(J_{f_{\gamma}^{-1}(y)} \right)\right|\right];\] \[\max_{\phi}\mathop{\mathbb{E}}_{x\sim p_{R}}\left[\log p_{ \theta,\phi}(x)\right]\] \[=\max_{\phi}\mathop{\mathbb{E}}_{x\sim p_{R}}\left[\log p_{ \theta}(f_{\phi}^{-1}(x))-\log\left|\det\!\left(J_{f_{\phi}^{-1}(x)}\right) \right|\right].\]
As a direct extension of the unidirectional training method, defining densities on both the reference and the target distributions, \(p_{\theta_{1},R}\) and \(p_{\theta_{2},T}\), allows both \(f_{\gamma}\) and \(f_{\gamma}^{-1}\) to be used explicitly by training in both directions, from \(R\) to \(T\) and from \(T\) to \(R\). This comprises our third transfer method, **flows for flows**. A benefit of training in both directions is that the dependence of \(f_{\gamma}\) on the defined and learned densities \(p_{\theta_{1},R}\) and \(p_{\theta_{2},T}\) is reduced. A schematic of the flows for flows architecture is shown in Fig. 1.
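The corresponding training objectives can be written compactly as in the sketch below (our own illustration). Here `base_flow_R` and `base_flow_T` are assumed to be pretrained, frozen flows modelling \(p_{\theta_{1},R}\) and \(p_{\theta_{2},T}\) (an nflows `Flow` can itself act as a learned base density), and `transform_gamma` is the trainable map sending target-side points towards the reference side, so that its inverse plays the role of \(f:R\to T\); keeping only the first loss term reproduces the unidirectional transfer.

```python
import torch

# freeze the two pretrained base-density flows
for flow in (base_flow_R, base_flow_T):
    for par in flow.parameters():
        par.requires_grad_(False)

optimizer = torch.optim.Adam(transform_gamma.parameters(), lr=1e-4)

def flows_for_flows_step(x_ref, y_tgt):
    # T -> R direction: maximise log p_R(g(y)) + log|det J_g(y)| for y ~ p_T
    z_r, logdet_r = transform_gamma(y_tgt)
    loss_t2r = -(base_flow_R.log_prob(z_r) + logdet_r).mean()

    # R -> T direction: maximise log p_T(g^{-1}(x)) + log|det J_{g^{-1}}(x)| for x ~ p_R
    z_t, logdet_t = transform_gamma.inverse(x_ref)
    loss_r2t = -(base_flow_T.log_prob(z_t) + logdet_t).mean()

    loss = loss_t2r + loss_r2t     # unidirectional transfer: keep only loss_t2r
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```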
The invertible network \(f_{\gamma}\) that is used to map between the two distributions may not have semantic meaning on its own, as some invertible neural networks are known to be universal function approximators. This map can become interpretable if it is subject to additional constraints. In this work, we investigate two physically-motivated modifications to the flow training procedure: **movement penalty**, where we add an L1 loss term to the flow training loss, and **identity initialization**, where we initialize the flow architecture to the identity function. The L1 variation directly penalizes the average absolute value of the distance moved, while the idea for the identity initialization is that the model will converge on the first best solution that gives close to no movement. All five transfer methods introduced in this section are summarized in Tab. 1.
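In code, the movement penalty amounts to one extra term on top of the previous sketch, as illustrated below; the weight `lam` is illustrative (its strength is scanned in Sec. IV), and the identity-initialization variant instead initializes `transform_gamma` so that it starts out close to the identity map rather than modifying the loss.

```python
def movement_penalty_step(x_ref, y_tgt, lam=1.0):
    # bidirectional maximum-likelihood terms, as in flows_for_flows_step
    z_r, logdet_r = transform_gamma(y_tgt)
    z_t, logdet_t = transform_gamma.inverse(x_ref)
    nll = -(base_flow_R.log_prob(z_r) + logdet_r).mean() \
          - (base_flow_T.log_prob(z_t) + logdet_t).mean()

    # L1 penalty on the average absolute distance each point is moved
    movement = (z_r - y_tgt).abs().sum(dim=-1).mean() \
             + (z_t - x_ref).abs().sum(dim=-1).mean()

    loss = nll + lam * movement
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```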
This entire setup can be made conditional by making the parameters of the invertible neural network dependent on some selected parameter (i.e. the "condition"). The log-likelihood for a normalizing flow conditioned on some variables \(c\) is defined by
\[\log p_{\theta,\phi}(x|c)=\log p_{\theta}(f_{\phi(c)}^{-1}(x)|c)-\log\left| \det\!\left(J_{f_{\phi(c)}^{-1}(x)}\right)\right|, \tag{2}\]
where the base density can also be conditionally dependent on \(c\). In the case of conditional distributions with continuous conditions, the distributions on data \(p_{D}(x|c)\) will often change smoothly as a function of the condition. For these situations, a flow that is explicitly parameterized by a well-motivated choice of conditioning variable may have a cleaner physical interpretation.
Figure 1: A schematic of the flows for flows architecture.
\begin{table}
\begin{tabular}{|c|c|} \hline Method name & Training heuristic \\ \hline \hline Base transfer & \(p_{R}\rightarrow\mathcal{N}(0,1)\to p_{T}\) \\ \hline Unidirectional transfer & \(p_{R}\to p_{T}\) \\ \hline Flows for flows & \(p_{R}\longleftrightarrow p_{T}\) \\ \hline Movement penalty & \(p_{R}\overset{+L1}{\longleftrightarrow}p_{T}\) \\ \hline Identity initialization & \(p_{R}\overset{\mathbbm{1}+\epsilon}{\longleftrightarrow}p_{T}\) \\ \hline \end{tabular}
\end{table}
Table 1: We consider five transfer methods from a reference dataset to a target dataset, both with unknown distributions \(p_{R}\) and \(p_{T}\).
We provide an example of such a flow for our application to particle collision datasets in Sec. IV. In particular, conditional flows have been used often in high energy physics to develop "bump hunt" algorithms to search for new particles [10; 11; 5; 12; 10; 13; 14]. In such studies, the resulting flows perform well when interpolated to values of the conditioning variable not used in training.
A schematic of a conditional flows for flows model is shown in Fig. 2, where the conditioning function \(f_{\gamma(c_{x},c_{y})}\) can also take more restrictive forms, such as \(f_{\gamma(c_{x}-c_{y})}\), to ensure that the learned map is simple [5; 7]. Furthermore, the two conditional base distributions can be identical, such that \(\phi_{1}=\phi_{2}\). Alternatively, the base distributions can be different and instead a shared condition can be used, \(c=c_{x}=c_{y}\).
### Network architecture
Throughout this work, we use two different flow architectures, one for the "standard" normalizing flow architecture (i.e. learning transformations from standard normal distributions to arbitrary distributions) and one for the flows for flows architecture (i.e. learning transformations between two nontrivial distributions).
For the former architecture type, the invertible neural networks are constructed from rational quadratic splines with four autoregressive (AR) layers [13]. Each spline transformation has eight bins and the parameters of the spline are defined using masked AR networks with two blocks and 128 nodes as defined in the nflows package [14]. For the latter architecture type, we use eight AR layers with splines of eight bins from 3 masked AR blocks of 128 nodes. This slightly more complex architecture is found to give better performance for the large shifts between the toy distributions that we consider. However, in cases where the reference and the target distributions are similar to each other, the architecture of the flows for flows model could in principle be simplified for faster training time while maintaining good performance.
An initial learning rate of \(10^{-4}\) is annealed to zero following a cosine schedule [15] over 60 epochs for the first flow type and 64 epochs for the second flow type. All trainings use a batch size of 128 and the norm of the gradients is clipped to five. For the toy distribution analyses in Sec. III, the training datasets all contain \(10^{6}\) samples.
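For concreteness, the "standard" flow just described could be assembled along the following lines; this is a sketch based on our reading of the nflows package, and the permutations inserted between the AR layers, the spline tail bound, and the exact scheduler wiring are our own choices rather than a statement of the configuration used for the results below.

```python
import torch
from nflows.flows import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms import CompositeTransform
from nflows.transforms.autoregressive import (
    MaskedPiecewiseRationalQuadraticAutoregressiveTransform,
)
from nflows.transforms.permutations import ReversePermutation

def build_standard_flow(dim, n_layers=4, hidden=128, n_blocks=2, n_bins=8):
    layers = []
    for _ in range(n_layers):
        layers.append(MaskedPiecewiseRationalQuadraticAutoregressiveTransform(
            features=dim, hidden_features=hidden,
            num_blocks=n_blocks, num_bins=n_bins,
            tails="linear", tail_bound=3.5,   # tail bound is an illustrative choice
        ))
        layers.append(ReversePermutation(features=dim))
    return Flow(CompositeTransform(layers), StandardNormal(shape=[dim]))

flow = build_standard_flow(dim=2)
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60)

def train_epoch(loader):
    for (batch,) in loader:                       # batches of size 128
        loss = -flow.log_prob(batch).mean()       # maximum likelihood
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(flow.parameters(), 5.0)
        optimizer.step()
    scheduler.step()
```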
## III Toy example results
In this section, we explore the performance of the five transfer methods for learning a mapping between nontrivial two-dimensional distributions. In general, we consider both the _accuracy_ of the transform - i.e. does the transfer method learn to successfully morph between the reference and the target distribution - and the _efficiency_ of the transform - i.e. does the method learn a morphing that is logical, and not unnecessarily circuitous.
### Base transfer vs. flows for flows
In Fig. 3, we show a transport task between two datasets drawn from a toy distribution of four overlapping circles. Here we are in some sense trying to learn the _identity_ mapping. We compare the action of the base transfer, which can be seen as the "default" method of mapping between two nontrivial distributions, against the flows for flows method. Both methods are able to successfully map the overall shape of the reference to the target distribution. However, the base transfer method tends not to keep points in the same circle when mapping them from reference to target, while the flows for flows method is more successful at keeping larger portions of each ring together.
In Fig. 4, we show a transport task between two different distributions, from four overlapping circles to a four-pointed star. As before, both the base transfer and flows for flows method are able to morph the shape of the reference distribution into the shape of the target distribution. Interestingly, the flows for flows method appears to distribute points from each of the four circles more equally among each point of the star.
### Evaluating multiple transfer methods.
In Fig. 5, we evaluate just the shape-morphing ability of the transfer methods. We consider six reference–target pairings, where the reference and target distributions are different1, and show the action of the base transfer, unidirectional transfer, flows for flows, movement penalty, and identity initialization methods on the reference distribution. We consider transports between three toy distribution types: four overlapping circles, a four-pointed star, and a checkerboard pattern.
Figure 2: Schematic of a conditional flows for flows architecture.
All of the transfer methods considered are able to successfully learn to map from reference to target, except for the unidirectional transfer, which exhibits a large amount of smearing in the final distribution. Overall, the base transfer, movement penalty, and identity initialization methods show the cleanest final-state distributions.
Another useful metric is the distance traveled by a sample that is mapped under a flow action. For many physical applications, a map that moves data the least is ideal, but we have only explicitly added an L1 loss term to the movement penalty method. Therefore it is interesting to consider how far, on average, all the methods move the features.
In Fig. 6, we show a histogram of the distances traveled, combining the six transfer tasks shown in the rows of Fig. 5 so as to equalize over many types of starting and target shapes. The movement penalty method performs best, producing the shortest distances traveled from reference to target by a large margin compared with the other methods. Interestingly, the flows for flows and identity initialization methods have larger mean distances traveled than the base transfer method, as well as larger standard deviations. This is somewhat counterintuitive given that the base transfer method does not explicitly link the reference and target distributions during the training procedure, but it may reflect the somewhat contrived nature of the toy examples (especially in light of the more intuitive results for the science datasets in Fig. 9). All methods except the unidirectional transfer perform better than or on par with the expected baseline, which comes from computing the distances between two random, unrelated instantiations of each reference–target distribution pairing.
## IV Application: calibrating collider datasets
We now move to a physical example: mapping between distributions of scientific observables. Many analyses of collider data are geared towards finding evidence of new physics processes. One powerful search strategy is to compare a set of detected data with an _auxiliary_ dataset, where the auxiliary dataset is known to contain Standard Model-only physics. Any nontrivial difference between the detected and the auxiliary datasets could then be taken as evidence for the existence of new physical phenomena.
Figure 4: Transport task between two different distributions. Individual samples have been color coded so as to make clear their paths assigned by the transport method.
Figure 3: Transport task between two instantiations of the same distribution. The first column shows the reference distribution; the second column shows the base transfer method acting on the reference distribution; the third column shows the flows for flows method. Individual samples have been color coded so as to make clear their paths assigned by the transport method.
The above analysis procedure is contingent upon the auxiliary dataset being a high-fidelity representation of Standard Model physics. However, such an assumption is not true for many datasets that would be, at first glance, ideal candidates for the auxiliary dataset, such as simulation of Standard Model processes or detected data from adjacent regions of phase space. Therefore it is necessary to _calibrate_ the auxiliary dataset such that it becomes ideal. Historically, this calibration task has been performed using importance weights estimated from ratios of histograms, either using data-driven approaches like the control region method or fully data-based alternatives. Recently, machine learning has enabled these approaches to be extended to the case of many dimensions and/or no binning - see e.g. Ref. [16] for a review.
With the flows for flows method, we can consider yet another calibration approach: to create an ideal auxiliary dataset (the target) by morphing the features from a less-ideal, imperfect auxiliary dataset (the reference). When the imperfect auxiliary dataset is chosen to be close to the ideal target dataset, as would be true of the candidates listed in the previous paragraph, then the flows for flows method should simply be a perturbation on the identity map.2
Footnote 2: This procedure is the underlying motivation for the Flow-Enhanced Transportation for Anomaly Detection method [6]. Equally the ideal and reference could be defined using the same
Figure 5: Transport tasks between various choices of nonidentical reference and target toy distributions. The colorbar has been set to scale logarithmically, which can emphasize out-of-distribution points.
### Analysis procedure and dataset
We focus on the problem of _resonant_ anomaly detection, which assumes that given a resonant feature \(M\), a potential new particle will have \(|M-M_{0}|\lesssim s\) (which defines the _signal region_) for some unknown \(M_{0}\) and often knowable \(s\)[17]. The value of \(M_{0}\), which corresponds to the mass of the new particle, can be derived from theoretical assumptions on the model of new physics or can be found through a scan. Additional features \(X\in\mathbb{R}^{N}\) are chosen which can be used to distinguish the signal (the new particle) from background (Standard Model-like collisions), which can be done by comparing detected data with reference data within the signal region.
For our datasets, we use the LHC 2020 Olympics R&D dataset [18; 19], which consists of a large number (\(\sim 10^{6}\)) of Standard Model simulation events. The events naturally live in a high-dimensional space, as each contains hundreds of particles with momenta in the \(x\), \(y\), and \(z\) directions. To reduce the dimensionality, the events are clustered into collimated sprays of particles called _jets_ using the FastJet [20; 21] package with the anti-\(k_{t}\) algorithm [22] (\(R=1\)). From these jets, we extract a compressed feature space of only five dimensions; this set of features has been extensively studied in collider analyses. The jet features, along with the resonant feature \(M\), are displayed in Fig. 7. We take the band \(M\in[3.3,\,3.7]\) TeV as our signal region.
The LHC Olympics dataset contains two sets of Standard Model data generated from the different simulation toolkits Pythia 8.219 [23; 24] and Herwig++[25]. We use the former as a stand-in for detected collider data. The latter is used as the reference dataset, the less-than-ideal auxiliary dataset that is calibrated through the flows for flows method to form the ideal auxiliary, target dataset.
To construct the ideal auxiliary dataset, we train a flow to learn the mapping between the reference dataset and the target data _outside_ of the signal region, so as to keep the signal region blinded. Once trained, the flow can then be applied to the non-ideal auxiliary dataset within the signal region, thus constructing the ideal auxiliary dataset. We use the same architectures as in Sec. II.2, with the modification that we _condition_ the transport flows on the mass feature \(M\). This conditioning is motivated by the fact that the flow is trained outside the signal region and applied within the signal region, which is defined exactly by the variable \(M\).
### Results
In Fig. 8, we show the distributions of the flow-transported reference dataset and of the target dataset, as well as their ratios, _outside_ of the signal region. As is clear from Fig. 7, the reference and target datasets are far more similar in this calibration example than they were in the toy examples. Therefore, for the movement penalty method, it was necessary to scan over the strength of the L1 term added to the training loss in order to achieve good performance; we found that we needed to reduce the strength by a factor of 20 compared with what was used for the toy distributions. In fact, all five transfer methods (base transfer, unidirectional transfer, flows for flows, movement penalty, and identity initialization) perform comparably, and all five are able to successfully transform the reference dataset such that the five marginal feature distributions greatly resemble those of the target.
In Fig. 9, we show a histogram of the distances traveled for each data point due to the flow action. Distributions for distance traveled in each individual dimension of feature space are given in Fig. 10. Since the reference and target distributions are so similar, the base transfer methods leads to a highly non-minimal transport path. While the unidirectional method performs well, it shows a longer tail in distance traveled that may represent a less-than-ideal mapping. The flows for flows and identity initialization methods perform comparably with relatively little distance traveled, while movement penalty appears to have found a nearly minimal path.
Based on the closeness of the distributions of the reference and target in Fig. 7, we might hope for a mapping that morphs features \(m_{J_{1}}\), \(\Delta m_{JJ}\), and \(\Delta R_{JJ}\) almost not at all, and features \(\tau_{J_{1}}^{21}\) and \(\tau_{J_{2}}^{21}\) only very minimally. Indeed, this is exactly the behavior we see in Fig. 10 for the movement penalty method (and, to a lesser extent, for the flows for flows and identity initialization methods).
Figure 6: Distances traveled in parameter space between two nonidentical toy distributions. Each histogram compiles data from six transfer tasks, corresponding to the rows of Fig. 5. The “baseline” method shows the distances between two random, unrelated instantiations of each reference – target distribution pairing. The maximum possible distance travelable in parameter space is 11.31.
Figure 8: Distributions of, and ratios of, the flow-transported reference (less-than-ideal auxiliary) dataset to the target (ideal auxiliary) dataset. Ratios are taken over each of the five marginal distributions in the parameter space; error bars represent Poisson uncertainties in bin counts. All data is taken _outside_ of the signal region. All features have been individually min–max scaled to the range [-3, 3] to optimize network training.
Figure 7: Reference and target distributions used in the application of the flows for flows procedure to scientific datasets. The feature space is comprised of the resonant feature \(M\) and five other features \(m_{J_{1}}\), \(\Delta m_{JJ}\), \(\tau_{J_{1}}^{21}\), \(\tau_{J_{2}}^{21}\), and \(\Delta R_{JJ}\). A description of these observables can be found in [26]. The signal region is defined by \(|M-M_{0}|<c\) for \(M_{0}=3.5\) TeV and \(c=200\) GeV.
## V Conclusions and Future Work
In this work, we have explored a number of ways to use normalizing flows to create mappings between nontrivial reference and target datasets of the same dimensionality. Our aim is to consider methods that go above and beyond the "naive" base transfer method, which uses standard normalizing flows that map from reference to target via a base density intermediary. In particular, we have introduced the flows for flows method, which uses two normalizing flows to parameterise the probability densities of both the reference and the target and trains both with exact maximum likelihoods.
We have evaluated five transfer methods: base transfer, unidirectional transfer, flows for flows, movement penalty, and identity initialization. We have attempted to evaluate each method on two facets: the accuracy of the transport between reference and target, and the efficiency of the transport (i.e. how far points are transported by the mapping). When the reference and target are fully unrelated (such as for the toy examples in Sec. III), the flows for flows method is comparable with the naive base transfer method both for accuracy and extent. When the reference and target sets are similar, or obviously related in some way (such as for the particle physics calibration application in Sec. IV), the flows for flows method is far preferable to the base transfer method. These results imply that the flows for flows method should be used over the base transfer method, as it can always provide both an accurate and efficient transport. However, the highest performing (and thus our recommended) methods of transport are either the movement penalty or identity initialization methods, depending on the specific application.
There are many avenues for further modifications of the flows for flows method, or other ways to construct flow-based mapping functions in general. One interesting avenue involves physically-motivated extensions of normalizing flows: continuous normalizing flows (CNFs) [27], where the flow mappings can be constrained such that they can be assigned velocity vectors, and convex potential (CP) flows [28], where the map is constrained to be the gradient of a convex potential. One can explicitly enforce optimal transports with OT-Flows [29], which add to the CNF loss both an L2 movement penalty and a penalty that encourages the mapping to transport points along the minimum of some potential function. While such modifications may not be necessary when the reference and target distributions are very similar, they could be explored for situations when the reference and target distributions are significantly different.
## Code
The flows for flows package can be found at [https://github.com/jraine/flows4flows](https://github.com/jraine/flows4flows). JR and SK contributed equally to its creation.
## Acknowledgements
TG, SK, and JR would like to acknowledge funding through the SNSF Sinergia grant called Robust Deep Density Models for High-Energy Particle Physics and Solar Flare Analysis (RODEM) with funding number CRSII5_193716. BN and RM are supported by the U.S. Department of Energy (DOE), Office of Science under contract DE-AC02-05CH11231. RM is additionally supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 2146752; any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
|
2309.09741 | Heavy resonances and the oblique parameters S and T | It has been confirmed experimentally the existence of a mass gap between
Standard Model (SM) and eventual Beyond Standard Model (BSM) fields. Therefore,
the use of effective approaches to search for fingerprints of New Physics is
very appealing. A non-linear realizations of the Electroweak Symmetry Breaking
is considered here, where the Higgs is a singlet with free couplings and the SM
fields are also coupled to bosonic heavy resonances. A one-loop-level
calculation of the oblique S and T parameters is presented here. This analysis
allows us to constrain resonance masses to be above the TeV scale, $M_R\!>\!
3\,$TeV, in good agreement with our previous determinations, where these
observables were computed with a more simplified Lagrangian. | Ignasi Rosell, Antonio Pich, Juan Jose Sanz-Cillero | 2023-09-18T13:08:03Z | http://arxiv.org/abs/2309.09741v1 | # Heavy resonances and the oblique parameters \(S\) and \(T\)1
###### Abstract
The existence of a mass gap between the Standard Model (SM) and eventual Beyond Standard Model (BSM) fields has been confirmed experimentally. Therefore, the use of effective approaches to search for fingerprints of New Physics is very appealing. A non-linear realization of the Electroweak Symmetry Breaking is considered here, where the Higgs is a singlet with free couplings and the SM fields are also coupled to bosonic heavy resonances. A one-loop-level calculation of the oblique \(S\) and \(T\) parameters is presented. This analysis allows us to constrain resonance masses to be above the TeV scale, \(M_{R}\gtrsim 3\) TeV, in good agreement with our previous determinations, where these observables were computed with a more simplified Lagrangian.
keywords: Beyond Standard Model, Effective Field Theories, Higgs Physics, Chiral Lagrangians.
## 1 Introduction
The LHC results have confirmed the Standard Model (SM) as the correct framework for explaining electroweak interactions within the energy ranges examined up to now. The discovery of a Higgs-like particle [1], with couplings very aligned with SM predictions, has effectively filled in the complete set of fundamental fields according to the SM, and there have been no new states discovered to date. Consequently, the available data indicate the presence of a mass gap between the SM and any hypothetical degrees of freedom associated with New Physics (NP). This gap serves as a rationale for employing effective field theories to systematically investigate low-energy physics for indications of NP scales.
Effective field theories are built upon several key components, the most important being the particle content, the symmetries, and the power counting. In the context of electroweak theory, the choice of power counting method hinges on how the Higgs field \(h\) is introduced [2; 3]. There are two different approaches: the commonly used linear realization of Electroweak Symmetry Breaking (EWSB), where the Higgs is considered part of a doublet together with the three electroweak (EW) Goldstone bosons \(\vec{\phi}\), as in the SM; or the more encompassing non-linear realization, which does not presume any specific relationship between the Higgs and
the three Goldstone fields. Here we opt for the latter approach [4], employing an expansion based on generalized momenta. It is worth noting that the linear realization can be seen as a special case within the broader framework of the non-linear approach.
We examine a strongly-coupled scenario involving heavy spin-1 resonances interacting with the SM particles. Our objective is to evaluate the bounds placed on the masses of these heavy resonances based on the phenomenology derived from the electroweak oblique parameters [5].
In Ref. [6] we presented a one-loop calculation of the \(S\) and \(T\) parameters within strongly-coupled scenarios of this kind. Our analysis yielded robust and model-independent constraints on both the Higgs couplings and the heavy scales involved. By making only modest assumptions about the high-energy behavior of the underlying fundamental theory, we demonstrated that precision electroweak data necessitate the Higgs-like scalar to possess a \(hWW\) coupling very close to the SM one. Simultaneously, we found that the masses of vector and axial-vector resonances should exhibit considerable degeneracy and be above 4 TeV. The considerably larger dataset collected in recent years has provided a much more precise experimental measurement of the \(hWW\) coupling, \(\kappa_{W}\). Consequently, we no longer treat \(\kappa_{W}\) as an independent free parameter, opting instead to use its experimentally determined value as an input. This adjustment enables us to broaden the scope of our analysis to include a wider range of interactions: while our prior work solely focused on the bosonic P-even sector [6], we incorporate now both P-even and P-odd operators into our approach. Here we show the initial phase in the effort to enhance and update the results reported in Ref. [6]; we plan to complete this update by incorporating fermionic contributions as well [7].
In Section 2, we introduce our Lagrangian including the resonances, while Section 3 outlines the calculation of the \(S\) and \(T\) parameters, up to next-to-leading order (NLO), through dispersive representations. Section 4 is dedicated to exploring the practical implications of these results from a phenomenological perspective. Ultimately, we provide a summary in Section 5.
## 2 The theoretical framework
### The effective resonance Lagrangian
In the non-linear realization of EWSB, operators are not ordered according to their canonical dimensions; they must instead be organized according to their behavior at low momenta, i.e., by their chiral dimensions [8]. Here we focus on the NLO resonance contributions to the \(S\) and \(T\) parameters derived exclusively from the lightest bosonic absorptive cuts (\(\varphi\varphi\) and \(h\varphi\)). Consequently, we only need to consider bosonic operators involving at most one spin-1 resonance field. Following the notation of Ref. [4], the relevant CP-conserving Lagrangian for our calculations reads
\[\Delta\mathcal{L}_{\rm RT} = \frac{v^{2}}{4}\left(1+\frac{2\kappa_{W}}{v}h\right)\langle\, u_{\mu}u^{\mu}\,\rangle_{2} \tag{1}\] \[+\langle\, V_{3\,\mu\nu}^{1}\left(\frac{F_{V}}{2\sqrt{2}}f_{+}^{\mu\nu}+\frac{iG_{V}}{2\sqrt{2}}[u^{\mu},u^{\nu}]+\frac{\widetilde{F}_{V}}{2\sqrt{2}}f_{-}^{\mu\nu}\right.\] \[\left.+\frac{\widetilde{\lambda}_{1}^{hV}}{\sqrt{2}}\left[(\partial^{\mu}h)u^{\nu}-(\partial^{\nu}h)u^{\mu}\right]\right)\rangle_{2}\] \[+\langle\, A_{3\,\mu\nu}^{1}\left(\frac{F_{A}}{2\sqrt{2}}f_{-}^{\mu\nu}+\frac{\lambda_{1}^{hA}}{\sqrt{2}}\left[(\partial^{\mu}h)u^{\nu}-(\partial^{\nu}h)u^{\mu}\right]\right.\] \[\left.+\frac{\widetilde{F}_{A}}{2\sqrt{2}}f_{+}^{\mu\nu}+\frac{i\widetilde{G}_{A}}{2\sqrt{2}}[u^{\mu},u^{\nu}]\right)\rangle_{2}\,,\]
where \(V_{3\,\mu\nu}^{1}\) and \(A_{3\,\mu\nu}^{1}\) introduce color-singlet custodial-triplet resonances with \(J^{PC}\) quantum numbers \(1^{--}\) (V) and \(1^{++}\) (A) by using an antisymmetric formalism. The Goldstone fields are parametrized through the SU(2) matrix \(U=u^{2}=\exp{(i\vec{\sigma}\vec{\varphi}/v)}\), where \(v=(\sqrt{2}G_{F})^{-1/2}=246\) GeV is the EWSB scale, \(u_{\mu}=-iu^{\dagger}D_{\mu}Uu^{\dagger}\) with \(D_{\mu}\) the appropriate covariant derivative, and \(f_{\pm}^{\mu\nu}\) contain the gauge-boson field strengths. Note that couplings with tilde are related to odd-parity operators.
### Short-distance constraints
From (1) one observes the presence of ten resonance parameters (including masses), so that the use of asymptotic constraints is fundamental to reduce the number of unknown parameters and, consequently, to be able to obtain phenomenological bounds. Moreover, this resonance Lagrangian is assumed to be an interpolation between the low- and the high-energy regimes and, as a result, considering a reasonable high-energy behavior is an important ingredient of this effective approach:
1. Requiring that the two-Goldstone (\(\varphi\varphi\)) and Higgs-Goldstone (\(h\varphi\)) vector and axial form factors vanish at high energies allows us to determine the couplings \(G_{V}\), \(\widetilde{G}_{A}\), \(\lambda_{1}^{hA}\) and \(\widetilde{\lambda}_{1}^{hV}\) in terms of the remaining parameters [9], \[\frac{G_{V}}{F_{A}}=-\frac{\widetilde{G}_{A}}{\widetilde{F}_{V}}=\frac{\lambda_{1}^{hA}v}{\kappa_{W}F_{V}}=-\frac{\widetilde{\lambda}_{1}^{hV}v}{\widetilde{F}_{A}}=\frac{v^{2}}{F_{V}F_{A}-\widetilde{F}_{V}\widetilde{F}_{A}}.\] (2)
2. The assumed chiral symmetry of the underlying electroweak theory suggests that the \(W^{3}B\) correlator is an order parameter of the EWSB and it vanishes at high energies in asymptotically-free gauge
theories as \(1/s^{3}\)[10], giving rise to the 1st and 2nd Weinberg Sum Rules (WSRs) [11]:
1. 1st WSR (vanishing of the \(1/s\) term). At leading order (LO) it implies [9]: \[\left(F_{V}^{2}-\widetilde{F}_{V}^{2}\right)-\left(F_{A}^{2}-\widetilde{F}_{A}^{2}\right)=v^{2}\,,\] (3) whereas at NLO it implies the more involved relation [7] \[\left(F_{V}^{2}-\widetilde{F}_{V}^{2}\right)-\left(F_{A}^{2}-\widetilde{F}_{A}^{2}\right)=v^{2}\left(1+\delta_{\rm NLO}^{(1)}\right)\,,\] (4) plus the additional condition \(\widetilde{\delta}_{\rm NLO}^{(1)}=0\). Note that \(\delta_{\rm NLO}^{(1)}\) and \(\widetilde{\delta}_{\rm NLO}^{(1)}\) are obtained from the LO high-energy expansion of the one-loop contribution of the related correlator [6].
2. 2nd WSR (vanishing of the \(1/s^{2}\) term). At LO it implies [9]: \[\left(F_{V}^{2}-\widetilde{F}_{V}^{2}\right)M_{V}^{2}-\left(F_{A}^{2}-\widetilde{F}_{A}^{2}\right)M_{A}^{2}=0\,.\] (5) At NLO, (5) turns into [7] \[\left(F_{V}^{2}-\widetilde{F}_{V}^{2}\right)M_{V}^{2}-\left(F_{A}^{2}-\widetilde{F}_{A}^{2}\right)M_{A}^{2}=v^{2}M_{V}^{2}\delta_{\rm NLO}^{(2)},\] (6) plus the additional condition \(\widetilde{\delta}_{\rm NLO}^{(2)}=0\). Note again that \(\delta_{\rm NLO}^{(2)}\) and \(\widetilde{\delta}_{\rm NLO}^{(2)}\) are obtained from the NLO high-energy expansion of the one-loop contribution of the related correlator [6].
Whereas the 1st WSR is expected to be satisfied in gauge theories with nontrivial ultraviolet (UV) fixed points, the applicability of the 2nd WSR hinges on the nature of the UV theory under consideration.
It is interesting to stress that, in the absence of P-odd couplings, (3) and (5) together require \(M_{A}>M_{V}\), and this mass hierarchy continues to hold if odd-parity couplings are assumed to be smaller than even-parity ones, \(\widetilde{F}_{V,A}\ll F_{V,A}\), which is a reasonable assumption that we make.
## 3 Oblique Electroweak Observables: \(S\) and \(T\) at NLO
### The observables
We adopt here the notation used in Ref. [6]. The calculations are carried out in the Landau gauge, which ensures that the gauge boson propagators are transverse and their self-energies,
\[{\cal L}_{v.p.}= -\frac{1}{2}W_{\mu}^{3}\,\Pi_{33}^{\mu\nu}(s)W_{\nu}^{3}-\frac{1}{2}B_{\mu}\,\Pi_{00}^{\mu\nu}(s)B_{\nu}\] \[-W_{\mu}^{3}\,\Pi_{30}^{\mu\nu}(s)B_{\nu}-W_{\mu}^{+}\,\Pi_{WW}^{\mu\nu}(s)W_{\nu}^{-}\,, \tag{7}\]
can be written as
\[\Pi_{ij}^{\mu\nu}(q^{2}) = \left(-g^{\mu\nu}+\frac{q^{\mu}q^{\nu}}{q^{2}}\right)\,\Pi_{ij}(q ^{2}). \tag{8}\]
The definitions of \(S\) and \(T\) involve \(e_{3}\) and \(e_{1}\),
\[e_{3} = \frac{g}{g^{\prime}}\,\,\widetilde{\Pi}_{30}(0)\,,\qquad e_{1} = \frac{\Pi_{33}(0)-\Pi_{WW}(0)}{M_{W}^{2}}\,, \tag{9}\]
where one has removed the tree-level Goldstone contribution from \(\Pi_{30}(s)\)[5]:
\[\Pi_{30}(s) = s\,\widetilde{\Pi}_{30}(s)\,+\,\,\frac{g^{2}\tan\theta_{W}}{4}\,v^{2}\,. \tag{10}\]
The \(S\) and \(T\) parameters are determined by the discrepancies between \(e_{3}\) and \(e_{1}\) and their respective SM contributions, denoted as \(e_{3}^{\rm SM}\) and \(e_{1}^{\rm SM}\), respectively:
\[S = \frac{16\pi}{g^{2}}(e_{3}-e_{3}^{\rm SM}),\quad T=\frac{4\pi}{g^{2} \sin^{2}\theta_{W}}(e_{1}-e_{1}^{\rm SM}). \tag{11}\]
### Dispersive relations
For the calculations of \(S\) and \(T\) we use dispersive representations [5; 6]:
\[S = \frac{16\pi}{g^{2}\tan\theta_{W}}\,\int_{0}^{\infty}\!\frac{ \mbox{\rm{ds}}}{s}\,\left[\rho_{S}(s)\,-\,\rho_{S}(s)^{\rm SM}\right],\] \[T = \frac{4\pi}{g^{\prime\,2}\cos^{2}\theta_{W}}\,\int_{0}^{\infty} \!\frac{\mbox{\rm{ds}}}{s^{2}}\,\left[\rho_{T}(s)\,-\,\rho_{T}(s)^{\rm SM}\right], \tag{12}\]
with the spectral functions
\[\rho_{S}(s) = \frac{1}{\pi}\,\mbox{Im}\widetilde{\Pi}_{30}(s)\,,\] \[\rho_{T}(s) = \frac{1}{\pi}\mbox{Im}[\Sigma^{(0)}(s)-\Sigma^{(+)}(s)]\,, \tag{13}\]
with \(\Sigma\) the corresponding Goldstone self-energy. At LO in \(g\) and \(g^{\prime}\), the SM one-loop spectral functions read
\[\rho_{S}(s)^{\rm SM} = \frac{g^{2}\tan\theta_{W}}{192\pi^{2}}\left[\theta(s)-\left(1- \frac{m_{h}^{2}}{s}\right)^{3}\,\theta\left(s-m_{h}^{2}\right)\right],\] \[\rho_{T}(s)^{\rm SM} = \frac{3g^{\prime\,2}s}{64\pi^{2}}\,\left[\,-\,\theta(s)+\left(1- \frac{m_{h}^{4}}{s^{2}}\right)\theta(s-m_{h}^{2})\right].\]
### Leading-order calculation
At LO \(T\) vanishes (\(T_{\rm LO}=0\)), while there is a LO contribution to \(S\),
\[\Pi_{30}(s)\big{|}_{\rm LO}=\frac{g^{2}\tan\theta_{W}}{4}s\left[\frac{v^{2}}{ s}+\frac{F_{V}^{2}-\widetilde{F}_{V}^{2}}{M_{V}^{2}-s}-\frac{F_{A}^{2}- \widetilde{F}_{A}^{2}}{M_{A}^{2}-s}\right]. \tag{15}\]
so that, using (9)–(11),
\[S_{\rm LO} = 4\pi\left(\frac{F_{V}^{2}-\widetilde{F}_{V}^{2}}{M_{V}^{2}}-\frac{F_ {A}^{2}-\widetilde{F}_{A}^{2}}{M_{A}^{2}}\right)\,. \tag{16}\]
If one assumes the 1st and the 2nd WSRs of (3) and (5), \(S_{\rm LO}\) is determined in terms of only resonance masses,
\[S_{\rm LO}=4\pi v^{2}\left(\frac{1}{M_{V}^{2}}+\frac{1}{M_{A}^{2}}\right). \tag{17}\]
If only the 1st WSR is considered, and assuming \(M_{A}>M_{V}\) and \(\widetilde{F}_{A}^{2}<F_{A}^{2}\), (3) allows us to obtain a bound:
\[S_{\rm LO}=4\pi\left\{\frac{v^{2}}{M_{V}^{2}}+\left(F_{A}^{2}-\widetilde{F}_{ A}^{2}\right)\left(\frac{1}{M_{V}^{2}}-\frac{1}{M_{A}^{2}}\right)\right\}>\frac{4 \pi v^{2}}{M_{V}^{2}}. \tag{18}\]
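As a rough numerical illustration of Eqs. (17) and (18), the minimal sketch below evaluates \(S_{\rm LO}\) for a few indicative resonance masses, assuming \(v\simeq 246\) GeV; the chosen masses are illustrative only.

```python
import math

v = 0.246  # TeV; the electroweak scale v ~ 246 GeV (illustrative input)

def S_LO_both_WSRs(MV, MA):
    """Eq. (17): S_LO when both Weinberg sum rules are imposed (masses in TeV)."""
    return 4.0 * math.pi * v**2 * (1.0 / MV**2 + 1.0 / MA**2)

def S_LO_lower_bound(MV):
    """Eq. (18): lower bound on S_LO when only the 1st WSR is assumed."""
    return 4.0 * math.pi * v**2 / MV**2

for MV, MA in [(2.0, 2.2), (3.0, 3.3), (5.0, 5.5)]:
    print(f"MV = {MV} TeV, MA = {MA} TeV:  S_LO = {S_LO_both_WSRs(MV, MA):.3f},"
          f"  1st-WSR bound: S_LO > {S_LO_lower_bound(MV):.3f}")
```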
### Next-to-leading-order calculation
At NLO the number of resonance parameters increases and we have opted for an expansion in small \(\widetilde{F}_{V,A}/F_{V,A}\), taking into account that odd-parity couplings are expected in our study to be smaller than even-parity couplings, that is, \(\widetilde{F}_{V,A}/F_{V,A}\ll 1\). If one considers the 1st and the 2nd WSRs of (4) and (6), the bosonic contributions studied here determine \(S\) in terms of only \(\kappa_{W}\), \(M_{V,A}\) and the expansion parameters \(\widetilde{F}_{V,A}/F_{V,A}\):
\[S_{\rm NLO} = 4\pi v^{2}\left(\frac{1}{M_{V}^{2}}+\frac{1}{M_{A}^{2}}\right)+S _{\rm NLO}^{\rm P-even}+S_{\rm NLO}^{\rm P-odd}\,,\] \[S_{\rm NLO}^{\rm P-even} = \frac{1}{12\pi}\left[\left(1-\kappa_{W}^{2}\right)\left(\log \frac{M_{V}^{2}}{m_{h}^{2}}-\frac{11}{6}\right)\right.\] \[\left.+\kappa_{W}^{2}\left(\frac{M_{A}^{2}}{M_{V}^{2}}-1\right) \log\frac{M_{A}^{2}}{M_{V}^{2}}\right]\,,\] \[S_{\rm NLO}^{\rm P-odd} = \frac{1}{12\pi}\left(\frac{\widetilde{F}_{V}^{2}}{F_{V}^{2}}+2 \kappa_{W}^{2}\frac{\widetilde{F}_{V}\widetilde{F}_{A}}{F_{V}F_{A}}-\kappa_{ W}^{2}\frac{\widetilde{F}_{A}^{2}}{F_{A}^{2}}\right)\times \tag{19}\] \[\left(\frac{M_{A}^{2}}{M_{V}^{2}}-1\right)\log\frac{M_{A}^{2}}{M_ {V}^{2}}+\mathcal{O}\left(\frac{\widetilde{F}_{V,A}^{4}}{F_{V,A}^{4}}\right).\]
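A minimal numerical sketch of Eq. (19), assuming \(v\simeq 246\) GeV, \(m_{h}\simeq 125\) GeV and purely illustrative values of \(\kappa_{W}\), \(M_{V,A}\) and \(\widetilde{F}_{V,A}/F_{V,A}\) (the P-odd piece is kept only to the quadratic order written above):

```python
import math

v, mh = 0.246, 0.125  # TeV; v ~ 246 GeV and m_h ~ 125 GeV (illustrative inputs)

def S_NLO_both_WSRs(MV, MA, kW, rV, rA):
    """Eq. (19); rV and rA stand for Ftilde_V/F_V and Ftilde_A/F_A (masses in TeV)."""
    lo = 4.0 * math.pi * v**2 * (1.0 / MV**2 + 1.0 / MA**2)
    L = math.log(MA**2 / MV**2)
    p_even = ((1.0 - kW**2) * (math.log(MV**2 / mh**2) - 11.0 / 6.0)
              + kW**2 * (MA**2 / MV**2 - 1.0) * L) / (12.0 * math.pi)
    p_odd = ((rV**2 + 2.0 * kW**2 * rV * rA - kW**2 * rA**2)
             * (MA**2 / MV**2 - 1.0) * L) / (12.0 * math.pi)
    return lo + p_even + p_odd

print(S_NLO_both_WSRs(MV=4.0, MA=4.2, kW=1.0, rV=0.1, rA=0.1))
```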
As at LO, assuming only the 1st WSR of (4) and \(M_{A}\!>\!M_{V}\) allows us to obtain a lower bound for \(S_{\rm NLO}\),
\[S_{\rm NLO} > \frac{4\pi v^{2}}{M_{V}^{2}}+\Delta S_{\rm NLO}^{\rm P-even}+\Delta S_{\rm NLO}^{\rm P-odd}\,,\] \[\Delta S_{\rm NLO}^{\rm P-even} = \frac{1}{12\pi}\left[\left(1-\kappa_{W}^{2}\right)\left(\log\frac{M_{V}^{2}}{m_{h}^{2}}-\frac{11}{6}\right)\right.\] \[\left.-\,\kappa_{W}^{2}\left(\log\frac{M_{A}^{2}}{M_{V}^{2}}-1+\frac{M_{A}^{2}}{M_{V}^{2}}\right)\right]\,,\] \[\Delta S_{\rm NLO}^{\rm P-odd} = \frac{1}{12\pi}\left\{\left[\frac{\widetilde{F}_{V}^{2}}{F_{V}^{2}}+\kappa_{W}^{2}\frac{\widetilde{F}_{A}}{F_{A}}\left(2\frac{\widetilde{F}_{V}}{F_{V}}-\frac{\widetilde{F}_{A}}{F_{A}}\right)\right]\times\right.\] \[\left.\left(1-\frac{M_{A}^{2}}{M_{V}^{2}}\right)+\log\frac{M_{A}^{2}}{M_{V}^{2}}\times\right.\] \[\left.\left(\frac{\widetilde{F}_{V}^{2}}{F_{V}^{2}}-\kappa_{W}^{2}\frac{\widetilde{F}_{A}^{2}}{F_{A}^{2}}-2\frac{\widetilde{F}_{V}\widetilde{F}_{A}}{F_{V}F_{A}}\right)\right\}+\mathcal{O}\left(\frac{\widetilde{F}_{V,A}^{4}}{F_{V,A}^{4}}\right)\,. \tag{20}\]
Independently of the assumption about the WSRs, \(T\) is determined again in terms of only \(\kappa_{W}\), \(M_{V,A}\) and \(\widetilde{F}_{V,A}\):
\[T_{\rm NLO} = T_{\rm NLO}^{\rm P-even}+T_{\rm NLO}^{\rm P-odd}\,,\] \[T_{\rm NLO}^{\rm P-even} = \frac{3}{16\pi{\rm cos}^{2}\theta_{W}}\left[\left(1-\kappa_{W}^{2}\right)\left(1-\log\frac{M_{V}^{2}}{m_{h}^{2}}\right)+\kappa_{W}^{2}\log\frac{M_{A}^{2}}{M_{V}^{2}}\right]\!\!,\] \[T_{\rm NLO}^{\rm P-odd} = \frac{3}{16\pi{\rm cos}^{2}\theta_{W}}\left\{2\kappa_{W}^{2}\frac{\widetilde{F}_{A}}{F_{A}}-2\frac{\widetilde{F}_{V}}{F_{V}}+\frac{M_{V}^{2}}{M_{A}^{2}-M_{V}^{2}}\times\cdots\right\}\,. \tag{21}\]
In Figure 1 we show the LO predictions for \(S\): the gray area assumes both WSRs and \(M_{A}\!>\!M_{V}\). The colored curves indicate explicitly the predicted results for \(M_{A}=M_{V}\) (orange), \(M_{A}=1.1\,M_{V}\) (blue), \(M_{A}=1.2\,M_{V}\) (red) and \(M_{A}\to\infty\) (dark gray). When only the 1st WSR is considered, the allowed range gets enlarged to the brown region. Note that the experimental data imply \(M_{V}\!>\!2\,\)TeV.
### Phenomenology at NLO considering both WSRs
In this case the expressions of \(S\) and \(T\) are reported in (19) and (21), respectively. Note that they are given in terms of only four free parameters: \(M_{V}\), \(M_{A}\), \(\widetilde{F}_{V}/F_{V}\) and \(\widetilde{F}_{A}/F_{A}\), but the last two are expected to be small in our expansion and a normal distribution with \(\widetilde{F}_{VA}/F_{VA}=0.00\pm 0.33\) is assumed. Moreover, the additional constraint \(\widetilde{\delta}^{(2)}_{\mbox{\tiny NLO}}=0\) allows us to fix \(M_{A}\) too (implying values of \(M_{A}\) very close to \(M_{V}\)). In Figure 2 we show the results. The experimental data imply much higher values for \(M_{V}\).
### Phenomenology at NLO considering only the 1st WSR
Now the lower bound of \(S\) is given in (20) and the expression of \(T\) is shown in (21). Note that again they are expressed using just four resonance parameters: \(M_{V}\), \(M_{A}\), \(\widetilde{F}_{V}/F_{V}\) and \(\widetilde{F}_{A}/F_{A}\). As in the previous case, for \(\widetilde{F}_{V}/F_{V}\) and \(\widetilde{F}_{A}/F_{A}\) we consider a normal distribution with \(\widetilde{F}_{VA}/F_{VA}=0.00\pm 0.33\). The comparison between our determinations and the experimental values is shown in Figure 3, leading to \(M_{V}\!\gtrsim\!3\,\)TeV.
## 5 Conclusions
We conduct a calculation of the \(S\) and \(T\) oblique parameters at the next-to-leading-order level, employing an effective methodology that incorporates spin-1 heavy resonances. Our approach adopts a broad non-linear realization of Electroweak Symmetry Breaking (EWSB), devoid of any presumptions regarding the precise connection between the Higgs and the Goldstones. This work represents a first step in the update of the results reported in Ref. [6], encompassing a more general Lagrangian [4] and taking into account the latest experimental constraints of Ref. [12]. In the near future we will conclude this update by considering fermionic contributions too [7].
It is worth highlighting that employing dispersive relations has eliminated the reliance on arbitrary cut-off values, which can introduce unphysical aspects. Additionally, we incorporate essential high-energy constraints by presuming the presence of well-behaved form factors [9] and invoking the Weinberg Sum Rules (WSRs) [11]. This strategic approach enables us to express \(S\) and \(T\) using only a small set of resonance parameters: \(M_{V}\), \(M_{A}\), \(\widetilde{F}_{V}/F_{V}\) and \(\widetilde{F}_{A}/F_{A}\).
We assume that odd-parity couplings are subleading, so that we can employ an expansion in \(\widetilde{F}_{VA}/F_{VA}\). This assumption allows us to consider a normal distribution
Figure 1: LO predictions for \(S\). The green area covers the experimentally allowed region, at 68% and 95% CL [12]. The gray region assumes the two WSRs and we indicate explicitly the corresponding lines for \(M_{A}=M_{V}\) (orange), \(M_{A}=1.1\,M_{V}\) (blue), \(M_{A}=1.2\,M_{V}\) (red) and \(M_{A}\to\infty\) (dark gray). If only the 1st WSR is considered, the allowed region is given by both, the gray and the brown areas.
Figure 2: NLO determinations of \(S\) and \(T\) assuming both WSRs. The ellipses give the experimentally allowed regions of \(S\) and \(T\) at 68%, 95% and 99% CL [12]. The different colors of the points correspond to different values of \(M_{V}\): \(M_{V}=2\) (red), 3 (orange), 4 (yellow), 5 (green), 6 (blue) and 7 (purple) TeV. The values of \(\kappa_{W}\) and \(\widetilde{F}_{VA}/F_{VA}\) have been generated considering normal distributions given by \(\kappa_{W}=1.01\pm 0.06\)[13] and \(\widetilde{F}_{VA}/F_{VA}=0.00\pm 0.33\).
for \(\widetilde{F}_{A}/F_{A}\), with \(\widetilde{F}_{VA}/F_{VA}=0.00\pm 0.33\). Within the context of both Weinberg Sum Rules (WSRs), \(M_{A}\) must be a little larger than \(M_{V}\) and our findings are graphically illustrated in Figure 2. In the case where we disregard the 2nd WSR, we assume that \(M_{A}>M_{V}\) and we examine the alignment between our predictions and experimental constraints in Figure 3.
The primary inference drawn from our analysis is that the present electroweak precision data permits the existence of massive resonances at the natural electroweak scale, \(M_{R}\gtrsim 3\) TeV. Our findings align with the conclusions drawn in our earlier studies of Refs. [6; 9].
## Acknowledgments
We wish to thank the organizers for the pleasant conference. This work has been supported in part by the Spanish Government (PID2019-108655GB-I00, PID2020-114473GB-I00, PID2022-137003NB-I00); by the Generalitat Valenciana (PROMETEU/2021/071); by the Universidad Cardenal Herrera-CEU (IND122/15); and by the ESI International Chair@CEU-UCH.
## Declaration of Generative AI and AI-assisted technologies in the writing process
During the preparation of this work the authors used ChatGPT in order to improve readability and language. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
|
2308.16548 | Does the Cosmological Constant really indicate the existence of a Dark
Dimension? | According to the "dark dimension" (DD) scenario, we might live in a universe
with a single compact extra dimension, whose mesoscopic size is dictated by the
measured value of the cosmological constant. This scenario is based on
swampland conjectures, that lead to the relation $\rho_{\rm swamp}\sim m_{_{\rm
KK}}^4$ between the vacuum energy $\rho_{\rm swamp}$ and the size of the extra
dimension $m_{_{\rm KK}}^{-1}$ ($m_{_{\rm KK}}$ is the mass scale of a
Kaluza-Klein tower), and on the corresponding result $\rho_{_{\rm EFT}}$ from
the EFT limit. We show that $\rho_{_{\rm EFT}}$ contains previously missed
UV-sensitive terms, whose presence invalidates the widely spread belief (based
on existing literature) that the calculation gives automatically the finite
result $\rho_{_{\rm EFT}}\sim m_{_{\rm KK}}^4$ (with no need for fine-tuning).
This renders the matching between $\rho_{\rm swamp}$ and $\rho_{_{\rm EFT}}$ a
non-trivial issue. We then comment on the necessity to find a mechanism that
implements the suppression of the aforementioned UV-sensitive terms. This
should finally allow to frame the DD scenario in a self-consistent framework,
also in view of its several phenomenological applications based on EFT
calculations. | Carlo Branchina, Vincenzo Branchina, Filippo Contino, Arcangelo Pernace | 2023-08-31T08:40:32Z | http://arxiv.org/abs/2308.16548v3 | # Does the Cosmological Constant really indicate the existence of a Dark Dimension?
###### Abstract
It has been recently proposed that we might live in a universe with a single compact extra dimension, whose mesoscopic size is dictated by the measured value of the cosmological constant. Central to this proposal is the result that in a \(4+n\) dimensional theory with \(n\) compact dimensions a tower of Kaluza-Klein (KK) states contributes an amount \(m_{{}_{\rm KK}}^{4}\) to the vacuum energy \(\rho_{4}\), where \(m_{{}_{\rm KK}}\) is the KK scale of the tower. We show that the result \(\rho_{4}\sim m_{{}_{\rm KK}}^{4}\) comes from a mistreatment of the asymptotics of the loop momenta in the \(4+n\) original theory. When the latter are correctly treated, new UV-sensitive terms appear in \(\rho_{4}\) that invalidate the prediction of the dark dimension. We also show that, despite recent claims to the contrary, it is always possible to perform consistent effective field theory calculations that include only a finite number of tower states.
## I Introduction
Theories with large extra dimensions were extensively explored in the nineties in search for a solution to the electroweak naturalness/hierarchy problem [1; 2; 3]. A recent surge of interest towards the physics of \(5D\) effective field theories (EFTs) with one compact extra dimension of mesoscopic size has followed the dark dimension (DD) proposal, which, following arguments based on Swampland conjectures [4], and born in a string framework, suggests the existence of a single extra dimension of \(\mu\)m size. A generic feature of string theories is the existence of towers with an infinite number of states, whose masses are given in terms of a scale \(\mu_{tow}\). According to the Swampland distance conjecture [5], at large distance in the moduli field space \(\phi\) one of the tower scales becomes exponentially small, \(\mu_{tow}\sim e^{-\alpha|\phi|}\) (\(\alpha\) positive \({\cal O}(1)\) constant), and the DD proposal is related to this asymptotic regime. As stressed in [4], in these regions only two cases seem to arise in string compactifications: a tower of string excitation modes or a tower of Kaluza-Klein (KK) states, i.e. \(\mu_{tow}\sim M_{s},m_{{}_{\rm KK}}\) (Emergent String Conjecture [6; 7]). In general, in \(d\) spacetime non-compact dimensions an infinite tower of states contributes to the vacuum energy \(\rho_{d}\) an amount \(\,\rho_{d}\sim\mu_{tow}^{d}\)[8; 9]. A similar result seems to hold even in the framework of higher dimensional field theories with compact extra dimensions. This is for instance the case for supersymmetric theories with Scherk-Schwarz or brane-localized SUSY breaking [10; 11]. Only KK modes are present and the usual calculation gives \(\rho_{d}\sim m_{{}_{\rm KK}}^{d}\).
Going back to the string (quantum gravity) framework, when the distance conjecture is implemented in AdS spaces [12]
\[\mu_{tow}\sim|\widetilde{\Lambda}_{\rm cc}|^{\gamma}\,, \tag{1}\]
where \(\widetilde{\Lambda}_{\rm cc}\) is the cosmological constant times the squared Planck mass \(M_{P}^{2}\). Even though there is much wider support in AdS, the conjecture is nonetheless extended also to dS spaces, where it forms the basis for the dark dimension proposal [4]. Restricting to the \(d=4\) case, the one-loop string calculation of \(\rho_{4}\) gives
\[\rho_{4}\sim\mu_{tow}^{4}\,. \tag{2}\]
The authors of [4] note that higher loops might only contribute with higher powers of \(\mu_{tow}\), so that (barring cancellation of the \(\mu_{tow}^{4}\) term) the comparison of (2) with (1) gives1
Footnote 1: They also refer to [13; 14] to further support the bound (3).
\[\gamma\geq\frac{1}{4}\,. \tag{3}\]
They assume (3) as starting point for their proposal. Moreover, observing that the experimental bounds on possible violations of the \(1/r^{2}\) Newton's law [15] give \(\mu_{tow}\geq 6.6\) meV, and that the energy scale associated to the measured value of \(\widetilde{\Lambda}_{\rm cc}\)[16] is of the same order, \(\widetilde{\Lambda}_{\rm cc}^{1/4}\sim 2.31\) meV, they infer that (1) is saturated with \(\gamma=1/4\), and accordingly the "experimental value" of \(\mu_{tow}\) is
\[\mu_{tow}^{exp}\sim 2.31\,{\rm meV} \tag{4}\]
(order the neutrino scale). Finally they observe that, although it is in principle possible that \(\mu_{tow}=M_{s}\), Eq. (4) indicates that this option is "ruled out by experiments" since we know that physics above the neutrino scale is well described by effective field theories, and no sign of string excitations is observed at these scales. They then conclude that the only possibility left is an "EFT decompactification scenario", with a Kaluza-Klein mass \(m_{{}_{\rm KK}}\sim\mu_{tow}^{exp}\sim 2.31\,{\rm meV}\).
This conclusion takes us from the string theory realm to the EFT terrain, and is crucial to the formulation of
the DD proposal. Typically, when physics is described in terms of a string KK tower, the original string theory is replaced by the corresponding higher dimensional EFT with compact extra dimensions. A thorough analysis of this delicate step is one of the goals of the present work.
According to [17], the strongest bounds on the compactification scale \(m_{{}_{\rm KK}}^{-1}\) come from the heating of neutron stars due to the surrounding cloud of trapped KK gravitons [17; 18], which yields the upper bounds: \(m_{{}_{\rm KK}}^{-1}<44\,\mu\)m for \(n=1\), \(m_{{}_{\rm KK}}^{-1}<1.6\times 10^{-4}\,\mu\)m for \(n=2\), with more stringent bounds for \(n>2\). The authors of [4] then conclude that \(n\geq 2\) is excluded since it is not compatible with \(m_{{}_{\rm KK}}\sim\widetilde{\Lambda}_{\rm cc}^{1/4}\), and that there should be a _single_ extra dimension, which they call the _dark dimension_, of size \(\sim 1-100\,\mu\)m.
In string theory the finite result \(\rho_{d}\sim\mu_{tow}^{d}\) arises from modular invariance, which requires summing over the infinite tower of states. In higher dimensional field theories with compact extra dimensions such a UV-insensitive result for \(\rho_{d}\) is obtained by performing the calculation in a similar manner, i.e. summing over the infinite number of KK states (the same is done for the calculation of the 4D Higgs effective potential and Higgs boson mass). In contrast to the string theory case, however, in this EFT framework such a way of performing the calculation is less obvious to justify [19]. In fact, this question was at the centre of a heated debate in the early 2000's. Several authors tried to support this way of operating with different arguments [20; 21], and even nowadays there are attempts at justifying it from the string theory side [22].
This issue, and more generally the question of developing a well-founded EFT approach to field theories with compact extra dimensions, was recently re-analysed in [23], where the focus was on the problem of the UV-(in)sensitivity of the one-loop Higgs effective potential and Higgs mass. It was shown that within the usual calculations the asymptotics of the loop momenta are mistreated, and that this results in an artificial washout of UV-sensitive terms of topological origin. The latter stem from the boundary conditions that must necessarily be given to define the theory on a multiply connected manifold. Their presence was first pointed out in [23].
In this work we show that, when a proper EFT calculation of the vacuum energy \(\rho_{4}\) is performed, UV-sensitive terms arise. Moreover we discuss how the EFT logic can and must be consistently applied to theories with compact extra dimensions, although it has been recently argued that no controlled approximation can be obtained cutting a KK tower at a finite value [24].
## II Vacuum energy
For concreteness, in the following we stick to the case (sufficient for our purposes) of a \(5D\) EFT coupled to gravity, where the compact space dimension is in the shape of a circle of radius \(R\). We take the \(5D\) action to be
\[\mathcal{S}^{(4+1)}=\mathcal{S}^{(4+1)}_{\rm grav}+\mathcal{S}^{(4+1)}_{\rm matter }, \tag{5}\]
where
\[\mathcal{S}^{(4+1)}_{\rm grav}=\frac{1}{2\hat{\kappa}^{2}}\int d^{4}xdz\sqrt{ \hat{g}}\,\left(\hat{\mathcal{R}}-2\hat{\Lambda}_{cc}\right) \tag{6}\]
is the Einstein-Hilbert action in \(4+1\) dimensions and \(\mathcal{S}^{(4+1)}_{\rm matter}\) the matter action that contains the bosonic and fermionic fields of the theory. We indicate with \(x\) the \(4D\) coordinates and with \(z\) the coordinate along the compact dimension. After integration over \(z\), the \(5D\) metric (we use the \((+,-,-,-,-)\) signature)
\[\hat{g}_{{}_{MN}}=\begin{pmatrix}e^{2\alpha\phi}g_{\mu\nu}-e^{2\beta\phi}A_{ \mu}A_{\nu}&e^{2\beta\phi}A_{\mu}\\ e^{2\beta\phi}A_{\nu}&-e^{2\beta\phi}\end{pmatrix} \tag{7}\]
leads to the \(4D\) action [25]
\[\mathcal{S}^{(4)}_{\rm grav}=\frac{1}{2\kappa^{2}}\int\mathrm{d}^{4}x\,\sqrt {-g}\] \[\times\left[\mathcal{R}-2e^{2\alpha\phi}\hat{\Lambda}_{cc}+2\alpha \Box\phi+\frac{(\partial\phi)^{2}}{2}-\frac{e^{-6\alpha\phi}}{4}F^{2}\right], \tag{8}\]
where the \(4D\) constant \(\kappa\) is related to the \(5D\)\(\hat{\kappa}\) by
\[\kappa^{2}=\frac{\hat{\kappa}^{2}}{2\pi R}. \tag{9}\]
The constants \(\alpha\) and \(\beta\) satisfy the relation
\[2\alpha+\beta=0 \tag{10}\]
and the canonical radion kinetic term fixes
\[\alpha=\frac{1}{\sqrt{12}}. \tag{11}\]
For completeness we recall that the Newton constant \(\kappa\) can be used to write (8) in terms of dimensionful \(\phi\) and \(A_{\mu}\) fields through the redefinition
\[\phi\rightarrow\frac{\phi}{\sqrt{2}\kappa}\ \,\ \ A_{\mu}\rightarrow\frac{A_{\mu}} {\sqrt{2}\kappa}. \tag{12}\]
Considering the example of a complex \(5D\) scalar field \(\hat{\Phi}\) with action
\[\mathcal{S}^{(4+1)}_{\Phi}=\int d^{4}xdz\ \sqrt{\hat{g}}\left(\hat{g}^{MN} \partial_{M}\hat{\Phi}^{*}\partial_{N}\hat{\Phi}-m^{2}|\hat{\Phi}|^{2}\right) \tag{13}\]
and non-trivial boundary condition
\[\hat{\Phi}(x,z+2\pi R)=e^{2\pi iq}\,\hat{\Phi}(x,z), \tag{14}\]
for the corresponding \(4D\) action we have
\[\mathcal{S}^{(4)}_{\Phi}=\int d^{4}x\sqrt{-g}\ \times\\ \sum_{n}\left[|D\varphi_{n}|^{2}-\left(e^{2\alpha\phi}m^{2}+e^{6 \alpha\phi}\frac{(n+q)^{2}}{R^{2}}\right)|\varphi_{n}|^{2}\right], \tag{15}\]
where
\[D_{\mu}\equiv\partial_{\mu}-i\left(\frac{n+q}{R}\right)A_{\mu} \tag{16}\]
and \(\varphi_{n}(x)\) are the KK modes of \(\hat{\Phi}(x,z)\). Taking a constant background for the radion (that for notational simplicity we continue to call \(\phi\)) and the trivial background for \(A_{\mu}\),
\[\hat{g}^{0}_{{}_{MN}}=\begin{pmatrix}e^{2\alpha\phi}\eta_{\mu\nu}&0\\ 0&-e^{2\beta\phi}\end{pmatrix}, \tag{17}\]
from (15) we can define the \(\phi\)-dependent radius \(R_{\phi}\equiv R\,e^{-3\alpha\phi}\) (\(R\,e^{(\beta-\alpha)\phi}\) before using (10)) and the \(\phi\)-dependent mass \(m_{\phi}^{2}\equiv m^{2}e^{2\alpha\phi}\), so that the KK masses are
\[m_{n}^{2}\equiv m_{\phi}^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}. \tag{18}\]
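A minimal sketch of the spectrum (18), with illustrative numbers in units of \(1/R_{\phi}\), showing how the boundary charge \(q\) of (14) shifts the KK levels with respect to the periodic case:

```python
def kk_masses_squared(m_phi, R_phi, q, n_max=3):
    """Eq. (18): m_n^2 = m_phi^2 + (n + q)^2 / R_phi^2 for the KK levels |n| <= n_max."""
    return {n: m_phi**2 + (n + q)**2 / R_phi**2 for n in range(-n_max, n_max + 1)}

# Illustrative values: a massless 5D field, periodic (q = 0) versus twisted (q = 0.3).
periodic = kk_masses_squared(m_phi=0.0, R_phi=1.0, q=0.0)
twisted = kk_masses_squared(m_phi=0.0, R_phi=1.0, q=0.3)
for n in sorted(periodic):
    print(f"n = {n:+d}:  q = 0 -> m_n^2 = {periodic[n]:.2f},  q = 0.3 -> m_n^2 = {twisted[n]:.2f}")
```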
Going to Euclidean space and considering the general case, the one-loop contribution to the \(4D\) vacuum energy \(\rho_{4}\) of a single bosonic or fermionic tower of mass \(m\) and boundary charge \(q\) is then
\[\rho_{4}\sim(-1)^{\delta_{if}}\sum_{n}\int\frac{d^{4}p}{(2\pi)^{4}}\log\frac{p ^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}+m_{\phi}^{2}}{\mu^{2}}, \tag{19}\]
where \(\mu\) is a subtraction scale, and \(i=b,f\) for bosons and fermions respectively.
The right hand side of (19) is calculated according to different strategies. One of them consists in performing the sum over \(n\) all the way up to infinity, and the integral in \(d^{4}p\) with the help of a cutoff2\(\Lambda\)[10; 11; 26]. Other methods are related to the implementation of the proper time [27], Pauli-Villars [20], thick brane [21], and dimensional regularizations [28]. They all give the same result. For the time being we focus on the first of them; we will comment on the others later. A crucial point in getting the UV-insensitive result \(\rho_{4}\sim R_{\phi}^{-4}=m_{{}_{\rm KK}}^{4}\) in [10; 11; 26] is that \(n\) is sent to infinity while \(\Lambda\) is kept fixed3. As mentioned in the Introduction, however, this way of performing the calculation mistreats the asymptotics of the \(5D\) loop momentum of the original theory. In fact, \(n/R\) is the fifth component \(p_{5}\) of the 5D loop momentum \(\hat{p}\equiv(p,n/R)\). Sending \(p_{5}\to\infty\) while keeping \(\Lambda\) fixed means that in the loop corrections we are (improperly) including first the asymptotics of the fifth component \((n/R)\) of the momentum and only later those of the other four components \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\). As shown in [23], however, a necessary and physical requirement, overlooked in previous literature, is that the asymptotics of all the five components of \(\hat{p}\) have to be treated on an equal footing. This can be realized considering in (19) a 5D cutoff, \(\hat{g}^{{}_{MN}}_{0}\hat{p}_{{}_{M}}\hat{p}_{{}_{N}}=e^{-2\alpha\phi}p^{2}+e^{-2\beta\phi}n^{2}/R^{2}\equiv\vec{p}^{\,2}+n^{2}/\vec{R}^{2}\leq\Lambda^{2}\), or equivalently through the insertion of a multiplicative smooth cutoff function \(e^{-(\vec{p}^{2}+n^{2}/\vec{R}^{2})/\Lambda^{2}}\) (\(\hat{g}^{{}_{MN}}_{0}\) is the inverse of the Euclidean flat background \(5D\) metric in (17)). Sticking to the first of these two options, defining \(\Lambda_{\phi}\equiv\Lambda e^{\alpha\phi}\), and performing the integration over \(p\), we get (below we write only the contribution of a bosonic tower)
Footnote 2: We note that the introduction of \(\Lambda\) is necessary, otherwise each of the integrals in (19) would be divergent.
\[\rho_{4} =\frac{1}{64\pi^{2}}\sum_{n=-[R_{\phi}\Lambda_{\phi}]}^{[R_{\phi} \Lambda_{\phi}]}\Bigg{\{}\left(\Lambda_{\phi}^{2}-\frac{n^{2}}{R_{\phi}^{2}} \right)\left(m_{\phi}^{2}+\left(\frac{n+q}{R_{\phi}}\right)^{2}\right)\] \[+\left(\Lambda_{\phi}^{2}-\frac{n^{2}}{R_{\phi}^{2}}\right)^{2} \log\frac{\Lambda_{\phi}^{2}+m_{\phi}^{2}-\frac{n^{2}}{R_{\phi}^{2}}+\left( \frac{n+q}{R_{\phi}}\right)^{2}}{\mu^{2}}\] \[+\left(m_{\phi}^{2}+\left(\frac{n+q}{R_{\phi}}\right)^{2}\right) ^{2}\log\frac{m_{\phi}^{2}+\left(\frac{n+q}{R_{\phi}}\right)^{2}}{\Lambda_{ \phi}^{2}+m_{\phi}^{2}-\frac{n^{2}}{R_{\phi}^{2}}+\left(\frac{n+q}{R_{\phi}} \right)^{2}}\] \[-\frac{1}{2}\left(\Lambda_{\phi}^{2}-\frac{n^{2}}{R_{\phi}^{2}} \right)^{2}\Bigg{\}}\equiv\sum_{n=-[R_{\phi}\Lambda_{\phi}]}^{[R_{\phi} \Lambda_{\phi}]}F(n), \tag{20}\]
where the brackets \([...]\) indicate "integer part" (to simplify the notation, but without loss of generality, we take \(\Lambda\) such that \(R_{\phi}\Lambda_{\phi}\) is an integer). The sum can be performed using the Euler-Maclaurin (EML) formula,
\[\rho_{4}=\int_{-R_{\phi}\Lambda_{\phi}}^{R_{\phi}\Lambda_{\phi}}dx \,F(x)+\frac{F(R_{\phi}\Lambda_{\phi})+F(-R_{\phi}\Lambda_{\phi})}{2} \tag{21}\] \[+\sum_{j=1}^{r}\frac{B_{2j}}{(2j)!}\left(F^{(2j-1)}(R_{\phi} \Lambda_{\phi})-F^{(2j-1)}(-R_{\phi}\Lambda_{\phi})\right)+R_{2r},\]
where \(r\) is an integer, \(B_{n}\) are the Bernoulli numbers, and the rest \(R_{2r}\) is given by
\[R_{2r} =\sum_{j=r+1}^{\infty}\frac{B_{2j}}{(2j)!}\left(F^{(2j-1)}(R_{\phi}\Lambda_{\phi})-F^{(2j-1)}(-R_{\phi}\Lambda_{\phi})\right)\] \[=\frac{(-1)^{2r+1}}{(2r)!}\int_{-R_{\phi}\Lambda_{\phi}}^{R_{\phi}\Lambda_{\phi}}dx\,F^{(2r)}(x)B_{2r}(x-[x]), \tag{22}\]
with \(B_{n}(x)\) the Bernoulli polynomials. Expanding for \(m_{\phi}/\Lambda_{\phi}\), \(q/\Lambda_{\phi}\ll 1\), we finally get
\[\rho_{4} =\frac{5\log\frac{\Lambda^{2}e^{2\alpha\phi}}{\mu^{2}}-2}{300\pi^{ 2}}e^{2\alpha\phi}R\Lambda^{5}+\frac{5m^{2}+3\frac{q^{2}e^{4\alpha\phi}}{R^{2}} }{180\pi^{2}}e^{2\alpha\phi}R\Lambda^{3}\] \[-\frac{35m^{4}+14m^{2}\frac{q^{2}e^{4\alpha\phi}}{R^{2}}+3\frac{q^{ 4}e^{8\alpha\phi}}{R^{4}}}{840\pi^{2}}e^{2\alpha\phi}R\Lambda+\frac{m^{5}}{60 \pi}e^{2\alpha\phi}R\] \[+\frac{3\log\frac{\Lambda^{2}e^{2\alpha\phi}}{\mu^{2}}+2}{2880\pi^{ 2}R^{4}}e^{10\alpha\phi}R+R_{4}+\mathcal{O}(\Lambda^{-1}), \tag{23}\]
where the rest \(R_{4}\) is the UV-insensitive term
\[R_{4}= -\frac{x^{2}{\rm Li}_{3}\left(r_{b}e^{-x}\right)+3x{\rm Li}_{4} \left(r_{b}e^{-x}\right)+3{\rm Li}_{5}\left(r_{b}e^{-x}\right)}{128\pi^{6}R^{4}} e^{12\alpha\phi}\] \[+h.c.+\frac{3\zeta(5)}{64\pi^{6}R^{4}}e^{12\alpha\phi}+{\cal O} \left(\Lambda^{-1}\right) \tag{24}\]
with
\[r\equiv e^{2\pi iq}\qquad,\qquad x\equiv 2\pi e^{-2\alpha\phi}R\sqrt{m^{2}}\,. \tag{25}\]
Eqs. (23) and (24) are re-written in terms of the original \(R\), \(\Lambda\) and \(m\) (rather than \(R_{\phi}\), \(\Lambda_{\phi}\) and \(m_{\phi}\)) to explicitly show the \(\phi\)-dependence. It is worth noting that the \(4D\) vacuum energy \(\rho_{4}\) is related to the corresponding \(5D\) one \(\rho_{4+1}\) through the relation \(\rho_{4}=2\pi Re^{2\alpha\phi}\,\rho_{4+1}\).
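Since the sum in (20) is finite, it can also be evaluated directly. The minimal sketch below, written for \(\phi=0\) and with illustrative values in units of \(1/R\), transcribes the summand \(F(n)\) of (20) and compares a twisted tower with a periodic one, so that the growth of the \(q\)-dependent part with the cutoff expected from the \(q^{2}R\Lambda^{3}\) term of (23) can be checked numerically:

```python
import math

def F(n, q, m, R, Lam, mu=1.0):
    """Summand F(n) of Eq. (20), written for phi = 0 (Lambda_phi = Lambda, R_phi = R)."""
    a = Lam**2 - n**2 / R**2       # residual 4D phase space at KK level n
    b = m**2 + (n + q)**2 / R**2   # KK mass squared, Eq. (18)
    return (a * b
            + a**2 * math.log((a + b) / mu**2)
            + b**2 * math.log(b / (a + b))
            - 0.5 * a**2) / (64.0 * math.pi**2)

def rho4(q, m, R, Lam):
    """Eq. (20): sum over the KK levels allowed by the 5D cut p^2 + n^2/R^2 <= Lambda^2."""
    N = int(R * Lam)
    return sum(F(n, q, m, R, Lam) for n in range(-N, N + 1))

# Illustrative inputs (R = 1, m = 0.1): the q-dependent piece is expected to grow
# with the cutoff, cf. the q^2 R Lambda^3 term of Eq. (23).
for Lam in (20.0, 40.0, 80.0):
    dq = rho4(q=0.3, m=0.1, R=1.0, Lam=Lam) - rho4(q=0.0, m=0.1, R=1.0, Lam=Lam)
    print(f"Lambda = {Lam:5.1f}:  rho4(q=0.3) - rho4(q=0) = {dq:.4e}")
```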
Several comments are in order. First of all we observe that, had we made the calculation in the usual way [10; 11; 26], all the \(q\)-dependent UV-sensitive terms in Eq. (23) (i.e. all the \(q\)-dependent terms except those contained in \(R_{4}\)) would be absent, while the other UV-sensitive terms are cancelled by SUSY. In fact, while the higher dimensional SUSY imposes \(m_{b}=m_{f}\) for the superpartners, \(q_{b}\) and \(q_{f}\) are necessarily different to have a broken SUSY spectrum at low energies. With the usual calculation we would then get the well-known result \(\rho_{4}\sim R_{4}^{b}-R_{4}^{f}\sim m_{{}_{\rm KK}}^{4}\). However, as we explain below, such a result comes from the fact that the UV-sensitive terms proportional to powers of \(q\) are artificially washed out due to an improper way of treating the asymptotics of the loop momentum.
In this respect, we now show that the interpretation of the \(5D\) theory as a \(4D\) one with an _infinite_ tower of states (if pushed too far) is misleading. Within this interpretational framework, in fact, it is natural to consider that the correct thing to do is to sum the infinitely many (\(n\to\infty\)) Coleman-Weinberg one-loop contributions brought by each of the towers. Any reference to the original \(5D\) loop momentum \(\tilde{p}\) (and a fortiori to the physical meaning of \(n\)) is lost. If on the contrary we correctly focus on the dynamical origin of the KK states, and recognize them as different momentum eigenstates that appear in the Fourier expansion of the \(5D\) field \(\hat{\Phi}(x,z)\), it is clear that sending \(n\to\infty\) while keeping the modulus \(p\) of the other four components fixed is unphysical. If our universe has compact extra dimensions, low-energy \(4D\) physical observables emerge from the piling up of quantum fluctuations above the compactification scale. In implementing such a dressing, it is clear that the components of the loop momenta must be treated in a consistent way, actually on an equal footing. We will further comment on this point later.
To better read the result (23), we stress that it contains four kinds of terms: (i) \(m\)- and \(q\)-independent UV-sensitive terms; (ii) UV-sensitive terms that depend only on \(m\); (iii) \(q\)-dependent UV-sensitive terms; (iv) UV-insensitive terms. As stressed above, in SUSY theories boson and fermion superpartners have the same mass \(m\), while the boundary charges \(q\) are necessarily different to trigger the Scherk-Schwarz mechanism. Therefore:
(a) in SUSY theories supersymmetry enforces cancellations between superpartners of all but the \(q\)-dependent terms in (23), so that for each supermultiplet the dominant contribution to \(\rho_{4}\) is controlled by the SUSY breaking parameter \(q_{b}^{2}-q_{f}^{2}\), and is
\[\rho_{4}\sim\frac{(q_{b}^{2}-q_{f}^{2})}{R^{2}}\,e^{6\alpha\phi}R\Lambda^{3}= (q_{b}^{2}-q_{f}^{2})\,m_{{}_{\rm KK}}^{2}R\Lambda^{3}; \tag{26}\]
(b) in non-supersymmetric theories, each of the higher dimensional fields (each tower in \(4D\) language) gives to the vacuum energy the dominant (uncancelled) contribution
\[\rho_{4} \sim e^{2\alpha\phi}R\Lambda^{5}\log\frac{\Lambda^{2}e^{2\alpha \phi}}{\mu^{2}}\] \[=m_{{}_{\rm KK}}^{2/3}R^{5/3}\Lambda^{5}\log\frac{(m_{{}_{\rm KK} }R)^{2/3}\Lambda^{2}}{\mu^{2}}. \tag{27}\]
Therefore, even in the light tower limit \(m_{{}_{\rm KK}}\to 0\) (large negative values of \(\phi\)), by no means can the UV-insensitive \(R_{4}\sim m_{{}_{\rm KK}}^{4}\) term in (23) overthrow these dominating contributions.
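A back-of-the-envelope sketch of the hierarchy at stake, assuming \(\phi=0\) (so \(R=1/m_{{}_{\rm KK}}\)), the meV-scale KK mass of the DD proposal, order-one charges and two illustrative choices of the cutoff; logarithms and \(\mathcal{O}(1)\) coefficients are dropped:

```python
# Orders of magnitude in eV units (phi = 0, so R = 1/m_KK); purely illustrative.
m_KK = 2.31e-3           # eV: the KK scale suggested by the DD proposal
R = 1.0 / m_KK           # radius of the compact dimension
for Lam in (1.0e9, 1.0e12):   # 1 GeV and 1 TeV as sample EFT cutoffs
    susy = m_KK**2 * R * Lam**3                  # Eq. (26), up to (q_b^2 - q_f^2) ~ O(1)
    non_susy = m_KK**(2/3) * R**(5/3) * Lam**5   # Eq. (27), up to the logarithm
    finite = m_KK**4                             # the UV-insensitive R_4 ~ m_KK^4 piece
    print(f"Lambda = {Lam:.0e} eV:  (26) ~ {susy:.1e},  (27) ~ {non_susy:.1e},"
          f"  m_KK^4 ~ {finite:.1e}  [eV^4]")
```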
The main point that emerges from our result (23) is that \(q\)-dependent UV-sensitive terms _always arise_ when non-trivial boundary conditions on multiply-connected manifolds are realized (necessary for instance to implement the Scherk-Schwarz mechanism in SUSY theories), _independently_ of the size of the extra dimensions. On the contrary, the UV-insensitive terms in the rest \(R_{4}\) originate from the discreteness of the momentum along the circle, generated in the present case by the hierarchy between the size of the \(4D\) box and the radius of the circle.
## III Vacuum energy and dark dimension
As already stressed, taking a 5-dimensional supersymmetric EFT with one compact dimension in the shape of a circle of radius \(R\), and performing the calculation in the usual manner, for the vacuum energy \(\rho_{4}\) at the one-loop level we have
\[(\rho_{4})^{\frac{1}{4}}=\left(\sum_{i}R_{4}^{(i)}\right)^{\frac{1}{4}}\sim{ \cal C}\,m_{{}_{\rm KK}}\,, \tag{28}\]
where \(R_{4}^{(i)}\) is of the kind (24) (see also (25)). The sum over \(i\) includes all the bosonic and fermionic contributions, and \({\cal C}\) is an \({\cal O}(1)\) coefficient4. The DD proposal [4]
is based on the assumption that in the asymptotic region of the string moduli space where the EFT emerges, the vacuum energy \(\rho_{4}\) goes as \(m_{\mbox{\tiny KK}}^{4}\), where \(m_{\mbox{\tiny KK}}\) is a Kaluza-Klein mass (of order the neutrino scale) given by the cosmological constant. The authors argue that the EFT result (28) supports their proposal.
However, we have shown that (28) results from an improper treatment of the asymptotics of the loop momentum in the original \(5D\) theory, and that the correct result for \(\rho_{4}\) is given by (23) (with \(q\)-independent terms cancelled by SUSY). From this latter equation we see that for a supersymmetric theory with SUSY breaking parameter \(q_{b}^{2}-q_{f}^{2}\) the dominant contribution to \(\rho_{4}\) is (26), i.e. it goes as \(m_{\mbox{\tiny KK}}^{2}R\Lambda^{3}\). Far from being UV-finite as \(m_{\mbox{\tiny KK}}^{4}\), this term is strongly UV-sensitive. We also see that, for a non-supersymmetric theory the term that dominates \(\rho_{4}\) is (27): it scales with \(m_{\mbox{\tiny KK}}\) as \(m_{\mbox{\tiny KK}}^{2/3}\log m_{\mbox{\tiny KK}}\) and is UV-sensitive as \(\Lambda^{5}\log\Lambda\). In both cases the expected \(m_{\mbox{\tiny KK}}^{4}\) result is not recovered.
Therefore, it seems that the DD proposal can hardly be considered a physical reality. One might think that a possible way to escape from such a conclusion is to admit that what in the literature is usually called EFT limit of string theory, framework in which the DD proposal is formulated, actually gives rise to a new type of Effective Field Theory, far from what is usually intended by the community. At present, however, there is no hint for that, and, as stressed by the authors of [4] themselves, we all know that around and above the neutrino scale physics is well described by the EFT paradigm in the usual and well-known sense.
In this respect we also observe that to calculate the contribution of a KK tower to \(\rho_{4}\), in particular to study its UV (in)sensitivity, we could resort to the species scale cutoff \(\Lambda_{\rm sp}\)[29; 30], as it is sometimes done within the Swampland program (see for instance [31; 32; 33; 34; 35]). Referring to our previous examples, we consider the case of a massless \(5D\) field, where the masses of the tower states are given by (18) with \(m_{\phi}^{2}=0\). The number \(N\) of states with mass below \(\Lambda_{\rm sp}\) is
\[N\equiv n_{\rm max}+|n_{\rm min}|+1\,, \tag{29}\]
with \(n_{\rm max}\) and \(n_{\rm min}\) solutions of \((n+q)^{2}/R_{\phi}^{2}=\Lambda_{\rm sp}^{2}\), i.e.
\[n_{\rm max}=R_{\phi}\Lambda_{\rm sp}-q\qquad;\qquad n_{\rm min}=-R_{\phi} \Lambda_{\rm sp}-q\,. \tag{30}\]
The explicit calculation is performed in the Appendix.
We stress that when \(q\neq 0\), cutting the sum over \(n\) in (19) with (29) and (30) and the integral over the four-momentum \(p\) with \(\Lambda_{\rm sp}\)_is not_ equivalent to the introduction of a cut on the \(5D\) loop momentum \(\hat{p}\). As repeatedly underlined in the present work, the latter is the (physically) correct cut to apply. From the calculations in the Appendix we see that, when the cut is imposed on the combination \((n+q)^{2}/R_{\phi}^{2}\) rather than5\(n^{2}/R_{\phi}^{2}\), an artificial washout of \(q\)-dependent UV-sensitive terms is operated. This again comes from a mistreatment of the \(5D\) loop momentum asymptotics. Similarly to what we have already seen, the application of the \(\Lambda_{\rm sp}\) cut pushes too far the interpretation of the KK modes as massive states of the \(4D\) theory, losing sight of the original physical meaning of \(n\).
Footnote 5: Except for the \(e^{2\alpha\phi}\) rescaling factor, \(n^{2}/R_{\phi}^{2}\) coincides with \(p_{5}^{2}\), the square of the fifth component of \(\hat{p}\).
These issues were also discussed in [23], where it was shown in full generality that the inclusion of the boundary charge \(q\) in the cut (whatever kind of cut) is at the origin of the artificial washout of the \(q\)-dependent UV-sensitive terms. In this respect, it is worth stressing that performing the infinite sum while keeping \(\Lambda\) fixed is equivalent to including \(q\) in the cut. As explained in the present work, both are physically illegitimate operations. The proper time [27], Pauli-Villars [20], and thick brane [21] regularizations all implement the insertion of \(q\) in the cut over \(n\), thus realizing the artificial washout mentioned above. It is worth pointing out that the use of dimensional regularization (DR), as done for instance in [28; 24], does not help to cope with this kind of issue. By construction, in fact, DR does not detect the full UV-sensitivity of a theory, since in this regularization power "divergences" are automatically cancelled (see [36] for a careful analysis of DR in comparison with other regularizations). We also note that DR totally masks the presence of UV-sensitive terms in odd dimensions, leading sometimes to the impression that no "divergences" appear in that case.
## IV Compact dimensions and EFTs
The result \(\rho_{4}\sim m_{\mbox{\tiny KK}}^{4}\) (more generally \(\rho_{d}\sim m_{\mbox{\tiny KK}}^{d}\)) is sometimes used to argue for a possible general breakdown of EFT methods. On our side, referring to the original \((4+n)D\) theory (with \(n\) compact dimensions), we have shown in the previous sections that the EFT approach is perfectly suited to theories with compact extra dimensions, and found for \(\rho_{4}\) the radically different result (23). An interesting point of view on these issues has been recently given in [24], where the result \(\rho_{4}\sim m_{\mbox{\tiny KK}}^{4}\) is taken for granted but it is argued that it cannot be used as a signal of general departure from the EFT approach.
Their argument goes as follows. Consider a tower of states with mass spectrum \(m_{n}=f_{n}\,\mu_{tow}\). Cutting the sum at \(n=N\) means that we include in the theory KK modes up to the \(N\)-th one, and exclude the states from the \((N+1)\)-th up to infinity. In general, integrating out a (finite) set of fields to define a low energy EFT for
the lighter ones is a consistent operation only if there is a large mass hierarchy between the fields included and those excluded. When a KK tower is cut, the hierarchy between the heaviest state included and the lightest one excluded is given by \(f_{N}/f_{N+1}\). Since this ratio is \(\mathcal{O}(1)\) (except for the case \(N=0\), which defines the \(4D\) EFT), the authors conclude that no EFT estimate with a finite number of KK states can ever be done, and that the \((4+n)D\) theory must necessarily contain the infinite tower.
The observation that an infinite tower of massive states cannot be divided into heavy and light fields to define an EFT for the latter ones is certainly true and interesting in its own right. In our opinion, however, this line of reasoning does not apply to higher dimensional theories with compact extra dimensions. In fact, sticking for concreteness to a higher dimensional \(5D\) theory, we should keep in mind that the KK modes are momentum eigenstates of the original \(5D\) fields (and not an infinitely numerable set of massive \(4D\) fields), and that the \(5D\) theory from which the \(4D\) theory derives is an EFT itself. As for any EFT, this means that the \(5D\) momentum \(\hat{p}\equiv(p,n/R)\) in the loops has to be cut at the scale where the theory loses its validity. The fifth component \(n/R\) of \(\hat{p}\) cannot be disentangled from the other four components, \(p\equiv(p_{1},p_{2},p_{3},p_{4})\), so that the cut in the KK states results from a physically necessary requirement. No large hierarchy between included and excluded momentum modes is ever needed.
Starting from the \(5D\) action \(\mathcal{S}_{\Lambda}^{(5)}\), and integrating out the modes in the range \([k,\Lambda]\), one obtains the action \(\mathcal{S}_{k}^{(5)}\) at the lower scale \(k\). Due to the discreteness of \(p_{5}=n/R\), the contribution from the related eigenmodes comes in a stepwise fashion. For \(k<1/R\) no such eigenmodes appear any longer, and the RG evolution becomes effectively of \(4D\) type. It is _only in this sense_ that the \(4D\) theory emerges from the \(5D\) one, and no \(4D\) theory with an infinite tower can ever give an accurate description of the original \(5D\) theory.
## V Summary and conclusions
In the present work we analysed the recent Dark Dimension proposal [4], according to which the tiny measured value of the cosmological constant might signal the presence of a single compact extra dimension of mesoscopic size (order \(\mu m\) or so). Moving from Swampland arguments, this proposal is based on the idea that the cosmological constant fixes the scale \(m_{\mbox{\tiny KK}}\) of a KK tower (of order the neutrino scale), and relies on the result \(\rho_{4}\sim m_{\mbox{\tiny KK}}^{4}\) for the vacuum energy of the underlying higher dimensional EFT with one compact extra dimension.
The results of the present work make it impossible to sustain this proposal, at least the way it has been formulated in [4] and further implemented in [37; 38; 39; 40; 41; 42; 43]. Our analysis is based on the recognition that the original \(5D\) theory is an EFT, so that in the calculation of \(\rho_{4}\) the \(5D\) loop momentum \(\hat{p}=(p_{1},p_{2},p_{3},p_{4},n/R)\) has to be cut (for simplicity we stick to the compactification on a circle of radius \(R\), for which \(p_{5}=n/R\)). The usual calculations [26; 27; 20; 28] that give \(\rho_{4}\sim m_{\mbox{\tiny KK}}^{4}\) mistreat the asymptotics of \(\hat{p}\), and this results in an _artificial_ washout of UV-sensitive terms. In more physical words, \(\rho_{4}\sim m_{\mbox{\tiny KK}}^{4}\) comes from an incorrect way of implementing the piling up of quantum fluctuations. For example, for a \(5D\) supersymmetric model with Scherk-Schwarz SUSY breaking, a UV sensitivity with dominant term \(m_{\mbox{\tiny KK}}^{2}R\Lambda^{3}\) is generated (see (26)), while for a non-SUSY model the dominant UV-sensitive term is \(m_{\mbox{\tiny KK}}^{2/3}R^{5/3}\Lambda^{5}\log\bigl{(}(m_{\mbox{\tiny KK}}R)^{1/3}\Lambda\bigr{)}\) (see (27)).
We have also shown that inconsistencies in the way of treating the asymptotics of the loop momenta appear even when the physical cut is implemented resorting to the species scale \(\Lambda_{\rm sp}\). In fact, when non-vanishing boundary charges \(q\) are present (as it is the case with the SUSY breaking Scherk-Schwarz mechanism), the straightforward exclusion of the KK masses (18) above \(\Lambda_{\rm sp}\) does not implement a physically acceptable cut on the fifth component \(p_{5}\) of the \(5D\) loop momentum \(\hat{p}\).
It must be clear that by no means what we have just said is in conflict with the existence of the natural physical cutoff \(\Lambda_{\rm sp}\), that originates from the coupling of \(N\) species of fields with gravity, and correctly embodies the scale where gravity itself becomes strong and the EFT language can no longer be used [29; 30]. We rather formulate a warning on a blind use of \(\Lambda_{\rm sp}\) in KK theories.
Our analysis also sheds light on a connected issue. Warnings on the possibility of developing an EFT approach to KK theories when only a finite number of states of the tower is considered have been recently raised [24]. Along the lines of reasoning followed in the present work, we have shown that the EFT language is perfectly suited to KK theories.
To conclude, our results indicate that the Dark Dimension proposal in its current formulation cannot be sustained. We do not know if there is any way to overcome our conclusions and rescue this idea.
###### Acknowledgements.
The work of CB is supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2022R1A2C2003567). The work of VB, FC and AP is carried out within the INFN project QFT-HEP.
## Appendix
In this appendix we perform the calculation of the vacuum energy \(\rho_{4}\) using the species scale cutoff \(\Lambda_{\rm sp}\). In a \(4D\) theory with \(N\) particle states, \(\Lambda_{\rm sp}=M_{p}/\sqrt{N}\). In a \(5D\) theory with one compact dimension the identification of \(\Lambda_{\rm sp}\) is done counting the number of KK states that respect the condition \(m_{n}^{2}\leq\Lambda_{\rm sp}^{2}\). The inequality is saturated when
\[m_{\phi}^{2}+\left(\frac{n+q}{R_{\phi}}\right)^{2}=\Lambda_{\rm sp}^{2}\to n _{\pm}=\left[-q\pm R_{\phi}\sqrt{\Lambda_{\rm sp}^{2}-m_{\phi}^{2}}\,\right], \tag{31}\]
where \(n_{\pm}\) reduce to \(n_{\rm max}\) and \(n_{\rm min}\) of (30) in the text when \(m_{\phi}=0\), and the brackets \([...]\) indicate "integer part" (that in the following we neglect for simplicity). The number of states between \(n_{+}\) and \(n_{-}\) is
\[N=n_{+}+|n_{-}|+1=2R_{\phi}\sqrt{\Lambda_{\rm sp}^{2}-m_{\phi}^{2}}+1 \tag{32}\]
and the species scale is then obtained as
\[\Lambda_{\rm sp}=\frac{a}{3}+\frac{X^{1/3}}{3\cdot 2^{1/3}}-\frac{2^{1/3}\left(3 b-a^{2}\right)}{3\cdot X^{1/3}} \tag{33}\]
with
\[X = 3\sqrt{3}\sqrt{4a^{3}c-a^{2}b^{2}-18abc+4b^{3}+27c^{2}} \tag{34}\] \[+\,2a^{3}-9ab+27c\]
and
\[a=m_{\phi}^{2}+\frac{1}{4R_{\phi}^{2}};\quad b=\frac{M_{p}^{2}}{2R_{\phi}^{2} };\quad c=\frac{M_{p}^{4}}{4R_{\phi}^{2}}. \tag{35}\]
Expanding for \(m_{\phi},R_{\phi}^{-1}\ll M_{p}\), we get
\[\Lambda_{\rm sp}^{2}=\frac{M_{p}^{4/3}}{(2R_{\phi})^{2/3}}-\frac{M_{p}^{2/3}}{ 3\,(2R_{\phi}^{4})^{1/3}}+\frac{m_{\phi}^{2}+\frac{1}{4R_{\phi}^{2}}}{3}+ \mathcal{O}(M_{P}^{-2/3}). \tag{36}\]
The first term of this expansion is the one typically referred to in the literature, where only a rough estimate of \(\Lambda_{\rm sp}\) is reported (see for instance [32]).
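The expansion (36) can be cross-checked by solving the defining condition \(\Lambda_{\rm sp}=M_{p}/\sqrt{N}\), with \(N\) given by (32), numerically. The minimal sketch below does this by bisection, for illustrative inputs in Planck units with \(R_{\phi}\gg 1/M_{p}\) and \(m_{\phi}\ll\Lambda_{\rm sp}\):

```python
import math

def species_scale_exact(Mp, R_phi, m_phi):
    """Solve Lambda^2 * (2 R_phi sqrt(Lambda^2 - m_phi^2) + 1) = Mp^2, i.e. Eqs. (29)-(32)."""
    g = lambda L: L**2 * (2.0 * R_phi * math.sqrt(L**2 - m_phi**2) + 1.0) - Mp**2
    lo, hi = m_phi, Mp
    for _ in range(200):               # bisection; g is monotonic on (m_phi, Mp)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def species_scale_expanded(Mp, R_phi, m_phi):
    """First terms of the large-Mp expansion, Eq. (36)."""
    L2 = (Mp**(4.0/3.0) / (2.0 * R_phi)**(2.0/3.0)
          - Mp**(2.0/3.0) / (3.0 * (2.0 * R_phi**4)**(1.0/3.0))
          + (m_phi**2 + 1.0 / (4.0 * R_phi**2)) / 3.0)
    return math.sqrt(L2)

Mp, R_phi, m_phi = 1.0, 1.0e6, 1.0e-7   # illustrative inputs in Planck units
print(species_scale_exact(Mp, R_phi, m_phi), species_scale_expanded(Mp, R_phi, m_phi))
```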
The contribution of a bosonic (or fermionic, adding an overall minus sign) tower to the vacuum energy is
\[\rho_{4}\sim\sum_{n=n_{-}}^{n_{+}}\int^{(\Lambda_{\rm sp})}\frac{d^{4}p}{(2\pi )^{4}}\log\frac{p^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}+m_{\phi}^{2}}{\mu^{2}}, \tag{37}\]
where the superscript (\(\Lambda_{\rm sp}\)) on the integral means that the modulus of the four-dimensional momentum is cut at \(\Lambda_{\rm sp}\). Performing the integration over \(p\) we find
\[\rho_{4}=\frac{1}{64\pi^{2}}\sum_{n=n_{-}}^{n_{+}}\Bigg{\{}-\Lambda _{\rm sp}^{4}+2\Lambda_{\rm sp}^{2}\left(m_{\phi}^{2}+\frac{(n+q)^{2}}{R_{\phi }^{2}}\right)\] \[+2\left(m_{\phi}^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}\right)^{2} \log\left(\frac{m_{\phi}^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}}{\Lambda_{\rm sp }^{2}+m_{\phi}^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}}\right)\] \[+2\Lambda_{\rm sp}^{4}\log\left(\frac{\Lambda_{\rm sp}^{2}+m_{ \phi}^{2}+\frac{(n+q)^{2}}{R_{\phi}^{2}}}{\mu^{2}}\right)\Bigg{\}}\equiv\sum_ {n=n_{-}}^{n_{+}}G(n). \tag{38}\]
As in the text (see (21), (22) and the text therein, where \(B_{i}\) and \(B_{i}(x)\) are defined), the sum can be calculated by means of the EML formula,
\[\rho_{4} = \int_{n_{-}}^{n_{+}}dx\,G(x)+\frac{G(n_{+})+G(n_{-})}{2}\] \[+ \sum_{j=1}^{r}\frac{B_{2j}}{(2j)!}\left(G^{(2j-1)}(n_{+})-G^{(2j-1)}(n_{-})\right)+R_{2r}\,, \tag{39}\]
where
\[R_{2r} = \sum_{j=r+1}^{\infty}\frac{B_{2j}}{(2j)!}\left(G^{(2j-1)}(n_{+})-G^{(2j-1)}(n_{-})\right) \tag{40}\] \[= \frac{(-1)^{2r+1}}{(2r)!}\int_{n_{-}}^{n_{+}}dx\,G^{(2r)}(x)B_{2r}(x-[x]).\]
In the physically meaningful limit \(m_{\phi},R_{\phi}^{-1}\ll\Lambda_{\rm sp}\), the result for the vacuum energy is
\[\rho_{4} = \frac{20\log\left(\frac{4M_{p}^{2}}{5\mu^{2}R_{\phi}}\right)+12 \pi-57}{2^{-1/3}\cdot 3840\pi^{2}R_{\phi}^{2/3}}M_{p}^{10/3} \tag{41}\] \[+ \frac{-4\log\left(\frac{4M_{p}^{2}}{\mu^{2}R_{\phi}}\right)-6\pi +27}{2^{-2/3}\cdot 2304\pi^{2}R_{\phi}^{4/3}}M_{p}^{8/3}+\frac{12\pi-35}{4608 \pi^{2}R_{\phi}^{2}}M_{p}^{2}\] \[+ \frac{\left(4\,m_{\phi}^{2}R_{\phi}^{2}+1\right)\log\left(\frac{ M_{p}^{2}}{2\mu^{3}R_{\phi}}\right)-3(5-4\pi)m_{\phi}^{2}R_{\phi}^{2}}{1152\pi^{2}R_{ \phi}^{2}}M_{p}^{2}\] \[+ \frac{-20\log\left(\frac{M_{p}^{2}}{\mu^{3}R_{\phi}}\right)-120 \pi+309+104\log 2}{2^{-1/3}\cdot 124416\pi^{2}R_{\phi}^{8/3}}M_{p}^{4/3}\] \[+ \frac{3(19-8\pi)-4\log\left(\frac{4M_{p}^{2}}{\mu^{3}R_{\phi}} \right)}{2^{-1/3}\cdot 3456\pi^{2}R_{\phi}^{8/3}}\left(m_{\phi}R_{\phi}\right)^{2}M_{p}^{ 4/3}\] \[+ \frac{525\pi+367\log 2-1953+35\log\left(\frac{M_{p}^{2}}{\mu^{2}R_{\phi}} \right)}{2^{-2/3}1866240\pi^{2}R_{\phi}^{10/3}}M_{p}^{2/3}\] \[+ \frac{9\log\left(\frac{M_{p}^{2}}{\mu^{3}R_{\phi}}\right)+135\pi-4 32+99\log 2}{2^{-2/3}46656\pi^{2}R_{\phi}^{10/3}}m_{\phi}^{2}R_{\phi}^{2}M_{p}^{ 2/3}\]
\[+\frac{2\log\left(\frac{M_{p}^{2}}{\mu^{3}R_{\phi}}\right)+3\pi-30-14 \log 2}{2^{-2/3}1728\pi^{2}R_{\phi}^{10/3}}m_{\phi}^{4}R_{\phi}^{4}M_{p}^{2/3}\] \[+\frac{61-18\pi+40(17-6\pi)m_{\phi}^{2}R_{\phi}^{2}+80(33-9\pi)m_{ \phi}^{4}R_{\phi}^{4}}{138240\pi^{2}R_{\phi}^{4}}\] \[+\frac{m_{\phi}^{5}R_{\phi}}{60\pi}+R_{4}+\mathcal{O}(M_{P}^{-2/3 }), \tag{41}\]
with \(R_{4}\) given in (24), (25).
A few comments are in order. Limiting ourselves to the leading order relation \(\Lambda_{\rm sp}\sim R_{\phi}^{-1/3}M_{p}^{2/3}\) (see (36)), we observe that the powers \(M_{p}^{10/3}\), \(M_{p}^{2}\) and \(M_{p}^{2/3}\) correspond to the powers \(\Lambda^{5},\Lambda^{3}\) and \(\Lambda\) respectively in terms of a generic cutoff \(\Lambda\). However, comparing the coefficients of \(M_{p}^{2}\) and \(M_{p}^{2/3}\) in (41) with the corresponding coefficients of \(\Lambda^{3}\) and \(\Lambda\) in (23), we note that they have a different structure. Moreover, powers of \(M_{p}\) other than those mentioned above (that do not find any correspondence in (23)) are also present. These differences have a twofold origin: they are due both to the fact that the species scale cut is cylindrical in \(5D\) momentum space (see [23] for a thorough discussion on the difference between the implementation of a cylindrical and a spherical cutoff on the \(5D\) momentum) and to the fact that, as per (36), the relation \(\Lambda_{\rm sp}\sim R_{\phi}^{-1/3}M_{p}^{2/3}\) holds only at the leading (large \(M_{p}\)) order.
An even more important difference between the results (23) and (41) is that the latter does not contain any UV-sensitive term proportional to the boundary charge \(q\). Actually (41) comes from a physically illegitimate operation. In fact, rather than a cut on \(p_{5}^{2}=e^{-2\beta\phi}\,n^{2}/R^{2}\), \(\Lambda_{\rm sp}\) implements a cut on the "KK masses" \(m_{n}^{2}=m_{\phi}^{2}+(n+q)^{2}/R_{\phi}^{2}\). As discussed in the text, the (unphysical) introduction of the combination \(n+q\) in the cutoff is at the origin of the artificial washout of the \(q\)-dependent UV-sensitive terms. This makes the result (41), and more generally the introduction of the species scale cutoff in higher dimensional theories with compact extra dimensions, unreliable. The species scale cut only arises as a result of a too literal interpretation of the \(5D\) theory in terms of a \(4D\) theory with towers of massive \(4D\) fields. These warnings do not apply to the case of a bona fide \(4D\) theory with a large number \(N\) of fields coupled to gravity, where \(\Lambda_{\rm sp}\) truly is the quantum gravity physical cutoff.
It is also worth pointing out that (41) provides an example of a hard cutoff calculation where no \(q\)-dependent UV-sensitive terms are generated. In previous literature the opinion was widely expressed that the use of a hard cutoff was at the origin of UV-sensitive terms, which were considered spurious [20; 21; 44]. The above result shows that the presence of these terms is rather due to a correct treatment of the asymptotics of the loop momenta.
|
2309.11668 | Towards Effective Disambiguation for Machine Translation with Large
Language Models | Resolving semantic ambiguity has long been recognised as a central challenge
in the field of Machine Translation. Recent work on benchmarking translation
performance on ambiguous sentences has exposed the limitations of conventional
Neural Machine Translation (NMT) systems, which fail to handle many such cases.
Large language models (LLMs) have emerged as a promising alternative,
demonstrating comparable performance to traditional NMT models while
introducing new paradigms for controlling the target outputs. In this paper, we
study the capabilities of LLMs to translate "ambiguous sentences" - i.e. those
containing highly polysemous words and/or rare word senses. We also propose two
ways to improve their disambiguation capabilities, through a) in-context
learning and b) fine-tuning on carefully curated ambiguous datasets.
Experiments show that our methods can match or outperform state-of-the-art
systems such as DeepL and NLLB in four out of five language directions. Our
research provides valuable insights into effectively adapting LLMs to become
better disambiguators during Machine Translation. We release our curated
disambiguation corpora and resources at
https://data.statmt.org/ambiguous-europarl. | Vivek Iyer, Pinzhen Chen, Alexandra Birch | 2023-09-20T22:22:52Z | http://arxiv.org/abs/2309.11668v2 | # Towards Effective Disambiguation for Machine Translation with Large Language Models
###### Abstract
Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate "ambiguous sentences" - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at [https://data.statmt.org/ambiguous-europarl](https://data.statmt.org/ambiguous-europarl).
## 1 Introduction
While the field of NMT has advanced rapidly in recent times, the disambiguation and translation of ambiguous words still remain an open challenge. Notably, Campolongo et al. (2022) created a benchmark named DiBiMT to study the behaviour of state-of-the-art (SOTA) NMT systems when translating sentences with ambiguous words.1 They reported that even the best-performing commercial NMT systems yielded accurate translations only 50-60% of the time,2 while other open-source multilingual models like mBART50 Tang et al. (2021) and M2M100 Fan et al. (2021) performed much worse. This was found to be due to biases against rare and polysemous word senses inherited during pretraining. Table 1 shows an example from the DiBiMT benchmark where DeepL3 mistranslates an ambiguous word while the LLM BLOOMZ resolves the word to its correct in-context meaning.
Footnote 1: [https://nlp.uniroma1.it/dibimt/public/leaderboard](https://nlp.uniroma1.it/dibimt/public/leaderboard)
Footnote 2: Subsequent iterations of these commercial models have improved, but large margins still remain.
Footnote 3: [https://deepl.com/en/translator](https://deepl.com/en/translator)
In this paper, we explore whether LLMs can indeed perform better at translating "ambiguous sentences" - i.e. those containing highly polysemous and/or rare word senses. The motivation behind this is that while NMT models can potentially learn biases from noisy or narrow domain parallel data, hurting their ability to detect and translate rare word senses, LLMs can potentially be pretrained on a wider variety of monolingual text - though they might also prefer fluency over accuracy. Still, LLMs have shown many emergent abilities due to scale Brown et al. (2020); Chowdhery et al. (2022); Wei et al. (2022) and moreover, have demonstrated great potential for Machine Translation (MT) Vilar et al. (2023); Zhang et al. (2023).
We comprehensively examine how these trends extend to the specific task of translating ambiguous sentences. We select a diverse set of foundational and instruction-tuned LLMs, of different
\begin{table}
\begin{tabular}{l l} \hline \hline Source & The horse had a blaze between its eyes. \\ \hline DeepL & (There is a flame between the horse’s eyes.) \\ \hline BLOOMZ (176B) & (There is a white line between the horse’s eyes.) \\ \hline \hline \end{tabular}
\end{table}
Table 1: An example of English-to-Chinese translation involving an ambiguous term “blaze”. For BLOOMZ, we use 1-shot prompting to obtain the translation.
sizes and with varying combinations of languages in the pre-training data. We then compare how these LLMs match up against several widely used NMT models on the DiBiMT test set, which covers translation from English to five languages: Spanish, Italian, German, Russian and Chinese. We find that, with only 1-shot in-context learning (Brown et al., 2020), LLMs - in particular, BLOOMZ 176B (Muennighoff et al., 2023) and LLaMA 65B (Touvron et al., 2023) - match or outperform top-performing open-source and commercial MT systems, and set a new SOTA in two of the five languages we tested. Furthermore, we propose two methods for adapting LLMs for ambiguous translation: 1) in-context learning with sentences having the same word sense, and 2) fine-tuning on curated ambiguous parallel corpora. We show that these methods are highly effective and can further improve performance by up to 15 points in DiBiMT accuracy in the best case.
Our work thus makes three key contributions:
1. We evaluate the performance of LLMs compared to top-performing NMT systems in the challenging task of translating ambiguous sentences. We report SOTA scores on 2 of the 5 languages tested, and comparable performance otherwise.
2. We also show that our suggested techniques of similar-sentence in-context learning and targeted disambiguation fine-tuning significantly outperform naive few-shot prompting.
3. We conclude our work by evaluating LLMs on the FLORES200 test sets, and confirm that improvements in disambiguation accuracy correlate strongly with those in overall MT quality.
## 2 Background
### Ambiguity in machine translation
Resolving ambiguity in the source sentence was historically framed as one of the most fundamental challenges in MT (Weaver, 1952). In an effort to address this challenge, traditional works integrating Word Sense Disambiguation in Statistical Machine Translation (Carpuat and Wu, 2007; Chan et al., 2007) were followed by those integrating it in NMT architectures in various ad-hoc ways (Choi et al., 2017; Liu et al., 2018; Pu et al., 2018). Later, with the introduction of the Transformer (Vaswani et al., 2017), it was shown that higher layer encoder representations are robust enough to handle disambiguation (Tang et al., 2019) without any explicit handling of word senses.
However, more recent research creating challenging evaluation benchmarks has called the purported abilities of NMT systems into question once again. Following the proposal of the MuCoW benchmark for testing WMT19 (Raganato et al., 2019) and WMT20 (Scherrer et al., 2020) systems, Raganato et al. (2020) showed how Transformer-based NMT models, in general, underperform when translating rare word senses. Campolungo et al. (2022), who experimented with SOTA commercial (Google Translate, DeepL) and open-source systems (mBART50, M2M100, OPUS-NMT (Tiedemann and Thottingal, 2020), etc.), arrived at the same conclusion when they proposed the DiBiMT benchmark for evaluating MT systems between English and 5 languages (Spanish, Italian, German, Russian, and Chinese). They found similar biases against low-frequency and highly polysemous word senses. They also noted the accuracies of these systems were much lower than the then SOTA WSD system, ESCHER (Barba et al., 2021) - indicating significant room for improvement. In this work, we explored whether foundational and instruction-tuned LLMs could bridge this gap with minimal supervision (i.e. few-shot prompting).
### LLMs and machine translation
Previous research has found that LLMs can perform machine translation without being specifically fine-tuned (Radford et al., 2019). In order to elicit a translation, research in this direction follows the paradigm of LLM prompting:
1. Zero-shot prompting, where an LLM is directly asked to translate a source input into the target language (Radford et al., 2019).
2. Few-shot prompting, also called in-context learning, where an LLM is supplied with demonstrations of input and output pairs from the same task it is performing, before being queried an input (Brown et al., 2020).
3. Chain-of-thought (CoT), where an LLM is prompted to reason to gain relevant knowledge about the input before producing an output (Wei et al., 2022; Kojima et al., 2022).
Besides training-free approaches, another route is instruction tuning, which optimizes an LLM on a
mixed range of downstream tasks and fine-tunes the model to understand and respond to user intention through natural language Wei et al. (2021).
It was observed that LLMs might not surpass Transformer models solely trained to translate, especially for non-English and low-resource translation directions Vilar et al. (2023); Hendy et al. (2023). Nevertheless, LLMs have been shown to achieve superiority in tasks requiring in-depth understanding and manipulation of text, primarily due to them being pretrained on very large corpora. For example, without fine-tuning, LLMs are good at adapting to word alignments Moslem et al. (2023), translation evaluation Kocmi and Federmann (2023), idiom translation Raunak et al. (2023), iterative refinement Chen et al. (2023), and interactive translation via CoT Pilault et al. (2023); He et al. (2023). Related to our work is Pilault et al. (2023)'s proposal of using interactive question answering as a CoT process for LLMs to disambiguate source words. As an alternative approach, we aim to generate translations in a single pass by leveraging SOTA WSD systems to provide contexts that guide LLMs to disambiguate better.
## 3 Methodology
### Preliminaries
A word sense is a concept in a Knowledge Base (in this work, BabelNet by Navigli et al. (2021)) that denotes a distinct meaning of a word in the context of a sentence. The polysemy degree of an ambiguous word is defined as the total count of all possible senses that a particular word can have. The sense frequency is defined as the occurrence count of that particular sense in a disambiguated training corpus.
In this work, we define an ambiguous word as a polysemous term with multiple possible, and likely related, meanings - with the correct sense inferable only from the sentence-level context. We then refer to a sentence with an ambiguous word as an "ambiguous sentence" for brevity and ease of explanation. By definition, the DiBiMT test set Campolungo et al. (2022) contains only one ambiguous word per sentence.
Word Sense Disambiguation (WSD) is the process of linking an ambiguous word in a sentence to its appropriate word sense in the Knowledge Base. We use ESCHER-WSD Barba et al. (2021) in this work, a high-performing WSD system that had achieved the SOTA for English.
### _K-shot prompting_
Given a test sentence \(X\) and a Large Language Model to prompt for translations, we construct a query with \(k\) demonstrations, i.e. parallel sentence pairs \(\{(X_{1},Y_{1}),(X_{2},Y_{2})\ldots(X_{k},Y_{k})\}\) as examples, followed by the test sentence. As shown in Figure 1, for foundation LLMs, we frame the prompt as a text completion task, while for instruction-tuned LLMs (like BLOOMZ) we structure the last phrase as a question, in order to conform to the latter's question answering format. In the naive setting, we choose our demonstrations randomly from the development set.
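To make the format concrete, the following is a minimal sketch of how such a \(k\)-shot query can be assembled. The code is ours, not the released implementation, and the exact template wording is an assumption based on the description above and Figure 1.

```python
def build_prompt(demos, src, src_lang="English", tgt_lang="Spanish", instruction_style=False):
    """Assemble a k-shot translation prompt from (source, target) demonstration pairs."""
    lines = []
    for x, y in demos:                                 # k in-context examples
        lines.append(f"{src_lang}: {x}")
        lines.append(f"{tgt_lang}: {y}")
    lines.append(f"{src_lang}: {src}")                 # the test sentence
    if instruction_style:                              # question form for instruction-tuned LLMs
        lines.append(f"What is the {tgt_lang} translation?")
    lines.append(f"{tgt_lang}:")                       # the model completes from here
    return "\n".join(lines)

prompt = build_prompt([("Good morning.", "Buenos días.")],
                      "The horse had a blaze between its eyes.")
```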
### In-context learning with similar ambiguous contexts
LLMs can effectively gain knowledge relevant to the test domain through prompting, and this process is named in-context learning (ICL). We leverage ICL to help LLMs ingest information on translation of ambiguous sentences, by providing related sense translations as examples in the prompt. To achieve this, we first identify the most polysemous word in the input sentence by disambiguating it with a WSD system, and then calculate the polysemy degree of all disambiguated senses with respect to a large development set. We choose the most polysemous word sense4 and search for other occurrences of the same sense in the same development set. Finally, we randomly sample \(k\) source-target pairs including such a sense to use as demonstrations in \(k\)-shot prompting, instead of using random pairs. This technique seemed to return enough examples for our purposes in most cases - for 5-shot prompting, given a corpus of 1.8M sentences, we observed that we got all 5 matches 92.5% of the time.
Footnote 4: Currently, we only explore the case of one ambiguous word per sentence, due to the nature of the benchmark. One could extend our approach to multiple ambiguous words by separately sampling examples for each polysemous word and conducting higher-shot prompting - but further research would be needed to find the optimal way to combine these examples.
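The selection procedure can be sketched as follows. The disambiguation call and the per-sentence sense annotations stand in for ESCHER-WSD outputs; they are hypothetical interfaces of our own, not the tool's actual API.

```python
import random

def select_similar_demos(src, dev_pairs, dev_senses, disambiguate, k=5, seed=0):
    """dev_pairs[i] = (source, target); dev_senses[i] = {lemma: sense_id} for that source."""
    src_senses = disambiguate(src)                         # {lemma: sense_id} for the input
    # Polysemy degree of a lemma = number of distinct senses it takes in the dev set.
    lemma_senses = {}
    for senses in dev_senses:
        for lemma, sid in senses.items():
            lemma_senses.setdefault(lemma, set()).add(sid)
    degree = lambda lemma: len(lemma_senses.get(lemma, set()))
    target_lemma = max(src_senses, key=degree)             # most polysemous word in the input
    target_sense = src_senses[target_lemma]
    matches = [pair for pair, senses in zip(dev_pairs, dev_senses)
               if target_sense in senses.values()]         # sentences sharing the same sense
    random.seed(seed)
    return random.sample(matches, min(k, len(matches)))    # k demonstrations (fewer if scarce)
```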
### Low-rank fine-tuning
Apart from providing relevant examples through prompting, another conventional approach is to optimize the model parameters in a domain adaptation fashion for disambiguation. Considering the computational cost, our work experiments with instruction fine-tuning via low-rank adaptation (LoRA). This technique appends trainable lower-rank decomposition matrices to giant matrices in an LLM
that can remain frozen during fine-tuning Hu et al. (2021). By sacrificing a little performance, this fine-tuning method achieves great parameter efficiency. We aim to adjust LLMs to perform the translation task specifically. In order to maximise an LLM's capability to disambiguate when translating, we follow a careful data selection procedure to identify the most ambiguous sentences in our corpus.
Given the size of LLMs, it would be infeasible to fine-tune them on a large parallel corpus, so we opt to curate a smaller dataset that suits the ambiguous translation task. We would like a balanced mix of sentences with highly polysemous words as well as those with rare senses of a given word. This is to ensure fine-tuning reduces both polysemy degree-related and sense frequency-related biases, as discovered by Campolungo et al. (2022) and consequently, maximises disambiguation performance. We, thus, sort our corpora in two ways: one, by the maximum polysemy degree (greatest first) and two, by the minimum sense frequency (rarest first) of all word senses in a given sentence, disambiguated with ESCHER-WSD. We take the top \(N/2\) sentences from each set and interleave them to create our final fine-tuning corpus of size \(N\). We release our fine-tuning corpus, along with the ESCHER-WSD disambiguation outputs for public use.5
Footnote 5: [https://data.statmt.org/ambiguous-europarl](https://data.statmt.org/ambiguous-europarl)
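The selection and interleaving step can be sketched as below. The per-sentence statistics (maximum polysemy degree and minimum sense frequency) are assumed to have been precomputed from the ESCHER-WSD output; the variable names are ours.

```python
from itertools import chain

def curate_ambiguous_corpus(pairs, max_polysemy, min_sense_freq, N=100_000):
    """pairs[i] = (src, tgt); keep N sentences mixing highly polysemous and rare-sense cases."""
    by_polysemy = sorted(range(len(pairs)), key=lambda i: -max_polysemy[i])[: N // 2]
    by_rarity = sorted(range(len(pairs)), key=lambda i: min_sense_freq[i])[: N // 2]
    selected, seen = [], set()
    for i in chain.from_iterable(zip(by_polysemy, by_rarity)):   # alternate the two rankings
        if i not in seen:                                        # skip sentences picked twice
            seen.add(i)
            selected.append(pairs[i])
    return selected[:N]
```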
Once the data is chosen, we follow the fine-tuning paradigm of Alpaca Taori et al. (2023): the model is prompted with an instruction specifying the source and target languages, as well as the test sentence as an input, and the model is expected to respond with the translation.6
Footnote 6: [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
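For illustration, an instruction prompt in this style might be assembled as below; the wording is a paraphrase of the Alpaca template rather than the exact released text.

```python
def alpaca_translation_prompt(src_lang, tgt_lang, sentence):
    """Instruction-style prompt specifying the language pair and the input sentence."""
    return (
        "### Instruction:\n"
        f"Translate the following sentence from {src_lang} to {tgt_lang}.\n\n"
        f"### Input:\n{sentence}\n\n"
        "### Response:\n"
    )
```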
## 4 Experiments
In this section, we seek to answer the following research questions:
1. **RQ1:** How do LLMs perform at translation of ambiguous sentences compared to traditional high-performing NMT systems? (Section 4.3)
2. **RQ2:** What methods could one use to adapt LLMs for this task and improve performance over naive few-shot prompting? (Section 4.4)
3. **RQ3:** How do these disambiguation-adapted LLMs fare in terms of overall translation quality? (Section 4.5)
### Models
To ensure reproducibility, we pick four well-known and high-performing open-source LLMs,7 of which we sample seven versions for experimentation:
Footnote 7: at the time of experiment formulation
* BLOOM Scao et al. (2022): A fully open-source, multilingual, foundation LLM that supports 46 languages. To establish the range of its capabilities, we explore both the smallest (7.1B) and the largest (176B) versions.
* BLOOMZ Muennighoff et al. (2023): BLOOM instruction-tuned on a multilingual prompting set. Again, we choose the smallest (7.1B) and the largest (176B) versions.
* LLaMA Touvron et al. (2023): The popular LLM trained by Meta AI on gigantic datasets of up to 1.4T tokens. We evaluate the smallest (7B) and the largest (65B) versions.
Figure 1: Templates used for \(k\)-shot LLM prompting, with \(k\geq 0\).
* Alpaca Taori et al. (2023): A LLaMA model instruction-tuned on a 52K dataset generated using Self-Instruct Wang et al. (2023).
To effectively position these open-source LLMs against traditional NMT systems, we compare them against the best-performing and the most widely used commercial and open-source models:
1. DeepL Translator8: a SOTA commercial NMT system (accessed on 24th July 2023). Footnote 8: [https://www.deepl.com/en/translator](https://www.deepl.com/en/translator)
2. Google Translate9: Probably the most widely used commercial NMT system (accessed on 24th July 2023). Footnote 9: [https://translate.google.com/](https://translate.google.com/)
3. OPUS Tiedemann and Thottingal (2020): Small, bilingual, Transformer-based NMT models trained on the OPUS parallel corpora.
4. mBART50 Tang et al. (2021): Multilingual NMT models pretrained on monolingual corpora from 50 languages, and fine-tuned on the translation task. We report performances of both the English-to-many and many-to-many fine-tuned models.
5. M2M100 Fan et al. (2021): A massive multilingual NMT model that was trained on 2200 translation directions to support many-to-many translation among 100 languages in total. We compare both the base (418M) and the large (1.2B) versions.
6. NLLB-200 NLLB Team et al. (2022): It is the current SOTA in many low-resource pairs, scaling to 200 languages. We experiment with all its variants, where the largest is a mixture-of-experts (MoE) model with 54B parameters. We also benchmark its smaller checkpoints at 1.3B and 3.3B, as well as distilled versions at 0.6B and 1.3B.
We take the results for mBART50, M2M100, and OPUS directly from the DiBiMT leaderboard.10 We use Hugging Face11 for accessing and inferencing all other models - except for Google Translate and DeepL, which are accessed using their respective APIs. Despite their presence on the leaderboard, we re-evaluate these systems since they are being constantly updated.
Footnote 10: [https://nlp.uniroma1.it/dibimt/public/leaderboard](https://nlp.uniroma1.it/dibimt/public/leaderboard)
Footnote 11: [https://huggingface.co/](https://huggingface.co/)
### Experimental setup
**Datasets.** In this study, we use the DiBiMT test set for evaluation and measure accuracy across all five translation directions: English to Spanish, Italian, Chinese, Russian, and German, respectively. For validation, we use the development set from FLORES 200 (NLLB Team et al., 2022) in our base setting. To search for similar ambiguous contexts (Section 3.3), we require a larger development set to find relevant examples and also to accurately estimate polysemy degree. Hence, we use the Europarl corpus Koehn (2005), disambiguated with ESCHER-WSD. We also use the same disambiguated corpus for fine-tuning, however, we first follow the filtering procedure described in Section 3.4 to create a small corpus full of ambiguous sentences. Validation during fine-tuning is done using 500 randomly sampled sentences from this corpus and the rest is used for training. We detail the data statistics used for these experiments in Table 2.
**LLM prompting setup.** Due to memory constraints, and to compare all models fairly, we load LLMs in 8-bit and use a batch size of 1. For generation, we set both beam size and temperature to 1. To prevent repetition in LLM output, we set no_repeat_ngram_size to 4. From the LLM's response, we take the text before the first newline character as the output translation.
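A sketch of this decoding setup with the Hugging Face transformers library is shown below. The model checkpoint and the generation-length cap are placeholders, and the exact 8-bit loading arguments vary across library versions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloomz-7b1"                        # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map="auto")

prompt = "English: The horse had a blaze between its eyes.\nChinese:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=150,          # length cap is our assumption
                     num_beams=1, do_sample=False,          # beam size 1, greedy decoding
                     no_repeat_ngram_size=4)                # block repeated 4-grams
text = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
translation = text.split("\n")[0].strip()                   # keep only text before the first newline
```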
**LoRA fine-tuning.** We inject LoRA modules into all query, key, and value matrices. We set rank to 8, alpha to 8, and dropout to 0.05. For training, we set the effective batch size to 32, the learning rate to 3e-4, and the maximum length to 256. The total training budget is 5 epochs, and we pick the best model checkpoint based on cross-entropy loss on the validation set. The training data is shuffled after every epoch. Inference is done with a beam size of 3, and a maximum generation length of 150.
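The corresponding adapter configuration can be sketched with the peft library as follows. The target module names depend on the architecture (LLaMA-style query/key/value projections are shown; BLOOM typically exposes a fused query_key_value module), so they are indicative rather than exact.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")   # placeholder checkpoint
lora_cfg = LoraConfig(
    r=8, lora_alpha=8, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],                   # query, key and value matrices
    bias="none", task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
# Training then uses an effective batch size of 32, learning rate 3e-4, maximum length 256,
# and up to 5 epochs, keeping the checkpoint with the best validation loss.
```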
### LLMs vs NMT systems on DiBiMT
We show the results of our experiments in Table 3. For the purposes of the subsequent discussion, we note here that LLaMA was not intentionally trained
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & En-Es & En-It \\ \hline Similar contexts dev set & 1.81M & 1.73M \\ Fine-tuning corpus & 100K & 100K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of data used in our experiments, in terms of parallel sentence count.
on Chinese and is, thus, an 'unseen' language. Similarly, for BLOOM, Chinese and Spanish are "seen" and the rest are "unseen". We share our key observations below:
1. **LLMs usually match or beat massive MT models on seen languages.** Except for the very rich-resourced En-De, where supervised MT systems appear to have an edge, LLaMA 65B mostly matches the SOTA NMT systems (namely DeepL and NLLB-200). Furthermore, BLOOMZ sets a new SOTA in its seen languages, Spanish and Chinese, and outperforms DeepL by margins of 7.3% and 12.2% respectively. These improvements against such strong, supervised massive NMT systems are particularly remarkable since our corresponding setup for inferencing the LLMs is quite cheap - as we noted previously, this is only naive few-shot prompting of an 8-bit quantized model, with a beam size of 1.
2. **LLMs perform relatively worse for unseen languages, but they can still be much better than some supervised MT models.** We note that relative to seen languages, LLaMA under
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline System & \# Params & Variant & En-Es & En-It & En-Zh & En-Ru & En-De & Average \\ \hline \hline \multicolumn{8}{c}{_Compen-source NMT systems_} \\ \hline \hline OPUS & 74M & Bilingual En-X models & 36.79 & 29.93 & 25.94 & 28.71 & 27.04 & 29.68 \\ \multirow{-5}{*}{mBART50} & 611M & One-to-Many & 31.31 & 26.62 & 26.63 & 30.93 & 26.43 & 28.38 \\ & 611M & Many-to-Many & 29.98 & 25.89 & 28.12 & 27.54 & 24.25 & 27.16 \\ \multirow{-5}{*}{M2M100} & 418M & Base & 22.35 & 17.27 & 12.34 & 17.01 & 15.62 & 16.92 \\ & 1.2B & Large & 28.81 & 23.16 & 17.30 & 27.03 & 22.87 & 23.83 \\ \multirow{-5}{*}{NLLB-200} & 0.6B & Distilled version & 40.93 & 36.38 & 28.64 & 47.13 & 33.41 & 37.30 \\ & 1.3B & Distilled version & 50.40 & 53.65 & 41.15 & 54.52 & 52.81 & 50.51 \\ & 1.3B & Original checkpoint & 48.81 & 48.43 & 37.31 & 54.36 & 48.93 & 47.57 \\ & 3.3B & Original checkpoint & 53.23 & 57.23 & 39.95 & 57.44 & 56.24 & 52.82 \\ & 54B & Mixture of Experts & 61.33 & **67.19** & 48.02 & **67.88** & **67.97** & **62.48** \\ \hline \hline \multicolumn{8}{c}{_LLLAMA family LLMs_} \\ \hline \hline \multirow{8}{*}{LLaMA} & \multirow{3}{*}{7B} & 1-shot prompting & 53.64 & 48.84 & 30.61\({}^{\dagger}\) & 60.65 & 57.41 & 50.23 \\ & & 3-shot prompting & 55.53 & 50.53 & 30.52\({}^{\dagger}\) & 57.31 & 55.34 & 49.85 \\ \multirow{-5}{*}{65B} & \multirow{3}{*}{7B} & 5-shot prompting & 56.33 & 48.66 & 27.92\({}^{\dagger}\) & 56.83 & 55.26 & 49.00 \\ & & 1-shot prompting & 56.57 & 60.22 & 44.73\({}^{\dagger}\) & 65.71 & 62.05 & 57.86 \\ & & 3-shot prompting & 59.83 & 60.18 & 42.77\({}^{\dagger}\) & **67.45** & 63.41 & 58.73 \\ & & 5-shot prompting & 60.78 & **63.47** & 42.49\({}^{\dagger}\) & 66.31 & 62.98 & **59.21** \\ \multirow{-5}{*}{Alpaca} & \multirow{3}{*}{7B} & \multirow{3}{*}{0-shot prompting} & 49.75 & 45.24 & 29.63\({}^{\dagger}\) & 55.23 & 51.52 & 46.27 \\ \hline \hline \multicolumn{8}{c}{_BLOOM family LLMs_} \\ \hline \hline \multirow{8}{*}{BLOOM} & 7.1B & 1-shot prompting & 55.69 & 28.79\({}^{\dagger}\) & 51.08 & 40.00\({}^{\dagger}\) & 29.67\({}^{\dagger}\) & 41.05 \\ & & 1-shot prompting & 63.66 & 42.02\({}^{\dagger}\) & 60.30 & 43.22\({}^{\dagger}\) & 37.04\({}^{\dagger}\) & 49.25 \\ \multirow{-5}{*}{176B} & \multirow{3}{*}{3-shot prompting} & 64.52 & 46.33\({}^{\dagger}\) & 61.20 & 44.30\({}^{\dagger}\) & 36.69\({}^{\dagger}\) & 50.61 \\ & & 5-shot prompting & 65.53 & 45.9\({}^{\dagger}\) & 61.73 & 42.92\({}^{\dagger}\) & 38.06\({}^{\dagger}\) & 50.85 \\ \multirow{-5}{*}{BLOOMZ} & \multirow{3}{*}{7.1B} & 0-shot prompting & 56.89 & 33.91\({}^{\dagger}\) & 53.2 & 33.33\({}^{\dagger}\) & 21.67\({}^{\dagger}\) & 39.80 \\ & & 1-shot prompting & 60.87 & 40.68\({}^{\dagger}\) & 52.37 & 33.33\({}^{\dagger}\) & 30.65\({}^{\dagger}\) & 43.58 \\ \multirow{-5}{*}{BLOOMZ} & \multirow{3}{*}{7.2} & 0-shot prompting & 62.67 & 45.78\({}^{\dagger}\) & 61.87 & 47.98\({}^{\dagger}\) & 44.06\({}^{\dagger}\) & 52.47 \\ & & 1-shot prompting & **64.35** & 49.31\({}^{\dagger}\) & **66.57** & 51.88\({}^{\dagger}\) & 43.92\({}^{\dagger}\) & 55.21 \\ \multirow{-5}{*}{BLOOM} & \multirow{3}{*}{176B} & 3-shot prompting & **67.31** & 45.91\({}^{\dagger}\) & **64.44** & 53.42\({}^{\dagger}\) & 45.08\({}^{\dagger}\) & 55.23 \\ & & 5-shot prompting & **68.55** & 49.22\({}^{\dagger}\) & **63.36** & 52.60\({}^{\dagger}\) & 44.94\({}^{\dagger}\) & 55.73 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracies on the DiBiMT test set for established NMT systems and LLMs, using naive \(k\)-shot prompting. For Alpaca, we can only use 0-shot prompting due to its particular prompt template. We highlight the top three scores per language in bold, with the best underlined as well, the 2nd best as is, and the 3rd best italicized. We indicate scores for unseen languages (i.e., not intentionally included in pretraining) with a \(\dagger\).
performs in translation to Chinese. Similarly, BLOOM performs worse for its unseen languages of German, Italian, and Russian. Even so, LLMs yield reasonable performance here that is still much better than some supervised NMT systems. For example, BLOOMZ-7B achieves 40.68% accuracy in English-Italian, which is about 35.9% more than OPUS, 52.8% more than mBART50 and 75% more than M2M100-1.2B. While NLLB-200 does outperform BLOOMZ-7B, our results just highlight the power of pretraining at scale.
3. **Scale helps improve performance for ambiguity translation**. Continuing from the last point, similar to NMT models that improve with scale (e.g. NLLB-200), we observe that LLMs too perform consistently better at ambiguous translation on scaling up to their larger variants. This applies to the translation of both seen and unseen languages. That said, the lighter models, such as LLaMA 7B or BLOOM 7B, also perform quite well and in many cases, 1-shot prompting of these LLMs is almost as good as NLLB translations.
4. **LLM performance does improve on average with more demonstrations, but this is not uniform.** On average, we observe that 5-shot prompting works best, followed by 3-shot and then 1-shot, though some outliers exist for LLaMA 7B. Moreover, when looking at the performance of individual language pairs, we note that the improvement trend is not uniform, and it is possible a 3-shot translation outperforms a 5-shot one. This aligns with the finding of Zhang et al. (2023), who reach the same conclusion regarding overall MT quality. Nonetheless, as we show in Section 4.4.1, accuracy does significantly improve when we provide relevant and helpful examples - suggesting quality of demonstrations matters more than quantity.
5. **General-purpose instruction-tuned LLMs consistently outperform foundation LLMs.** Interestingly, we observe that 1-shot prompting of a general-purpose instruction-tuned LLM like BLOOMZ often significantly outperforms 5-shot prompting of BLOOM, even on the very specific task of ambiguity translation. In fact, even with 0-shot prompting, models like Alpaca 7B, BLOOMZ 7B and BLOOMZ 176B perform reasonably well, matching some supervised MT systems. We observed that this did not work for foundation LLMs like BLOOM 176B and LLaMA 7B, and 0-shot prompting of these models yielded hallucinations in many cases.
Lastly, we include a qualitative comparison of DeepL and BLOOMZ 176B translations for the En-Zh pair in the Appendix (see Table 8) - where we observe that BLOOMZ generates more contextual translations, relatively speaking, while its counterpart tends to translate literally in many cases.
### Adapting LLMs for ambiguous MT
This section reports experiments with two proposed strategies to enable LLMs to disambiguate better and improve performance on the ambiguous translation task. While both methods are shown to significantly improve performance, we include a discussion of the relative tradeoffs between the techniques in Appendix A.2.
#### 4.4.1 Improving In-Context Learning by leveraging similar ambiguous contexts
Rather than selecting our examples randomly as in our naive setting, we employ the data selection procedure described in Section 3.3 to discover other examples that contain the same word sense as the most polysemous sense in the input sentence. We report our scores in Table 4, and our findings below:
1. **Similar contexts yield more improvements as the example count increases** We observe that for 1-shot prompting, similar contexts perform comparably or slightly better than random examples. However, the gains increase substantially as we move towards 3-shot and 5-shot prompting. We can understand this from the intuition that 1-shot prompting likely just guides the LLM towards generating a reasonable translation, whereas with more relevant examples, it learns to disambiguate better and translate in context accordingly.
2. **Larger models observe greater and more consistent gains than smaller LLMs** Compared to LLaMA 7B, the other LLMs (LLaMA 65B, BLOOM 176B and BLOOMZ 176B) yield much larger accuracy improvements on a more uniform basis. This is probably because scaling up allows LLMs to model polysemous words better in their semantic
space, facilitating effective in-context learning of disambiguation capabilities.
#### 4.4.2 Fine-tuning with ambiguous corpora
We fine-tune Alpaca 7B, BLOOM 7B and BLOOMZ 7B in En-Es and En-It directions using the data described in Section 4.2. We show our results when prompting these fine-tuned LLMs in Table 5. We make the following observations:
1. **Fine-tuning generally improves performance.** We observe that fine-tuned LLMs significantly outperform their non-finetuned versions in most cases. The biggest improvement is observed for BLOOM 7B in En-It, where accuracy increases by as much as 47.73%, indicating the effectiveness of our method. The only exception to this is when the LLM is already strong, such as BLOOMZ 7B at En-Es, and then the improvements are marginal. But even so, strong instruction-tuned LLMs like BLOOMZ still gain significantly from fine-tuning on the En-It pair - where it was originally weaker due to Italian being an unseen language during pretraining.
2. **Fine-tuning for 2-3 epochs is sufficient.** We plot the DiBiMT accuracy versus epoch curves in Figure 2 where the performance is evaluated after each epoch. We observe that in all cases, accuracy peaks between the 1st and the 3rd epoch, after which it mostly plateaus or dips slightly - suggesting that one does not need to fine-tune these LLMs for too long.
3. **Fine-tuning improves LLM performance until about 65K training samples.** We now try to answer the Research Question of how many training samples we need for fine-tuning these LLMs, to get optimal performance. We plot the Accuracy vs corpus size graph in Figure 3, where we indicate corpus size by the number of parallel sentences. We observe that accuracy increases non-monotonically with an increase in corpus size, but peaks anywhere between 36K-63K training samples, which seems to depend on the pre-existing capabilities of the LLM. For a raw foundation LLM like BLOOM 7B, relatively more fine-tuning data (54K-63K) appears to be beneficial. Alpaca 7B, which has been instruction-tuned on an English-only dataset, also seems to benefit from further fine-tuning--especially for En-Es, accuracy peaks after 63K training samples. However, for a powerful LLM like BLOOMZ that has been instruction-tuned on a large multilingual dataset like xP3 (Muennighoff et al., 2023), fine-tuning on smaller datasets (at most 36K sentences, in our case) appears to suffice.
### Overall MT performance of disambiguation-adapted LLMs
Lastly, for completeness, we also evaluate the overall translation quality of the key LLMs used in this work - since we are interested in noting how well the reported disambiguation accuracies extend to overall MT performance. While choosing our test set, we want to ensure it is recently released (ideally within the last year) to minimize the chances of its inclusion in the pretraining corpora of LLMs. We, thus, choose FLORES 200 (NLLB Team et al., 2022) as our test set since it satisfies this criterion and also supports all our languages of evaluation. We use spBLEU12(Goyal et al.,
\begin{table}
\end{table}
Table 4: 1-shot, 3-shot and 5-shot results for En-Es and En-It prompting with randomised examples (Rand. ) versus similar contexts (Sim. ). The best-performing systems from Table 3, i.e. DeepL and NLLB-200 are chosen as baselines. For LLMs, for each setting, the better-performing baseline between Rand. and Sim. is highlighted in bold. The overall best score (among all LLMs) is underlined as well, while the best NMT system is also italicized.
2022), chrF++13[Popovic, 2017] and COMET22[Rei et al., 2022] using the wmt22-comet-da model as metrics. In this setting, we evaluate Alpaca with 0-shot prompting, while LLaMA 7B, LLaMA 65B and BLOOM 176B use the 1-shot setup. NLLB-200 is our primary supervised NMT baseline. We also evaluate LoRA fine-tuned versions of Alpaca 7B and BLOOM 7B, from section 4.4.2, on the English-Spanish and English-Italian pairs. We exclude BLOOMZ from this evaluation since it is instruction-tuned on FLORES200. We report our results in Table 6.
Footnote 13: nrefs:1|case:mixed|eff:yes|nc:6|mw:2|space:no|version:2.3.1
We observe trends similar to those of our DiBiMT experiments. BLOOM 176B performs well in translation of seen languages, performing comparably to NLLB-200 in English-Spanish and outperforming it in English-Chinese. This is particularly the case for COMET22 scores, a metric which has shown high correlations with human evaluation, ranking second in the WMT22 Metrics shared task (Freitag et al., 2022). For the other languages, LLaMA 65B usually performs better than BLOOM, but in the 1-shot prompting setup, it is unable to beat the NLLB-200 54B MoE. We also notice that the fine-tuned versions of Alpaca 7B and BLOOM 7B consistently outperform their
Figure 3: DiBiMT accuracy vs fine-tuning (FT) corpus size in terms of parallel sentence count. These results are obtained from evaluating checkpoints at every 300 steps in the 1st epoch - which roughly corresponds to about 9K sentences, since we use a batch size of 32.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{System} & \multicolumn{3}{c}{En-Es} & \multicolumn{3}{c}{En-It} \\ \cline{2-7} & Alpaca 7B & BLOOM 7B & BLOOMZ 7B & Alpaca 7B & BLOOM 7B & BLOOMZ 7B \\ \hline w/o FT & 49.75 & 55.69 & 60.87 & 45.24 & 28.79 & 40.68 \\ FT (Best Loss) & 63.27 & 57.86 & 60.39 & 59.62 & 37.72 & 39.73 \\ FT (Best Acc.) & 63.31 & 59.72 & 61.56 & 59.77 & 42.40 & 44.73 \\ \hline \hline \end{tabular}
\end{table}
Table 5: DiBiMT Accuracies after fine-tuning Alpaca 7B, BLOOM 7B, and BLOOMZ 7B on En-Es and En-It pairs. The “Best Loss” baseline refers to the checkpoint with best cross-entropy loss on the validation set. The “Best Acc.” baseline refers to the checkpoint with best DiBiMT accuracy, on evaluating the last checkpoint after each epoch.
Figure 2: DiBiMT accuracy at the end of every epoch, for the LoRA fine-tuned LLMs
vanilla counterparts - suggesting our techniques to improve disambiguation performance also boost overall translation quality.
Thus, while we evaluate some key LLMs to verify consistent trends, we want to avoid re-running all our baselines on FLORES200. So, we try to answer a broader question: how well does disambiguation accuracy on DiBiMT correlate with standard MT metrics? We conduct a Pearson's correlation test [1] between the accuracy metric and spBLEU, chrF++, and COMET22 respectively. We report our results in Table 7, and find that all MT quality metrics correlate positively with accuracy, with \(p\)-values of the two-sided alternative hypothesis being much smaller than 0.05 in all cases. We discover that spBLEU and COMET22 exhibit higher correlations than chrF++. We hypothesize that this could be due to the character-level chrF++ being less sensitive to word-level senses. Overall, the results of Tables 6 and 7 suggest that the significant accuracy improvements noted earlier are not at the cost of translation quality, and in turn, could yield improvements in overall MT scores too.
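For reference, the correlation test itself is straightforward to reproduce. The sketch below uses placeholder score lists (one entry per system and language pair), not the actual values behind Table 7.

```python
from scipy.stats import pearsonr

dibimt_acc = [50.2, 57.9, 41.1, 49.3, 55.2]      # illustrative accuracy values
spbleu = [20.7, 26.7, 3.7, 10.3, 19.0]           # illustrative spBLEU values for the same points
rho, p_value = pearsonr(dibimt_acc, spbleu)      # two-sided test by default
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
```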
## 5 Conclusion
In this work, we studied the capabilities of LLMs to handle ambiguity during machine translation. We choose seven of the most widely used foundation and instruction-tuned LLMs and compare accuracy with SOTA commercial and open-source NMT systems on the DiBiMT translation benchmark. Out of 5 language directions, we report scores comparable to the SOTA on two (En-Ru, En-It) and set a new SOTA on two others (En-Zh, En-Es). We then present two techniques that significantly improve disambiguation accuracy: in-context learning with similar contexts, and fine-tuning on an ambiguous corpus. We end the paper with an evaluation of overall MT quality. We hope the methods and findings shared in this work could guide future researchers studying ambiguity in translation.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{System} & \multicolumn{4}{c}{En-Es} & \multicolumn{4}{c}{En-It} \\ \cline{2-9} & spBLEU & chrF++ & COMET22 & spBLEU & chrF++ & COMET22 \\ \hline NLLB-200 54B & 23.10 & 22.83 & 0.82 & 38.00 & 56.34 & 0.90 & 44.80 & 62.79 & 0.88 \\ Alpaca 7B (0-shot) & 4.80\({}^{\dagger}\) & 10.40\({}^{\dagger}\) & 0.62\({}^{\dagger}\) & 21.80 & 42.60 & 0.82 & 27.30 & 50.30 & 0.82 \\ LLaMA 7B (1-shot) & 5.60\({}^{\dagger}\) & 10.80\({}^{\dagger}\) & 0.66\({}^{\dagger}\) & 20.70 & 41.20 & 0.79 & 22.80 & 45.40 & 0.78 \\ LLaMA 65B (1-shot) & 13.80\({}^{\dagger}\) & 17.60\({}^{\dagger}\) & 0.77\({}^{\dagger}\) & 26.70 & 46.10 & 0.82 & 31.80 & 52.80 & 0.81 \\ BLOOM 7B (1-shot) & 19.00 & 19.50 & 0.83 & 3.70\({}^{\dagger}\) & 22.30\({}^{\dagger}\) & 0.46\({}^{\dagger}\) & 8.20\({}^{\dagger}\) & 31.70\({}^{\dagger}\) & 0.51\({}^{\dagger}\) \\ BLOOM 176B (1-shot) & 25.10 & 23.80 & 0.86 & 10.30\({}^{\dagger}\) & 31.80\({}^{\dagger}\) & 0.65\({}^{\dagger}\) & 19.90\({}^{\dagger}\) & 45.40\({}^{\dagger}\) & 0.74\({}^{\dagger}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: FLORES 200 results for \(k\)-shot prompting of some key LLMs used in this work, compared with the NLLB-200 baseline. We also include results for the LoRA fine-tuned models, for the En-Es and En-It pairs. Same as the previous notation, we indicate all unseen language results with a \({}^{\dagger}\). We observe similar trends in all standard MT metrics, as those observed with DiBiMT accuracy.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & spBLEU & ChrF++ & COMET22 \\ & w/ acc. & w/ acc. & w/ acc. \\ \hline \(\rho\) & 0.83 & 0.56 & 0.76 \\ \(p\)-value & 0.0001 & 0.0039 & 0.0010 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Pearson’s correlation \(\rho\)[1] between DiBiMT accuracy and spBLEU, chrF++, and COMET22 respectively, together with p-values.
### Limitations
In this work, we attempt to note overall trends in LLM performance as compared to conventional NMT systems and, based on our results, suggest methods that generally improve performance. That said, there are exceptions to these trends - prompting with similar contexts can, at times, degrade performance and so can increasing the number of demonstrations (see Table 4). But there is some consistency here too: these observations mostly apply to smaller LLMs (such as LLaMA 7B), while the larger LLMs benefit more significantly. Also, as noted in Section 4.4.1, in a small percentage of cases (7.5%), we are unable to find 5 matches when attempting 5-shot prompting with similar contexts. In such cases, it might be worthwhile, from a performance perspective, to use random demonstrations; nonetheless, since we are interested in verifying the utility of similar contexts and also since there are only a few cases where it might be pertinent, we do not explore this.
## Acknowledgements
This work has received funding from UK Research and Innovation under the UK government's Horizon Europe funding guarantee [grant numbers 10039436 and 10052546].
The computations described in this research were performed using the Baskerville Tier 2 HPC service ([https://www.baskerville.ac.uk/](https://www.baskerville.ac.uk/)). Baskerville was funded by the EPSRC and UKRI through the World Class Labs scheme (EP/T02221/1) and the Digital Research Infrastructure programme (EP/W032244/1) and is operated by Advanced Research Computing at the University of Birmingham.
|
2308.07323 | Analytical Techniques to Support Hospital Case Mix Planning | This article introduces analytical techniques and a decision support tool to
support capacity assessment and case mix planning (CMP) approaches previously
created for hospitals. First, an optimization model is proposed to analyse the
impact of making a change to an existing case mix. This model identifies how
other patient types should be altered proportionately to the changing levels of
hospital resource availability. Then we propose multi-objective decision-making
techniques to compare and critique competing case mix solutions obtained. The
proposed techniques are embedded seamlessly within an Excel Visual Basic for
Applications (VBA) personal decision support tool (PDST), for performing
informative quantitative assessments of hospital capacity. The PDST reports
informative metrics of difference and reports the impact of case mix
modifications on the other types of patient present. The techniques developed
in this article provide a bridge between theory and practice that is currently
missing and provides further situational awareness around hospital capacity. | Robert L Burdett, Paul corry, David Cook, Prasad Yarlagadda | 2023-07-31T22:59:34Z | http://arxiv.org/abs/2308.07323v1 | # Analytical Techniques to Support Hospital Case Mix Planning
###### Abstract
This article introduces analytical techniques and a decision support tool to support capacity assessment and case mix planning (CMP) approaches previously created for hospitals. First, an optimization model is proposed to analyse the impact of making a change to an existing case mix. This model identifies how other patient types should be altered proportionately to the changing levels of hospital resource availability. Then we propose multi-objective decision-making techniques to compare and critique competing case mix solutions obtained. The proposed techniques are embedded seamlessly within an Excel Visual Basic for Applications (VBA) personal decision support tool (PDST), for performing informative quantitative assessments of hospital capacity. The PDST reports informative metrics of difference and reports the impact of case mix modifications on the other types of patient present. The techniques developed in this article provide a bridge between theory and practice that is currently missing and provides further situational awareness around hospital capacity.
Keywords:hospital case mix planning, multi criteria decision making, decision support system, OR in health services +
Footnote †: journal: Medical and Environmental Research
## 1 Introduction
This article considers case mix planning (CMP) in hospitals. This is the problem of identifying a patient cohort (a.k.a., case mix) with a specific set of features deemed desirable or ideal. Identifying the ideal composition and number of patients to be treated, however, is not straightforward and is quite nuanced. A variety of factors make this task challenging (Hof et. al. (2017)). First, there are many different alternative case mixes that can be selected. Some case mixes are favourable for some patient types and unfavourable for others. Second, the term "ideal" is subjective and can mean different things in a practical setting. A case mix may be sought that is most equitable, for instance in the allocation and usage of hospital resources. A case mix may also be sought that is most economical or financially viable to treat. From a utilization and output-oriented perspective, a maximal cohort may also be sought. That cohort results in the greatest number of patients treated over time. A maximal cohort saturates the resources of the hospital and is a measure of the hospital's capacity. Identifying a case mix that meets or exceeds specified demands or targets is also of significant interest. Last, quality of care is especially important in all the variants mentioned.
CMP is often made more complex by a lack of precise information, a high volume of unrefined empirical data, and stochastic parameters (Burdett et. al. 2022a). The presence of stochastic treatment durations and lengths of stay makes it difficult to exactly ascertain how long each hospital resource would be utilized over a specified time. The categorisation of patient type is also a vital ingredient in CMP, but the approach taken is rarely straightforward, often ad-hoc and politically sensitive. There are many different types of illness, and many medical and surgical treatments. Categorising patients into finite groups with common features is subjective, and any choice that is made can easily skew the results. Patients may be grouped by specialty, diagnosis related group (DRG), international classification of disease (ICD), similar resource consumption and treatment duration, or other strategies (Burdett et. al. 2017). In CMP there is a choice to include surgical, medical, or acute patients, and to consider a hospital holistically or piecemeal. It is worth mentioning
that CMP may also refer to the allocation of available operating theatre time amongst different surgical specialties and surgical patient types.
CMP is an active research area and has been considered on numerous occasions. A variety of quantitative methods have been applied for CMP in hospitals. In most cases, the activity is performed parametrically without observation of actual hospital operations. For instance, an analysis is performed relative to key hospital parameters and other geographic details, like hospital layout and configuration. The articles by Burdett and Kozan (2016), Burdett et. al. (2017), Fulshof et. al. (2017), McRae et. al. (2018), McRae and Brunner (2019), and Mahmoudzadeh et. al. (2020) demonstrate the current state of the art. Their focus is predominantly the identification of a favourable case mix, subject to resource availability constraints. Our review of the literature suggests that CMP across multiple hospitals has yet to be comprehensively addressed and stochastic assessments are also quite new (Burdett et. al. (2022b), Mahmoudzadeh et. al. (2020)). The development of appropriate decision support tools is severely lacking, and only the recent work by Burdett et. al. (2022b) considers how hospital planners, executives, and laypersons could perform CMP. Overall, there is a lack of methodological support for health care professionals in their decision making, encompassing the provision and allocation of finite hospital resources (Bruggemann et. al. 2021).
From a general viewpoint it is evident that strategic, tactical, and operational level decision problems are abundant in health care, and much research is being performed in this field. Operational problems are popular, and some examples include Chen et. al. (2015), Spratt and Kozan (2016), Liu et. al. (2019), and Hawkinson et. al. (2018). Providing advanced predictive analytics is an important goal in the healthcare industry (Krueger (2018)), and the development of planning software that is user-friendly, intuitive, and extensible is challenging, but vital. Krueger (2018) reports the emerging use of operational tools and science in health care. However, we should also acknowledge that there has been restricted uptake and development of advanced hospital planning tools and software to date (Burdett et. al. 2022b). Trust in the Operations Research (OR) team building the tool is reported as most vital to success (Pagel et. al. 2017). Predicting patient flows is a vital task and provides information that can be used in hospital case mix planning and capacity assessment. Resta et. al. (2018) developed artificial neural networks to classify, cluster and predict patient flows. They used their self-organizing map to gain a deeper understanding of patient flows in an emergency department in Italy. The potential for capacity planning research in nursing and other community care facilities is also emerging. Frail and elderly patients are highly vulnerable and are often a cause of bed blocking within hospitals due to the lack of, or unsuitable, community care resources available (Williams et. al. 2021). It is therefore important for research to focus on hospitals and how they feed into community care services. Multi-criteria decision making (MCDM) has increased greatly in healthcare in recent years (Chalgham et. al., 2019). Chalgham et. al. (2019) proposed MCDM methods to improve in-patient flows from an emergency department. Malik et. al. (2015) formulated and solved a bi-objective aggregate capacity planning problem for operating theatres. They considered the minimization of elective patients waiting for treatment and the minimization of healthcare expenditure. Neither of the aforesaid articles, however, considered an entire hospital or the effect on downstream resources, like beds. Emergency management of health care facilities in the event of various disasters and crises is also an active area for which this article's approach is applicable. High level emergencies have serious consequences on hospital activities (Chen et. al., 2015). Chen, Guinet and Ruiz (2015) considered hospital evacuation planning and developed a simulation model to analyse the process.
**Research Aims.** In this article some supporting techniques are proposed to assist with hospital CMP. The supporting techniques we have developed are deemed necessary for a variety of reasons and do not currently exist in the literature. These will be discussed in due course. First, we consider how a specific change to an existing patient case mix impacts resource availability for other patient types. Specifically, we would like to know which patient types are affected, and by how much they are affected. For instance, can more, or fewer, patients be treated? We think this functionality is necessary because a patient case mix that meets all goals and expectations is unlikely to be found immediately.
There are many qualitative factors and stakeholders, and some slight adjustment of the mix would typically be necessary. To identify the effect, we propose a mathematical programming model. The capability to analyse changes of this nature and to immediately see the effect will improve situational awareness for hospital executives, managers, and planners who are weighing up the pros and cons of treating more, or fewer, patients of a given type. This type of enquiry in theory facilitates a broader sensitivity analysis that can provide significant insights into how a hospital can be operated, and whom it can treat. Previously, the goal of CMP was just to identify a single favourable case mix solution, or to provide a set of non-dominated case mix solutions, with no understanding of the difference between those.
Second, we consider how to compare different patient case mixes to better facilitate CMP in a multi-criterion setting (Burdett and Kozan (2016)). In a multi-criterion setting, it is necessary to generate a Pareto frontier, which is a set of alternative optimal solutions. How to choose one solution over another is very unclear. We would like to decide rationally, and to have a consistent framework to substantiate any choice. To this end, analytical aids are needed.
The techniques developed in this article motivate the development of a decision support tool for hospital managers and planners to use. Without such a tool, it is hard to imagine that the techniques advocated in this article could be applied in a practical setting. It seems unlikely that any hospital or health care provider would have the time and resources to create a tool that does what this paper suggests, so it falls to us to prove the concept. We test how the new techniques can be embedded within a personal decision support tool to perform the described CMP analyses.
The format of this article is as follows. In Section 2 the details of the analytical methods and framework is provided, commencing with an outline of key technical details. The specification, capabilities and graphical user interfaces are then presented in Section 3. Design strategies employed during development are also provided and examples of how the GUI are used to perform various assessments is shown. Conclusions and final remarks are given in Section 4. Broader issues including potential further extensions to the software are also discussed.
## 2 Methodology
### Background Details.
The formal details and notation relevant to later modelling are now described. To avoid confusion, it is important to note that in _some limited circumstances we reuse some symbols and differentiate parameters by their index_. This abuse of notation is beneficial as fewer symbols need to be introduced and understood.
For CMP we first recognise that there are limited numbers of operating theatres, wards, and beds in a hospital. Operating theatres and wards consist of beds where treatments and care are provided. They are viewed as distinct precincts (a.k.a., zones) for health care. The set of hospital precincts is denoted by \(W\), and for each zone \(w\in W\) the number of treatment spaces contained within is denoted \(b_{w}\). Therefore, there are \(\sum_{w\in W}b_{w}\) treatment spaces in total.
It is assumed that there is a clearly defined set of patient types (a.k.a., groups) that are treatable. For each patient type \(g\in G\), there is a set of patient sub-types denoted \(P_{g}\). The full set of patient sub-types is hence denoted \(GP=\{(g,p)|g\in G,p\in P_{g}\}\). For each element of \(GP\) there is a specific clinical pathway, denoted \(A_{(g,p)}\). The pathway is essentially a set of medical and surgical activities, each with a planned treatment time and location. As such, we can write \(A_{(g,p)}=\{(g,p,k)|(g,p)\in GP\}\). The activity time measured in either hours or minutes is \(t_{(g,p,k)}\) and the set of candidate locations for the activity is denoted \(l_{(g,p,k)}\). The set of all activities is \(A=\bigcup_{(g,p)\in GP}A_{(g,p)}\) and those that can be performed in zone \(w\) are \(A_{w}=\{a\in A|w\in l_{a}\}\).
The number of patients of type \(g\) treated in the hospital and the number of patients of sub-type \(p\) are to be determined. These constitute the case mix and sub mix of the hospital. They are respectively denoted \(n^{1}_{g}\) and \(n^{2}_{g,p}\). The total number of patients treated is \(\mathbb{N}\). It is worth noting that \(\mathbb{N}=\sum_{g\in G}n^{1}_{g}\) and \(n^{1}_{g}=\sum_{p\in P_{g}}\left(n^{2}_{g,p}\right)\ \ \forall g\in G\).
Within each precinct, the treatment spaces are available for a specified number of hours per period. The time availability of \(w\in W\) is \(T_{w}\). The number of activities of type \(a=(g,p,k)\) performed in precinct \(w\) is denoted \(\alpha_{a,w}\). It is worth noting that \(\alpha_{a,w}=0\ \ \forall w\in W\backslash l_{a}\), i.e., activities cannot be performed outside of their candidate locations. The decision captured by \(\alpha_{a,w}\) for all \(a\) and \(w\) is referred to as the allocation. It is inherently linked to the decisions \(n^{1}_{g}\) and \(n^{2}_{g,p}\).
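To fix ideas, the notation can be represented with simple data structures; the values below are illustrative only and the field names are ours, not part of the formal model.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """One step (g, p, k) of a clinical pathway."""
    duration_hours: float          # t_{(g,p,k)}
    candidate_zones: set           # l_{(g,p,k)}

# Zones w in W with bed counts b_w and time availabilities T_w (weekly hours, illustrative).
beds = {"ICU": 5, "Theatres": 10, "Ward1": 2}
time_available = {"ICU": 5 * 168.0, "Theatres": 10 * 40.0, "Ward1": 2 * 168.0}

# Clinical pathways A_{(g,p)} keyed by (patient type g, sub-type p).
pathways = {
    ("T1", "p1"): [Activity(3.0, {"Theatres"}), Activity(24.0, {"ICU"}), Activity(48.0, {"Ward1"})],
    ("T2", "p1"): [Activity(1.5, {"Theatres"}), Activity(36.0, {"Ward1"})],
}
```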
A CMP model has been proposed previously and will be used as the basis upon which this article's quantitative techniques are built. The CMP model is listed in Appendix A for interested readers. It determines the maximum number of patients that are treatable over a specified period, subject to limited time availabilities of operating theatres, wards and beds, and a required case mix.
### Case Mix Alterations (CMA)
For a given case mix \(n^{1}\), the effect of a change in the number of patients of a single type \(g^{*}\), is worth understanding. If patient type \(g^{*}\) is increased and anchored, then it may be necessary to decrease several other patient types, as less capacity is available for them. If patient type \(g^{*}\) is decreased and anchored, then the opposite occurs, i.e., more capacity is available for the others. The following steps are required to facilitate a query such as described:
Step 1. Select a current case mix \(n^{1}\), and designate targets \(\hat{n}^{1}_{g}=n^{1}_{g}\).
Step 2. Select a change of \(\delta\) units in one patient type \(g^{*}\), i.e., \((\delta,g^{*})|0\leq\hat{n}^{1}_{g^{*}}+\delta\leq\overline{n}^{1}_{g^{*}}\) where \(\delta\neq 0\) and \(\overline{n}^{1}_{g}\) is the upper bound on the number of patients of type \(g\) that are treatable.
Step 3. If \(\delta>0\) then the difference \(\|n^{1}-\hat{n}^{1}\|\) should be minimized, else if \(\delta<0\) then the difference should be maximized, where \(\|n^{1}-\hat{n}^{1}\|=\left(\sum_{g\in G}\left(\gamma_{g}\right)^{2}\right)^{0.5}\) and \(\gamma_{g}=\hat{n}^{1}_{g}-n^{1}_{g}\) or \(\gamma_{g}=\frac{\hat{n}^{1}_{g}-n^{1}_{g}}{\overline{n}^{1}_{g}}\).
Step 4. Solve the optimization model below to obtain the case mix \(n^{1}\). Report the differences \(\gamma_{g}\).
Minimize
\[Z=sgn(\delta)\times\sum_{g|g\neq g^{*}}\left(\gamma_{g}\right)^{2}\quad\text{where }sgn(\delta)=\delta/|\delta| \tag{1}\]
Subject to:
\[n^{1}_{g^{*}}=\hat{n}^{1}_{g^{*}}+\delta\quad\text{[Forced number of type $g^{*}$]} \tag{2}\]
\[\gamma_{g^{*}}=0\ \text{and}\ \gamma_{g}=sgn(\delta)\times\left(\hat{n}^{1}_{g}-n^{1}_{g}\right)/\overline{n}^{1}_{g}\ \ \forall g\in G|g\neq g^{*}\quad\text{[Scaled difference]} \tag{3}\]
\[sgn(\delta)\times n^{1}_{g}\leq sgn(\delta)\times\hat{n}^{1}_{g}\ \ \forall g\in G|g\neq g^{*}\quad\text{[Forced increase or decrease]} \tag{4}\]
\[\gamma_{g}\geq 0\ \forall g\in G\quad\text{[Positive scaled difference]} \tag{5}\]
plus constraints (A2) - (A10), excluding (A9), from Appendix A.
This model is hereby denoted CMA-SSQ. Scaling is introduced in constraint (3) to manage the different orders of magnitude that may occur. As a result, differences can be compared more objectively. Without scaling, some types will have more, or less, increase (decrease). Equation (3) also ensures that \(\gamma_{g}\) takes positive values in both situations. For instance, \(\forall g\in G|g\neq g^{*}\):
\[\gamma_{g}=\frac{\hat{n}^{1}_{g}-n^{1}_{g}}{\overline{n}^{1}_{g}}\ \text{if}\ \delta>0\quad\text{and}\quad\gamma_{g}=\frac{n^{1}_{g}-\hat{n}^{1}_{g}}{\overline{n}^{1}_{g}}\ \text{if}\ \delta<0\]
With regards to constraint (4), if \(\delta<0\), then there must be an increase, i.e., \(n^{1}_{g}\geq\hat{n}^{1}_{g}\). If \(\delta>0\), then there must be a decrease, i.e., \(n^{1}_{g}\leq\hat{n}^{1}_{g}\). Constraint (5) ensures that a patient type is not increased when it should be decreased or vice versa.
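To illustrate how CMA-SSQ can be assembled in practice, the following is a minimal sketch in Python using the cvxpy modelling library. The data values are invented, a single aggregate time-availability constraint per zone stands in for the Appendix A constraints (A2) - (A10), and only the convex case \(\delta>0\) is shown.

```python
import numpy as np
import cvxpy as cp

# Illustrative data only (not taken from this article's case study).
n_hat = np.array([5.0, 48.0, 20.0, 10.0, 28.0])      # current case mix, \hat{n}^1
n_bar = np.array([60.0, 110.0, 70.0, 45.0, 90.0])    # upper bounds, \bar{n}^1
t = np.array([[2.0, 1.0, 3.0, 4.0, 1.5],             # zone hours consumed per patient of each type
              [1.0, 0.5, 2.0, 1.0, 0.5]])
T = np.array([300.0, 180.0])                         # zone time availabilities, T_w
g_star, delta = 4, 6.0                               # increase patient type 5 by six patients
others = [g for g in range(5) if g != g_star]

n = cp.Variable(5, nonneg=True)                      # new case mix, n^1
constraints = [n[g_star] == n_hat[g_star] + delta,   # (2) forced number of type g*
               n <= n_bar,
               t @ n <= T]                           # placeholder for the Appendix A constraints
constraints += [n[g] <= n_hat[g] for g in others]    # (4) other types may only decrease

scaled_sq = [cp.square((n_hat[g] - n[g]) / n_bar[g]) for g in others]   # squared scaled differences
prob = cp.Problem(cp.Minimize(sum(scaled_sq)), constraints)             # objective (1) with delta > 0
prob.solve()
print(np.round(n.value, 2))
```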
The CMA-SSQ model has a sum of squares objective, and this is a quadratic function. As such, the model can be solved using Quadratic Programming (QP) techniques. However, when \(\delta<0\), we
find the model to be non-convex (i.e., the Hessian is not positive semi-definite), and QP is not suitable. Separable Programming (SP) techniques, however, may be applied. This requires the objective to be expanded as in (6), and the term \(\left(n_{g}^{1}/\overline{n}_{g}^{1}\right)^{2}\) to be approximated by a piecewise linear function, i.e., \(\left(\frac{n_{g}^{1}}{\overline{n}_{g}^{1}}\right)^{2}\approx y_{g}\) where \(y_{g}=\text{PWL}\left(n_{g}^{1}/\overline{n}_{g}^{1},f,b,\sigma\right)\), \(b_{0}=0\), \(b_{i}=\frac{i}{I+1}\ \ i=1,...,I+1\), \(\sigma_{i}=\frac{f(b_{i})-f(b_{i-1})}{b_{i}-b_{i-1}}\ \ i=1,...,I+1\), and \(f(b_{i}):b_{i}\rightarrow(b_{i})^{2}\).
\[\sum_{g|g\neq g^{*}}(\gamma_{g})^{2}=\sum_{g\in G|g\neq g^{*}}\left(\frac{n_{g}^{1}}{\overline{n}_{g}^{1}}\right)^{2}-2\left(\frac{\widehat{n}_{g}^{1}}{\left(\overline{n}_{g}^{1}\right)^{2}}\right)n_{g}^{1}+\left(\frac{\widehat{n}_{g}^{1}}{\overline{n}_{g}^{1}}\right)^{2} \tag{6}\]
In total there are \(|G|-1\) piecewise linear functions. The term \(n_{g}^{1}/\overline{n}_{g}^{1}\) lies in the range [0,1] and, as such, so does the term \(\left(n_{g}^{1}/\overline{n}_{g}^{1}\right)^{2}\). The squared term is well approximated by a piecewise linear function over that interval and requires a relatively small number of intervals and breakpoints.
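As a small illustration, the helper below (our own sketch, not code from the article) constructs the breakpoints and segment slopes used to approximate \(f(x)=x^{2}\) on \([0,1]\).

```python
def pwl_segments(I):
    """Breakpoints b_0, ..., b_{I+1} and slopes sigma_1, ..., sigma_{I+1} for f(x) = x^2 on [0, 1]."""
    b = [i / (I + 1) for i in range(I + 2)]
    f = lambda x: x ** 2
    sigma = [(f(b[i]) - f(b[i - 1])) / (b[i] - b[i - 1]) for i in range(1, I + 2)]
    return b, sigma

b, sigma = pwl_segments(499)   # roughly 500 breakpoints, as used in the numerical testing below
# Each scaled term n_g^1 / nbar_g^1 is split into segment variables s_i with
# 0 <= s_i <= b_i - b_{i-1} and sum_i s_i = n_g^1 / nbar_g^1, so that
# y_g = sum_i sigma_i * s_i approximates the squared term.
```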
There are other alternatives to the sum of squares approach. Enforcing a more equitable change across all types is one such option. The following linear objective may instead be considered:
\[\text{Minimize }Z=\sum_{g\in G}(\gamma_{g}) \tag{7}\]
This variation is hereby denoted CMA-LIN. This approach still favours some types and not others and is not necessarily equitable, but it may be reasonable in some circumstances. Another alternative is to explicitly force an equitable increase or decrease (EQ-QBI). The following objective and constraint can be imposed to achieve this:
\[\text{Minimize }Z=sgn(\delta)\times\lambda\text{ where }\gamma_{g}=\lambda\ \forall g\in G|g\neq g^{*} \tag{8}\]
The scaling previously discussed is vital for using an objective function like (8). The equality \(\gamma_{g}=\lambda\ \forall g\in G|g\neq g^{*}\) is strict and that might be an issue sometimes. For instance, a particular patient type may not be able to be increased or decreased at all, as the required resources may already be saturated, or the lower or upper bound has already been reached. In response, we can instead enforce \(\gamma_{g}\leq\lambda\), and add the following constraint:
\[n_{g}^{1}\geq\widehat{n}_{g}^{1}-sgn(\delta)\times\lambda\overline{n}_{g}^{1} \ \forall g\in G|g\neq g^{*} \tag{9}\]
This variation is hereby denoted CMA-EQ. It is worth noting that \(\lambda\times sgn(\delta)\leq\widehat{n}_{g}^{1}/\overline{n}_{g}^{1}\ \forall g\in G\) and a bound can be computed for \(\lambda\).
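The bound on \(\lambda\) can be computed directly from the scaled targets. The following minimal sketch (plain Python; the helper name is ours, and the case mix and upper bounds are those of the demonstrative example below) computes the largest equitable decrease possible when \(\delta>0\), i.e., the point at which the first patient type is driven to zero.

```python
# Current targets n_hat and weekly upper bounds n_bar, taken (for illustration
# only) from the demonstrative example below.
n_hat = [5.68, 48.82, 20.43, 10.22, 28.38]
n_bar = [25.18, 89.79, 65.48, 105.05, 70.0]

def lambda_bound(n_hat, n_bar, g_star):
    """Largest equitable decrease possible when delta > 0.

    Each unaffected type g can be decreased by at most its whole target, i.e.
    lambda * n_bar[g] <= n_hat[g], so lambda <= min_{g != g*} n_hat[g] / n_bar[g].
    """
    return min(n_hat[g] / n_bar[g] for g in range(len(n_hat)) if g != g_star)

# Increasing patient type T5 (index 4): type T4 saturates first.
print(lambda_bound(n_hat, n_bar, g_star=4))  # ~0.0973 = 10.22 / 105.05
```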
**Final Remarks:** The model when solved may report no changes to the case mix. However, the output of the model is still useful. For instance, the model produces a plan of how to process the new case mix. A different allocation of resources may be necessary to achieve the alteration. The model may not be feasible in some circumstances, and that is informative of situations where an equitable alteration is not possible. An inequitable alteration however may be possible, but that requires a more sophisticated multicriteria approach and some additional direction and guidance about how to regulate competition between the different patient types.
**Demonstrative Example**. Let us consider a scenario of reduced size to demonstrate the case mix alteration models. In this scenario there are five intensive care beds, ten operating theatres, five recovery wards with (2, 5, 10, 14, 3) beds respectively, and five patient types. A feasible case mix of 113.53 patients (i.e., weekly) with \(n^{1}=\)(5.68, 48.82, 20.43, 10.22, 28.38) determined from a prior analysis is to be altered. The case mix proportions are (0.05, 0.43, 0.18, 0.09, 0.25) and the patient types have the parameters shown in Table 1. Their names have been removed for confidentiality.
Tables 2-5 describe the effect of various alterations and the solutions of the variant CMA models. The results are rounded to 2 decimal places. The change to a particular patient type is shown in column two, and the change to all other patient types is shown in column four. In column six, the total corresponding change in the other patient types is shown. Depending on the method, we also show \(\lambda\) and \(Z\). The selected changes provide a sufficient picture; for brevity, they concentrate upon what would happen if a patient type were eliminated completely or increased to its upper bound. Some other intermediate values are also considered.
From the CMA-EQ numerical testing in Table 2, it is worth noting a few things. It is possible that some requested changes reduce all other patient types to zero. Also, if a patient type is increased too greatly, then some patient types would need to be reduced below zero to achieve a uniform equitable decrease. This makes no sense, however, and the model is not solvable in those circumstances. For example, when patient type T5 is increased to a level greater than 57.6, then it is impossible to achieve a uniform decrease. To obtain a solution, it is necessary to add in constraint (9). Hence, patient type T4 is decreased as much as possible, while the others are decreased equitably. The gamma value for patient type T4 is 0.097289 for all changes in patient type T5 above 57.6. As shown, when patient type T5 is increased to 57.9, lambda is 0.0982 and 0.0982\(\times\)105.047 = 10.3156 > 10.22.
In Tables 4 and 5, the original CMA-SSQ approach is demonstrated. The results in Table 4 were obtained via quadratic programming, whereas those in Table 5 were obtained via separable programming. The number of breakpoints used to approximate the squared term in the expansion of \(\left(\gamma_{g}\right)^{2}\) was 500. There are subtle differences between the results of QP and SP; for the most part, only minor differences in decimal place accuracy have been observed.
The CMA-SSQ provides a more equitable case mix than CMA-LIN, but less equitable than CMA-EQ. Hence, it lies between the CMA-EQ and CMA-LIN approaches. This conclusion is justified by the values presented in Column 7 in Tables 2 - 4. The total impact shown in Column 7 is the sum of the alterations in Column 4, excluding those to the selected patient type.
\begin{table}
\begin{tabular}{c c c c c c c} \(\mathbf{g}\) & Considered & \(\mathbf{\delta}\) & Alterations & New Case Mix & N & Total & \(\lambda\) \\ Revision & & & & & & & Impact \\ \hline
1 & 5.68 \(\rightarrow\) 0 & -5.68 & (-5.68, 0.42, 0.48, 0.49, 0.33) & (0, 49.24, 20.91, 10.71, 28.71) & 109.57 & 1.71 & 0.0 \\
1 & 5.68 \(\rightarrow\) 2 & -3.68 & (-3.68, 0.27, 0.31, 0.32, 0.21) & (2, 49.09, 20.74, 10.54, 28.59) & 110.96 & 1.11 & 0.0 \\
1 & 5.68 \(\rightarrow\) 25 & 19.32 & (19.32, -1.41, -1.63, -1.65, -1.10) & (25, 47.41, 18.8, 8.57, 27.28) & 127.06 & -5.79 & 0.02 \\
1 & 5.68 \(\rightarrow\) 25.18 & 19.50 & (19.50, -1.43, -1.64, -1.67, -1.11) & (25.18, 47.39, 18.79, 8.55, 27.27) & 127.18 & -5.85 & 0.02 \\
2 & 48.82 \(\rightarrow\) 0 & -48.82 & (2.26, -48.82, 9.28, 9.42, 6.28) & (7.94, 0.29, 71.96, 34.66) & 91.95 & 27.23 & 0.09 \\
2 & 48.82 \(\rightarrow\) 30 & -18.82 & (0.87, -18.82, 3.58, 3.63, 2.42) & (6.55, 30, 24.01, 13.85, 30.8) & 105.21 & 10.5 & 0.04 \\
2 & 48.82 \(\rightarrow\) 65 & 16.18 & (-0.75, 16.18, -3.07, -3.12, -2.08) & (4.93, 65, 17.36, 7.1, 26.3) & 120.69 & -9.02 & 0.03 \\
2 & 48.82 \(\rightarrow\) 89.79 & 40.97 & (-1.89, 40.97, -7.78, -7.9, -5.27) & (3.78, 89.79, 12.65, 2.32, 23.11) & 131.65 & -22.84 & 0.08 \\
3 & 20.43 \(\rightarrow\) 0 & -20.43 & (3.53, 12.59, -20.43, 14.73, 9.82) & (9.21, 61.41, 0, 24.95, 38.2) & 133.77 & 40.67 & 0.14 \\
3 & 20.43 \(\rightarrow\) 10 & -10.43 & (1.80, 6.43, -10.43, 7.52, 5.01) & (7.48, 55.25, 10, 17.74, 33.39) & 123.86 & 20.76 & 0.07 \\
3 & 20.43 \(\rightarrow\) 30 & 9.57 & (-1.65, -5.9, 9.57, -6.9, -4.6) & (4.03, 42.93, 30, 3.32, 23.79) & 104.07 & -19.04 & 0.07 \\
3 & 20.43 \(\rightarrow\) 65 & 44.57 & (-5.68, -47.61, 44.57, -10.22, -28.38) & (0, 1.215, 65, 0, 0) & 66.22 & -91.89 & 0.53 \\
4 & 10.22 \(\rightarrow\) 0 & -10.22 & (0.75, 2.68, 3.09, -10.22, 2.09) & (6.43, 51.5, 23.52, 0, 30.47) & 111.92 & 8.61 & 0.03 \\
4 & 10.22 \(\rightarrow\) 3 & -7.22 & (0.53, 1.89, 2.18, -7.22, 1.48) & (6.21, 50.71, 22.61, 3, 29.86) & 112.39 & 6.08 & 0.02 \\
4 & 10.22 \(\rightarrow\) 15 & 4.78 & (-0.35, -1.25, -1.44, 4.78, -0.98) & (5.33, 47.57, 18.99, 15, 27.41) & 114.3 & -4.02 & 0.01 \\
4 & 10.22 \(\rightarrow\) 105.05 & 94.83 & (-5.68, -34.07, -20.43, 94.83, -26.56) & (0, 14.75, 0, 105.05, 1.82) & 121.62 & -86.74 & 0.38 \\ \end{tabular}
\end{table}
Table 2: Demonstration of CMA-EQ approach for the selected case mix
\begin{table}
\begin{tabular}{l l l l l l l} \(g\) & Considered & \(\delta\) & Alterations & New Case Mix & N & Total & Z \\ & Revision & & & & & & \\ & & & & & & & \\
1 & 5.68 \(\rightarrow\) 25 & 19.32 & (19.32, -0.69, -2.34, -1.34, -0.72) & (25, 48.13, 18.09, 8.89, 27.66) & 100.11 & -5.09 & 0.0 \\
1 & 5.68 \(\rightarrow\) 25.18 & 19.50 & (19.50, -0.7, -2.36, -1.35, -0.73) & (25.18, 48.12, 18.07, 8.86, 27.66) & 127.89 & -5.14 & 0.0 \\
2 & 48.82 \(\rightarrow\) 65 & 16.18 & (-0.05, 16.18, -4.16, -2.39, -1.28) & (5.63, 65, 16.27, 7.83, 27.10) & 121.83 & -7.88 & 0.0 \\
2 & 48.82 \(\rightarrow\) 89.79 & 40.97 & (-0.12, 40.97, -10.53, -6.04, -3.24) & (5.56, 89.79, 9.9, 9.4, 18, 25.14) & 134.57 & -19.93 & 0.02 \\
3 & 20.43 \(\rightarrow\) 30 & 9.57 & (-0.18, -4.4, 9.57, -8.52, -4.56) & (5.51, 44.43, 30, 1.7, 23.82) & 105.46 & -17.65 & 0.01 \\
3 & 20.43 \(\rightarrow\) 65 & 44.57 & (-3.28, 48.82, 42.45, -10.22, -28.38) & (2.4, 0.65, 0, 0, 0) & 67.4 & -90.7 & 0.49 \\
4 & 10.22 \(\rightarrow\) 15.05 & 4.78 & (-0.02, -0.59, -2, 4.78, -0.62) & (5.66, 48.23, 18.42, 15, 27.76) & 115.07 & -3.234 & 0 \\
4 & 10.22 \(\rightarrow\) 105.05 & 94.83 & (-1.32, -33.18, -20.43, 94.83, -28.38) & (4.36, 15.64, 0, 105.05, 0) & 125.05 & -83.31 & 0.34 \\
5 & 28.38 \(\rightarrow\) 32 & 3.62 & (-0.02, -0.5, -1.69, -0.97, 3.62) & (5.66, 48.32, 18.74, 9.25, 32) & 113.97 & -3.18 & 0 \\
5 & 28.38 \(\rightarrow\) 57 & 28.62 & (-0.16, -3.95, -13.36, -7.66, 28.62) & (5.23, 44.87, 7.07, 2.56, 57) & 116.73 & -25.13 & 0.02 \\
5 & 28.38 \(\rightarrow\) 57.9 & 29.52 & (-0.16, -4.08, -13.78, -7.9, 29.52) & (5.52, 44.74, 6.65, 2.32, 57.9) & 117.13 & -25.92 & 0.03 \\
5 & 28.38 \(\rightarrow\) 65 & 36.62 & (-0.2, -5.06, -17.09, -9.81, 36.62) & (5.48, 43.76, 3.34, 0.41, 65) & 117.99 & -32.16 & 0.04 \\
5 & 28.38 \(\rightarrow\) 70 & 41.62 & (-5.68, -5.6, -18.91, -10.22, 41.62) & (0, 43.22, 1.52, 0, 70) & 114.74 & -40.41 & 0.1 \\ \end{tabular}
\end{table}
Table 4: Demonstration of CMA-SSQ (QP) approach for the selected case mix
\begin{table}
\begin{tabular}{l l l l l l l} \(g\) & Considered & \(\delta\) & Alterations & New Case Mix & N & Total & Z \\ & Revision & & & & & \\ & & & & & & \\ & & & & & & \\
1 & 5.68 \(\rightarrow\) 0 & -5.68 & (-5.68, 2.88, 0, 0, 0) & (0, 51.70, 20.43, 10.22, 28.38) & 110.73 & 2.88 & 0.03 \\
1 & 5.68 \(\rightarrow\) 25 & -3.68 & (-3.68, 1.88, 0, 0, 0) & (2, 50.69, 20.43, 10.22, 28.38) & 111.72 & 1.88 & 0.02 \\
1 & 5.68 \(\rightarrow\) 25 & 19.32 & (19.32, 0, -3.84, 0, 0) & (25, 48.82, 16.59, 10.22, 28.38) & 129.01 & -3.84 & 0.04 \\
1 & 5.68 \(\rightarrow\) 25 & 19.50 & (19.50, 0, -3.88, 0, 0) & (25, 18.48, 8.26, 16.55, 10.22, 28.38) & 129.15 & -3.88 & 0.04 \\
2 & 48.82 \(\rightarrow\) 0 & -48.82 & (19.50, -48.82, 0, 0, 22.8) & (25, 18.0, 20.43, 10.22, 51.18) & 107.01 & 42.30 & 1.10 \\
2 & 48.82 \(\rightarrow\) 30 & -18.82 & (19.50, -18.82, 0, 0, 5.24) & (25,18, 30, 20.43, 10.22, 33.62) & 119.45 & 24.74 & 0.85 \\
2 & 48.82 \(\rightarrow\) 65 & 16.18 & (0, 16.18, -6.35, 0, 0) & (5.68, 65, 16.08, 10.22, 28.38) & 125.36 & -6.35 & 0.06 \\
2 & 48.82 \(\rightarrow\) 89.79 & 40.97 & (0, 40.97, -16.09, 0, 0) & (5.68, 89.79, 4.34, 10.22, 28.28) & 138.31 & -16.09 & 0.16 \\
3 & 20.43 \(\rightarrow\) 0 & -20.43 & (10.5, 3.101, -20.43, 0, 6.51) & (25.18, 79.83, 0, 10.22, 34.89) & 150.12 & 48.02 & 1.21 \\
3 & 20.43 \(\rightarrow\) 10 & -10.43 & (19.50, 16.68, -10.43, 0, 0) & (25.18, 65.5, 10, 10.22, 28.38) & 139.28 & 36.19 & 0.96 \\
3 & 20.43 \(\rightarrow\) 30 & 9.57 & (0, 9.57, -10.22, -5.78) & (5.68, 48.82, 30, 0, 22.6) & 107.1 & -16 & 0.18 \\
3 & 20.43 \(\rightarrow\) 65 & 44.57 & (-3.28, -48.82, 44.57, -10.22, -28.38) & (2.4, 0.65, 0, 0) & 67.4 & -90.7 & 1.18 \\
3 & 20.43 \(\rightarrow\) 30 & 9.57 & (0, 9.57, -10.22, -5.78) & (5.68, 48.82, 30, 0, 22.6) & 107.1 & -16 & 0.18 \\
3 & 20.43 \(\rightarrow\) 65 & 44.57 & (-3.28, -48.82, 44.57,-10.22, -28.38) & (2.4, 0.65, 0, 0) & 67.4 & -90.7 & 1.18 \\
4 & 10.22 \(\rightarrow\) 0 & -10.22 & (19.50, 4.61, -0.12, 2.0) & (25.18, 53.43, 20.43, 0, 28.38) & 127.42 & 24.12 & 0.83 \\
4 & 10.22 \(
**Further Discussion:** A change to the main case mix was discussed above. When applying the proposed CMA models, the patient sub mixes implied by the original starting case mix are maintained; whatever proportions are inherent therein are not altered. In many situations we would expect the impact on other patient types to be of the same order of magnitude as the original alteration, but opposite in direction. However, some alterations of the case mix permit higher or lower increases (decreases) to be realised. These impacts are not directly proportional to the scale of the original alteration. This means that there may be some latent unused capacity, relative to the original case mix.
Specific changes to a particular patient sub type are also worth considering. A similar process and model can be posed for this situation. The exact details are shown below:
Step 1. Select a current case mix \(n^{2}\), and designate targets \(\hat{n}^{2}_{g,p}=n^{2}_{g,p}\).
Step 2. Select a change of \(\delta\) units in one patient sub type \((g^{*},p^{*})\), i.e., \((\delta,g^{*},p^{*})|-\hat{n}^{2}_{g^{*},p^{*}}\leq\delta\leq\overline{n}^{2} _{g^{*},p^{*}}-\hat{n}^{2}_{g^{*},p^{*}}\) where \(\delta\neq 0\) and \(\overline{n}^{2}_{g,p}\) is the upper bound on the number of patients of sub type \((g,p)\) that are treatable.
Step 3. Solve the CMA variant model below to obtain the case mix \(n^{2}\). Report the differences \(\gamma_{g,p}\).
Minimize \(sgn(\delta)\times Z\)
where \(Z=\sum_{g\in G}\sum_{p\in P_{g}}\left(\gamma_{g,p}\right)^{2}\) or \(Z=\sum_{g\in G}\sum_{p\in P_{g}}\gamma_{g,p}\) or \(Z=\lambda\)
Subject to:
\[n^{2}_{g^{*},p^{*}}=\hat{n}^{2}_{g^{*},p^{*}}+\delta \tag{11}\] \[\gamma_{g^{*},p^{*}}=0\text{ and }\gamma_{g,p}=sgn(\delta)\times\left(\frac{\hat{n}^{2}_{g,p}-n^{2}_{g,p}}{\overline{n}^{2}_{g,p}}\right)\ \forall g\in G,\forall p\in P_{g}|g\neq g^{*},p\neq p^{*} \tag{12}\] \[sgn(\delta)\times n^{2}_{g,p}\leq sgn(\delta)\times\hat{n}^{2}_{g,p}\ \forall g\in G,\forall p\in P_{g}|g\neq g^{*},p\neq p^{*} \tag{13}\] \[n^{2}_{g,p}\geq\hat{n}^{2}_{g,p}-sgn(\delta)\times\lambda\,\overline{n}^{2}_{g,p}\ \forall g\in G,\forall p\in P_{g}|g\neq g^{*},p\neq p^{*} \tag{14}\] \[\gamma_{g,p}\geq 0\ \forall g\in G,\forall p\in P_{g} \tag{15}\] \[n^{2}_{g,p}\geq\mu^{2}_{g,p}n^{1}_{g}\ \forall g\in G,\forall p\in P_{g}|g\neq g^{*}\quad\text{(Sub mix adherence)} \tag{16}\]
\(+\) From Appendix A, the constraints (A2) - (A8)
In constraint (16) it is necessary to point out that sub mix proportions are only enforced for unaffected patient types, i.e., \(g\in G|g\neq g^{*}\).
**Demonstrative Example.** Consider the patient sub type cohort (a.k.a., mix) ([3.97, 1.7], [48.82], [5.11, 8.17, 7.15], [10.22], [28.38]) associated with the previously considered case mix. If sub type T1-1 is chosen and an alteration of 5 is selected, then \(n_{1,1}^{2}\rightarrow 8.97\). For the CMA-EQ option the following sub mix is obtained, \(n^{2}=\)([8.97, 1.32], [48.54], [4.99, 7.99, 6.99], [9.9], [28.16]). Associated with that sub mix are the following alterations, ([5, -0.38], [-0.277], [-0.12, -0.18, -0.16], [-0.32], [-0.22]). Hence, the new case mix is \(n^{1}=\)(10.29, 48.54, 19.96, 9.9, 28.16) and \(N=116.85\).
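The relationship between a sub mix and its implied case mix is simply \(n_{g}^{1}=\sum_{p\in P_{g}}n_{g,p}^{2}\). The short sketch below (plain Python, using the rounded sub mix values reported above) recovers the new case mix and total; small differences in the last decimal place relative to the reported figures are due to rounding of the displayed sub mix.

```python
# Rounded sub mix reported above, one inner list per patient type.
sub_mix = [[8.97, 1.32], [48.54], [4.99, 7.99, 6.99], [9.9], [28.16]]

case_mix = [round(sum(p), 2) for p in sub_mix]   # n^1_g = sum over p of n^2_{g,p}
total = round(sum(sum(p) for p in sub_mix), 2)   # N

print(case_mix, total)   # [10.29, 48.54, 19.97, 9.9, 28.16] 116.86
```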
### Case Mix Comparisons
In this section, multi-criteria decision-making theory is adapted to help support hospital CMP. A multicriteria decision support system (MCDSS) is also proposed.
In hospital CMP, the identification of a single case mix solution that meets certain conditions is important and has predominantly been sought in the methods proposed in previous articles. The identification of a few alternative case mix solutions is also valuable and constructive. As shown in Section 2.2, further analysis of a single case mix solution is worthwhile. It also makes sense to start with a single case mix solution and to suggest alterations. In the previous section, we considered how to assess the impact of any change. A mathematical model was proposed for that purpose. It produces a second case mix and provides the decision maker with two case mixes to compare. However, this raises the question: how does a decision maker determine which is preferable?
In multi-criteria CMP, it is essential to identify an assortment of potential solutions, namely a Pareto frontier of alternatively optimal case mix solutions. As there are multiple conflicting objectives, no single solution exists that simultaneously optimizes each objective. Pareto optimality is a useful concept. It is worth noting that all Pareto optimal solutions are considered equally good without further preference information. A solution is called Non-Dominated, Pareto Optimal, or Pareto efficient, if none of the objective functions can be improved without degrading some other objective values. To obtain a Pareto frontier, there are various techniques, the basis of which is the solution of an underlying mathematical optimization model with multi-objectives.
In Burdett and Kozan (2016), a 21-objective CMP problem was solved, and tens of thousands of Pareto optimal case mix solutions were found. From a practical perspective, it is unclear how those solutions would be accessed and compared, and how a single solution could be chosen by hospital planners, managers, and executives. The first step to handling this dilemma seems to be to provide a means of scoring an individual case mix solution, and the second to provide a means of critiquing two case mixes, providing insight into which is better or worse. This aligns well with the task discussed in Section 2.2. To the best of our knowledge, previous articles have not formally scored patient case mix solutions, nor provided techniques to compare competing case mix solutions purely on the basis of the number of patients treated. Other considerations are abundant but are outside the scope of this article.
#### 2.3.1 Scoring Case Mix
Every case mix \(n^{1}\) can be compared to the ideal and anti-ideal case mix. The ideal case mix, denoted \(\overline{n}^{1}\), occurs when each patient type achieves its upper bound. The anti-ideal is the case mix with zero patients of each type, which is hereby denoted \(\underline{n}^{1}\). The proximity of \(n^{1}\) to \(\overline{n}^{1}\) is an important indicator of progress for obvious reasons. The proximity can be computed according to various metrics, and the following is most sensible.
\[proximity=100\times\frac{\left\|n^{1}-\overline{n}^{1}\right\|_{2}}{\left\|\overline{n}^{1}-\underline{n}^{1}\right\|_{2}}\ \text{ where }\left\|\nu^{1}-\nu^{2}\right\|_{2}=\left(\Sigma_{g}\frac{\left(\nu_{g}^{1}-\nu_{g}^{2}\right)^{2}}{\left(\epsilon_{g}\right)^{2}}\right)^{1/2} \tag{17}\]
The proximity value lies in the range [0,100], and takes the value zero at the utopia point (i.e., when \(n^{1}=\overline{n}^{1}\)). The denominator is the distance between the ideal and anti-ideal, and the numerator is the distance between the ideal and the current case mix. As some patient types may have vastly different orders of magnitude, the proximities should be scaled, to maintain objectivity. The value \(\epsilon_{g}\) is a user defined measure of significance for patients of type \(g\), and the default value is \(\epsilon_{g}=\overline{n}_{g}^{1}\). Finally, we can compute progress as \(100-proximity\). The proximity of two case mix can also be calculated using equation (17).
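Equation (17) is straightforward to compute. The following minimal sketch (plain Python; the function names are ours, and the ideal values are those used in the demonstrative example that follows) evaluates the scaled distance and the proximity for two candidate case mixes.

```python
import math

def scaled_distance(u, v, eps):
    """Scaled Euclidean distance used in equation (17)."""
    return math.sqrt(sum(((ui - vi) / e) ** 2 for ui, vi, e in zip(u, v, eps)))

def proximity(n, n_ideal, n_anti, eps):
    """Proximity of case mix n to the ideal, as a percentage (0 at the ideal)."""
    return 100 * scaled_distance(n, n_ideal, eps) / scaled_distance(n_ideal, n_anti, eps)

ideal = [25.18, 89.79, 65.48, 105.05, 70.0]
anti  = [0.0] * 5                # anti-ideal: zero patients of every type
eps   = ideal                    # default significance: eps_g = upper bound
mix_a = [5.68, 48.82, 20.43, 10.22, 28.38]
mix_b = [16.46, 71.67, 11.79, 10.59, 24.39]

print(proximity(mix_a, ideal, anti, eps))   # ~70: further from the ideal
print(proximity(mix_b, ideal, anti, eps))   # ~64: closer to the ideal
print(scaled_distance(mix_a, mix_b, eps))   # ~0.518, the value quoted below
```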
**Demonstrative Example:** If we compare case mix (5.68, 48.82, 20.43, 10.22, 28.38) and (16.46, 71.67, 11.79, 10.59, 24.39) where the ideal solution is (25.18, 89.79, 65.48, 105.05, 70), the diagram in Figure 1 is useful and may be produced. It shows scaled proximity information. Evidently, the second case mix is further from the anti-ideal and closest to the ideal. The second case mix has higher numbers of patient type T1 and T2, and similar numbers of T4 and T5. Only in patient type T3 is it more noticeably deficient. The scaled distance of 0.518 indicates some difference, but the extent of the dissimilarity is not sufficiently clear. All we can say is that there is an aggregated squared difference of 0.518\({}^{2}\)=0.269.
**Final Remarks.** Scoring a case mix by proximity to the ideal is a useful metric. In equation (17) other norms may also be used, like the 1-norm. The significance level is inherently subjective, and highly influential in the assessment. Two case mixes may have the same proximity yet be placed in totally different parts of the objective space. In those circumstances, the approach is not useful for making any type of judgement. It seems that an iterative approach, involving an alteration of the level of significance, is necessary to build up sufficient evidence of the merit of one case mix over another. It is necessary to identify whether small changes to the level of significance change the resulting comparison, and how those changes skew the proximity.
#### 2.3.2 Quantifying Similarity and Dissimilarity
As discussed, proximity is not a foolproof concept for judging achievement or case mix similarity and dissimilarity. Definition 1 and its corollary are self-evident and may be used instead, as a more rigorous way to perform a comparison. Definition 1 is based upon the \(\epsilon\)-dominance principles of Laumanns et. al. (2002), later adapted for instance by Hancock et. al. (2015). When using the concept of \(\epsilon\)-dominance, the user is expected to provide a value that represents the minimum amount of change in an objective that is considered significant. In other words, the user defines what a significant difference is. In some scenarios, a difference of one unit implies significant difference, but in others, it would not. The application of the \(\epsilon\)-dominance principle is akin to partitioning the objective space into regions. The approach maintains diversity equally in all regions of the Pareto frontier, regardless of where significant trade-off is or is not occurring.
**Definition 1:** Given two case mix solutions \(n^{1(A)}\) and \(n^{1(B)}\), where \(n^{1(A)},n^{1(B)}\in\mathbb{R}^{|G|}\), we can say that the solutions are similar (i.e., they are not significantly different) if \(\left|n_{g}^{1(A)}-n_{g}^{1(B)}\right|\leq\epsilon_{g}\ \forall g\in G\). Otherwise, they are not similar, and significant trade-offs must be realised if one solution is selected over another.

Figure 1: Comparing case mix solutions and their proximity
**Corollary:** If \(\left|n_{g}^{1(A)}-n_{g}^{1(B)}\right|\leq\epsilon_{g}\) then the change in patient type \(g\) is not significant. If \(\left|n_{g}^{1(A)}-n_{g}^{1(B)}\right|>\epsilon_{g}\) then we can conclude significant difference. The level of similarity (LOS) is \(100\times\left(\frac{1}{|G|}\right)\sum_{g\in G}\mathbb{1}\left(\left|n_{g}^{1(A)}-n_{g}^{1(B)}\right|\leq\epsilon_{g}\right)\), where \(\mathbb{1}(\cdot)\) denotes the indicator function, and the level of dissimilarity (LOD) is 100 minus the LOS.
**Corollary:** Around any solution \(n^{1}\), the boundary of the region of similarity-dissimilarity can be identified, for instance by identifying all solutions \(n^{1(\cdot)}\) such that \(\epsilon_{g}<\left|n_{g}^{1}-n_{g}^{1(\cdot)}\right|\leq\lambda\epsilon_{g}\ \forall g\in G\).
**Demonstrative Example:** If we compare case mix (5.68, 48.82, 20.43, 10.22, 28.38) and (16.46, 71.67, 11.79, 10.59, 24.39) and define \(\epsilon=\) (2.5, 9, 6.5, 10.5, 7), then only in patient types T4 and T5 is there a lack of significant difference. Hence, we could say that the level of similarity is 40% (i.e., 2/5) and the level of dissimilarity is 60%. These numbers can be added to Figure 1 on the arc connecting the two case mixes.
It is worth noting that around case mix one, the boundary of similarity is as follows, ([3.18, 8.18], [39.82, 57.82], [13.93, 26.93], [0, 20.7], [21.38, 35.38]), when \(\lambda=1\).
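The LOS/LOD calculation and the similarity region can be reproduced with a few lines of code. The following sketch (plain Python; the function name is ours) uses the case mixes and significance levels from the demonstrative example above.

```python
def level_of_similarity(mix_a, mix_b, eps):
    """Percentage of patient types whose difference lies within eps (Definition 1)."""
    flags = [abs(a - b) <= e for a, b, e in zip(mix_a, mix_b, eps)]
    return 100.0 * sum(flags) / len(flags)

mix_a = [5.68, 48.82, 20.43, 10.22, 28.38]
mix_b = [16.46, 71.67, 11.79, 10.59, 24.39]
eps   = [2.5, 9, 6.5, 10.5, 7]

los = level_of_similarity(mix_a, mix_b, eps)
print(los, 100 - los)   # 40.0 60.0: only T4 and T5 change insignificantly

# Region of similarity around case mix one (clipped at zero), with lambda = 1.
lam = 1.0
print([(round(max(a - lam * e, 0), 2), round(a + lam * e, 2))
       for a, e in zip(mix_a, eps)])   # (3.18, 8.18), (39.82, 57.82), ...
```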
#### 2.3.3 Comparing Case Mix
It is desirable to make a judgement upon which case mix, say \(n^{1(A)}\) or \(n^{1(B)}\), is most preferable, if not better or worse. To the best of our knowledge, no such approach has yet been provided in the literature. Determining the exact nature of the trade-offs that will be incurred is also needed. To do that, we propose Definition 2. The concept behind Definition 2 is to find the net effect, as shown in Figure 2. In essence, we propose that differences between the case mixes are partitioned into gains and losses. The gains and losses can then be aggregated using the concept of a resultant vector. This produces two values which can be easily compared, to make a judgement. Compared to an approach involving \(|G|\) separate comparisons, the new approach is more transparent and user friendly. Our approach is primarily defined to critique differences in the number of patients of each type, and to make judgements relative to that metric. Other metrics, perhaps qualitative, can also be compared. Financial metrics, however, can be aggregated into a single "dollar" value and may be less well suited to this type of analysis.
**Definition 2**: Given two case mix solutions \(n^{1(A)}\) and \(n^{1(B)}\), where \(n^{1(A)},n^{1(B)}\in\mathbb{R}^{|G|}\), we can say that one solution is preferable if the ratio of the net gain to the net loss, namely \(R=\mathcal{S}^{+}/\mathcal{S}^{-}\), where \(\mathcal{S}^{+}=\left\|V^{+}\right\|_{2}\) and \(\mathcal{S}^{-}=\left\|V^{-}\right\|_{2}\), is not roughly one. In other words, the net gain is not roughly the same as the net loss. Here, \(V^{+}=\sum_{g\in G|\delta_{g}>0}\delta_{g}\tilde{u}_{g}\) is the resultant "gain" vector and \(V^{-}=\sum_{g\in G|\delta_{g}<0}\delta_{g}\tilde{u}_{g}\) is the resultant "loss" vector, where \(\tilde{u}_{g}\) are "unit" basis vectors and \(\delta_{g}=\hat{n}_{g}^{1(B)}-\hat{n}_{g}^{1(A)}\) or \(\delta_{g}=\left(n_{g}^{1(B)}-n_{g}^{1(A)}\right)/\epsilon_{g}\) and \(\hat{n}_{g}^{1}=(n_{g}^{1}-\underline{n}_{g}^{1})/(\overline{n}_{g}^{1}-\underline{n}_{g}^{1})\). When using \(\delta_{g}=(n_{g}^{1(B)}-n_{g}^{1(A)})/\epsilon_{g}\) we can say that one solution is "significantly" better.
**Corollary:** If \(\mathcal{S}^{+}<\mathcal{S}^{-}\) then \(n_{g}^{1(A)}\succcurlyeq n_{g}^{1(B)}\) where "\(\succcurlyeq\)" is used to signify "better than". Similarly, if \(\mathcal{S}^{+}>\mathcal{S}^{-}\) then \(n_{g}^{1(B)}\succcurlyeq n_{g}^{1(A)}\).
**3D Example:** Consider case mix (1,20,16) and (10,5,35) where \(x\in[0,15]\), \(y\in[0,30]\), \(z\in[3,50]\). The normalized points are (0.066,0.666,0.32) and (0.666,0.166,0.7). The net differences are (+9,-15,+19) and (+0.6,-0.5,+0.38) after normalization. The net gain is 0.71 and the net loss is 0.5. Hence, case mix 2 is preferable. The net effect is not directly towards the ideal, but to the right of it.
**Higher Dimensional Example**: When comparing case mix one (5.68, 48.82, 20.43, 10.22, 28.38) to case mix two (16.46, 71.67, 11.79, 10.59, 24.39), we note that case mix 2 has 21.37 more patients. There is a direct gain of 34 patients (i.e., spread across \(g=1\), 2 and 4), and a loss of 12.63 (i.e., spread across \(g=3\) and 5). The differences are as follows, (10.78, 22.85, -8.64, 0.37, -3.99). As such, \(V^{+}=(0.43,0.25,0,0.0035,0)\) and \(V^{-}=(0,0,0.132,0,0.057)\). Hence, the net gain is \(\mathcal{S}^{+}=0.498\) and the net loss is \(\mathcal{S}^{-}=0.144\) with a ratio of \(R=3.465\). The second case mix is preferable as the net gain exceeds the net loss.
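Definition 2 reduces to a short calculation once the normalisation is fixed. The following sketch (plain Python; the helper name is ours, lower bounds of zero are assumed, and the upper bounds are those of the running example) reproduces the net gain, net loss and ratio, and also shows how the comparison can be restricted to a subset of patient types, as discussed next.

```python
import math

def gain_loss_ratio(mix_a, mix_b, upper, types=None):
    """Net gain S+, net loss S- and ratio R from Definition 2.

    Differences are normalised by the upper bounds (lower bounds of zero are
    assumed); `types` optionally restricts the comparison to a subset of
    patient type indices.
    """
    idx = range(len(mix_a)) if types is None else types
    deltas = [(mix_b[g] - mix_a[g]) / upper[g] for g in idx]
    s_plus = math.sqrt(sum(d * d for d in deltas if d > 0))
    s_minus = math.sqrt(sum(d * d for d in deltas if d < 0))
    return s_plus, s_minus, s_plus / s_minus

mix_a = [5.68, 48.82, 20.43, 10.22, 28.38]
mix_b = [16.46, 71.67, 11.79, 10.59, 24.39]
upper = [25.18, 89.79, 65.48, 105.05, 70.0]

print(gain_loss_ratio(mix_a, mix_b, upper))                   # ~ (0.498, 0.144, 3.46)
print(gain_loss_ratio(mix_a, mix_b, upper, types=[2, 3, 4]))  # T3-T5 only: ~ (0.0035, 0.144, 0.0245)
```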
It is foreseeable that a decision maker may also be interested in trade-offs occurring between some patient types and not others, which are deemed unimportant for one reason or another. The proposed approach can easily be modified to assess net gains and losses only in those types, as opposed to the automatic inclusion of all patient types. An assessment where multiple comparisons are performed separately to identify if comparatively higher amounts of deterioration in one or more patient types occurs is also possible.
**Demonstrative Example**: Reconsidering the previous case mixes, let us assume that trade-offs between patient types T3, T4 and T5 are important. The considered differences are (-8.64, 0.37, -3.99). As such, \(V^{+}=(0,0.00352,0)\) and \(V^{-}=(0.132,0,0.057)\). Hence, the net gain is \(\mathcal{S}^{+}=0.00352\) and the net loss is \(\mathcal{S}^{-}=0.144\) with a ratio of \(R=0.0245\). Therefore, the first case mix is preferable as the net loss exceeds the net gain.
**Final Remarks**. The case mix comparisons described relate only to \(n^{1}\). For a particular \(g\in G\), different sub mixes can also be compared. Specifically, Definition 2 can be applied to compare sub mixes \(n_{g}^{2(A)}\) and \(n_{g}^{2(B)}\) where \(n_{g}^{2(\cdot)}=\left(n_{g,1}^{2(\cdot)},n_{g,2}^{2(\cdot)},\ldots,n_{g,|P_{g}|}^{2(\cdot)}\right)\). Two different sub mixes can be compared in entirety
Figure 2: Comparing case mix (1,20,16) and (10,5,35)
as well. For instance, it is possible to compare \(\text{vec}(n^{2(A)})\) and \(\text{vec}(n^{2(B)})\). There are, however, many more criteria to compare (i.e., \(\sum_{g\in G}|P_{g}|\)).
## 3 Putting Theory into Practice
To facilitate hospital case mix planning activities, a prototype PDST was created in Burdett et. al. (2022b). The tool permits users to determine the maximum number of patients that may be treated over time, subject to case mix and time availability constraints. This task is performed by solving the CMA model shown in Appendix A. Given user defined targets, a best fit case mix is also obtainable, by solving a non-linear variant of the CMA model. The feasibility of a specified case mix can also be checked within the PDST.
The two tasks discussed in previous sections, namely case mix alteration and case mix comparison, have been added to the PDST and suitable graphical user interfaces (GUI) have been constructed. The details of the new extensions are discussed in due course. Both tasks, however, rely upon the upper bound (a.k.a., limit) \(\overline{n}_{g}^{1}\) for each patient type. This bound describes the maximum number of patients of type \(g\) that can be realised if the resources of the entire hospital were used to process only that patient type.
The window shown in Figure 3 is provided for users to activate this assessment. The "Bound Analysis" button activates the solution process, involving the solution of the CMA model \(|G|\) times. During iteration \(g\in G\), the case mix is set as \(\mu_{g}^{1}=1\) and \(\mu_{g^{\prime}}^{1}=0\;\;\forall g^{\prime}\in G|g^{\prime}\neq g\). The results shown in the right pane are generated and displayed progressively, instead of all at once.
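The iteration behind the bound analysis is simple to express. The sketch below is illustrative only: `solve_case_mix` is a hypothetical stand-in for the CMA model of Appendix A (not shown here); it is assumed to return the number of treatable patients for a given case mix proportion vector.

```python
def bound_analysis(num_types, solve_case_mix):
    """Solve the CMA model once per patient type with all capacity devoted to it."""
    bounds = []
    for g in range(num_types):
        mu = [0.0] * num_types
        mu[g] = 1.0                          # all capacity devoted to type g
        bounds.append(solve_case_mix(mu))    # upper bound for type g
    return bounds
```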
### Case Mix Alteration and Impact Assessments
To alter a case mix and to analyse the effect, the GUI shown in Figure 4 has been added. The first step is to specify the current case mix using the "Load Cohort" button. The combo box summarises the different patient types, their current \(n_{g}^{1}\) value, plus, for reference purposes, the upper bound \(\overline{n}_{g}^{1}\). Any patient type can be selected and altered, using the drop-down selection mechanism. The "Analyse Change" button prompts the user to enter a value \(-n_{g}^{1}\leq\delta\leq\overline{n}_{g}^{1}-n_{g}^{1}\), and if satisfactory, the CMA model is then solved. In the bottom ListView the results are then displayed. The before and after values are shown plus the required change. The user is asked whether the results are accepted or rejected. Further alterations to other patient types are then permitted to be analysed. In other words, a hierarchical assessment is facilitated.
Figure 3: GUI to perform a bound analysis
Figure 5 shows an alternative approach. The bar chart in Figure 5 is essentially a set of sliders. Each slider is for a specific patient type. The current value for each patient type is shown as a marker on a bar, whose height is relative to the upper bound. A single patient type can then be selected easily and incremented or decremented to whatever value the user likes. The change to other patients can then be shown immediately, and the overall effect can be witnessed. For instance, changes in the height of each marker can be visualised. The main downside of this idea is that the number of bars may be difficult to see and manipulate for scenarios with hundreds of patient types.
### Case Mix Comparisons
To critique two case mixes, the GUI displayed in Figure 6 has been developed. It is worth noting that SQ is an abbreviation for a squared value and SC for a scaled value. Also, Min refers to the nadir solution and Max to the ideal. In that GUI the first step is to load two candidate case mixes. The next step is to choose a level of significance for each patient type. If none is selected, then the present upper bounds are used. The "Compare" button activates the assessment, and applies the calculations from Section 2.3, specifically those associated with Definition 2. The GUI currently shows all calculations involved. This highly detailed output is for experts and is extraneous to the average end-user. In future versions, this information could possibly be hidden, accessible only in reports or by direct query.
Figure 6 shows a situation where a level of significance has been defined. In column H, differences exceeding \(\epsilon_{g}\) are highlighted using (*). The level of significance that has been used has amplified the gains and losses, and placed both case mixes closer to the ideal.
Figure 4: GUI to facilitate case mix alteration
Figure 5: An alternative interface (of sliders) for users to manipulate
## 4 COVID Inspired Case Study
Covid-19 is an infectious disease (a.k.a., coronavirus) that causes mild to moderate respiratory illness in most people. Originating at the end of 2019, it has been "in play" for about two years. In the early stages many countries around the world were unable or unwilling to quarantine this virus and consequently its spread throughout the world has been assured. To the best of our knowledge further impacts and misery are expected for many more years, for instance as mutations occur and produce new variants of the virus, and as the virus progresses through the population.
It is safe to say that there would be very few hospitals in the world that have not been seriously impacted by Covid-19. Around the world the virus has interfered with the day-to-day operations of most. For instance, this virus has caused many additional presentations, and many of those patients have been seriously ill. To handle that demand, many elective surgeries have been postponed, and many wards and beds have been repurposed. Intensive care facilities, used for ventilating patients mechanically, have been particularly stretched.
This article's methods are well suited to analysing the impact of covid-19 or any other virus or medical disaster on the capacity of a hospital. To demonstrate that assertion let us reconsider the earlier scenario with five intensive care beds, ten operating theatres, five recovery wards and five patient types. Let us now consider a longer time frame of two months and the arrival of additional COVID patients who are admitted to the hospital for treatment. We will now identify the effect on the current case mix of 113.53 patients per week where \(n^{1}=\)(5.68, 48.82, 20.43, 10.22, 28.38). The covid-19 patients are defined as a sixth patient type, namely T6. Amongst that cohort, various sub types can be defined. We define four sub types based upon Gulsen (2020), according to severity of illness at presentation. The clinical pathway and average length of stay for each sub type are shown in Table 6. These values were adapted from the data summarised in Vekaria et. al. (2021) and Whitfield et. al. (2020). To cope with anticipated demand during the pandemic, a new ward has been set up (i.e., Ward 6) and several current wards (i.e., Ward 1 and 5) have been repurposed as shown in Table 7. All covid patients will be kept in COVID specific wards to restrict transmission.
Figure 6: GUI to facilitate case mix comparisons
It is worth noting that we have added surgery times to ward length of stay for this analysis, as beds are acquired before surgery begins. Subject to the proportional case mix (0.05, 0.43, 0.18, 0.09, 0.25, 0) the treatable cohort (i.e., hospital capacity) is 908 patients of the following types \(n^{1(\text{orig})}=\)(45.41, 390.54, 163.48, 81.74, 227.06, 0). After the hospital's layout is changed, the capacity to treat non covid patients is not altered, even though fewer wards and beds are available. This occurs because some restrictions have been relaxed. For instance, Ward 4 has been permitted to treat more patient types.
Let us now consider case mix alterations required as a result of treating covid patients of type T6. Let us also consider whether the hospital reconfiguration is sufficient to meet the demand for covid patients, and whether further wards should be repurposed. Pre-analysis shows that no more than 21.5 covid-19 patients can be treated per week, given the sub mix (0.45, 0.35, 0.15, 0.05). Hence, we analyse the effect of T6 patients in the range (0, 172).
\begin{table}
\begin{tabular}{l l l l l} \hline
**Sub Type** & **Mix (\%)** & **Summary** & **Length of stay** \\ \hline
**T6-1 (mild)** & 45 & Patient with mild upper respiratory tract infection & Ward & 0.25 days in a ward \\
**T6-2 (moderate)** & 35 & Patients requiring hospitalization, with pneumonia & Ward & 5 days in a ward \\ & & and with/without the need for oxygen & & \\
**T6-3 (severe)** & 15 & Patients who need ICU treatment and require non-invasive or invasive mechanical ventilation, or with & (Ward), ICU, & 2 days in ward prior to ICU, 5 days in ward \\ & & acute respiratory distress and/or non-pulmonary & & \\ & & involvement & & Ward) & days in ICU + 7 days in ward \\
**T6-4 (critical)** & 5 & Patients who need immunomodulatory therapy or & & (ICU, Ward) & 14 days in ICU + 7 days in ward \\ & & with multi-organ failure and/or cytokine storm & & & \\ \hline \end{tabular}
\end{table}
Table 6: New patient type and sub types
\begin{table}
\begin{tabular}{l l l l l l l} \hline
**Type** & \(\overline{\mathbf{n}}^{1}\) & **\# Sub** & **Sub** & **Sub** & **(icu, theatre, ward)** & **Wards Used** \\ \hline
**(orig)** & **(new)** & **Types** & **Type** & **Type** & **time (\# hrs)** & \\ & & & & & & **Mix (\%)** & \\
**1** & 201.47 & 1000 & 2 & 1-1 & 70 & (0, 1.2, 17.86) & Ward 2, Ward 4 \\ & & & & 1-2 & 30 & (6, 1.25,8.35) & Ward 2, Ward 4 \\
**2** & 718.33 & 1000 & 1 & 2-1 & 100 & (0, 2.4, 16.31) & Ward 2, Ward 4 \\
**3** & 523.82 & 523.82 & 3 & 3-1 & 25 & (0, 6.5, 12.94) & Ward 3 \\ & & & & 3-2 & 40 & (0, 4.56, 12.39) & Ward 3 \\ & & & & 3-3 & 35 & (0, 7.6, 5.54) & Ward 3 \\
**4** & 840.38 & 840.38 & 1 & 4-1 & 100 & (0, 3.4, 18.99) & Ward 4 \\
**5** & 560 & 560 & 1 & 5-1 & 100 & (12, 4.1, 22.81) & Ward 4 \\
**6** & na & 172.91 & 4 & 6-1 & 45 & (0,0.6) & Ward 1, Ward 5, Ward 6 \\ & & & & 6-2 & 35 & (0,0,120) & Ward 1, Ward 5, Ward 6 \\ & & & & 6-3 & 15 & (120,0,216) & Ward 1, Ward 5, Ward 6 \\ & & & & 6-4 & 5 & (336,0,168) & Ward 1, Ward 5, Ward 6 \\ \hline \end{tabular}
\end{table}
Table 8: New patient type information and bounds
\begin{table}
\begin{tabular}{l l l l l} \hline \(\delta\) & Alterations & New Case Mix (\(\mathbf{n}^{1}\)) & Ward Util (\%) \\ & N (N (\(\mathbf{n}-\delta\)) & & (ICU, OT, WI, WI, WI, WI, WI, WI, WI, WI) \\
**50** & (-0.0025, -0.0026, -0.001, -0.002, & (45.41, 390.54, 163.48, 81.74, 227.06, 50) & (67.56, 100, 0, 9.02, 19.75, 81.73, 0, 53.01) \\ & -0.001, 100) & (-0.0025, -0.0026, -0.001, -0.002, & (45.41, 390.53, 163.48, 81.74, 227.06, 100) & (93.55, 100, 100, 0, 19.75, 84.95, 100, 22.69) \\ & -0.001, 100) & 1008.2 (908.22) & \\
**125** & (-45.41, -52.79, -27.65, -44.36, & (0, 33.37, 75.15, 35.83, 37.38, 197.5, 125) & (100, 80.54, 100, 0, 16.41, 66.28, 100, 49.2) \\ & -29.56, 125) & 833.46 (708.46) & \\
**150** & (-45.41, -182.25, -95.47, -81.74, & (0, 208.29, 68.01, 0, 125, 150) & (100, 44.62, 100, 0, 8.22, 35.59, 100, 75.71) \\ & -102.06, 150) & 551.3 (401.3) & \\ \hline \end{tabular}
\end{table}
Table 9: Application of CMA-EQ approach
Table 9 shows that the hospital can meet covid-19 patient demands quite easily, up to a point, roughly 12-13 patients/week, and has some capacity to spare. As the number increases further, the original cohort of different patient types is greatly affected and maintaining an equitable decrease reduces overall outputs considerably. However, some of the original cohort can still be treated. The exact number is shown in brackets in column 3.
Table 10, however, shows that if equity does not matter, then the hospital can still treat a high number of patients, relative to "pre-covid" times. However, some patient types can be exploited, like T5. The model found that reducing T5 admissions provides the capacity to treat most of the original cohort and the new covid patients.
Table 11 shows the SSQ approach, and demonstrates its "in-between" behaviour, which is a little more equitable than the CMA-LIN variant. Types T1 and T5 are reduced the most.
Final Remarks:The above analysis could be repeated for any layout alteration that is being considered and for any sized hospital. An iterative approach considering a sequence of changes is also appropriate. In the above situation, if we later decided that more than 21.5 covid-19 patients were to be treated per week, another reconfiguration would have to be envisaged and analysed.
## 5 Conclusions
This article proposes analytical techniques to support hospital case mix planning. This emerging topic considers the ramifications of treating different patient cohorts and ultimately seeks to identify the type of patient cohort that should be treated, above all others. Ultimately, the choice of case mix is synonymous with choosing a single Pareto optimal solution in a multi-criteria objective space.
Though the idea of CMP is appealing, the reality is that most hospitals do not apply any formal CMP techniques. They operate dynamically and treat patients as they emerge, considering severity of illness to prioritise one patient type over another. The master surgical schedule is also influential, chosen to satisfy surgeons and their availability, at the expense of all other considerations.
\begin{table}
\begin{tabular}{l l l l} \(\mathbf{\delta}\) & Alterations & New Case Mix (\(\mathbf{n}^{1}\)) & Ward Util (\%) \\ & N (\(\mathbf{N}-\mathbf{\delta}\)) & (ICU, OT, WI, WZ, W3, W4, W5, W6) \\ \hline
**50** & (0, 0, -0.005, 0, 50) & (45.41, 390.54, 163.48, 81.74, 227.06, 50) & (67.66, 100, 0, 9.02, 19.75, 81.73, 0, 53.01) \\ & & 958.22 (908.22) & \\
**100** & (0, 0, -0.005, 0, 0, 100) & (45.41, 390.53, 163.48, 81.74, 227.06, 100) & (93.55, 100, 100, 0, 19.75, 84.95, 44.2, 50.6) \\ & & 1008.2 (908.22) & \\
**125** & (0, 0, 0, 0, -36.37, 125) & (45.41, 390.54, 163.48, 81.74, 190.69, 125) & (100, 95.34, 100, 0, 19.75, 79.75, 89.58, 54.41) \\ & & 996.86 (781.86) & \\
**150** & (0, 0, 0, -108.87, 150) & (45.41, 390.54, 163.48, 81.74, 118.19, 150) & (100, 86.05, 100, 0, 19.75, 69.38, 100, 75.71) \\ & & 94.36 (799.36) & \\
**172** & (0, 0, 0, 0, -172.67, 172) & (45.41, 390.54, 163.48, 81.74, 54.39, 172) & (100, 77.88, 100, 0, 19.75, 60.25, 100, 99.03) \\ & & 907.56 (735.56) & \\ \end{tabular}
\end{table}
Table 10: Application of CMA-LIN approach
\begin{table}
\begin{tabular}{l l l} \(\mathbf{\delta}\) & Alterations & New Case Mix (\(\mathbf{n}^{1}\)) & Ward Util (\%) \\ & N (\(\mathbf{N}-\mathbf{\delta}\)) & (ICU, OT, WI, WZ, W3, W4, W5, W6) \\ \hline
**50** & (-0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, 50) & (45.41, 390.54, 163.48, 81.74, 227.06, 50) & (67.66, 100, 0, 0, 19.75, 84.94, 0, 53.01) \\ & -0.01, 50) & 958.17 (908.17) \\
**100** & (-0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, -0.01, 100) & (45.41, 390.53, 163.48, 81.74, 227.06, 100) & (93.55, 100, 66.77, 0, 19.75, 84.95, 62.67, 52.43) \\ & -0.01, 100) & 1008.2 (908.22) \\
**125** & (-16.23, -0.012, -0.016, -0.
To improve the state of the art, we consider how an alteration to an existing patient case mix either frees up capacity for other patient types or eliminates it. Our mathematical optimization model identifies the impact on other patient types and identifies how many extra patients of each type can be treated, or which patient types need to be reduced, and by how much. Our approach provides hospital planners a sensitivity analysis that may be used to inform and guide an iterative approach, to obtain the most appealing case mix.
We also developed some techniques to assess, compare and critique competing case mixes. A score based upon proximity to the ideal was first proposed. Measures of similarity and dissimilarity were then proposed. These are based upon the concept of \(\epsilon\)-dominance. Last, an approach to measure trade-offs and to aggregate those into a net gain and net loss was developed. These statistics provide an end-user with a means of judging overall merit, and a way to judge which case mix is superior.
In summary, the proposed approach forces an end user to clarify and disclose their beliefs around the value of treating patient types in different numbers, and provides a means to adapt those beliefs or requirements. In this article, the number of patients of each type was treated as the main objective. However, other objectives may also be considered, like reimbursement or revenue. Regarding the uptake of these methods, graphical user interfaces were proposed, implemented, and tested. The resulting decision support tool looks viable, and further developments are being considered.
**Acknowledgements:** This research was funded by the Australian Research Council (ARC) Linkage Grant LP 180100542 and supported by the Princess Alexandra Hospital and the Queensland Children's Hospital in Brisbane, Australia.
|
2303.15455 | A new regularisation for time-fractional backward heat conduction
problem | It is well-known that the backward heat conduction problem of recovering the
temperature $u(\cdot, t)$ at a time $t\geq 0$ from the knowledge of the
temperature at a later time, namely $g:= u(\cdot, \tau)$ for $\tau>t$, is
ill-posed, in the sense that small error in $g$ can lead to large deviation in
$u(\cdot, t)$. However, in the case of a time fractional backward heat
conduction problem (TFBHCP), the above problem is well-posed for $t>0$ and
ill-posed for $t=0$. We use this observation to obtain stable approximate
solutions for the TFBHCP for $t=0$, and derive error estimates under suitable
source conditions. We shall also provide some numerical examples to illustrate
the approximation properties of the regularized solutions. | M. Thamban Nair, P. Danumjaya | 2023-02-28T12:30:49Z | http://arxiv.org/abs/2303.15455v1 | # A new regularisation for time-fractional backward heat conduction problem
###### Abstract.
It is well-known that the backward heat conduction problem of recovering the temperature \(u(\cdot,t)\) at a time \(t\geq 0\) from the knowledge of the temperature at a later time, namely \(g:=u(\cdot,\tau)\) for \(\tau>t\), is ill-posed, in the sense that small error in \(g\) can lead to large deviation in \(u(\cdot,t)\). However, in the case of a time fractional backward heat conduction problem (TFBHCP), the above problem is well-posed for \(t>0\) and ill-posed for \(t=0\). We use this observation to obtain stable approximate solutions for the TFBHCP for \(t=0\), and derive error estimates under suitable source conditions. We shall also provide some numerical examples to illustrate the approximation properties of the regularized solutions.
## 1. Introduction
For \(0<\alpha<1\), consider the time-fractional heat equation
\[\frac{\partial^{\alpha}u}{\partial t^{\alpha}}=\frac{\partial^{2}u}{\partial x ^{2}},\quad 0<x<\pi. \tag{1}\]
In the above, we used the \(\alpha\)-derivative of \(u\) with respect to \(t\) in the _Caputo sense_. That is, if \(\varphi\) is a real valued differentiable function on an open interval of the form \((0,a)\) for some \(a>0\),
\[\frac{d^{\alpha}\varphi}{dt^{\alpha}}(t)=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t }\frac{\varphi^{\prime}(s)}{(t-s)^{\alpha}}ds,\quad 0<t<a.\]
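For readers who wish to evaluate the Caputo derivative numerically, the following is a minimal sketch (in Python) of the standard L1 discretisation on a uniform grid; it is included only as an illustration of the definition and is not part of the regularisation analysis that follows.

```python
import math

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order alpha in (0, 1).

    u holds samples u(0), u(dt), ..., u(N*dt); the value returned approximates
    the derivative at the final grid point t = N*dt.
    """
    n = len(u) - 1
    c = dt ** (-alpha) / math.gamma(2 - alpha)
    total = 0.0
    for j in range(n):
        w = (n - j) ** (1 - alpha) - (n - j - 1) ** (1 - alpha)
        total += w * (u[j + 1] - u[j])
    return c * total

# Check against a known case: for u(t) = t, the Caputo derivative equals
# t^(1 - alpha) / Gamma(2 - alpha).
alpha, dt, N = 0.5, 1e-3, 1000
u = [j * dt for j in range(N + 1)]
t = N * dt
print(caputo_l1(u, dt, alpha), t ** (1 - alpha) / math.gamma(2 - alpha))
```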
It is to be observed that for \(\alpha=1\), the equation (1) reduces to the ordinary heat equation
\[\frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}},\quad 0<x <\pi,\]
and in that case, under the boundary condition
\[u(0,t)=0=u(\pi,t),\quad t>0, \tag{2}\]
and initial condition
\[u(x,0)=f_{0}(x),\quad 0<x<\pi \tag{3}\]
the solution \(u(\cdot,t)\) has the Fourier representation
\[u(\cdot,t)=\sum_{n=1}^{\infty}e^{-\lambda_{n}^{2}t}\langle f_{0},\varphi_{n}\rangle\varphi_{n},\quad t\geq 0, \tag{4}\]
where \(\lambda_{n}^{2}=n^{2}\), \(n\in\mathbb{N}\), are the eigenvalues and \(\varphi_{n}(x)=\sqrt{2/\pi}\,\sin(nx)\) the corresponding orthonormal eigenfunctions of the operator \(-\frac{d^{2}}{dx^{2}}\) on \((0,\pi)\) with the boundary condition (2), and \(\langle\cdot,\cdot\rangle\) denotes the inner product in \(L^{2}(0,\pi)\).

For \(0<\alpha<1\), the solution of (1) satisfying (2) and (3) has the analogous representation in terms of the Mittag-Leffler function
\[E_{\alpha}(z):=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)},\quad z\in\mathbb{R}, \tag{5}\]
namely,
\[u_{\alpha}(\cdot,t)=\sum_{n=1}^{\infty}E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})\langle f_{0},\varphi_{n}\rangle\varphi_{n},\quad t\geq 0. \tag{6}\]
Analogous to the well-studied ordinary _backward heat conduction problem_, let us consider the following inverse problem, the _time-fractional backward heat conduction problem_ (TFBHCP) associated with (1) and (2):
**Problem (\(P_{t}\)):** Knowing \(g:=u_{\alpha}(\cdot,\tau)\) for some \(\tau>0\), find \(u_{\alpha}(\cdot,t)\) for \(0\leq t<\tau\).
Many studies have shown that the fractional diffusion equation model is appropriate for investigating problems arising in the areas of spatially disordered systems, porous media, fractal media, turbulent fluids and plasmas, biological media with traps, and stock price movements, and so on (see [9, 11], and the references therein). The regularization theory for the inverse problems associated with fractional-order PDEs is still in its infancy.
We shall see that the inverse problem \((P_{t})\) is well-posed if \(0<t<\tau\) and ill-posed if \(t=0\). This observation and the subsequent analysis lead us to the conclusion that \(\{P_{t}:0<t<\tau\}\) gives a _regularization family_ for obtaining stable approximate solutions for the ill-posed inverse problem \((P_{0})\). To our knowledge, no study has been carried out using the above observation, though various regularization methods have been discussed recently (see, e.g. [11, 1, 3, 5], and the references therein). We shall also provide estimates for the error \(\|u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)\|_{2}\) under a certain a priori source condition.
An outline of this paper is as follows. In Section 2, we discuss the ill-posedness of the time fractional backward heat conduction problem (TFBHCP). Section 3 deals with an operator theoretic formulation of the inverse problems. The new regularization family for the ill-posed inverse problem \(P_{0}\) is introduced and its convergence is proved in Section 4. In Sections 5 and 6, we derive the error estimates for the noisy data and source conditions, respectively. Finally, we perform some numerical experiments to validate the theoretical results in Section 7.
## 2. Ill-Posedness of the Inverse Problem
Let \(g=u_{\alpha}(\cdot,\tau)\). Then, from equation (6), we have
\[g=\sum_{n=1}^{\infty}E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})\langle f_{0}, \varphi_{n}\rangle\varphi_{n}. \tag{7}\]
Hence,
\[\langle f_{0},\varphi_{n}\rangle=\frac{\langle g,\varphi_{n}\rangle}{E_{ \alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\quad\forall\,n\in\mathbb{N}. \tag{8}\]
This shows that \(g:=u_{\alpha}(\cdot,\tau)\) must satisfy the _Picard condition_:
\[\sum_{n=1}^{\infty}\frac{|\langle g,\varphi_{n}\rangle|^{2}}{E_{\alpha}(-\lambda _{n}^{2}\tau^{\alpha})^{2}}<\infty. \tag{9}\]
By Lemma 1.1, it is to be observed that
\[\frac{1}{C_{2}}\Gamma(1-\alpha)(1+\lambda_{n}^{2}\tau^{\alpha})\leq\frac{1}{E _{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})^{2}}\leq\frac{1}{C_{1}}\Gamma(1- \alpha)(1+\lambda_{n}^{2}\tau^{\alpha}).\]
Therefore, Picard condition (9) on \(g\) is equivalent to the requirement
\[\sum_{n=1}^{\infty}(1+\lambda_{n}^{2}\tau^{\alpha})|\langle g,\varphi_{n} \rangle|^{2}<\infty \tag{10}\]
which is again equivalent to
\[\sum_{n=1}^{\infty}n^{2}|\langle g,\varphi_{n}\rangle|^{2}<\infty.\]
Using (8), the representation of \(u_{\alpha}(\cdot,t)\) in (6) takes the form
\[u_{\alpha}(\cdot,t)=\sum_{n=1}^{\infty}\frac{E_{\alpha}(-\lambda_{n}^{2}t^{ \alpha})}{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\langle g,\varphi_{n} \rangle\varphi_{n} \tag{11}\]
so that
\[\|u_{\alpha}(\cdot,t)\|^{2}=\sum_{n=1}^{\infty}\Big{|}\frac{E_{\alpha}(- \lambda_{n}^{2}t^{\alpha})}{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\Big{|} ^{2}|\langle g,\varphi_{n}\rangle|^{2}. \tag{12}\]
Again, using Lemma 1.1, we have
\[\frac{C_{1}}{C_{2}}\frac{(1+\lambda_{n}^{2}\tau^{\alpha})}{(1+\lambda_{n}^{2} t^{\alpha})}\leq\frac{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})}{E_{\alpha}(- \lambda_{n}^{2}\tau^{\alpha})}\leq\frac{C_{2}}{C_{1}}\frac{(1+\lambda_{n}^{2} \tau^{\alpha})}{(1+\lambda_{n}^{2}t^{\alpha})} \tag{13}\]
Note that,
\[\frac{(1+\lambda_{n}^{2}\tau^{\alpha})}{(1+\lambda_{n}^{2}t^{\alpha})}\geq \frac{\lambda_{n}^{2}\tau^{\alpha}}{(\lambda_{n}^{2}+\lambda_{n}^{2}t^{\alpha })}=\frac{\tau^{\alpha}}{(1+t^{\alpha})},\]
and for \(0<t\leq\tau\),
\[\frac{(1+\lambda_{n}^{2}\tau^{\alpha})}{(1+\lambda_{n}^{2}t^{\alpha})}\leq \frac{(\lambda_{n}^{2}+\lambda_{n}^{2}\tau^{\alpha})}{\lambda_{n}^{2}t^{\alpha }}=\frac{(1+\tau^{\alpha})}{t^{\alpha}}.\]
Hence, for \(0<t\leq\tau\),
\[\frac{C_{1}}{C_{2}}\frac{\tau^{\alpha}}{(1+t^{\alpha})}\leq\frac{E_{\alpha}(- \lambda_{n}^{2}t^{\alpha})}{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\leq \frac{C_{2}}{C_{1}}\frac{(1+\tau^{\alpha})}{t^{\alpha}}. \tag{14}\]
Hence, the representation (12) together with (14) imply that if \(0<t\leq\tau\), then
\[\frac{C_{1}}{C_{2}}\frac{\tau^{\alpha}}{(1+t^{\alpha})}\|g\|\leq\|u_{\alpha}( \cdot,t)\|\leq\frac{C_{2}}{C_{1}}\frac{(1+\tau^{\alpha})}{t^{\alpha}}\|g\| \tag{15}\]
and if \(t=0\), then (12) and (13) imply
\[\|u_{\alpha}(\cdot,0)\|^{2}\geq\Big{(}\frac{C_{1}}{C_{2}}\Big{)}^{2}\sum_{n=1}^{ \infty}(1+\lambda_{n}^{2}\tau^{\alpha})^{2}|\langle g,\varphi_{n}\rangle|^{2}. \tag{16}\]
Note that, by (10) the series on the right hand side of the above inequality converges. However, corresponding to an initial temperature \(\tilde{u}_{\alpha}(\cdot,0)\), if the temperature at time \(\tau\) is \(\tilde{g}\), then the above arguments lead to
\[\|u_{\alpha}(\cdot,0)-\tilde{u}_{\alpha}(\cdot,0)\|^{2}\geq\Big{(}\frac{C_{1} }{C_{2}}\Big{)}^{2}\sum_{n=1}^{\infty}(1+\lambda_{n}^{2}\tau^{\alpha})^{2}| \langle g-\tilde{g},\varphi_{n}\rangle|^{2}.\]
From this we obtain
\[\|u_{\alpha}(\cdot,0)-\tilde{u}_{\alpha}(\cdot,0)\|\geq(1+\lambda_{n}^{2}\tau ^{\alpha})\Big{(}\frac{C_{1}}{C_{2}}\Big{)}\|g-\tilde{g}\|\quad\forall\,n\in \mathbb{N}. \tag{17}\]
This shows that a small error in \(g\) can lead to a large deviation in the solution \(u_{\alpha}(\cdot,0)\), even when the data satisfy the Picard condition as in (9). Thus, from the inequalities in (15) and (17), we can infer the following.
**Theorem 2.1**.: _Let \(C_{1},C_{2}\) and \(\alpha\) be as in Lemma 1.1. Then the TFBHCP \((P_{t})\) is well-posed for \(0<t<\tau\) and the problem \((P_{0})\) is ill-posed._
What we are interested in is to find stable approximate solutions for the ill-posed inverse problem \((P_{0})\).
## 3. Operator Theoretic formulation of the inverse problems
For a few observations on operators that appear in this section, we shall make use of the following proposition based on basic results from functional analysis. For the sake of completeness of the exposition, we provide its proof as well.
**Proposition 3.1**.: _Let \(\mathcal{H}\) be an infinite dimensional separable Hilbert space and let \(\{v_{n}:n\in\mathbb{N}\}\) be an orthonormal basis of \(\mathcal{H}\). Let \((\mu_{n})\) be a bounded sequence of real numbers and \(A:\mathcal{H}\to\mathcal{H}\) be defined by_
\[Av=\sum_{n=1}^{\infty}\mu_{n}\langle v,v_{n}\rangle_{\mathcal{H}}v_{n},\quad v \in\mathcal{H},\]
_where \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) denotes the inner product on \(\mathcal{H}\). Then \(A\) is a self-adjoint, bounded linear operator. Further, we have the following:_
1. _If \(\mu_{n}\to 0\), then \(A\) is a compact operator._
2. _If \(\mu_{n}\neq 0\) for all \(n\in\mathbb{N}\), then \(A\) is injective and the range of \(A\) is dense._
3. _If there exists \(c_{0}>0\) such that \(|\mu_{n}|\geq c_{0}\) for all \(n\in\mathbb{N}\), then \(A\) is injective, the range of \(A\) is closed, and its inverse from the range is continuous._
_In particular, if assumptions in (ii) and (iii) are satisfied, then \(A\) is bijective and its inverse is continuous._
Proof.: Using the boundedness of \((\mu_{n})\), it follows from the Riesz-Fischer theorem (cf. [7]) that \(A\) is a bounded linear operator on \(\mathcal{H}\) with \(\|A\|\leq\sup\{|\mu_{n}|:n\in\mathbb{N}\}\). Also, since \(\mu_{n}\in\mathbb{R}\), we have that \(A\) is a self-adjoint operator.
(i) Suppose \(\mu_{n}\to 0\). For \(n\in\mathbb{N}\), let \(A_{n}:\mathcal{H}\to\mathcal{H}\) be defined by
\[A_{n}v=\sum_{j=1}^{n}\mu_{j}\langle v,v_{j}\rangle_{\mathcal{H}}v_{j},\quad v \in\mathcal{H}.\]
Then we see that
\[\|A-A_{n}\|\leq\sup\{|\mu_{j}|:j>n\}\to 0.\]
Since each \(A_{n}\) is a finite rank operator, it follows that (cf. Theorem 9.- in [7]) \(A\) is a compact operator.
(ii) Suppose \(\mu_{n}\neq 0\) for all \(n\in\mathbb{N}\). Then for \(v\in\mathcal{H}\), we have
\[Av=0\iff\mu_{n}\langle v,v_{n}\rangle_{\mathcal{H}}=0\ \forall\,n\in\mathbb{N} \iff\langle v,v_{n}\rangle_{\mathcal{H}}=0\ \forall\,n\in\mathbb{N}\iff v=0.\]
Hence, \(A\) is injective. Now, to see that \(R(A)\), the range of \(A\), is dense in \(\mathcal{H}\), let \(w\in\mathcal{H}\) be such that \(\langle Av,w\rangle=0\) for all \(v\in\mathcal{H}\). Then, in particular, we have
\[\mu_{n}\langle v_{n},w\rangle=\langle Av_{n},w\rangle=0\quad\forall n\in\mathbb{N}.\]
From this, using again the fact that \(\mu_{n}\neq 0\) for all \(n\in\mathbb{N}\), we have \(w=0\). Thus, we have proved that \(R(A)^{\perp}=\{0\}\), which implies, by projection theorem, that \(R(A)\) is dense.
(iii) Suppose there exists \(c_{0}>0\) such that \(|\mu_{n}|\geq c_{0}\) for all \(n\in\mathbb{N}\). Then we have
\[\|Av\|_{\mathcal{H}}\geq c_{0}\|v\|_{\mathcal{H}}\quad\forall\,v\in\mathcal{H}.\]
From this, the conclusions follow.
The last part of the theorem is obvious.
Now, consider the operator \(A_{\alpha}:L^{2}[0,\pi]\to L^{2}[0,\pi]\) defined by
\[A_{\alpha}f=\sum_{n=1}^{\infty}E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha}) \langle f,\varphi_{n}\rangle\varphi_{n},\ f\in L^{2}[0,\pi].\]
Using Lemma 1.1, we see that
\[E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})\to 0\quad\text{as}\quad n\to\infty.\]
Hence, by Proposition 3.1, \(A_{\alpha}\) is a compact operator of infinite rank. Therefore, in view of equation (7), the ill-posedness of the problem \((P_{0})\) also follows from the observation that (7) is the same as solving the compact operator equation
\[A_{\alpha}f=g, \tag{18}\]
which is an ill-posed problem.
Again, by Proposition 3.1, \(A_{\alpha}\) is one-one and its range is dense in \(L^{2}[0,\pi]\). Hence, if \(g\in L^{2}[0,\pi]\) satisfies the Picard condition (9), then it is in the range of \(A_{\alpha}\) and \(f_{0}:=u_{\alpha}(\cdot,0)\) is the generalized solution of the operator equation (18), that is,
\[f_{0}=A_{\alpha}^{\dagger}g,\]
where \(A_{\alpha}^{\dagger}\) denotes the Moore-Penrose inverse of \(A_{\alpha}\) (cf. [8]).
Next, we observe from (6) that
\[\langle f_{0},\varphi_{n}\rangle=\frac{\langle u_{\alpha}(\cdot,t),\varphi_{n}\rangle}{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})}\quad\forall\,n\in\mathbb{N},\]
so that equation (7) leads to
\[g=\sum_{n=1}^{\infty}\frac{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}{E_{ \alpha}(-\lambda_{n}^{2}t^{\alpha})}\langle u_{\alpha}(\cdot,t),\varphi_{n} \rangle\varphi_{n}. \tag{19}\]
Interchanging \(t\) and \(\tau\) in (14), we obtain
\[\frac{C_{1}}{C_{2}}\frac{t^{\alpha}}{(1+\tau^{\alpha})}\leq\frac{E_{\alpha}(- \lambda_{n}^{2}\tau^{\alpha})}{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})}\leq \frac{C_{2}}{C_{1}}\frac{(1+t^{\alpha})}{\tau^{\alpha}}.\]
Hence, by Proposition 3.1, it follows that for \(0<t<\tau\), \(B_{\alpha,t}:L^{2}[0,\pi]\to L^{2}[0,\pi]\) defined by
\[B_{\alpha,t}f=\sum_{n=1}^{\infty}\frac{E_{\alpha}(-\lambda_{n}^{2}\tau^{ \alpha})}{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})}\langle f,\varphi_{n}\rangle \varphi_{n},\quad f\in L^{2}[0,\pi],\]
is a bijective bounded linear operator with a continuous inverse. Thus, the problem \((P_{t})\) of recovering \(u_{\alpha}(\cdot,t)\) from \(g\), which corresponds to equation (19), is the same as the problem of solving the operator equation
\[B_{\alpha,t}f=g,\]
which is a well-posed problem.
## 4. The Regularization
In view of the expression (11), for each \(\alpha\in(0,1)\) and \(t\in(0,\tau)\), we define the map \(R_{t,\alpha}:L^{2}[0,\pi]\to L^{2}[0,\pi]\) as
\[R_{t,\alpha}\psi=\sum_{n=1}^{\infty}\frac{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha })}{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\langle\psi,\varphi_{n}\rangle \varphi_{n},\quad\psi\in L^{2}[0,\pi]. \tag{20}\]
In view of (14), we see that \(R_{t,\alpha}:L^{2}[0,\pi]\to L^{2}[0,\pi]\) is a well-defined bounded linear operator with
\[\|R_{t,\alpha}\|\leq\frac{C_{2}}{C_{1}}\Big{(}\frac{1+\tau^{\alpha}}{t^{\alpha}}\Big{)}.\]
Note that if \(g=u_{\alpha}(\cdot,\tau)\), then by (11),
\[R_{t,\alpha}g=u_{\alpha}(\cdot,t),\quad 0<t<\tau.\]
The following theorem shows that, for each \(\alpha\in(0,1)\), the family \(\{R_{t,\alpha}:0<t<\tau\}\) of operators defined above is a regularization family for the ill-posed inverse problem \(P_{0}\).
**Theorem 4.1**.: _If \(g=u_{\alpha}(\cdot,\tau)\), then_
\[\|u_{\alpha}(\cdot,0)-u_{\alpha}(\cdot,t)\|\to 0\quad\text{as}\quad t\to 0.\]
Proof.: By the representation of \(u_{\alpha}(\cdot,t)\) in (6), we have
\[u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)=\sum_{n=1}^{\infty}[E_{\alpha}(- \lambda_{n}^{2}t^{\alpha})-1]\langle f_{0},\varphi_{n}\rangle\varphi_{n}.\]
so that
\[\|u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)\|^{2}=\sum_{n=1}^{\infty}|E_{\alpha }(-\lambda_{n}^{2}t^{\alpha})-1|^{2}|\langle f_{0},\varphi_{n}\rangle|^{2}. \tag{21}\]
From the definition of \(E_{\alpha}(\cdot)\), we have
\[E_{\alpha}(z)-1=\sum_{k=1}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)}=z\sum_{k=1}^{\infty}\frac{z^{k-1}}{\Gamma(\alpha k+1)}=z\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\alpha+1)}=zE_{\alpha,\alpha+1}(z),\]
where for \(\alpha,\beta>0\), \(E_{\alpha,\beta}(\cdot)\) is the _generalized Mittag-Leffler function_ defined by
\[E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)}.\]
Hence, we have
\[E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1=(-\lambda_{n}^{2}t^{\alpha})E_{ \alpha,\alpha+1}(-\lambda_{n}^{2}t^{\alpha}). \tag{22}\]
It is known (cf. [10]) that for \(0<\alpha<2\) and \(\beta\in\mathbb{R}\), there exists \(C>0\) such that
\[|E_{\alpha,\beta}(z)|\leq\frac{C}{1+|z|}\]
for all \(z\in\mathbb{C}\) and for \(\mu<|\arg(z)|\leq\pi\), where
\[\frac{\pi\alpha}{2}<\mu<\min\{\pi,\pi\alpha\}.\]
Now, taking \(z=-\lambda_{n}^{2}t^{\alpha}\) and \(0<\alpha<1\), we have \(\arg(z)=\pi\) and \(\min\{\pi,\pi\alpha\}=\pi\alpha\). Hence, in this case the required condition on \(\arg(z)\) is automatically satisfied. Thus, we have
\[|E_{\alpha,\alpha+1}(-\lambda_{n}^{2}t^{\alpha})|\leq\frac{C}{1+\lambda_{n}^{ 2}t^{\alpha}}.\]
Hence,
\[|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|=|-\lambda_{n}^{2}t^{\alpha}E_{ \alpha,\alpha+1}(-\lambda_{n}^{2}t^{\alpha})|\leq\frac{C\lambda_{n}^{2}t^{ \alpha}}{1+\lambda_{n}^{2}t^{\alpha}}\leq C. \tag{23}\]
In particular, \(|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|\to 0\) as \(t\to 0\) for each \(n\in\mathbb{N}\). Thus,
\[|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|^{2}|\langle f_{0},\varphi_{n} \rangle|^{2}\to 0\text{ as }\,t\to 0\text{ for each }\,n\in\mathbb{N}\]
and
\[|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|^{2}|\langle f_{0},\varphi_{n} \rangle|^{2}\leq C^{2}|\langle f_{0},\varphi_{n}\rangle|^{2}\]
with \(\sum_{n=1}^{\infty}|\langle f_{0},\varphi_{n}\rangle|^{2}\leq\|f_{0}\|^{2}\). Hence, by the dominated convergence theorem, the relation (21) implies that
\[\|u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)\|\to 0\text{ as }\quad t\to 0.\]
This completes the proof.
## 5. Error Estimate under Noisy Data
If the data is noisy, say we have \(\tilde{g}\in L^{2}[0,\pi]\) in place of \(g\) such that
\[\|g-\tilde{g}\|\leq\delta\]
for some known noise level \(\delta>0\), then using the expression in (11), the corresponding solution at \(t\) can be taken as
\[\tilde{u}_{\alpha}(\cdot,t)=R_{\alpha,t}\tilde{g},\]
where \(R_{\alpha,t}\) is defined as in (20). Thus,
\[\tilde{u}_{\alpha}(\cdot,t)=\sum_{n=1}^{\infty}\frac{E_{\alpha}(-\lambda_{n} ^{2}t^{\alpha})}{E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})}\langle\tilde{g}, \varphi_{n}\rangle\varphi_{n}. \tag{24}\]
Hence, we obtain
\[\|u_{\alpha}(\cdot,t)-\tilde{u}_{\alpha}(\cdot,t)\|^{2}=\sum_{n=1}^{\infty}\Big{|} \frac{E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})}{E_{\alpha}(-\lambda_{n}^{2}\tau^{ \alpha})}\Big{|}^{2}|\langle g-\tilde{g},\varphi_{n}\rangle|^{2}\]
so that using (14),
\[\|u_{\alpha}(\cdot,t)-\tilde{u}_{\alpha}(\cdot,t)\|\leq\frac{C_{2}}{C_{1}} \Big{(}\frac{1+\tau^{\alpha}}{t^{\alpha}}\Big{)}\delta.\]
Thus, we have proved:
**Theorem 5.1**.: _For \(0<t<\tau\),_
\[\|u_{\alpha}(\cdot,t)-\tilde{u}_{\alpha}(\cdot,t)\|\leq\frac{C_{2}}{C_{1}} \Big{(}\frac{1+\tau^{\alpha}}{t^{\alpha}}\Big{)}\delta.\]
Thus we have proved the following theorem.
**Theorem 5.2**.: _For \(0<t<\tau\),_
\[\|u_{\alpha}(\cdot,0)-\tilde{u}_{\alpha}(\cdot,t)\|\leq\|u_{\alpha}(\cdot,0)- u_{\alpha}(\cdot,t)\|+\frac{C_{2}(1+\tau^{\alpha})}{C_{1}}\frac{\delta}{t^{ \alpha}},\]
_where (by Theorem 4.1) \(\|u_{\alpha}(\cdot,0)-u_{\alpha}(\cdot,t)\|\to 0\) as \(t\to 0\)._
## 6. Error Estimates Under Source Conditions
**Assumption (A):** There exists an index function \(\varphi:(0,\infty)\to[0,\infty)\) such that
\[\|u_{\alpha}(\cdot,t)-f_{0}\|\leq c_{0}\varphi(t) \tag{25}\]
for some \(c_{0}>0\).
**Theorem 6.1**.: _Under the Assumption (A), let \(\psi(t):=t^{\alpha}\varphi(t)\) for \(t>0\), and for \(\delta>0\), let \(t_{\delta}:=\psi^{-1}(\delta)\). Then_
\[\|f_{0}-\tilde{u}_{\alpha}(\cdot,t_{\delta})\|=O(\varphi(\psi^{-1}(\delta))).\]
Proof.: By (25) and Theorem 5.1,
\[\|f_{0}-\tilde{u}_{\alpha}(\cdot,t)\| \leq \|u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)\|+\|u_{\alpha}(\cdot,t) -\tilde{u}_{\alpha}(\cdot,t)\|\] \[\leq c_{0}\varphi(t)+\,\frac{C_{2}(1+\tau^{\alpha})}{C_{1}}\frac{ \delta}{t^{\alpha}}\]
Note that
\[\varphi(t)=\frac{\delta}{t^{\alpha}}\iff\psi(t):=t^{\alpha}\varphi(t)=\delta.\]
Hence, by choosing \(t=t_{\delta}:=\psi^{-1}(\delta)\), we obtain
\[\|f_{0}-\tilde{u}_{\alpha}(\cdot,t_{\delta})\|=O(\varphi(t_{\delta}))=O(\varphi(\psi^{-1}(\delta))).\]
Now, we specify an index function \(\varphi\) and a source set \(M_{\varphi}\) such that (25) is satisfied whenever \(f_{0}\in M_{\varphi}\).
Let \(K_{\alpha}:L^{2}[0,\pi]\to L^{2}[0,\pi]\) be defined by
\[K_{\alpha}\varphi=\sum_{n=1}^{\infty}E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha}) \langle\varphi,\varphi_{n}\rangle\varphi_{n},\quad\varphi\in L^{2}[0,\pi].\]
Since \(E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})\in\mathbb{R}\) and \(E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})\to 0\) as \(n\to\infty\), it follows that (cf. [7]) \(K_{\alpha}\) is a compact, self-adjoint operator and
\[K_{\alpha}\varphi_{n}=E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})\varphi_{n}\quad\forall\,n\in\mathbb{N}. \tag{26}\]
In view of (7), we see that \(f_{0}:=u_{\alpha}(\cdot,0)\) is the solution of the compact operator equation
\[K_{\alpha}f=g.\]
Let
\[\mathcal{M}_{\alpha,\rho}:=\{K_{\alpha}u:\|u\|\leq\rho\}. \tag{27}\]
and assume that
\[f_{0}\in\mathcal{M}_{\alpha,\rho}.\]
Then \(f_{0}=K_{\alpha}u\) for some \(u\in L^{2}[0,\pi]\) with \(\|u\|\leq\rho\), and hence by (26), we have
\[\langle f_{0},\varphi_{n}\rangle=\langle K_{\alpha}u,\varphi_{n}\rangle= \langle u,K_{\alpha}\varphi_{n}\rangle=E_{\alpha}(-\lambda_{n}^{2}\tau^{ \alpha})\langle u,\varphi_{n}\rangle\quad\forall\,n\in\mathbb{N}.\]
Hence, from (21), we have
\[\|u_{\alpha}(\cdot,t)-f_{0}\|^{2}=\sum_{n=1}^{\infty}|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|^{2}|\langle f_{0},\varphi_{n}\rangle|^{2}=\sum_{n=1}^{\infty}|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|^{2}|E_{\alpha}(-\lambda_{n}^{2}\tau^{\alpha})|^{2}|\langle u,\varphi_{n}\rangle|^{2}.\]
Now, (23) and Lemma 1.1 imply that
\[|E_{\alpha}(-\lambda_{n}^{2}t^{\alpha})-1|\,|E_{\alpha}(-\lambda_{n}^{2}\tau^ {\alpha})|\leq C_{\alpha}\frac{\lambda_{n}^{2}t^{\alpha}}{(1+\lambda_{n}^{2}t ^{\alpha})(1+\lambda_{n}^{2}\tau^{\alpha})},\]
where
\[C_{\alpha}:=\frac{CC_{2}}{\Gamma(1-\alpha)}, \tag{28}\]
with \(C\) and \(C_{2}\) as in (22) and Lemma 1.1, respectively. Note that
\[\frac{\lambda_{n}^{2}t^{\alpha}}{(1+\lambda_{n}^{2}t^{\alpha})(1+\lambda_{n}^{ 2}\tau^{\alpha})}=\frac{(\lambda_{n}^{2}t^{\alpha})/(\lambda_{n}^{2}\tau^{ \alpha})}{(1+\lambda_{n}^{2}t^{\alpha})}\frac{\lambda_{n}^{2}\tau^{\alpha}}{(1 +\lambda_{n}^{2}\tau^{\alpha})}\leq\frac{t^{\alpha}}{\tau^{\alpha}}.\]
Thus, we arrive at the estimate
\[\|u_{\alpha}(\cdot,t)-f_{0}\|^{2}\leq C_{\alpha}^{2}\Big{(}\frac{t^{\alpha}}{\tau ^{\alpha}}\Big{)}^{2}\sum_{n=1}^{\infty}|\langle u,\varphi_{n}\rangle|^{2}.\]
Thus, we have proved the following theorem.
**Theorem 6.2**.: _If \(f_{0}\in\mathcal{M}_{\alpha,\rho}\), then_
\[\|u_{\alpha}(\cdot,t)-f_{0}\|\leq\rho C_{\alpha}\frac{t^{\alpha}}{\tau^{\alpha }},\]
_where \(\mathcal{M}_{\alpha,\rho}\) and \(C_{\alpha}\) are as in (27) and (28), respectively._
**Remark 6.3**.: Theorem 6.2 shows that the function \(\varphi\) defined by
\[\varphi(t)=t^{\alpha},\quad t>0,\]
satisfies the Assumption (A) with \(c_{0}=\rho C_{\alpha}/\tau^{\alpha}\).
In view of Theorem 6.2 and Theorem 5.2, if \(f_{0}\in\mathcal{M}_{\alpha,\rho}\), then we have
\[\|f_{0}-\tilde{u}_{\alpha}(\cdot,t)\| \leq \|u_{\alpha}(\cdot,t)-u_{\alpha}(\cdot,0)\|+\|u_{\alpha}(\cdot,t )-\tilde{u}_{\alpha}(\cdot,t)\|\] \[\leq \rho C_{\alpha}\frac{t^{\alpha}}{\tau^{\alpha}}+\frac{C_{2}(1+ \tau^{\alpha})}{C_{1}}\frac{\delta}{t^{\alpha}}.\]
Now,
\[\frac{t^{\alpha}}{\tau^{\alpha}}=\frac{\delta}{t^{\alpha}}\iff t^{2\alpha}= \tau^{\alpha}\delta\iff t^{\alpha}=\sqrt{\tau^{\alpha}\delta}\iff\frac{t^{ \alpha}}{\tau^{\alpha}}=\sqrt{\frac{\delta}{\tau^{\alpha}}}.\]
Thus, we have proved the following theorem.
**Theorem 6.4**.: _If \(f_{0}\in\mathcal{M}_{\alpha,\rho}\) and \(t_{\delta}:=\sqrt{\tau}\,\delta^{1/(2\alpha)}\), then_
\[\|\tilde{u}_{\alpha}(\cdot,t_{\delta})-f_{0}\|\leq\Big{(}\rho C_{\alpha}+\frac{C_{2}(1+\tau^{\alpha})}{C_{1}}\Big{)}\sqrt{\frac{\delta}{\tau^{\alpha}}},\]
_where \(\mathcal{M}_{\alpha,\rho}\) and \(C_{\alpha}\) are as in (27) and (28), respectively. In particular,_
\[\|\tilde{u}_{\alpha}(\cdot,t_{\delta})-f_{0}\|=O(\sqrt{\delta}).\]
**Remark 6.5**.: By the definition of \(\mathcal{M}_{\alpha,\rho}\) and from the standard regularization theory (cf. [2, 8]), it follows that the estimate obtained in Theorem 6.4 is optimal for the source set \(\mathcal{M}_{\alpha,\rho}\).
## 7. Numerical Illustrations
In this section, we shall consider some numerical examples to illustrate the level of approximation of the regularized solutions.
For numerical computations, we divide the given space domain \([0,\pi]\) into a finite number of equal subintervals with step size \(h\) where \(h=x_{i+1}-x_{i},\ i=0,1,2,\ldots,N-1\) with \(x_{0}=0\), and \(x_{N}=\pi\). All the simulations are carried out using MATLAB R2022a with step size \(h=\pi/100\).
We have observed that the problem of finding \(u_{\alpha}(\cdot,t)\), \(0<t\leq\tau\), from the knowledge of \(g:=u_{\alpha}(\cdot,\tau)\) is a well-posed problem, and that \(\|u_{\alpha}(\cdot,t)-f_{0}\|\to 0\) as \(t\to 0\) (see Theorems 2.1 and 4.1).
For computational purpose, we take \(\tau=1\) and consider \(g=u_{\alpha}(\cdot,\tau)\) obtained from (6) by taking \(f_{0}(x)=x(\pi-x)e^{-x},\ 0\leq x\leq\pi\), and then compute \(u_{\alpha}(\cdot,t)\) according to the formula (11).
To approximate the integrals involved in the computation of \(g\) and \(u_{\alpha}(\cdot,t)\), we make use of the composite trapezoidal rule. For the illustration of the convergence \(\|u_{\alpha}(\cdot,t)-f_{0}\|_{L^{2}}\to 0\) as \(t\to 0\), we take \(t_{i}=10^{-(i+2)}\) for \(i=1,2,\ldots,7\), compute the exact expression \(f_{0}(x)\) as well as \(u_{\alpha}(x,t_{i})\), and show them in Figures 1 to 4 for various \(\alpha\in\{0.2,\ 0.4,\ 0.6,\ 0.8\}\).
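As a rough, self-contained illustration of this procedure (the paper's experiments are carried out in MATLAB R2022a), the following Python sketch mirrors the computation for a truncated expansion. Two caveats: it assumes the standard Dirichlet eigenpairs \(\lambda_{n}=n\), \(\varphi_{n}(x)=\sqrt{2/\pi}\sin(nx)\) on \([0,\pi]\), which are not restated in this section, and it evaluates \(E_{\alpha}\) by its naive power series, which is numerically safe only for moderate arguments; hence only a few modes and \(\alpha=0.8\) are used, and a robust Mittag-Leffler routine would be needed to reproduce the full experiments.

```python
# Illustrative Python analogue of the experiment above (the paper uses MATLAB R2022a).
# Assumptions: lambda_n = n and phi_n(x) = sqrt(2/pi) sin(n x), the Dirichlet
# eigenpairs on [0, pi].  E_alpha is summed by its naive power series, which is
# reliable only for moderate arguments, so we keep alpha = 0.8 and a few modes.
import numpy as np
from mpmath import mp, mpf, gamma

mp.dps = 60

def E_alpha(z, alpha, terms=300):
    return float(sum(mpf(z) ** k / gamma(mpf(alpha) * k + 1) for k in range(terms)))

alpha, tau, n_modes = 0.8, 1.0, 4
x = np.linspace(0.0, np.pi, 101)                      # step size h = pi/100
h = x[1] - x[0]
trap = lambda y: h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])   # composite trapezoidal rule

f0 = x * (np.pi - x) * np.exp(-x)
phi = [np.sqrt(2 / np.pi) * np.sin(n * x) for n in range(1, n_modes + 1)]
c = [trap(f0 * phi[n - 1]) for n in range(1, n_modes + 1)]        # <f0, phi_n>
g_coef = [E_alpha(-n ** 2 * tau ** alpha, alpha) * c[n - 1]       # <g, phi_n>, g from (6)
          for n in range(1, n_modes + 1)]

def u(t):                                             # regularized solution, formula (11)
    return sum(E_alpha(-n ** 2 * t ** alpha, alpha) / E_alpha(-n ** 2 * tau ** alpha, alpha)
               * g_coef[n - 1] * phi[n - 1] for n in range(1, n_modes + 1))

f0_trunc = sum(c[n - 1] * phi[n - 1] for n in range(1, n_modes + 1))
for i in range(1, 8):
    t = 10.0 ** (-(i + 2))
    err = np.sqrt(trap((u(t) - f0_trunc) ** 2))
    print(f"t = {t:.0e}:  L2 error against the truncated f0 = {err:.3e}")
```

Within the truncated subspace, the printed error decreases as \(t\to 0\), reflecting the behaviour of Theorem 4.1.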
Figure 1. Solution profiles of \(f_{0}(x)\) and \(u_{\alpha}(x,t_{i})\) for \(\alpha=0.2\)
Figure 2. Solution profiles of \(f_{0}(x)\) and \(u_{\alpha}(x,t_{i})\) for \(\alpha=0.4\)
Figure 3. Solution profiles of \(f_{0}(x)\) and \(u_{\alpha}(x,t_{i})\) for \(\alpha=0.6\)
In Table 1, we show the error \(\|u_{\alpha}(\cdot,t)-f_{0}\|_{L^{2}}\) for \(t=t_{i}\), \(i=1,2,\ldots,7\) for different values of \(\alpha\). We observe that when \(t\) decreases, the error \(\|u_{\alpha}(\cdot,t)-f_{0}\|_{L^{2}}\) decreases. This validates our theoretical result in Theorem 4.1.
For the illustration of the case with noisy data, we take \(\tilde{g}(x)=g(x)+\frac{\delta}{2}\) with some noise level \(\delta>0\). Note that \(\|\tilde{g}-g\|_{L^{2}}\leq\delta\). Now we take \(t_{\delta}:=\sqrt{\tau}\,\delta^{1/(2\alpha)}\) as per Theorem 6.4 and compute \(\tilde{u}_{\alpha}(x,t_{\delta})\) using the formula (24) for several values of \(\delta\) and for some values of \(\alpha\).
Taking \(\delta_{i}=10^{-(i+2)},i=1,2,\ldots,7\) we compute the exact expression \(f_{0}\) as well as \(\tilde{u}_{\alpha}(x,t_{\delta_{i}})\) and show them in Figures 5 to 8 for various \(\alpha\in\{0.2,\ 0.4,\ 0.6,\ 0.8\}\).
Figure 6. Solution profiles of \(f_{0}(x)\) and \(\tilde{u}_{\alpha}(x,t_{\delta_{i}})\) for \(\alpha=0.4\)
Figure 7. Solution profiles of \(f_{0}(x)\) and \(\tilde{u}_{\alpha}(x,t_{\delta_{i}})\) for \(\alpha=0.6\)
In Table 2, we show the error \(\|\tilde{u}_{\alpha}(\cdot,t_{\delta})-f_{0}\|_{L^{2}}\) for different values of \(\alpha\) and \(\delta_{i},i=1,2,\ldots,7\). Note that when the value of \(\delta\) decreases, the error \(\|\tilde{u}_{\alpha}(\cdot,t_{\delta})-f_{0}\|_{L^{2}}\) decreases. This validates our theoretical result in Theorem 6.4.
Acknowledgement: The first author M. Thamban Nair gratefully acknowledges the support received from BITS Pilani, K.K. Birla Goa Campus, where he has been a Visiting Professor since August 1, 2023, after superannuation from I.I.T. Madras, Chennai.
|
2309.04503 | Quantum Algorithm for Maximum Biclique Problem | Identifying a biclique with the maximum number of edges bears considerable
implications for numerous fields of application, such as detecting anomalies in
E-commerce transactions, discerning protein-protein interactions in biology,
and refining the efficacy of social network recommendation algorithms. However,
the inherent NP-hardness of this problem significantly complicates the matter.
The prohibitive time complexity of existing algorithms is the primary
bottleneck constraining the application scenarios. Aiming to address this
challenge, we present an unprecedented exploration of a quantum computing
approach. Efficient quantum algorithms, as a crucial future direction for
handling NP-hard problems, are presently under intensive investigation, of
which the potential has already been proven in practical arenas such as
cybersecurity. However, in the field of quantum algorithms for graph databases,
little work has been done due to the challenges presented by the quantum
representation of complex graph topologies. In this study, we delve into the
intricacies of encoding a bipartite graph on a quantum computer. Given a
bipartite graph with n vertices, we propose a ground-breaking algorithm qMBS
with time complexity O^*(2^(n/2)), illustrating a quadratic speed-up in terms
of complexity compared to the state-of-the-art. Furthermore, we detail two
variants tailored for the maximum vertex biclique problem and the maximum
balanced biclique problem. To corroborate the practical performance and
efficacy of our proposed algorithms, we have conducted proof-of-principle
experiments utilizing IBM quantum simulators, of which the results provide a
substantial validation of our approach to the extent possible to date. | Xiaofan Li, Prasenjit Mitra, Rui Zhou, Wolfgang Nejdl | 2023-09-08T04:43:05Z | http://arxiv.org/abs/2309.04503v1 | # Quantum Algorithm for Maximum Biclique Problem
###### Abstract
Identifying a biclique with the maximum number of edges bears considerable implications for numerous fields of study and application, such as detecting anomalies in E-commerce transactions, discerning protein-protein interactions in biological studies, and refining the efficacy of social network recommendation algorithms. However, the inherent NP-hardness of this problem significantly complicates the matter. The prohibitive time complexity of existing algorithms is the primary bottleneck constraining the application scenarios. Another obstacle resides in the ever-increasing energy requirements for running these algorithms. The escalating power consumption not only exacerbates the economic cost but also poses environmental concerns, making it increasingly difficult to deploy these solutions in applications sustainably. Aiming to address these challenges, we present an unprecedented exploration of a quantum computing approach to solve the issues. Efficient quantum algorithms, as a crucial future direction for handling NP-hard problems, are presently under intensive investigation. Regular advancements in quantum hardware have gradually made quantum computing a more accessible, faster, and cost-effective tool. Its potential has already been proven in practical areas such as cybersecurity, marking a promising future for this technology. However, in the field of quantum algorithms for graph databases, little work has been done due to the challenges presented by the quantum representation of complex graph topologies. In this study, we delve into the intricacies of encoding a bipartite graph on a quantum computer. We further design a sub-procedure capable of recognizing whether a given subgraph constitutes a biclique of a given size. Given a bipartite graph with \(n\) vertices, we propose a ground-breaking algorithm, dubbed qMBS. This novel methodology can pinpoint a solution within \(O^{*}(2^{\frac{n}{2}})\) iterations of the subprocedure, illustrating a quadratic speed-up in terms of time complexity compared to the state-of-the-art algorithms. Further expanding the utility of qMBS, we detail two variants tailored for the maximum vertex biclique problem and the maximum balanced biclique problem. To corroborate the practical performance and efficacy of our proposed algorithms, we have conducted proof-of-principle experiments utilizing advanced quantum simulators available to date. The important feature of qMBS is its reversible computing manner, which, according to Landauer's Principle, holds substantial promise in dealing with applications with significantly reduced power consumption in the near future.
biclique, graph database, quantum algorithm
## I Introduction
**Problem.** A bipartite graph, represented as \(G(L,R,E)\), is structured around two separate and non-overlapping vertex sets, \(L\) and \(R\), and an edge set \(E\) that is a subset of the Cartesian product of \(L\) and \(R\), or \(E\subseteq L\times R\). A biclique is a particular type of subgraph, which consists of two vertex sets, \(A\) and \(B\). Here, \(A\) is a subset of \(L\) (\(A\subseteq L\)), and \(B\) is a subset of \(R\) (\(B\subseteq R\)). The distinguishing feature of a biclique is that every vertex in set \(A\) is connected to - or neighbors with - every vertex in set \(B\). Among all such bicliques, a maximum biclique is defined as the one that possesses the greatest number of edges. This essentially refers to the largest complete bipartite subgraph. The focus of this study is the Maximum Biclique Problem (MBP), the challenge of finding such a maximum biclique within a given bipartite graph.
**Significance.** The concept of a biclique is foundational to an array of applications across diverse fields:
1. In the realm of E-commerce, the anomaly detection process [1, 2] often necessitates identifying clusters of customers who collectively purchase a set of products. Such coordinated behaviors frequently flag potential instances of fraudulent product ranking manipulation. Therefore, identifying the maximum biclique could aid in pinpointing the largest group involved in illicit click activities within E-commerce networks, thus curbing fraudulent activities.
2. In the field of biological studies, protein-protein interactions [3, 4, 5, 6, 7] are crucial. Researchers strive to uncover groups of human proteins that interact with the same set of viral proteins, such as those belonging to HIV and SARS-CoV-2, the virus that causes COVID-19. Therefore, finding the largest biclique can lead to the discovery of the most significant disease-causing protein group, potentially offering breakthroughs in combating viruses like HIV or COVID-19.
3. The strategy for social network recommendation systems [8] often relies on recognizing sets of users who exhibit shared interests, thereby enhancing the efficacy of targeted advertising. Identifying the user group with the highest potential market value for advertising could drastically improve the efficiency and return on investment of marketing strategies.
**Uncharted opportunity.** The Maximum Biclique Problem (MBP) has been identified as NP-hard [9], and it has been convincingly demonstrated that it is highly challenging, if not impossible, to develop a polynomial time algorithm that boasts
a substantial approximation ratio [10, 11]. The current state-of-the-art solution for the MBP [12] has a time complexity of \(O(2^{n})\). The prohibitive time complexity of the state-of-the-art is the primary bottleneck constraining the application scenarios. A promising alternative, however, is offered by quantum computing, an emerging technology set to revolutionize computational paradigms for NP-hard problems in the foreseeable future [13]. The inherent parallelism and the unique properties of quantum superposition and entanglement provide novel pathways for solving these problems [14]. Particularly, Quantum algorithms like Shor's algorithm for factorization and Grover's search algorithm showcase potential exponential and quadratic speedups, respectively, over their classical counterparts [15, 16]. Recent studies have highlighted that a specialized quantum algorithm is likely to provide a quadratic speed-up in terms of time complexity over a classical algorithm when applied to NP-hard problems [17, 18, 19, 20]. The challenge to the actual implementation of quantum algorithms is the current stage of quantum hardware, which, as of now, has not yet reached the fault-tolerant quantum computing regime, also known as quantum error correction, necessary for running quantum algorithms at scale [13]. A promising direction to address this challenge is the extensive integration of quantum computing with classical computing using cloud computing platforms [21]. Given the current rate of progress in the field, researchers estimate that a timeline of 5 to 15 years may be plausible for quantum computers to be ready to solve large-scale, real-world database problems [13, 22]. However, compiling classical graph problems into the quantum computation model that can run on large-scale Quantum Processing Units (QPUs) is a non-trivial task due to the constraints in qubit operations [23] and quantum representation of complex graph topologies. One of the contemporary challenges in the field lies in the development of quantum algorithms for the extant NP-hard problems in graph databases. These algorithms should be viable for execution on prospective large-scale QPUs, while also being amenable to simulation and proof-of-principle experimentation on the limited-scale QPUs that currently prevail in the quantum computing landscape.
Another crucial advantage of quantum computing resides in an often-overlooked fact of computational theory: energy consumption. As data sizes continue to grow, the energy resources, such as electricity, consumed by a normal computer typically escalate alongside the curve defined by time complexity. Normal computers consume energy in the process of computation due to Landauer's Principle [24]. According to this principle, when a computer erases a bit of information, a minimum amount of energy must be dissipated into the environment. Normal computers continually erase information during computation, as their operational logic is based on irreversible logic gates. For example, an AND gate maps two bits of information into one bit, and it is not possible to infer the original two-bit input from the resultant one-bit output. In contrast, quantum algorithms employ reversible gates for computation, thereby avoiding information erasure. In principle, this makes the process dissipation-free. In practicality, some energy dissipation is still required for system stability and to provide immunity from noise. Nevertheless, quantum computing, when utilized in combination with appropriately designed algorithms, still presents a promising strategy for graph computation that is significantly more energy-efficient than normal computing [25].
**Our approach.** In this research, we focus on addressing the challenge of reducing the time complexity of the existing maximum biclique algorithms when executed on large-scale QPUs. We develop a novel, reversible quantum algorithm termed qMBS. This algorithm exhibits a time complexity of \(O^{*}(2^{\frac{n}{2}})\), where \(n\) represents the number of vertices.1 Our approach utilizes the foundational framework of Grover's search algorithm [26], a quantum circuit-based solution designed for unstructured database searches. The crux of Grover's search algorithm lies in an _oracle_, a mechanism that _identifies_ the query item. For our application, this oracle is utilized to (1) ascertain whether a subgraph forms a biclique, and (2) determine the size of the subgraph. We innovatively design this oracle using reversible computational units, known as quantum gates, to execute these two tasks. Consequently, our comprehensive algorithm, qMBS, achieves a quadratic speed-up over the state-of-the-art [12] in terms of time complexity while significantly reducing energy dissipation.
Footnote 1: For a quantum algorithm, quantum complexity classes apply.
We highlight our principal contributions below.
* We introduce a versatile design that encodes a bipartite graph into a quantum circuit. This approach is broadly applicable to an array of biclique problems, including the maximum vertex biclique problem and maximum balanced biclique problem.
* We illustrate the mapping of our problem into this framework by utilizing qMBS, an innovative algorithm developed by adapting the principles of Grover's search. qMBS incorporates a dedicated oracle to ascertain whether a given subgraph is a biclique of a specified size. With a time complexity of \(O^{*}(2^{\frac{n}{2}})\), our approach provides a quadratic speed-up over the state-of-the-art in terms of complexity. Further, its inherent reversible computing mechanism promises an economical computation manner in the near future.
* We conduct proof-of-principle experiments utilizing state-of-the-art quantum simulators, validating the practical performance and efficacy of our proposed algorithms.
**Roadmap.** The remainder of the paper is organized as follows. Section II reviews the preliminaries. Section III introduces our algorithm qMBS and its variants for other biclique problems. Section IV conducts experimental studies. Related works and conclusion are in Section V and Section VI.
## II Preliminaries
In this section, we will revisit some of the fundamental concepts related to bicliques and provide a succinct introduction to quantum computing, specifically focusing on the computational model of quantum circuits. Subsequently, we
will present Grover's search as the fundamental framework that underpins our proposed methodologies.
### _Maximum Biclique Problem_
Our study focuses on an unweighted and undirected bipartite graph, denoted as \(G(L,R,E)\). Here, \(L\) and \(R\) represent two separate sets of vertices, while \(E\subseteq L\times R\) signifies the set of edges. The graph's size is characterized by \(n=\left|L\right|+\left|R\right|\) (number of vertices) and \(m=\left|E\right|\) (number of edges). When referring to a subgraph \(C\), we also use \(C\) to denote its subset of vertices, for clarity within the given context. Consequently, we denote \(L(C)=C\cap L\) and \(R(C)=C\cap R\) to express the intersection of subgraph \(C\) with vertex sets \(L\) and \(R\), respectively. A biclique is a complete bipartite subgraph of \(G\):
**Definition 1** (**Biclique**).: _Given a bipartite graph \(G(L,R,E)\), a biclique \(C\) is a subgraph of \(G\), \(s.t.\) for each pair of \(u\in L(C)\) and \(v\in R(C)\), the edge \((u,v)\in E\) exists._
**Definition 2** (**Maximum Biclique Problem (MBP)**).: _Given a bipartite graph, find a biclique with the maximum edge number._
MBP is NP-hard [9], and it is difficult to find a polynomial time algorithm with a promising approximation ratio [10, 11]. The state-of-the-art [12] has a time complexity \(O(2^{n})\). In this work, we propose an algorithm to solve MBP in \(O^{*}(2^{\frac{n}{2}})\).
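To make the size of the search space concrete, a minimal classical brute-force baseline (an illustrative sketch of ours, not the algorithm proposed in this paper) simply enumerates all \(2^{n}\) vertex subsets and keeps a biclique with the most edges; it is exactly this exponential enumeration that the quantum search below accelerates.

```python
# Classical brute force over all 2^n vertex subsets: an illustrative baseline only.
# Each subset is checked for completeness in O(|L||R|) time, so the total cost is O(2^n |L||R|).
from itertools import product

def max_biclique_bruteforce(L, R, E):
    E = set(E)
    best, best_edges = (set(), set()), 0
    for bitsL in product([0, 1], repeat=len(L)):
        A = {v for v, b in zip(L, bitsL) if b}
        for bitsR in product([0, 1], repeat=len(R)):
            B = {u for u, b in zip(R, bitsR) if b}
            if all((v, u) in E for v in A for u in B) and len(A) * len(B) > best_edges:
                best, best_edges = (A, B), len(A) * len(B)
    return best, best_edges

# A small example of our own: a maximum biclique here is {2} x {a, b}, with 2 edges.
print(max_biclique_bruteforce([1, 2], ["a", "b"], [(1, "a"), (2, "a"), (2, "b")]))
```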
### _Quantum Mechanics/Computing_
Quantum mechanics studies how to describe the state of a microscopic system (e.g., an atom), and how such a state evolves over time. Quantum computing focuses on how to (1) encode a computation problem into the state of a microscopic system; (2) evolve such a state into the final solution state. Mathematically, a quantum state is represented as a vector, and the principles governing its evolution are characterized through vector rotations. In the context of this study, we engage with the most basic quantum system, the state of which is referred to as a qubit:
**Definition 3** (**Qubit**).: _A qubit is a vector with a unit norm in a two-dimensional complex linear space:_
\[\left|q\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle \tag{1}\]
_Here we use the notation \(\left|\cdot\right\rangle\) to denote a vector. \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are base vectors of the space. The complex coefficients \(\alpha\) and \(\beta\) are called amplitudes, which satisfy \(|\alpha|^{2}+|\beta|^{2}=1\)._
A qubit sets itself apart from a traditional bit, whose state is strictly either 0 or 1. In contrast, the state of a qubit is a superposition, \(\alpha\left|0\right\rangle+\beta\left|1\right\rangle\), which is neither strictly \(\left|0\right\rangle\) nor \(\left|1\right\rangle\). This can be visualized as a composite vector distinct from both of the base vectors. Owing to the continuous nature of the coefficients, the amount of information that a qubit can theoretically hold is limitless, offering an intuitive understanding of the superior potential of quantum computing compared to conventional computing. However, this information cannot be directly accessed because a measurement of the superposition state \(\left|q\right\rangle\) will result in a random _collapse_ to the base state \(\left|0\right\rangle\) with a probability of \(|\alpha|^{2}\), or to \(\left|1\right\rangle\) with a probability of \(|\beta|^{2}\). Therefore, it is crucial to devise skillfully designed quantum algorithms to manage the information encoded within a qubit.
When considering a system comprised of \(n\) qubits, the state of the system is expressed using a tensor product. For instance, the state of a two-qubit system is represented as follows:
\[\left|q_{comp}\right\rangle =\left|q_{1}\right\rangle\left|q_{2}\right\rangle \tag{2}\] \[=(\alpha_{1}\left|0\right\rangle+\beta_{1}\left|1\right\rangle)( \alpha_{2}\left|0\right\rangle+\beta_{2}\left|1\right\rangle)\] \[=\alpha_{1}\alpha_{2}\left|00\right\rangle+\alpha_{1}\beta_{2} \left|01\right\rangle+\alpha_{2}\beta_{1}\left|10\right\rangle+\beta_{1}\beta_ {2}\left|11\right\rangle\]
We will use \(\left|ij\right\rangle\) to denote \(\left|i\right\rangle\left|j\right\rangle\). State evolves by vector rotation, which is described by matrix multiplication:
\[\left|q_{initial}\right\rangle\xrightarrow{\text{over time}}\left|q_{final} \right\rangle=U\left|q_{initial}\right\rangle \tag{3}\]
\(U\) is a unitary matrix satisfying \(U^{\dagger}U=I\), where \(\dagger\) is conjugate transpose and \(I\) is the identity matrix. E.g., a matrix \(X\) evolves a qubit by turning \(\left|0\right\rangle\) into \(\left|1\right\rangle\) and turning \(\left|1\right\rangle\) into \(\left|0\right\rangle\):
\[X\left|q\right\rangle=\alpha X\left|0\right\rangle+\beta X\left|1\right\rangle= \alpha\left|1\right\rangle+\beta\left|0\right\rangle \tag{4}\]
Another matrix utilized in our study is the Hadamard matrix \(H\). This matrix transforms \(\left|0\right\rangle\) into an equal superposition state \((\left|0\right\rangle+\left|1\right\rangle)/\sqrt{2}\) and transforms \(\left|1\right\rangle\) into \((\left|0\right\rangle-\left|1\right\rangle)/\sqrt{2}\). Typically, the \(H\) matrix is used for initial state preparation. By operating on the equal superposition state, a single quantum operation can potentially act on all possible states concurrently, harnessing the power of quantum parallelism.
If we explicitly write the two base vectors as \(\left|0\right\rangle=[1,0]^{T},\left|1\right\rangle=[0,1]^{T}\), then it can be verified that the matrix \(X\) and \(H\) can be written as
\[X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad H=\frac{\sqrt{2}}{2}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix} \tag{5}\]
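As a quick sanity check of Eqs. (1)-(5), the small NumPy sketch below (ours, purely illustrative) represents qubits as unit vectors and gates as the matrices above:

```python
# Qubits as unit vectors and gates as unitary matrices (cf. Eqs. (1)-(5)).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

alpha, beta = 0.6, 0.8                 # any amplitudes with |alpha|^2 + |beta|^2 = 1
q = alpha * ket0 + beta * ket1
print(X @ q)                           # [0.8, 0.6]: X swaps the two amplitudes, as in Eq. (4)
print(H @ ket0)                        # [0.707, 0.707]: the equal superposition (|0> + |1>)/sqrt(2)

# A two-qubit state is the tensor (Kronecker) product of the single-qubit states, as in Eq. (2).
print(np.kron(H @ ket0, H @ ket0))     # four amplitudes, each equal to 1/2
```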
### _Computation Model: Quantum Circuit_
A quantum circuit provides a visual representation of state evolution, consisting of quantum wires (representing qubits) and quantum gates (representing matrices). Figure 1 displays a basic quantum circuit where the qubit \(\left|0\right\rangle\) transitions to \(\left|1\right\rangle\) after the application of the \(X\) matrix, i.e., a quantum \(X\) gate. The progression towards the right signifies the flow of time. The \(X\) gate bears a resemblance to a logical NOT gate as it effectively flips a bit. However, the crucial difference lies in their target state: while the logical NOT gate operates on a definite state (either \(0\) or \(1\)), the quantum \(X\) gate acts on a superposition state (refer to Eq. 4). This capability for parallel operation is a key reason behind the remarkable speed of quantum algorithms.
Another quantum gate utilized in this study is the controlled-\(X\) gate, also known as the CNOT gate (see Figure 2). The control qubit, designated by a solid circle on a quantum wire,
Fig. 1: A toy quantum circuit
dictates the operation on the target qubit. If the control qubit is in state \(\ket{1}\), the \(X\) gate operates on the target qubit; if the control qubit is in state \(\ket{0}\), the target qubit remains unaltered. For convenience, we will subsequently represent the \(X\) gate with a circle encompassing a cross. Alternatively, the target can be flipped when the control qubit is in state \(\ket{0}\), denoted by a hollow circle. Controlled gates can be further specified with additional control qubits. In this case, the target qubit will only be operated on when all control qubits align with their respective base states. A CNOT gate with \(k\) control qubits is denoted as a C\({}^{k}\)NOT gate. Examples are provided in Figure 3.
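The action of a controlled gate on computational basis states can also be written out explicitly as a permutation matrix; the sketch below (ours, using an arbitrary qubit-ordering convention) constructs a CNOT in this way and verifies the two cases just described:

```python
# A CNOT as a permutation matrix on basis states: the target bit flips iff the control bit is 1.
import numpy as np

def cnot_matrix(n, control, target):
    # Qubit 0 is the left-most bit of the basis label |b_0 b_1 ... b_{n-1}>.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for col in range(dim):
        bits = [(col >> (n - 1 - i)) & 1 for i in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        row = int("".join(map(str, bits)), 2)
        U[row, col] = 1.0
    return U

CNOT = cnot_matrix(2, control=0, target=1)
print(CNOT @ np.eye(4)[2])   # |10> -> |11>: the control is 1, so the target is flipped
print(CNOT @ np.eye(4)[1])   # |01> -> |01>: the control is 0, so nothing happens
```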
### _Grover's Search_
The framework of our algorithm is Grover's search, which was initially designed for unstructured database search:
**Definition 4** (**Unstructured Database Search**).: _Given \(\mathcal{X}=\{0,1,...,2^{n}-1\}\) to be a set of \(2^{n}\) integers, a function \(f:\mathcal{X}\rightarrow\{0,1\}\) satisfies that there exists a unique \(x_{s}\in\mathcal{X}\), s.t. \(f(x_{s})=1\), whereas for all the other \(x\in\mathcal{X}\) and \(x\neq x_{s},f(x)=0\). The problem is to find the \(x_{s}\)._
Every integer \(x\in\mathcal{X}\) can be represented as an \(n\)-bit string. Consequently, we can express \(x\) as the tensor product of \(n\) qubit base states. For instance, the number \(3\) can be written as \(\ket{0...011}\), or more succinctly, \(\ket{3}\). The fundamental strategy is to operate concurrently on all \(2^{n}\) base states (integers) through superposition. The process iteratively increases the amplitude of the solution state \(\ket{x_{s}}\) until it substantially exceeds the amplitudes of other base states. The algorithm is presented as Algorithm 1.
```
0: A set of integers \(\mathcal{X}=\{0,1,...,2^{n}-1\}\), the discriminant function \(f:\mathcal{X}\rightarrow\{0,1\}\);
0: The integer \(x_{s}\) that satisfies \(f(x_{s})=1\);
1: Prepare an equal superposition state \(\frac{1}{\sqrt{2^{n}}}\sum_{i=0}^{2^{n}-1}\ket{i}\);
2: Use a black box to flip the amplitude sign of the solution base state \(\ket{x_{s}}\), i.e., from \(+\frac{1}{\sqrt{2^{n}}}\ket{x_{s}}\) to \(-\frac{1}{\sqrt{2^{n}}}\ket{x_{s}}\);
3: Use a diffusion operator to invert the amplitude of each base state about the average of all the amplitudes;
4: Repeat Step 2&3 for \(\lfloor\frac{\pi}{4}\sqrt{2^{n}}\rfloor\) times, then measure the final state;
5: Output the binary string read from the final state as \(x_{s}\);
```
**Algorithm 1** Grover's Search Algorithm
**Explanation:**
1. The equal superposition state is prepared by using \(n\)\(H\) gates to act on \(n\) initial state \(\ket{0}\)s: \[\underbrace{(H\ket{0})(H\ket{0})...(H\ket{0})}_{n}=\frac{1}{\sqrt{2^{n}}}\sum _{i=0}^{2^{n}-1}\ket{i}\] The result is illustrated as Figure 3(a). In this figure, We utilize a bar graph to represent a superposition state, where the x-axis corresponds to different base states (basis vectors), and the y-axis indicates the amplitude of each base state. In this illustration, we take \(n=3\) as an example, resulting in a total of \(2^{3}=8\) base states depicted on the graph. As this superposition state is an equal superposition, the amplitudes of all eight base states are of identical height.
2. A critical component, referred to as an oracle (the black box), is fundamental to Grover's search. It essentially _recognizes_ the solution base state \(\ket{x_{s}}\). The resulting state after this step is displayed in Figure 3(b). Here we observe that the amplitude of the solution base state has been flipped below the x-axis; in other words, its amplitude has been multiplied by a negative sign. The amplitudes of the other non-solution base states remain unchanged. If we compute the average of the amplitudes of all base states, due to the presence of a negative amplitude, the average is slightly less than the amplitudes of the non-solution base states. We mark this average with a dashed line and as can be seen, this average dashed line is slightly lower than the amplitudes of the non-solution base states.
3. We denote any arbitrary amplitude as \(\alpha\) and the average amplitude as \(\overline{\alpha}\). The diffusion operator transforms any amplitude \(\alpha\) into \(2\overline{\alpha}-\alpha\), effecting an inversion about the average. Please refer to Figure 3(c). Compared to Figure 3(a), we observe that the amplitudes of all non-solution base states have decreased, whereas only the amplitude of the solution base state has increased. The actual effect of Line 2 and Line 3 is to suppress the amplitudes of the non-solution base states while amplifying the amplitude of the solution base state.
4. Steps 2 and 3 cause an increment in the amplitude of the solution state by \(O(1/\sqrt{2^{n}})\). After approximately \(\lfloor\frac{\pi}{4}\sqrt{2^{n}}\rfloor\) iterations, the solution amplitude will be near 1. Consequently, after measurement, the superposition will collapse into the solution state.
Note that if there are \(M\) solutions, then finding one solution requires only \(\lfloor\frac{\pi}{4}\sqrt{2^{n}/M}\rfloor\) iterations. If \(M\) is unknown, the Quantum Counting algorithm [27] can estimate \(M\)'s value. Intuitively, given a bipartite graph with \(n\) vertices, there exist a total of \(2^{n}\) subgraphs. Hence, the search for the maximum biclique can be equated to identifying a solution among the \(2^{n}\) subgraphs. The crucial aspect is to devise a dedicated oracle that can recognize the desired state and flip its amplitude sign. We subsequently demonstrate the construction of such
Fig. 3: Other types of controlled-gate
Fig. 2: Two representations of a CNOT gate
an innovative oracle using a quantum circuit. As the diffusion operator is a universal aspect across various problems, owing to space constraints, we will not delve into its details. We summarize the notations utilized throughout this paper in Table I.
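Algorithm 1 is easy to check numerically by tracking the state vector directly; the NumPy sketch below (ours, not part of the paper) implements the sign flip and the inversion about the average, and reproduces the expected success probability for the \(n=3\) illustration above:

```python
# State-vector simulation of Grover's search for a single marked item (illustrative).
import numpy as np

n, x_s = 3, 5                          # 2^3 = 8 items; the marked item is |101> = |5>
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))     # Step 1: equal superposition of all base states

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[x_s] *= -1.0                 # Step 2: the oracle flips the sign of |x_s>
    state = 2 * state.mean() - state   # Step 3: diffusion = inversion about the average

print(f"after {iterations} iterations, Pr[measuring |{x_s}>] = {state[x_s] ** 2:.3f}")  # about 0.945
```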
## III A Quantum Algorithm for MBP
Given a bipartite graph \(G(L,R,E)\) with \(n\) vertices and \(m\) edges, the problem is to find one vertex subset that is a biclique with the maximum size among \(2^{n}\) subsets. We encode \(n\) vertices by \(n\) qubits using one-hot encoding, i.e., using the binary digit \(1\) or \(0\) to represent whether a vertex is present or absent. Then any \(n\)-qubit base state can be interpreted as a vertex set. Our proposed Quantum Maximum Biclique Search (qMBS) uses Grover's search with an oracle to search for a biclique with a given size \(k\in[1,m]\), and uses a binary search to find the maximum \(k_{M}\).
In this section, we design the oracle by partitioning it into two parts:
* Part I checks whether a base state is a biclique;
* Part II checks whether a base state has a given size \(k\).
For better illustration, we use the graph in Figure 4(a) as an example thereafter, where \(L=\{v_{1},v_{2}\},R=\{u_{1},u_{2}\},E=\{e_{1},e_{2},e_{3}\}\). Given a vertex set that is interpreted as a base state \(\ket{v_{1}v_{2}u_{1}u_{2}}\) (e.g., \(\{v_{1},u_{2}\}\) is represented by \(\ket{1001}\) or \(\ket{9}\)), the first task is to determine whether it is a biclique by a quantum circuit. We can get some intuitions by introducing the virtual graph \(G^{\prime}(L,R,E^{\prime})\), where \(E^{\prime}=L\times R\). The virtual graph \(G^{\prime}\) uses \(||L||R|\) virtual edges to connect all the pairs of vertices between \(L\) and \(R\) (Figure 4(b)). For a base state \(\ket{x}\), where \(x\in[0,2^{n}-1]\), a necessary and sufficient condition of it being a biclique is: if the virtual subgraph induced by \(\ket{x}\) contains a virtual edge \(e^{\prime}_{k}\), then the corresponding real edge \(e_{k}\) must exist in the real graph. i.e., \(e_{k}\) and \(e^{\prime}_{k}\) must be both present or both absent:
\[\ket{x}\text{ is a biclique}\Longleftrightarrow\bigwedge_{k=1}^{|L||R|} \overline{(e_{k}\oplus e^{\prime}_{k})}=1 \tag{6}\]
Here \(\oplus\) is the XOR logic (modulo two addition).
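Classically, Eq. 6 is just a bitwise comparison of the induced real and virtual edge indicators. The sketch below (ours) mirrors the logic that the oracle of Section III-A implements reversibly; the concrete edge set used for the example graph is our own assumption, inferred from the example results reported later in Eq. 9.

```python
# Classical reference implementation of Eq. (6): a vertex subset is a biclique iff
# every induced virtual edge (pair in L x R) is matched by a real edge.
def is_biclique(x_bits, L, R, E):
    """x_bits maps each vertex to its 0/1 membership indicator (the base state |x>)."""
    E = set(E)
    for v in L:
        for u in R:
            e_virtual = x_bits[v] & x_bits[u]            # virtual edge induced by |x>?
            e_real = e_virtual & int((v, u) in E)        # real edge indicator activated?
            if e_virtual ^ e_real:                       # any XOR mismatch kills the AND in Eq. (6)
                return False
    return True

L, R = ["v1", "v2"], ["u1", "u2"]
E = [("v1", "u1"), ("v2", "u1"), ("v2", "u2")]           # assumed labeling of e1, e2, e3
print(is_biclique({"v1": 0, "v2": 1, "u1": 1, "u2": 1}, L, R, E))   # True:  |0111> is a biclique
print(is_biclique({"v1": 1, "v2": 0, "u1": 0, "u2": 1}, L, R, E))   # False: |1001> misses edge (v1, u2)
```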
### _Oracle Part I: Biclique Checking_
**Real and virtual graph encoding.** Figure 6 shows the example quantum circuit. The \(n\) vertices are represented by \(n\) qubits \(\{\ket{v_{i}},\ket{u_{i}}\}\). We further use \(|L||R|\) auxiliary qubits \(\{\ket{e_{i}}\}\) to represent real edges and \(|L||R|\) auxiliary qubits \(\{\ket{e^{\prime}_{i}}\}\) for virtual edges. An auxiliary indicator \(\ket{bic}\) records the checking result. All the auxiliaries (edges and the indicator) are initially set to be \(\ket{0}\). Note that although a real edge \(e_{4}\) does not exist in the real graph 4(a), we still introduce the qubit \(\ket{e_{4}}\) because we use \(\ket{e_{4}}\equiv\ket{0}\) to mark its absence, and it will be used to compare with \(\ket{e^{\prime}_{4}}\). For any real edge \(e_{k}\in E\) that connects two vertices \(v_{i}\) and \(u_{j}\), we use a C\({}^{2}\)NOT gate to connect \(\ket{v_{i}},\ket{u_{j}}\) and \(\ket{e_{k}}\) with \(\ket{e_{k}}\) being the target. Please refer to the dashed box with the title _real edges_ in Figure 6. Given a base state (vertex set) \(\ket{x}\) with \(x\in[0,2^{n}-1]\), these C\({}^{2}\)NOT gates actually activate all the real edge qubits induced by \(\ket{x}\) to be \(\ket{1}\). For example, given \(\ket{x}=\ket{5}=\ket{0101}\), the real edge \(\ket{e_{3}}\) will be activated to be \(\ket{1}\), whereas \(\ket{e_{1}},\ket{e_{2}}\) and \(\ket{e_{4}}\) are all kept in \(\ket{0}\). Similarly, we construct all the virtual edges using C\({}^{2}\)NOT gates according to the virtual graph. Please refer to the dashed box with the title _virtual
Fig. 4: Illustration of the Grover’s search with \(n=3\)
Fig. 5: Example graph
\begin{table}
\begin{tabular}{l l} \hline \hline Notation & Meaning \\ \hline \(G(L,R,E)\) & graph \(G\) with vertex set \(L\), \(R\) and edge set \(E\) \\ \(G^{\prime}(L,R,E^{\prime})\) & virtual graph with the virtual edge set \(E^{\prime}=L\times R\) \\ \(C\) & subgraph or the vertex set of a subgraph \\ \(\ket{q}\) & quantum state, i.e., a complex vector with norm 1 \\ \(X\) & quantum not gate \\ \(H\) & Hadamard gate \\ \(\mathbb{C}^{k\text{NOT}}\) & control-NOT gate with \(k\) control qubits \\ \(\oplus\) & XOR logic/modulo two addition \\ \(\bigwedge\) & AND logic \\ \(U,U^{-1},U^{\dagger}\) & unitary matrix, its inverse and conjugate transpose \\ \hline \hline \end{tabular}
\end{table} TABLE I: Notations
edges_ in Figure 6. By now, we have encoded the real graph and the virtual graph into the circuit. Given a base state \(\ket{x}\), the induced real and virtual edge qubits will be activated to be \(\ket{1}\).
**Real and virtual edge comparison.** The remaining work is to compare each \(\ket{e_{k}}\) with the corresponding \(\ket{e_{k}^{\prime}}\) by Eq. 6. The XOR logic is implemented by a CNOT gate, because \(CNOT\ket{e_{k}}\ket{e_{k}^{\prime}}=\ket{e_{k}}\ket{e_{k}\oplus e_{k}^{\prime}}\), where the XOR result is stored into the virtual edge qubit. After using \(|L||R|\) CNOT gates to act on all the pairs of real and virtual edges, the virtual edge set \(\{\ket{e_{k}^{\prime}}\}\) transforms to \(\{\ket{e_{k}\oplus e_{k}^{\prime}}\}\). The last two steps are first applying the NOT logic to all the \(\ket{e_{k}\oplus e_{k}^{\prime}}\)s, and then using the AND logic to combine them and store the result into \(\ket{bic}\). These two steps are accomplished by a C\({}^{|L||R|}\)NOT gate with hollow circles. Please refer to the dashed box with the title _biclique check_ in Figure 6. According to Eq. 6, if a base state \(\ket{x}\) is a biclique, the indicator \(\ket{bic}\) will be flipped from \(\ket{0}\) to \(\ket{1}\). Note that if \(\ket{x}\) contains only vertices in \(L\) or \(R\), we still mark it as a biclique with size 0.
**Example results.** If we represent a physical state as \(\ket{v_{1}v_{2}u_{1}u_{2}}\ket{bic}\), then the input to the algorithm, that is, the initial state, is:
\[\ket{state}=\ket{0000}\ket{\mathbf{0}} \tag{7}\]
The first step of the algorithm is to use Hadamard gates to prepare an equal superposition state. After being acted on by four Hadamard gates, the initial state evolves into:
\[\ket{state}=\frac{1}{4}\big(\ket{0000}+\ket{0001}+\ket{0010}+\ket{0011}+\ket{0100}+\ket{0101}+\ket{0110}+\ket{0111}+\ket{1000}+\ket{1001}+\ket{1010}+\ket{1011}+\ket{1100}+\ket{1101}+\ket{1110}+\ket{1111}\big)\ket{\mathbf{0}} \tag{8}\]
The appearance of the \(1/4\) coefficient here is because we require the entire state, viewed as a vector, to be normalized. After being processed by the biclique check circuit, the vertex qubits \(\ket{v_{1}v_{2}u_{1}u_{2}}\) should be entangled with the biclique check qubit \(\ket{bic}\), which means, \(\ket{bic}\) should classify all the base states (subgraphs) into two categories: bicliques (marked by \(\ket{bic}=1\)) or non-bicliques (marked by \(\ket{bic}=0\)). Then, the result state is shown as:
\[\begin{split}\ket{state}=&\frac{1}{4}\big{[}(\ket{000 0}+\ket{1000}+\ket{0100}+\ket{0010}+\ket{0001})\\ &+\ket{1010}+\ket{0101}+\ket{0110}+\ket{1110}+\ket{0111})\big{)} \mathbf{1}\end{split} \tag{9}\]
Note that in this case, we treat both the empty set and each single vertex as a biclique. This does not affect our search for the maximum biclique, because by definition, both types of bicliques have a size of 0. In the next step when checking the biclique size, the algorithm will automatically disregard bicliques of size 0.
**Summary:** The biclique check circuit first encodes the real and virtual edges, then uses the XOR logic to compare each pair of them, finally uses the AND logic to store the check result into the indicator. In this circuit, the qubit number is
\[n+|L||R|+|L||R|+1=O(n^{2}) \tag{10}\]
The number of CNOT gates is
\[m+|L||R|+|L||R|+1=O(n^{2}) \tag{11}\]
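For concreteness, the sketch below assembles the encoding and comparison stages described above for the example graph using Qiskit (a recent version providing `QuantumCircuit.mcx` is assumed). The edge labeling and register layout are our own choices, and the control-on-\(\ket{0}\) conditions are realized by sandwiching the multi-controlled NOT between \(X\) gates; it illustrates the construction rather than reproducing Figure 6 gate for gate.

```python
# Illustrative Qiskit construction of the Part I biclique-check circuit for the example
# graph; the edge set {(v1,u1), (v2,u1), (v2,u2)} is an assumption inferred from Eq. (9).
from qiskit import QuantumCircuit, QuantumRegister

L, R = ["v1", "v2"], ["u1", "u2"]
E = {("v1", "u1"), ("v2", "u1"), ("v2", "u2")}
pairs = [(v, u) for v in L for u in R]            # the |L||R| virtual edges

vert = QuantumRegister(len(L) + len(R), "vert")   # vertex qubits |v1 v2 u1 u2>
real = QuantumRegister(len(pairs), "e")           # real-edge ancillas
virt = QuantumRegister(len(pairs), "ev")          # virtual-edge ancillas
bic = QuantumRegister(1, "bic")                   # biclique indicator
qc = QuantumCircuit(vert, real, virt, bic)
pos = {name: i for i, name in enumerate(L + R)}

for k, (v, u) in enumerate(pairs):
    if (v, u) in E:
        qc.ccx(vert[pos[v]], vert[pos[u]], real[k])   # encode the real edges (C^2NOT)
    qc.ccx(vert[pos[v]], vert[pos[u]], virt[k])       # encode the virtual edges (C^2NOT)

for k in range(len(pairs)):
    qc.cx(real[k], virt[k])                           # XOR: |e'_k> <- |e_k XOR e'_k>

for q in virt:
    qc.x(q)                                           # NOT every XOR result ...
qc.mcx(list(virt), bic[0])                            # ... and AND them into |bic>
for q in virt:
    qc.x(q)                                           # uncompute the NOTs

print(qc.draw())
```

Feeding any computational basis state through this circuit flips \(\ket{bic}\) exactly when the corresponding vertex subset is a biclique, matching the classification in Eq. 9.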
### _Oracle Part II: Edge Counting_
The next task is to determine the sizes of the biclique state \(\ket{x}\)s. Given a biclique \(C\), the idea is to count the vertex number in \(L(C)\), \(R(C)\), and then multiply the two numbers.
Fig. 6: Biclique check quantum circuit
Fig. 7: Edge count / size check quantum circuit
**Vertex count.** To count the vertices in \(L(C)\), we check each vertex in \(L\) one by one and see whether such a vertex is in \(C\). We introduce a set of auxiliary qubits \(\{\left|cl_{ij}\right\rangle\}\) to record the truth value of the proposition that _by now we have checked \(i\) vertices in \(L\) and found that there are exactly \(j\) vertices contained by \(C\), where \(i\in[0,|L|],j\in[0,i]\)_. Updating \(cl_{ij}\) by checking vertices in \(L\) one by one is a dynamic programming procedure: given \(v_{i+1}\) as the truth value whether \((i+1)\)th vertex in \(L\) is present in \(C\), the transition equation is
\[\begin{split}& cl_{i+1\ j}=cl_{ij}\land\overline{v_{i+1}}\\ & cl_{i+1\ j+1}=cl_{ij}\wedge v_{i+1}\end{split} \tag{12}\]
The initial value is \(cl_{00}=1\). After checking all the vertices in \(L\), we get \(|L|+1\) values \(\{cl_{|L|\ j}\}\), where \(j\in[0,|L|]\). Among these truth values exactly one equals \(1\), and its index \(j\) is precisely \(|L(C)|\). The circuit design can be read from the transition equation Eq. 12. Each equation is implemented by a C\({}^{2}\)NOT gate, with the L.H.S. being the target qubit. Similar to Eq. 6, \(\overline{v_{i}}\) corresponds to a control qubit marked by a hollow circle. Please refer to the dashed box with the title _left vertex count_ in Figure 7. Here we use \(\left|bic\right\rangle\) to replace the initial \(\left|cl_{00}\right\rangle\) because we only consider the \(\left|x\right\rangle\)s that are bicliques. We count the vertices of \(R(C)\) in the same way and store the result into \(\{\left|cr_{|R|\ j}\right\rangle\},j\in[0,|R|]\).
**Multiplication.** Next we have to multiply \(|L(C)|\) and \(|R(C)|\). We introduce a set of auxiliary qubits \(\{\left|ce_{k}\right\rangle\}\) to record the result, where \(k\in[1,|L||R|]\). The multiplication is realized by a mapping
\[\left(\left|cl_{|L|\ i}\right\rangle,\left|cr_{|R|\ j}\right\rangle\right)\mapsto\left|ce_{i\cdot j}\right\rangle \tag{13}\]
i.e., if \(\left|x\right\rangle\) has \(i\) vertices in \(L\) and \(j\) vertices in \(R\), then the edge number will be \(i\cdot j\). The mapping is realized by a C\({}^{2}\)NOT gate with the target being \(\left|ce_{i\cdot j}\right\rangle\). Note that there will be only a single \(\left|1\right\rangle\) in \(\{\left|ce_{k}\right\rangle\}\) since each \(\left|x\right\rangle\) has a unique size. Please refer to the dashed box with the title _edge count_ in Figure 7.
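A hedged Qiskit sketch of the left vertex count and the multiplication mapping is shown below for the \(2\times 2\) example. The register layout, the helper function, and the explicit index maps are illustrative assumptions; as in the text, \(\ket{bic}\) plays the role of \(cl_{00}\), and hollow controls are emulated with X sandwiches.

```python
from qiskit import QuantumCircuit, QuantumRegister

def vertex_count_dp(qc, vertices, start, counters):
    """Dynamic-programming vertex count of Eq. (12).

    `start` holds cl_{00} (here the |bic> indicator); `counters[(i, j)]`
    holds cl_{ij} for i >= 1. All counter qubits start in |0>.
    """
    for i, vq in enumerate(vertices):
        for j in range(i + 1):
            src = start if i == 0 else counters[(i, j)]
            qc.x(vq)                                   # hollow control on v_{i+1}
            qc.ccx(src, vq, counters[(i + 1, j)])      # cl_{i+1,j}   = cl_{ij} AND NOT v_{i+1}
            qc.x(vq)
            qc.ccx(src, vq, counters[(i + 1, j + 1)])  # cl_{i+1,j+1} = cl_{ij} AND v_{i+1}

# Toy 2x2 example (register sizes are an illustrative assumption).
v = QuantumRegister(2, "v"); u = QuantumRegister(2, "u"); bic = QuantumRegister(1, "bic")
cl = QuantumRegister(5, "cl"); cr = QuantumRegister(5, "cr"); ce = QuantumRegister(4, "ce")
qc = QuantumCircuit(v, u, bic, cl, cr, ce)

pairs = [(1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
cl_map = {ij: cl[k] for k, ij in enumerate(pairs)}
cr_map = {ij: cr[k] for k, ij in enumerate(pairs)}
vertex_count_dp(qc, list(v), bic[0], cl_map)   # count |L(C)|
vertex_count_dp(qc, list(u), bic[0], cr_map)   # count |R(C)|

# Multiplication mapping of Eq. (13): (cl_{|L| i}, cr_{|R| j}) -> ce_{i*j}, for i, j >= 1.
for i in (1, 2):
    for j in (1, 2):
        qc.ccx(cl_map[(2, i)], cr_map[(2, j)], ce[i * j - 1])
```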
**Example results.** Now we ignore \(\left|bic\right\rangle\) and use \(\left|v_{1}v_{2}u_{1}u_{2}\right\rangle\left|ce_{1}ce_{2}ce_{3}ce_{4}\right\rangle\) to represent the states. When the \(ce_{i}\) of a particular state is flipped from \(0\) to \(1\), it means that this state represents a biclique with size \(i\). After being processed by the edge count circuit, the state evolves into:
\[\begin{split}\left|state\right\rangle=&\frac{1}{4} \big{(}(|1010\rangle+|0101\rangle+|0110\rangle)|\mathbf{1000}\rangle\\ &+(|1110\rangle+|0111\rangle)|\mathbf{0100}\rangle\\ &+\sum\left|\text{Other States}\right\rangle|\mathbf{0000}\rangle \big{)}\end{split} \tag{14}\]
We see that each biclique has been marked by a corresponding qubit \(\left|1\right\rangle\) according to its size, e.g., \(|0111\rangle\left|\mathbf{0100}\right\rangle\) means the biclique \(\{v_{2},u_{1},u_{2}\}\) has two edges, since its \(\left|ce_{2}\right\rangle=\left|1\right\rangle\). Given an arbitrary size \(k\), through the implementation of the two quantum circuits - biclique check and edge count - we have classified all subgraphs into two categories. The first category comprises bicliques of size \(k\), while the second encompasses all remaining subgraphs. Thus far, we have completed the second step in the Grover search process, namely, distinguishing between solutions and non-solutions and marking them accordingly.
**Input:** Graph \(G(L,R,E)\), size \(k\);
**Output:** A biclique with size \(k\) or \(\emptyset\);
```
1: Prepare the initial state to be an equal superposition of \(2^{n}\) possible subsets of \(L\cup R\);
2: Use the oracle described in Section III-A&III-B with \(\left|O\right\rangle\) to flip the amplitude signs of the \(k\)-biclique states;
3: Use a diffusion operator to invert the amplitude of each base state about the amplitude average;
4: Repeat Line 2&3 for \(\lfloor\frac{\pi}{4}\sqrt{2^{n}/M}\rfloor\) times, then measure the final state of the \(n\) vertex qubits;
5: Output the \(k\)-biclique or \(\emptyset\);
```
**Algorithm 2** Quantum \(k\)-Biclique Search: qKBS
**Summary.** We use a dynamic programming circuit to count \(\left|L(C)\right|\) and \(\left|R(C)\right|\) for a base state \(\left|x\right\rangle\), and then use a multiplication mapping to store the truth value of the proposition that \(\left|x\right\rangle\)_is a size-\(k\) biclique_ into \(\left|ce_{k}\right\rangle\). In this circuit, the qubit number of \(\left|cl\right\rangle,\left|cr\right\rangle\) and \(\left|ce\right\rangle\) is
\[\frac{(2+|L|)(|L|-1)}{2}+\frac{(2+|R|)(|R|-1)}{2}+|L||R|=O(n^{2}) \tag{15}\]
The number of CNOT gates is
\[(1+|L|)|L|+(1+|R|)|R|+|L||R|=O(n^{2}) \tag{16}\]
### _Our Algorithm: qMBS_
Before proposing the final algorithm qMBS to find the maximum biclique, we first present a subprocedure qKBS that finds a size-\(k\) biclique. Recall that at Step 2 of Grover's search, we need to flip the _amplitude_ of a size-\(k\) biclique state \(\left|x\right\rangle\) (please refer to Figure 3(a), 3(b)). For this purpose, we introduce an oracle qubit \(\left|O\right\rangle\), which is initially set to \(\left|1\right\rangle\) and transforms to \((\left|0\right\rangle-\left|1\right\rangle)/\sqrt{2}\) after an \(H\) gate is applied. At this point we have \(\left|x\right\rangle\left|ce_{k}\right\rangle\left|O\right\rangle=\left|x\right\rangle\left|1\right\rangle(\left|0\right\rangle-\left|1\right\rangle)/\sqrt{2}\). We then use a CNOT gate acting on \(\left|ce_{k}\right\rangle\) and \(\left|O\right\rangle\), after which \(\left|O\right\rangle\) transforms to \(-(\left|0\right\rangle-\left|1\right\rangle)/\sqrt{2}\). Since the negative sign can be moved across the tensor product, \(\left|x\right\rangle\left|ce_{k}\right\rangle\left|O\right\rangle\) now transforms to \(-\left|x\right\rangle\left|ce_{k}\right\rangle\left|O\right\rangle\); that is, the sign of the amplitude of \(\left|x\right\rangle\) is flipped. Now we can assemble the complete oracle within Grover's search framework to find a size-\(k\) biclique, yielding the Quantum \(k\)-Biclique Search Algorithm (qKBS, shown in Algorithm 2).
Note that in Algorithm 2, \(M\) denotes the number of size-\(k\) bicliques in the graph, which can be estimated by the quantum counting algorithm [27]. The circuit for searching a size-\(1\) biclique is shown in Figure 8. Here we use \(\left|aux\right\rangle\) to summarize the auxiliary qubits. \(U_{bic}\) is the biclique checking circuit and
\(U_{size}\) is the edge counting circuit. Since we need to return all the auxiliary qubits to their initial states after each iteration, the inverses \(U_{size}^{-1}=U_{size}^{\dagger}\) and \(U_{bic}^{-1}=U_{bic}^{\dagger}\) are applied sequentially. Since the inverse of a CNOT gate is itself, \(U^{\dagger}\) contains exactly the same gates as \(U\), in reverse order. Now we can present our algorithm qMBS to search for a maximum biclique as the Quantum Maximum Biclique Search Algorithm (qMBS, shown in Algorithm 3).
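To illustrate how the pieces above fit together, the following is a hedged Qiskit sketch of one qKBS run. It assumes that \(U_{bic}\), \(U_{size}\) and the diffusion operator are available as pre-built gates and that the position of \(\ket{ce_{k}}\) within the auxiliary register is known; none of these names come from the authors' code.

```python
import math
from qiskit import QuantumCircuit, QuantumRegister

def qkbs(n, M, u_bic, u_size, diffusion, num_aux, ce_index):
    """Sketch of Algorithm 2 (qKBS).

    u_bic, u_size, diffusion : pre-built qiskit Gate objects (assumed available);
    num_aux                  : number of auxiliary qubits (|e>, |e'>, |bic>, |cl>, |cr>, |ce>);
    ce_index                 : position of |ce_k> inside the auxiliary register;
    M                        : estimated number of size-k bicliques (quantum counting).
    """
    vert = QuantumRegister(n, "vert")
    aux = QuantumRegister(num_aux, "aux")
    oracle = QuantumRegister(1, "O")
    qc = QuantumCircuit(vert, aux, oracle)

    qc.h(vert)                       # equal superposition over the 2^n subsets of L and R
    qc.x(oracle); qc.h(oracle)       # |O> = (|0> - |1>)/sqrt(2)

    iterations = math.floor(math.pi / 4 * math.sqrt(2 ** n / M))
    for _ in range(iterations):
        qc.append(u_bic, list(vert) + list(aux))            # mark bicliques in |bic>
        qc.append(u_size, list(vert) + list(aux))           # mark size-k bicliques in |ce_k>
        qc.cx(aux[ce_index], oracle[0])                     # phase kickback flips the sign
        qc.append(u_size.inverse(), list(vert) + list(aux)) # uncompute U_size
        qc.append(u_bic.inverse(), list(vert) + list(aux))  # uncompute U_bic
        qc.append(diffusion, list(vert))                    # inversion about the average
    qc.measure_all()                 # the vertex qubits carry the candidate biclique
    return qc
```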
**Resource requirement.** Landauer's Principle [24] asserts that there is a minimum possible amount of energy required to erase one bit of information, known as the Landauer limit. This limit is \(k_{B}T\ln 2\), where \(k_{B}\) is Boltzmann's constant and \(T\) is the temperature of the system. A conventional computer continuously erases information during the computing process because its computation relies on irreversible logic gates; e.g., an AND gate maps two bits of information into one bit, and it is not possible to infer the two-bit input from the one-bit output. Due to the unitarity property \(U^{-1}=U^{\dagger}\), all the quantum gates in qMBS are reversible. This indicates that most of the computation units (except for the final measurements) require far less energy than conventional algorithms according to Landauer's Principle. Even though in practice dissipation is required for system stability and immunity from noise, our algorithms are still promising, in terms of energy consumption, for computation on large-scale graphs in the near future.
### _Complexity Analysis_
The space complexity is quantified by the qubit number, and the time complexity is quantified by the gate number. According to the analysis in Section III-A&III-B, the qKBS and qMBS have the same space complexity \(O(n^{2})\). The number of CNOT gates in an oracle is \(O(n^{2})\) because \(U\) and \(U^{\dagger}\) contain the same number of gates. The number of CNOT gates in the diffusion operator is \(O(n)\)[25]. The number of iterations of the oracle and the diffusion is \(O(\sqrt{2^{n}})\). The number of \(H\) gates for preparing the equal superposition is \(O(n)\). Therefore, the total time complexity of qKBS is \(O(n+(n^{2}+n)\sqrt{2^{n}})=O(n^{2}\sqrt{2^{n}})\), which is \(O^{*}(\sqrt{2^{n}})\). The qMBS involves at most \(O(\log m)=O(\log n)\) iterations of qKBS, so the total time complexity is \(O(n^{2}\log n\sqrt{2^{n}})\), which is also \(O^{*}(\sqrt{2^{n}})\).
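For a rough sense of scale, the following back-of-the-envelope helper (an illustrative assumption, not taken from the paper) combines the counts above: \(O(n^{2})\) gates per oracle call, \(O(n)\) for the diffusion operator, and \(\lfloor\frac{\pi}{4}\sqrt{2^{n}/M}\rfloor\) iterations.

```python
import math

def qkbs_cost_estimate(n, M=1, oracle_gates=None, diffusion_gates=None):
    """Rough gate-count estimate for one qKBS run (illustrative constants)."""
    oracle_gates = oracle_gates if oracle_gates is not None else n ** 2
    diffusion_gates = diffusion_gates if diffusion_gates is not None else n
    iterations = math.floor(math.pi / 4 * math.sqrt(2 ** n / M))
    # n Hadamards for state preparation, then oracle (U and its inverse) plus diffusion per iteration
    return n + iterations * (2 * oracle_gates + diffusion_gates)

print(qkbs_cost_estimate(4, M=3))   # the 2x2 example graph with three size-1 bicliques
```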
### _Maximum Vertex Biclique Problem and Maximum Balanced Biclique Problem: qMBSv and qMBSb_
**Maximum Vertex Biclique Problem.** The maximum vertex biclique problem searches for a biclique \(C\) with the maximum number of vertices \(|L(C)\cup R(C)|\). To propose a quantum algorithm for this problem, we only need to replace the multiplication mapping of the edge counting of qMBS with an addition mapping that counts vertices:
\[\left(\left|cl_{|L|\ i}\right\rangle,\left|cr_{|R|\ j}\right\rangle\right)\mapsto\left|cv_{i+j}\right\rangle \tag{17}\]
Figure 9 shows the quantum circuit. The only difference between Figure 7 and Figure 9 is the third dashed box, where we use the addition mapping to replace the multiplication mapping \(\left(\left|cl_{|L|\ i}\right\rangle,\left|cr_{|R|\ j}\right\rangle\right)\mapsto\left|ce_{i\cdot j}\right\rangle\). We name the algorithm that finds a maximum vertex biclique qMBSv.
**Maximum Balanced Biclique Problem.** The maximum balanced biclique problem searches for a maximum vertex biclique \(C\) with \(|L(C)|=|R(C)|\). To propose a quantum variant of qMBSv for this problem, we only need to restrict the addition mapping with the condition \(i=j\). Figure 10 shows the quantum circuit. The only difference between Figure 9 and Figure 10 is the third dashed box, where we restrict the addition to be performed only for two equal numbers, e.g., \(i=j=1\) or \(i=j=2\). We name the algorithm that finds a maximum balanced biclique qMBSb. It can be verified that the time and space complexities of qMBSv and qMBSb are the same as those of qMBS.
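For illustration only, the three variants differ merely in which counter qubit each pair of vertex-count outcomes targets; the helper below (an assumption, not part of the paper) enumerates these mappings, each of which corresponds to one C\({}^{2}\)NOT gate.

```python
def mapping_targets(L_size, R_size, variant):
    """Enumerate the (i, j) -> counter-index mappings of the three variants.

    variant = "edges"    : qMBS,  (cl_i, cr_j) -> ce_{i*j}  (maximum edge biclique, Eq. 13)
    variant = "vertices" : qMBSv, (cl_i, cr_j) -> cv_{i+j}  (maximum vertex biclique, Eq. 17)
    variant = "balanced" : qMBSb, same as qMBSv but restricted to i == j
    """
    for i in range(1, L_size + 1):
        for j in range(1, R_size + 1):
            if variant == "edges":
                yield (i, j), i * j
            elif variant == "vertices":
                yield (i, j), i + j
            elif variant == "balanced" and i == j:
                yield (i, j), i + j

# Each yielded pair corresponds to one C^2NOT with controls cl_{|L| i}, cr_{|R| j}
# and target the counter qubit for the given size.
print(dict(mapping_targets(2, 2, "balanced")))   # {(1, 1): 2, (2, 2): 4}
```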
1. We test our algorithms on the example graph introduced earlier, the size of which is comparable to existing works on quantum circuits for clique problems (details in Table II). To provide a comprehensive discussion of the algorithm behavior when searching for bicliques of all possible sizes, instead of a binary search we implement qMBS by calling qKBS sequentially from \(k=1\) to \(k=m\).
2. We compare qMBS with the state-of-the-art [12] across 10 synthetic datasets, where the number of vertices in the datasets ranges from 6 to 10, and the number of edges ranges from 3 to 23. The graph size is significantly larger than that of existing quantum graph database works [28, 29] (details in Table II). For fairness, we utilize the complete binary search version of the qMBS algorithm.
All the experiments are conducted in Python 3.8 with Qiskit and tested on IBM simulators (details in Table III).
### _Error probability convergence_
Given the inherent indeterminacy of quantum computing, there exists a probability of error whereby, upon measurement, the final state incorrectly collapses into a non-solution state. This inherent indeterminacy is a fundamental characteristic of quantum computing and cannot be theoretically eradicated. The error probability is at most \(\pi^{2}/(4T)^{2}\), where \(T\) denotes the number of iterations [25]. For a relatively low value of \(T\), we can execute the qKBS algorithm multiple times, e.g., \(c\) times. This approach reduces the error probability to \(\pi^{2}/(4T)^{2c}\). As such, the error rate is anticipated to rapidly diminish to a level that is significantly lower than the thermal noise inherent in physical devices. This allows our algorithms to be safely employed in practical settings to procure precise solutions. To evaluate the practical error rate, we execute our algorithms with 20K shots and measure the final states to report the frequency distribution across 16 possible base states (ranging from \(|0000\rangle\) to \(|1111\rangle\)). All the algorithms undergo testing on three simulators. However, due to the high similarity in distributions, we only present the results obtained from the QASM simulator.
Figure 11(a) presents the results of the qMBS algorithm. After state preparation, the distribution of base states generally appears uniform, hence it is referred to as an equal superposition. To locate a biclique of size 1, we proceed with a single iteration of Steps 2 and 3 in the qKBS algorithm. The results yield three significant peaks at \(|0101\rangle\), \(|0110\rangle\), and \(|1010\rangle\). These peaks correspond to the three bicliques of size 1, namely \(\{v_{2},u_{2}\}\), \(\{v_{2},u_{1}\}\), and \(\{v_{1},u_{1}\}\). The probability of error, defined by the final state not collapsing into one
\begin{table}
\begin{tabular}{l l r r}
\hline
Problem & Time complexity \& Work & \(n\) & \(m\) \\
\hline
Maximum clique & \(O^{*}(2^{\frac{n}{2}})\) [28] & 2 & 4 \\
\(k\)-clique & \(O^{*}(2^{\frac{n}{2}})\) [29] & 4 & 4 \\
Maximum biclique & \(O^{*}(2^{\frac{n}{2}})\) [qMBS] & 10 & 22 \\
\hline
\end{tabular}
\end{table}
TABLE II: Comparison of dataset sizes
Fig. 10: Vertex addition for maximum balanced biclique search
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Simulator & Qubits & Type \\
\hline
QASM & 32 & General, context-aware \\
Statevector & 32 & Schrödinger wavefunction \\
MPS & 100 & Matrix product state \\
\hline
\end{tabular}
\end{table}
TABLE III: Simulators
Fig. 9: Vertex addition for maximum vertex biclique search
of these three peaks, is calculated to be 4.87%. This value is significantly lower than the theoretically guaranteed error rate of \(\pi^{2}/(4T)^{2}\). To discover a biclique of size 2, two iterations are required. Interestingly, after the first iteration (denoted as \(itr\)\(1\) in Figure 11(a)), two prominent peaks can be observed at \(|0111\rangle\) and \(|1110\rangle\), corresponding to the bicliques \(\{v_{2},u_{1},u_{2}\}\) and \(\{v_{1},v_{2},u_{1}\}\), respectively. If we measure the state at this point, the associated error rate stands at 21.59%. Upon the completion of the second iteration, the peaks become more pronounced and the error rate is significantly reduced to 5.53%. When endeavoring to find a biclique of size 3, the qKBS algorithm encounters difficulties as the oracle fails to mark a solution state. Consequently, the diffusion operator lacks a specific target to amplify, resulting in the states following a uniform distribution after two iterations. Ultimately, the qMBS algorithm identifies a biclique of size 2 as the optimal solution. In the case of the qMBSv algorithm, the results presented in Figure 11(b) closely resemble those of Figure 11(a). This is because a biclique with one edge inherently corresponds to a biclique with two vertices. As for the qMBSb algorithm depicted in Figure 11(c), given that the number of edges in a balanced biclique can only be a square number, there is no need to search for bicliques of sizes 2 and 3. The error probabilities are generally around 5% for these small instances, which provides a practical effectiveness guarantee (decreasing proportionally to \(1/T^{2}\)) for larger datasets in future applications. This indicates the robustness of our quantum algorithms, promising significant potential for tackling larger and more complex problems.
### _Efficiency_
We evaluate the performance of all algorithms on three simulators, where the reported running time is calculated as an average over 20,000 executions. The results are presented in Figure 11(d). Focusing initially on the results from the QASM simulator, we observe the following. For the qMBS algorithm, the state preparation phase is completed in a swift 2.8 nanoseconds (ns), a duration that is negligible in comparison to the time required for the subsequent iterations. The first iteration, which is aimed at identifying a biclique of size 1, requires 315ns. Subsequently, the first and second iterations
Fig. 11: State distribution and running time
to search for a biclique of size 2 consume 405ns and 230ns respectively. The cumulative duration of the two iterations targeted at identifying a biclique of size 3 amounts to 600ns. The entire qMBS run therefore requires approximately 1500ns (1200ns), where the value in parentheses shows the time if binary search is applied. The performance of the qMBSv algorithm closely mirrors that of the qMBS algorithm due to the similar structure and operations they share. The qMBSb algorithm, on the other hand, requires noticeably fewer iterations, even though each iteration consumes a duration similar to those in the qMBS and qMBSv algorithms. This is because it only needs to consider bicliques whose sizes are square numbers, thereby reducing the computational cost. Consequently, the running time of the qMBSb algorithm is approximately 900ns (900ns).
The results obtained from the Statevector simulator closely parallel those of QASM. However, a remarkable speed-up is evident when using the MPS (Matrix Product State) simulator, which owes its efficiency to the effective representation of matrix product states. For the qMBS algorithm, the MPS simulator spends a mere 3ns preparing the initial equal superposition. The identification of a 1-size biclique takes 23.8ns. The two iterations that target a 2-size biclique are completed in 21.5ns and 4ns respectively, whereas the two iterations aimed at discovering a 3-size biclique consume only 5.1ns and 0.8ns. In total, the entire running time is about 55ns (or 30ns if binary search is applied) for qMBS. The qMBSv and qMBSb algorithms register similar timings at 55ns (or 30ns with binary search) and 30ns respectively. Here, the values in parentheses denote the results obtained using the binary search strategy. The significant speed-up observed on the MPS simulator suggests that our algorithms generate states with low levels of entanglement, which the matrix product state representation can simulate efficiently.
### _Comparison with state-of-the-art_
Due to the limitations of existing hardware, even though our algorithm outperforms the state-of-the-art in terms of complexity and resource consumption, large-scale QPUs are not yet prepared to test the algorithm on large datasets. Nevertheless, we still aspire to compare our algorithm with the state-of-the-art on small datasets. Given that the largest quantum simulator available to us currently supports up to 100 qubits, our algorithm can be tested on bipartite graphs of about 10 vertices with the MPS simulator. To make the test results more generally meaningful, we have examined a total of 10 synthetic datasets with vertex counts ranging from 6 to 10. We denote a dataset as \(D_{i,j}\), where \(i\) represents the vertex number of the dataset and \(j\) represents the edge number of the dataset. For each identical size \(i\), we selected two different \(j\) values, one small and one large, to ensure that the experiment covers both small and large biclique situations. The datasets and experimental results are shown in Table IV. The reported running time is calculated as an average over 20,000 executions.
We observe that across all datasets, qMBS is approximately an order of magnitude faster than MBC\({}^{*}\). The efficiency of qMBS is affected by both the number of vertices and the number of edges in the dataset. As the dataset size grows, the increase in running time is slower compared to MBC\({}^{*}\), which is a result of the efficiency boost brought about by the quadratic speed-up of qMBS in terms of time complexity. With the increase in the number of iterations, the error probability decreases exponentially. For a graph with 10 vertices, the error probability is already less than \(10^{-4}\). Therefore, when actually applied to large-scale datasets, this error probability is generally lower than the thermodynamic noise of the device and can be neglected.
### _Summary_
In summary, the experimental results underscore the proficiency of our proposed algorithms. They quickly evolve the initial state into the solution state, and maintain the error probability at a negligible level even with small iteration numbers. Compared to the state-of-the-art method [12], qMBS demonstrates an order-of-magnitude improvement in efficiency on small datasets, and the growth rate of its running time is slower than that of the state-of-the-art methods as the size of the graph increases. This underlines the practical potential of our proposed algorithms in quantum computation, promising rapid and accurate solution-finding in the future.
## V Related Works
Related works can be categorized into two types: those on biclique-related problems, and those on quantum database or graph database algorithms.
**Biclique problems.** There are mainly four types of biclique problems. **Maximal biclique enumeration** finds all the maximal bicliques within a bipartite graph. A time delay algorithm was proposed by the work [30], aiming to strike a balance between computational efficiency and resource usage. The work [31] combined backtracking with a branch-and-bound framework to filter out unpromising search branches. Parallel algorithms with shared memory were designed by [32]. Pivot-based algorithms with index and batch-pivots-based algorithm
\begin{table}
\begin{tabular}{l l l l l l l l l l l}
\hline \hline
Dataset & \(D_{6,3}\) & \(D_{6,6}\) & \(D_{7,6}\) & \(D_{7,11}\) & \(D_{8,5}\) & \(D_{8,14}\) & \(D_{9,4}\) & \(D_{9,18}\) & \(D_{10,7}\) & \(D_{10,23}\) \\
\hline
Maximum biclique size & 2 & 4 & 4 & 9 & 3 & 12 & 3 & 16 & 4 & 20 \\
Running time of MBC\({}^{*}\) (ns) & 573.3 & 563.4 & 583.5 & 581.5 & 627.3 & 639.6 & 803.2 & 807.9 & 925.8 & 931.6 \\
Running time of qMBS (ns) & 43.3 & 44.2 & 49.5 & 54.3 & 62.7 & 64.4 & 68.5 & 68.2 & 77.9 & 79.5 \\
Error probability & \(<10^{-2}\) & \(<10^{-2}\) & \(<10^{-3}\) & \(<10^{-3}\) & \(<10^{-3}\) & \(<10^{-3}\) & \(<10^{-4}\) & \(<10^{-4}\) & \(<10^{-4}\) & \(<10^{-4}\) \\
\hline \hline
\end{tabular}
\end{table}
TABLE IV: Comparison with state-of-the-art
for sparse bipartite graphs were proposed by [33] and [34] respectively. **Maximum vertex biclique search** studies the problem of finding a biclique with the maximum number of vertices, which is polynomially solvable [35]. This problem was solved by formulating it as an instance of integer linear programming (ILP) [36], or by reducing it to finding a maximum flow in a constructed flow network [37]. **Maximum edge biclique search** was proved to be NP-hard [9]. An ILP solver was proposed by the work [38]. The work [12] proposed a progressive-bounding framework for large graphs. A probabilistic algorithm using a Monte Carlo subspace clustering approach was designed by [39]. The work [40] studied the parameterized maximum biclique problem that determines if there exists a biclique with at least a given number of edges. Besides, the work [41] solved this problem by ILP on a general graph. The problem of **maximum balanced biclique search** looks for a maximum edge/vertex biclique \(C\) with \(|L(C)|=|R(C)|\). The work [42] proposed a branch-and-bound approach with a symmetry-breaking technique, based on which the work [43] designed an upper-bound estimation method for further branch pruning. Algorithms for dense and sparse bipartite graphs were proposed by [44]. Besides, heuristic approaches were also studied by [45, 46, 47, 48, 49, 50]. All of the aforementioned problems (except for the maximal biclique enumeration) can be solved in a quantum manner by qMBS and its variants. There are still some variants of the maximum biclique problem, e.g., personalized maximum biclique search [51], maximal balanced signed biclique enumeration [52], and vertex coverage for top-\(k\) bicliques [53]. Given the generality of the bipartite graph encoding proposed in our work, these biclique tasks will also benefit from this encoding method, thereby helping researchers propose corresponding quantum algorithms in the future.
**Quantum database algorithms.** There has been a recent surge of work concerning quantum database algorithms. The work [54] explored the transformative impact that quantum algorithms may have on the field of databases in both the immediate and near future. The problem of multiple query optimization was studied on an adiabatic quantum annealer by the work [55]. The work [56] proposed circuit-based quantum algorithms for join order optimization, based on which a variational quantum circuit [57] and a quantum annealer algorithm [58] were proposed for this problem. Quantum computing has also invigorated research in graph databases. Quantum walks have been employed for graph traversal [59], and quantum PageRank algorithms show potential advantages over classical methods [60]. Hardware like D-Wave's quantum annealing machines are tackling graph problems [61], and quantum machine learning algorithms aim to leverage potential quantum benefits for graph data [62, 63]. Quantum algorithms have also been studied for clique problems, where a clique is a complete subgraph of a general graph. For the **maximum clique problem**, the work [64] studied an oracle-based Grover's search, after which a concrete quantum circuit was designed by [28]. Different computation models were also studied for the problem, e.g., quantum adiabatic evolution [65] and quantum annealing [66]. However, these models are typically problem-specific and not as flexible as the quantum circuit when generalized to other problems. For the \(k\)**-clique problem**, the work [67] utilized quantum subset finding algorithms to find a size-\(k\) clique with a small \(k\). Grover-search-based algorithms were also studied by [29]. These works cannot be applied to the biclique problems due to the bipartition restriction. To the best of our knowledge, our work is the first to study a quantum approach for the biclique problems. For all the quantum algorithms mentioned above, which are based on the quantum-circuit computational model for the clique problem, it is worth noting that they are currently restricted by hardware limitations and not yet applicable to large-scale datasets. Nevertheless, the theoretical acceleration in algorithmic complexity, coupled with the rapid advancement of quantum hardware in recent years, fosters hope that these quantum algorithms will outperform classical ones in real-world applications on large datasets in the near future.
## VI Conclusion and Future Works
In this work, we explored the potential of utilizing QPUs to expedite graph database algorithms and proposed a class of biclique algorithms based on quantum circuits. Specifically, we delved into the Maximum Biclique Problem (MBP) from a quantum perspective. A novel reversible quantum circuit was conceived for the purpose of determining whether a given subgraph constitutes a biclique of a certain size. Utilizing this, we introduced a quantum algorithm, qMBS, designed to address the MBP with a time complexity of \(O^{*}(2^{\frac{n}{2}})\). Remarkably, this presents a quadratic acceleration over the state-of-the-art in terms of time complexity. Furthermore, we elaborated on two extensions of qMBS that solve the Maximum Vertex Biclique problem and the Maximum Balanced Biclique problem, broadening its applicability. To assess the practical performance of our proposed solutions, we conducted proof-of-principle experiments using state-of-the-art quantum simulators. These experimental results provide a substantial validation of our approach, to the extent possible to date. The incorporation of reversible computing in our algorithms enhances their potential to handle real-world datasets in an energy-efficient manner, which adds significant value considering the increasing importance of sustainability in computing. As quantum hardware continues to evolve, we anticipate that our proposed algorithms will contribute to quantum computing's capability to tackle challenging problems efficiently in the near future.
Future work will pivot towards another vital class of problems in graph databases: enumeration problems, such as maximal biclique/clique enumeration. In the context of NP-hard problems, a search space with an exponential number of branches can be perfectly accommodated in a superposition state within the \(2^{n}\)-dimensional space spanned by \(n\) qubits. As a result, harnessing quantum algorithms to expedite enumeration problems in graph databases will constitute a significant direction for upcoming endeavors. |
2309.05609 | Connections between resonant inelastic x-ray scattering and
complementary x-ray spectroscopies: probing excitons at Al K and L$_1$ edges
of $α$-Al$_2$O$_3$ | We present an ab initio study of neutral core and valence electronic
excitations in {\alpha}-Al2O3 by solving the Bethe-Salpeter equation (BSE) of
many-body perturbation theory within an all-electron framework. Calculated
spectra at the Al K and L1 edges are in remarkable agreement with available
experiments from X-ray absorption (XAS) and X-ray Raman spectroscopy once
excitonic effects are taken into account. The combination of the BSE spectra
for the two techniques confirms the dipole-forbidden nature of the exciton
prepeak as suggested by recent calculations based on density-functional theory.
Moreover, we make predictions for resonant inelastic X-ray scattering (RIXS)
spectra at K and L1 edges, which strikingly fully overlap also beyond an
independent-particle picture. The RIXS calculations reveal two distinct regimes
as a function of incoming photon energy. Below and at the XAS threshold, we
observe Raman-like features, characterised by strong excitonic effects, which
we directly compare to peaks in the loss function. Above the XAS threshold,
instead, fluorescence features become predominant: RIXS spectra can be well
described and analyzed within an independent-particle approximation showing
similarity with the X-ray emission spectrum. | M. Laura Urquiza, Matteo Gatti, Francesco Sottile | 2023-09-11T16:48:29Z | http://arxiv.org/abs/2309.05609v2 | Connection between inelastic x-ray scattering and complementary x-ray spectroscopies: probing excitons at Al K and L\({}_{1}\) edges of \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)
###### Abstract
We present an ab initio study of core excitations at the aluminum K and L\({}_{1}\) edges in \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)within an all-electron many-body perturbation theory (MBPT) framework. Calculated XAS reveals excellent agreement with experiments, highlighting the dipole-forbidden nature of the pre-peak, which in experiments is enabled by \(sp\) mixing due to atomic vibrations. Non-resonant inelastic X-ray scattering (NRIXS) is employed to go beyond the dipole approximation and probe transition channels with s, p, and d character, enhancing multipole transitions that contribute to the pre-peak. The RIXS spectra at K and L\({}_{1}\) edges are remarkably similar, opening the way to soft X-ray RIXS experiments to probe semi-core \(s\) states. The RIXS calculations reveal two distinct regimes based on the behavior with incoming photon energy (\(\omega_{1}\)). For \(\omega_{1}\) in resonance with the XAS threshold, we observe Raman-like behavior, where the RIXS spectra show significant dependence on \(\omega_{1}\), reflecting the coupling between absorption and emission processes. For higher \(\omega_{1}\), above the XAS threshold, the study reveals fluorescence features that appear at constant emission energy, and can be explained via X-ray emission spectroscopy (XES).
## I Introduction
In the last twenty years there has been huge progress in both the experiments [1] and the theory [2; 3; 4] of various X-ray spectroscopies, which has led to a surge of interest in their application across chemistry, physics, biology, and materials science [5]. In general, core-level spectroscopies have become an essential tool for the study of a vast number of systems, as they provide element- and orbital-specific information on the local chemical environment and electronic structure of materials. Moreover, another crucial characteristic of these methods, especially when using hard X-rays, is their ability to provide bulk sensitivity.
X-ray absorption (XAS), also referred to as X-ray absorption near-edge spectroscopy (XANES), probes electronic transitions from core to unoccupied states, while X-ray emission (XES) describes the decay of valence electrons to a core hole, providing information on the occupied states. Non-resonant inelastic X-ray scattering (NRIXS), also called X-ray Raman scattering (XRS), provides momentum-dependent information on the structure factor. On the other hand, resonant inelastic X-ray scattering (RIXS) is a complementary core spectroscopy technique [6; 7] that probes neutral excitations through a scattering process. RIXS involves the absorption of an x-ray photon, which is tuned to resonate with a specific core level, and the subsequent relaxation of a valence electron to fill the core hole, accompanied by the emission of a photon.
The simplest theoretical approach to compute and analyze the XAS (XES) spectrum is through the unoccupied (occupied) projected densities of states (PDOS) of the absorbing atom, using the angular momentum component that fulfills the dipole selection rules. Although this method has been widely used to interpret experimental spectra, it is a very rough approximation, since it does not consider the perturbation caused by the presence of the electron-hole pair in the case of XAS and of the hole in the case of XES. In insulators, the core hole dramatically affects the absorption spectrum through the formation of a core exciton, which manifests itself at the onset of the spectrum and is often the main feature. Electron-hole correlation also plays a crucial role in RIXS and NRIXS, as both techniques involve neutral excitations.
With the purpose of improving the description of neutral excitations, a variety of methods have been developed at different levels of approximation ranging from real space multiple-scattering within Green's function formalism and the muffin-tin approximation [8; 9; 10] and cluster models [11; 12; 13] to many-body perturbation theory (MBPT) [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Within the context of MBPT, the Bethe Salpeter equation (BSE) [24; 25] represents the state of the art for calculating neutral excitations, not only in the optical region, but also in the X-ray regime.
The aim of this work is to conduct an in-depth analysis of RIXS spectra, while accounting for coherence and excitonic effects throughout the entire process, and to establish a connection with alternative spectroscopy techniques that also assess neutral excitations in materials, specifically XAS, XES and NRIXS. Our study is focused on \(\alpha\)-Al\({}_{2}\)O\({}_{3}\), chosen due to its status as a prototypical wide-band gap insulator with a broad range of applications, including catalysis, ceramics, and electronics. We approach the problem from a theoretical perspective, using the BSE within an all-electron framework, which in addition to being recognized as a cutting-edge approach for studying neutral excitations, has consistently demonstrated its accuracy in describing similar experiments conducted on other wide-band gap materials.
We calculated XAS spectra at the K edge of Al in \(\alpha\)-Al\({}_{2}\)O\({}_{3}\), reproducing quite well the experiments available in the literature. The calculations also predict the existence of dark excitations at energy levels corresponding to a pre-peak observed at the onset of the experimental spectrum. In the calculations, these dark excitons emerge from dipole-forbidden transitions, which are enabled in XAS experiments by vibrational effects [26; 27]. NRIXS calculations give access to this pre-peak by including monopole and multipole terms, and we find excellent agreement with experiments. Finally, we demonstrate that the K and L\({}_{1}\) edges yield equivalent information in both XAS and RIXS spectra. To our knowledge, no comparisons of this nature have previously been made in the field of X-ray spectroscopy, marking a significant breakthrough. Consequently, based on our estimation of the background intensity, it becomes feasible to investigate transitions from the \(s\) states to the conduction band using soft X-ray experiments, opening up new avenues for soft x-ray XAS and RIXS.
This work is organized as follows. In Sec. II, we briefly present the basic concepts of the approach for computing the X-ray absorption spectrum (Sec. II.1) and inelastic scattering (Sec. II.2), together with a summary of the computational details (Sec. II.3). In Sec. III we discuss the calculated XAS and NRIXS spectrum at K edge (Sec. III.1), and RIXS for K and L\({}_{1}\) edges (Sec. III.2). Finally, in Sec. IV, we present the main conclusions and an outlook.
## II Theoretical framework and computational details
### X-ray absorption
The XAS spectrum is given by the imaginary part of the longitudinal macroscopic dielectric function[28] \(\epsilon_{M}(\mathbf{q},\omega)=1/\epsilon_{G=0,G^{\prime}=0}^{-1}(\mathbf{q}=0,\omega)\). It can be calculated according to Fermi's golden rule, which within the independent particle approximation (IPA) is given by:
\[\text{Im}\epsilon_{M}(\omega)=\frac{8\pi^{2}}{\Omega\omega}\text{Im}\sum_{vc \mathbf{k}}\big{|}\langle\varphi_{c\mathbf{k}}|\mathbf{e}\cdot\mathbf{p}| \varphi_{v\mathbf{k}}\rangle\big{|}^{2}\delta(\omega-(E_{c\mathbf{k}}-E_{v \mathbf{k}})) \tag{1}\]
where \(|\varphi_{v\mathbf{k}}\rangle\) and \(\langle\varphi_{c\mathbf{k}}|\) are the one particle Kohn-Sham wave functions[29] of the valence and conduction states, respectively, with energies \(E_{v\mathbf{k}}\) and \(E_{c\mathbf{k}}\); \(\mathbf{e}\) is the polarization vector of the incident photon and \(\mathbf{p}\) is the momentum operator.
In the long wavelength limit \(q\to 0\), eq. (1) can be written as:
\[\text{Im}\epsilon_{M}(\omega)=\lim_{\mathbf{q}\to 0}\frac{8\pi^{2}}{\Omega q^{2}} \bigg{|}\sum_{vc\mathbf{k}}\tilde{\rho}_{vc\mathbf{k}}(\mathbf{q})\bigg{|}^{2 }\delta(\omega-E_{vc\mathbf{k}}), \tag{2}\]
with the interband transition energies given by \(E_{vc\mathbf{k}}=E_{c\mathbf{k}}-E_{v\mathbf{k}}\) and the oscillator strengths \(\tilde{\rho}_{vc\mathbf{k}}(\mathbf{q})\) defined as:
\[\tilde{\rho}_{vc\mathbf{k}}(\mathbf{q})=\langle\varphi_{v\mathbf{k}-\mathbf{q }}|e^{-i\mathbf{q}\cdot\mathbf{r}}|\varphi_{c\mathbf{k}}\rangle \tag{3}\]
Equation (2) does not capture the full picture because it ignores the electron-hole correlation which plays a crucial role in XAS[30]. The Bethe-Salpeter equation[24] (BSE), for the two-particle correlation function \(L\), is nowadays the state-of-the-art approach for simulating and predicting optical[31; 32; 33] and core spectra[34; 35; 14; 36; 23] in solids. The BSE, within the GW approximation[37] for the self-energy, reads:
\[L(1234)= L_{0}(1234)+\int L_{0}(1256)[v(57)\delta(56)\delta(78) \tag{4}\] \[+W(56)\delta(57)\delta(68)]L(7834)d5d6d7d8\]
Here the arguments 1 to 8 represent the position, time, and spin coordinates \((\mathbf{r},t,\sigma)\), \(L_{0}\) is the product of two one-particle Green's functions, \(v\) is the bare Coulomb interaction, and \(W\) is the statically screened Coulomb interaction, which we calculate in the random-phase approximation (RPA). This Dyson-like equation can be rewritten as an eigenvalue problem in the basis of independent transitions (\(vc\mathbf{k}\)), by defining the two-particle excitonic Hamiltonian[38] (\(H_{\text{exc}}\)), within the Tamm-Dancoff approximation[39]:
\[H_{\text{exc}}A_{\lambda}^{vc\mathbf{k}}(\mathbf{q})=A_{\lambda}^{vc\mathbf{ k}}(\mathbf{q})E_{\lambda}(\mathbf{q})\quad, \tag{5}\]
whose matrix elements are calculated as:
\[\langle v\mathbf{ck}|H_{\text{exc}}|v^{\prime}c^{\prime}\mathbf{k}^{\prime} \rangle=E_{vc\mathbf{k}}\delta_{vv^{\prime}}\delta_{cc^{\prime}}\delta_{kk^{ \prime}}+\langle v\mathbf{ck}|\bar{v}_{c}-W|v^{\prime}c^{\prime}\mathbf{k}^{ \prime}\rangle. \tag{6}\]
Here \(\bar{v}\) is the Coulomb interaction without its macroscopic component (\(\bar{v}_{c}=4\pi/|\mathbf{q}+\mathbf{G}|^{2}\) for \(\mathbf{G}\neq 0\) and \(\bar{v}_{c}=0\) for the \(\mathbf{G}=0\) component). The first term of the Hamiltonian (6), which is purely diagonal, recovers the IPA expression (2), the term \(\bar{v}\) includes crystal local-field effects[40; 41], and the term \(W\) describes the electron-hole correlation. Therefore, one can study each contribution by turning the different terms on and off. Throughout this article, we will refer to the spectra that consider the first term, the first two terms, or the complete equation as IPA, RPA, and BSE spectra, respectively.
Finally, by solving eq (5), the XAS spectrum (1) can be calculated in terms of eigenvectors \(A_{\lambda}\) and eigenvalues \(E_{\lambda}\) as:
\[\text{Im}\epsilon_{M}(\omega)=\lim_{\mathbf{q}\to 0}\frac{8\pi^{2}}{ \Omega q^{2}}\sum_{\lambda}\bigg{|}\sum_{vc\mathbf{k}}A_{\lambda}^{vc\mathbf{ k}}\tilde{\rho}_{vc\mathbf{k}}(\mathbf{q})\bigg{|}^{2}\delta(\omega-E_{ \lambda}), \tag{7}\]
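As a numerical illustration of how Eq. (7) is evaluated once the BSE eigenpairs and oscillator strengths are available, the sketch below assembles the spectrum on an energy grid. The array shapes, the Lorentzian representation of the delta function, and the broadening value are assumptions made for illustration; the IPA expression (2) is recovered by taking \(A\) as the identity and \(E_{\lambda}\) as the independent-transition energies.

```python
import numpy as np

def xas_spectrum(omega, E_lambda, A, rho_tilde, q2, volume, eta=0.7):
    """Evaluate Eq. (7) on an energy grid, with a Lorentzian of width eta (eV).

    E_lambda  : (N_exc,)      BSE eigenvalues
    A         : (N_exc, N_t)  eigenvectors A_lambda^{vck} over transitions t = (v, c, k)
    rho_tilde : (N_t,)        oscillator strengths <v k - q| e^{-i q r} |c k>
    q2        : squared modulus of the (small) momentum transfer
    """
    omega = np.asarray(omega, dtype=float)
    weights = np.abs(A @ rho_tilde) ** 2                 # |sum_t A_lambda^t rho_t|^2
    prefac = 8.0 * np.pi ** 2 / (volume * q2)
    # Lorentzian broadening standing in for the delta function (core-hole lifetime)
    lorentz = (eta / np.pi) / ((omega[:, None] - E_lambda[None, :]) ** 2 + eta ** 2)
    return prefac * (lorentz @ weights)
```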
### Inelastic x-ray scattering
The transition rate for an inelastically scattered photon, obtained from a perturbative treatment of the electron-photon interaction up to second order, can be calculated according to Fermi's golden rule as:
\[\frac{d^{2}\sigma}{d\Omega_{2}d\omega_{2}}\propto \sum_{F}\left|\langle F|\hat{T}|I\rangle+\sum_{N}\frac{\langle F| \hat{T}|N\rangle\langle N|\hat{T}|I\rangle}{\omega_{1}-(E_{N}-E_{I})+i\eta} \right|^{2}\] \[\times\delta(\omega_{1}-\omega_{2}-(E_{F}-E_{I})) \tag{8}\]
where \(|I\rangle\), \(|N\rangle\) and \(|F\rangle\) are the many-body electronic initial, intermediate and final states, with energies \(E_{I}\), \(E_{N}\) and \(E_{F}\), \(\hat{T}\) is the transition operator, and \(i\eta\) gives a Lorentzian broadening that represents the core-hole lifetime. The first-order term is associated with NRIXS, while the second order yields RIXS. In general, the first-order amplitude dominates over the second-order one, resulting in a purely NRIXS process. However, when \(\omega_{1}\) is in resonance with a specific excitation energy of the material, the second-order term becomes predominant, probing the RIXS cross section.
One can approximate the transition operator \(\hat{T}\) by the electron-photon interaction terms [42] \(\mathbf{A}^{2}\) and \(\mathbf{A}\cdot\mathbf{p}\) (where \(\mathbf{A}\) is the vector potential) in the first- and second-order terms, respectively. Hence, the resulting NRIXS and RIXS intensities are described by the generalized Kramers-Heisenberg formula [43; 6; 7; 44]:
\[\frac{d^{2}\sigma}{d\Omega_{2}d\omega_{2}}= r_{0}^{2}\bigg{(}\frac{\omega_{2}}{\omega_{1}}\bigg{)}\sum_{F} \left|\langle F|\mathbf{e}_{1}\cdot\mathbf{e}_{2}^{*}\sum_{j}e^{-i\mathbf{Q}\cdot\mathbf{r}_{j}}|I\rangle+\right.\] \[\left.\sum_{N}\frac{\langle F|e^{-i\mathbf{K}_{2}\cdot\mathbf{r}_{j}}\mathbf{e}_{2}^{*}\cdot\mathbf{p}|N\rangle\langle N|e^{i\mathbf{K}_{1}\cdot\mathbf{r}_{j}}\mathbf{e}_{1}\cdot\mathbf{p}|I\rangle}{\omega_{1}-(E_{N}-E_{I})+i\eta}\right|^{2}\] \[\delta(\omega_{1}-\omega_{2}-E_{F}+E_{I}) \tag{9}\]
Here, \(r_{0}=e^{2}/mc^{2}\) is the classical electron radius, \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) are the polarization vectors of the incident and outgoing photons, with momenta \(\mathbf{K}_{1}\) and \(\mathbf{K}_{2}\) and energies \(\omega_{1}\) and \(\omega_{2}\), and \(\mathbf{Q}=\mathbf{K}_{1}-\mathbf{K}_{2}\) is the momentum transfer. NRIXS and RIXS are momentum and polarization resolved, independently of the momentum carried by the photon, which can be important (negligible) in the hard (soft) X-ray regime. Because of the scattering nature of the technique, a change in the scattering geometry enables a controllable momentum transfer to the sample, allowing one to map dispersion relations.
_NRIXS:_
Now, focusing on the non-resonant scattering, the cross section can be related to the dynamical structure factor [45] \(S(\mathbf{q},\omega)\), according to:
\[\frac{d^{2}\sigma}{d\Omega_{2}d\omega_{2}}=\bigg{(}\frac{d\sigma}{d\Omega_{2} }\bigg{)}_{Th}S(\mathbf{q},\omega) \tag{10}\]
with the Thomson cross section \((d\sigma/d\Omega_{2})_{Th}=r_{0}^{2}(\omega_{2}/\omega_{1})(\mathbf{e}_{1} \cdot\mathbf{e}_{2}^{*})\). Using the fluctuation-dissipation theorem, one can write the structure factor in terms of the macroscopic dielectric function as:
\[S(\mathbf{q},\omega)=-\frac{q^{2}}{4\pi^{2}n}\ \text{Im}\bigg{[}\frac{1}{ \epsilon_{M}(\mathbf{q},\omega)}\bigg{]} \tag{11}\]
with \(n\) representing the average electron density. Similarly to eq. (7), the inverse macroscopic dielectric function can be described using the BSE, leading to the following NRIXS cross section:
\[\frac{d^{2}\sigma}{d\Omega_{2}d\omega_{2}}\propto\frac{8\pi^{2}}{\Omega q^{2} }\sum_{\lambda}\bigg{|}\sum_{vc\mathbf{k}}A_{\lambda}^{vc\mathbf{k}}\tilde{ \rho}_{vc\mathbf{k}}(\mathbf{q})\bigg{|}^{2}\delta(\omega-E_{\lambda}) \tag{12}\]
Here \(A_{\lambda}^{vc\mathbf{k}}\) are the eigenvectors of the BSE Hamiltonian (6) obtained with the full \(v_{c}\), i.e., including the long-range component (\(\mathbf{G}=0\)).
_RIXS:_
In this case, the cross section can be calculated in terms of excitation pathways [21; 22] within the MBPT formalism:
\[\frac{d^{2}\sigma}{d\Omega d\omega}\propto\text{Im}\sum_{\lambda_{\sigma}}\frac{\left|\sum_{\lambda_{\mu}}\frac{t^{(1)}\,t^{(2)}}{\omega_{1}-E_{\lambda_{\mu}}+i\eta}\right|^{2}}{\omega_{loss}-E_{\lambda_{\sigma}}+i\eta}\, \tag{13}\]
with
\[t^{(1)}=\sum_{\mu c\mathbf{k}}A_{\lambda_{\mu}}^{\mu c\mathbf{k}}\langle c\mathbf{k}|\mathbf{e}_{1}\cdot\mathbf{p}|\mu\mathbf{k}\rangle \tag{14}\]
\[t^{(2)}=\sum_{vc\mathbf{k}}\sum_{\mu}A_{\lambda_{\sigma}}^{vc\mathbf{k}}\langle\mu\mathbf{k}|\mathbf{e}_{2}^{*}\cdot\mathbf{p}|v\mathbf{k}\rangle\big{[}A_{\lambda_{\mu}}^{\mu c\mathbf{k}}\big{]}^{*} \tag{15}\]
We should note that Eq. (13) has been derived within the dipole and Tamm-Dancoff approximations, limiting the approach to direct RIXS only.
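To make the pathway picture concrete, a minimal numerical sketch of Eq. (13) is given below, assuming the pathway amplitudes of Eqs. (14)-(15) and the core and valence BSE energies have been precomputed. The array layout, the broadening values, and the sign convention (chosen so that the spectrum is positive) are assumptions made for illustration.

```python
import numpy as np

def rixs_cross_section(omega1, omega_loss, E_core, E_val, t1, t2,
                       eta_core=0.1, eta_val=0.1):
    """Sketch of Eq. (13): coherent sum over core excitations, then sum over valence ones.

    E_core : (N_core,)        core-excitation energies E_{lambda_mu}
    E_val  : (N_val,)         valence-excitation energies E_{lambda_sigma}
    t1     : (N_core,)        pathway amplitudes of Eq. (14)
    t2     : (N_val, N_core)  pathway amplitudes of Eq. (15)
    """
    omega_loss = np.asarray(omega_loss, dtype=float)
    # Coherent sum over intermediate (core) excitations at the chosen incoming energy
    core_factor = t1 / (omega1 - E_core + 1j * eta_core)        # (N_core,)
    amplitude = np.abs(t2 @ core_factor) ** 2                   # (N_val,)
    # Final-state resonance on the energy-loss grid (Lorentzian from the valence broadening)
    spectrum = np.zeros_like(omega_loss)
    for E, w in zip(E_val, amplitude):
        spectrum += w * eta_val / ((omega_loss - E) ** 2 + eta_val ** 2)
    return spectrum
```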
### Computational details
The calculations have been performed using the full-potential all-electron (AE) approach, as implemented in the Exciting code [46; 47]. The Kohn-Sham ground-state wave functions have been calculated within the local density approximation [29] (LDA). We adopted the experimental lattice parameters [48] \(a_{0}=5.128\) Å and \(\alpha=55.287^{\circ}\) in the rhombohedral primitive cell. The BZ is sampled with a shifted \(6\times 6\times 6\) k-grid, using a plane-wave expansion with a cutoff energy of 12 Hartree. The AE approach includes muffin-tin (MT) spheres with radii of 2 bohr and 1.45 bohr for aluminum and oxygen, respectively.
BSE calculations are performed on an \(8\times 8\times 8\) \(\mathbf{k}\)-grid shifted by (0.05, 0.15, 0.25). Local-field effects are included up to a cut-off \(|\mathbf{G}+\mathbf{q}|_{max}=4\)\(a_{0}^{-1}\), maintaining a cut-off energy for the plane waves (PW) of 7 Hartree in the XAS and RIXS calculations. The NRIXS calculations are performed with a PW cut-off of 10 Hartree
and \(|{\bf G}+{\bf q}|_{max}=7~{}a_{0}^{-1}\). To obtain the RPA screening \(W\), we used the same parameters as in the BSE, including 100 conduction bands. The BSE Hamiltonian was constructed considering 12 occupied states and 60 unoccupied states.
The RIXS cross section was calculated with the BRIXS code [21; 49], considering the first (lowest-energy) 17000 and 8000 BSE eigenvectors and eigenvalues for the core and valence excitations, respectively.
## III Results and discussion
### XAS and NRIXS at K edge of \(\alpha\)-Al\({}_{2}\)O\({}_{3}\)
#### iii.1.1 X-ray absorption
Since the final state in XAS is the intermediate state of RIXS, studying the X-ray absorption spectrum at the K and L\({}_{1}\) edges of Al is important for understanding the RIXS spectra discussed later. Fig. 1 compares the experimental absorption spectrum [26; 27] with the calculated IPA, RPA and BSE spectra for two directions (xy and z, which are perpendicular and parallel, respectively, to the c axis of the crystal structure). Here the intensity of the experiment has been arbitrarily normalized to match the BSE intensity of the main peak. We considered a broadening of 0.7 eV, corresponding to the core-hole lifetime reported in the literature [50].
We find that the BSE spectrum reproduces very well the experimental features, except for the pre-peak (A) observed at \(\sim 1566\) eV, which corresponds to transitions from the \(1s\) state to the bottom of the conduction band at the \(\Gamma\) point, with mostly \(s\) character, and therefore dipole forbidden. By analyzing the eigenvalues (and eigenvectors) of the BSE Hamiltonian, we confirm the dark character of the exciton at 1566 eV due to the dipole selection rule. Previous theoretical and experimental works [51; 26; 52] have confirmed that atomic vibrations enhance \(sp\) hybridization at the bottom of the conduction band, by displacing the Al atoms from their centrosymmetric positions, enabling \(1s\to 3p\) atomic-like transitions.
However, if we use a smaller broadening (inset in Fig. 1), it is possible to identify a small contribution to the pre-peak A in the polarization direction parallel to the z axis. Moreover, a pre-peak A' (also observed in our previous calculations [53] for the L\({}_{1}\) edge) can be distinguished at higher energies than A, corresponding to transitions where most of the contribution comes from the \(1s/2s\) states to the first conduction band, at a given \(\mathbf{k}\) between the high-symmetry points \(\Gamma\) and \(T\). The non-zero oscillator strengths resulting from these transitions confirm that some \(sp\) hybridization is already possible at the purely electronic level. By comparing with the RPA and IPA results, the BSE spectrum highlights the importance of the electron-hole interaction that gives rise to excitonic effects, as already discussed for the L\({}_{1}\) edge [53].
While in principle \(s\to cb\) transitions could be probed from either the \(n=1\) or \(n=2\) levels, theoretical and experimental efforts have predominantly focused on K edges. Therefore, one of the goals of this article is to demonstrate the validity of soft X-ray L\({}_{1}\)-edge RIXS by comparing it against K-edge results. In order to ensure a rigorous analysis, before addressing the RIXS discussion we compare their XAS spectra. The results, shown in Fig. 1, confirm that both edges yield the same features.
#### iii.1.2 Non-resonant inelastic X-ray scattering
A complementary technique that permits going beyond the dipole selection rules is NRIXS. The scattering nature of this technique provides access to multipole transition channels and probes final states with \(s\), \(p\), \(d\), etc. character. Therefore, it is the ideal approach for addressing the transitions that lead to the pre-peak (A) in the experimental XAS of Fig. 1, which otherwise is only captured when accounting for vibrational effects.
Non resonant inelastic x-ray scattering, also called X-ray Raman Scattering (XRS), bears a lot of similarities with electron energy loss spectroscopy (EELS) as they are both proportional to the dynamic structure factor \(S({\bf q},\omega)\). In the limit \({\bf q}\to 0\), it also becomes similar to the XAS cross section as it essentially probes the same electronic transitions. From the mathematical point of view the structure factor eq. (11), alternatively expressed
Figure 1: XAS spectrum at the K edge of Al in \(\alpha\)-Al\({}_{2}\)O\({}_{3}\) using a Lorentzian broadening of 0.7 eV. The two experimental spectra were normalized to match the maximum of the BSE intensity. The spectrum from Cabaret _et al._[26] was shifted 1 eV to higher energies to match the measurements of Fulton _et al._[27]. The inset shows the XAS calculated with a broadening of 0.5 eV. The absorption spectra at the L\({}_{1}\) edge will be discussed in Sec. III.2.
as:
\[S(\mathbf{q},\omega)\propto\frac{\text{Im }\epsilon_{\text{M}}(\mathbf{q}, \omega)}{[\text{Re }\epsilon_{\text{M}}(\mathbf{q},\omega)]^{2}+[\text{Im }\epsilon_{\text{M}}(\mathbf{q},\omega)]^{2}}\simeq\text{Im }\epsilon_{\text{M}}(\mathbf{q},\omega) \tag{16}\]
can be approximated by the imaginary part of the dielectric function, since for high-energy excitations we have \(\text{Im }\epsilon_{\text{M}}(\mathbf{q},\omega)\ll\text{Re }\epsilon_{\text{M}}(\mathbf{q},\omega)\to 1\), and therefore \(S(\mathbf{q},\omega)\propto\text{Im }\epsilon_{\text{M}}(\mathbf{q},\omega)=\text{XAS}\). In this case, the direction of the momentum transfer \(\mathbf{q}\) plays the role of the polarization vector \(\mathbf{e}\) of the incident X-ray beam in XAS. Therefore, the anisotropy of the x-ray edge can be studied by varying \(\mathbf{q}\) in the same way it is studied in XAS by varying the direction of \(\mathbf{e}\). However, as the magnitude of \(\mathbf{q}\) increases, contributions from other (dipole-forbidden) excitation channels become important, and Eq. (16) does not represent XAS anymore.
The NRIXS spectrum for \(\mathbf{q}=0\) and \(\mathbf{q}=8.9\) A\({}^{-1}\) is shown in Fig 2. In order to compare with experiments conducted on a powder sample of \(\alpha\)-Al\({}_{2}\)O\({}_{3}\), we calculated multiple directions and then plotted the average. From the results it can be seen that in the low momentum transfer regime the scattering cross section is dominated by the dipole allowed excitations, resulting in a spectrum similar to XAS.
It is interesting to notice that in the absorption spectrum shown in Fig. 1 for the K edge, the X-ray photons carry a momentum transfer of \(|\mathbf{q}|=0.12\) Å\({}^{-1}\). Therefore, besides the temperature effects that give rise to dipole transitions in pre-peak A, the non-negligible momentum transfer could enhance even further the small electronic contribution observed at the pre-peak A in XAS.
### Resonant inelastic x-ray scattering at L\({}_{1}\) and K edges
The RIXS spectra at the \(L_{1}\) and K edges are shown in Fig. 3. Here, every curve, for a different incoming photon energy \(\omega_{i}\), was normalized to its maximum. The lifetime of the core hole was set to 0.1 eV for both edges, while the typical expected values are 0.42 eV for the K edge and 0.7 eV for the L\({}_{1}\) edge [50]. The plot evidences the remarkable agreement between the two edges.
Based on the dependence of the RIXS features on the incoming energy \(\omega_{1}\), one can identify two regimes. At energies close to the XAS onset, the RIXS spectrum as a function of energy loss behaves in a _Raman_-like way. In this case, the energy loss (\(\omega_{loss}\)) transferred to the material is independent of \(\omega_{1}\), i.e. when the incident photon energy is increased, the energy of the outgoing photons \(\omega_{2}\) increases by the same amount. At energies beyond the XAS threshold one can identify a _fluorescence_ behavior, where the peaks that appear at a certain \(\omega_{loss}\) shift accordingly with \(\omega_{1}\), yielding a constant \(\omega_{2}\). Due to the different nature of these two processes, we will analyze them in more detail separately.
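In practice, the two regimes can be told apart by tracking how a given peak moves as \(\omega_{1}\) is scanned: a Raman-like feature stays at a fixed \(\omega_{loss}=\omega_{1}-\omega_{2}\), whereas a fluorescence feature stays at a fixed \(\omega_{2}\). A minimal sketch of such a classification is given below; the peak positions are assumed to have been extracted beforehand, and the tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def classify_feature(omega1_values, peak_loss_positions, tol=0.05):
    """Label a feature tracked across incoming energies omega_1 (all energies in eV).

    peak_loss_positions : position of the same peak on the energy-loss axis,
                          omega_loss = omega_1 - omega_2, for each incoming energy.
    """
    omega1 = np.asarray(omega1_values, dtype=float)
    loss = np.asarray(peak_loss_positions, dtype=float)
    emission = omega1 - loss                  # omega_2 of the same peak
    if np.ptp(loss) < tol:                    # constant energy loss
        return "Raman-like"
    if np.ptp(emission) < tol:                # constant emission energy
        return "fluorescence"
    return "mixed/undetermined"
```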
#### iii.2.1 RIXS at the XAS threshold
Fig. 4 presents L\({}_{1}\) RIXS spectra as a function of the outgoing photon energy (\(\omega_{2}\)), for incoming energies taken every 0.2 eV between 107.4 and 108.8 eV. A drastic dependence of the spectral shape on the energy of the incoming photon \(\omega_{1}\) can be observed, highlighting the coupling between the absorption and emission processes. Such changes in the spectrum upon only small variations in the incoming energy cannot be explained in terms of an XAS modulation.
Studies carried out in Refs. [55; 56; 57] propose that Raman losses could arise from dispersive emission effects.
As a result of the dipole approximation and the \(\mathbf{k}\) conservation in the absorption-emission process, the allowed transitions from the core to the conduction band and from the occupied states to the core hole lie on a vertical line through the band structure. Before and just at the XAS threshold, the system is excited to an intermediate virtual state, enabled by the core-hole lifetime. Therefore, by spanning different \(\omega_{1}\), it is possible to deexcite from different crystal momenta in the valence band, emitting a photon with energy \(\omega_{2}\) such that the same \(\omega_{loss}\) is maintained (as given by the energy-conservation term \(\omega_{loss}-E_{\lambda_{\sigma}}+i\eta\) in Eq. (13)), thus capturing the dispersion of the valence band in the Raman losses. This is a rather simplified explanation within the many-body picture, since the excitations do not correspond to a specific valence or conduction band; instead they are a mixture of them. Moreover, there are transitions that fall within the band gap without necessarily corresponding to virtual states. Nevertheless, the observed constant \(\omega_{loss}\) (or shift in \(\omega_{2}\) with \(\omega_{1}\)) means that RIXS for energies
Figure 2: Calculated BSE NRIXS spectrum averaged over the yz directions. The experimental NRIXS spectrum of \(\alpha\)-Al\({}_{2}\)O\({}_{3}\) was taken at 300 K, for momentum transfer \(|q|=10\) Å\({}^{-1}\). The calculations have been shifted to match the experimental curve, extracted from Ref. [54].
below the XAS threshold 1) does not simply represent emission, and 2) the shifts in \(\omega_{2}\) are associated with the dispersion of many-body transitions at different crystal momenta.
To establish a connection with other spectroscopic techniques, we compared the inelastic losses obtained in RIXS and NRIXS for valence excitations. While it could be expected that the RIXS Raman features would appear at the same loss energies as in the structure factor (since the two expressions share the same energy-conservation term in eq. (9)), a direct correspondence is not evident. In the first place, this is because the selection rules of the two spectroscopies are different, thus enhancing or suppressing different features. The most evident case is the missing features in the RIXS spectra below 9 eV, corresponding to dipole-forbidden transitions from the \(1/2s\) states to the bottom of the conduction band in the intermediate state, while in valence NRIXS they are visible since they are allowed excitations from the top of the valence band (with mostly p character) to the bottom of the conduction band. The second reason for disagreement between the two methods is the denominator in the second term of eq. (9), which depends on the incoming photon energy and enhances features at resonant energies, modifying the intensities of the loss peaks.
#### iii.2.2 RIXS above the XAS threshold
Fig. 5 presents BSE and IPA RIXS spectra for incoming energies above the XAS threshold. For energies above 110 eV, the two approximations provide quite similar results. However, notable differences appear for incoming energies at the XAS onset, confirming that IPA is not enough to have a good description of the RIXS process.
The features observed at constant emission energy are indicative of a two-step fluorescence process, where absorption and emission are independent. To complete our analysis, we also compare RIXS with the X-ray emission spectrum (XES). The XES spectrum aligns well with the angular component \(p\) of the Partial Density of States (PDOS) of alumina [53], enabling dipole-allowed transitions to fill the \(2s\) core hole. While XES and RIXS involve different oscillator strengths, with XES being calculated as \(|\langle\mu\mathbf{k}|\mathbf{e}_{2}^{*}\cdot\mathbf{p}|v\mathbf{k}\rangle|^{2}\), and RIXS fluorescence governed by the \(t^{(2)}\) factor in eq. (15) (the latter imposes a more stringent condition, requiring both core and optical
Figure 3: RIXS spectra obtained at the K and L\({}_{1}\) edge, spanning a range of incoming energies \(\omega_{i}\) with increments of 0.2 eV. Here, each curves was normalized to its maximum. To facilitate a meaningful comparison between the two edges, equivalent \(\omega_{i}\) points were selected from each XAS spectrum. This comparison was performed without shifting the spectrum to match experimental results, as was done in Fig. 1 and 2. In both cases a value 0.1 eV was employed for the two broadenings in eq. (9).
eigenvectors to be simultaneously bright), the comparison between the two methods yields a high degree of agreement.
## IV Conclusions
We have presented an in-depth analysis of RIXS spectra, considering coherence and excitonic effects throughout the process. Additionally, we aimed to establish a connection between RIXS and complementary spectroscopy techniques that assess neutral excitations in materials, specifically XAS, XES, and NRIXS. We have applied the BSE approach to study core and semi-core excitations of corundum \(\alpha\)-Al\({}_{2}\)O\({}_{3}\), a widely used material due to its optical and structural properties.
Our investigation of XAS spectra at the K and L\({}_{1}\) edges of aluminum was essential for understanding subsequent RIXS spectra. Comparative analysis between experimental absorption spectra and calculated spectra (IPA, RPA, and BSE) at different crystal directions (xy and z) demonstrated that the BSE method effectively reproduces most features of the experimental data. The experimental pre-peak A observed at 1566 eV, representing dipole-forbidden transitions in the calculations at 0K, was predicted to be a dark exciton. However, it became bright when considering finite momentum transfer in NRIXS calculations, by assessing multipole contributions.
Remarkably, our study revealed that RIXS spectra at the K and L\({}_{1}\) edges exhibited a high degree of agreement, permitting the use of soft X-ray RIXS for studying the L\({}_{1}\) edge and extract same information as the traditionally explored K edge. For energies near the XAS onset, the RIXS spectra showed a Raman-like behavior, where energy loss remained constant as the incident photon energy (\(\omega_{1}\)) was varied. Comparative analysis of the Raman losses with NRIXS shed light on differences in selection rules and intensity enhancements between the two techniques. Beyond the XAS threshold, RIXS displayed two-step fluorescence behavior, with peaks shifting in accordance with \(\omega_{1}\), highlighting the loss of coherence between the absorption and the emission processes. The agreement with XES emphasizes the similarities between fluorescence and emission, even though the two methods
Figure 4: The blue curves present RIXS spectra as a function of the energy loss \(\omega_{loss}\). The spectra were calculated at the L\({}_{1}\) edge, for \(\omega_{i}\) values in resonance with the XAS threshold, obtained under the same conditions as explained in Fig. 3. The dark yellow curve corresponds to the NRIXS structure factor for optical excitations calculated at q=0. The black dotted lines connect Raman peaks with their equivalent inelastic losses in the NRIXS spectrum.
Figure 5: Comparison between BSE (blue curves) and IPA (dark yellow curve) RIXS as a function of the emission energy (\(\omega_{o}\)) at the L\({}_{1}\) edge for \(\omega_{i}\) values in resonance and far from resonance. The fluorescence losses are compared with the XES spectrum plotted in blue at the top.
involve the calculation of completely different oscillator strengths.
Collectively, these findings contribute to a deeper understanding of the RIXS process, its behavior in different energy regimes, and its connection with other spectroscopic techniques, facilitating further insights into materials' electronic properties.
###### Acknowledgements.
We acknowledge valuable discussions with Christian Vorwerk. We thank the French Agence Nationale de la Recherche (ANR) for financial support (Grant Agreements No. ANR-19-CE30-0011). Computational time was granted by GENCI (Project No. 544).
|
2309.16113 | Stević-Sharma type operators between Bergman spaces induced by
doubling weights | Using Khinchin's inequality, Ger$\check{\mbox{s}}$gorin's theorem and the
atomic decomposition of Bergman spaces, we estimate the norm and essential norm
of Stevi\'c-Sharma type operators from weighted Bergman spaces $A_\omega^p$ to
$A_\mu^q$ and the sum of weighted differentiation composition operators with
different symbols from weighted Bergman spaces $A_\omega^p$ to $H^\infty$.The
estimates of those between Bergman spaces remove all the restrictions of a
result in [Appl. Math. Comput.,{\bf 217}(2011),8115--8125]. As a by-product, we
also get an interpolation theorem for Bergman spaces induced by doubling
weights. | Juntao Du, Songxiao Li, Zuoling Liu | 2023-09-28T02:42:56Z | http://arxiv.org/abs/2309.16113v1 | # Stevic-Sharma type operators between Bergman spaces induced by doubling weights
###### Abstract.
Using Khinchin's inequality, Gersgorin's theorem and the atomic decomposition of Bergman spaces, we estimate the norm and essential norm of Stevic-Sharma type operators from weighted Bergman spaces \(A_{\omega}^{p}\) to \(A_{\mu}^{q}\) and the sum of weighted differentiation composition operators with different symbols from weighted Bergman spaces \(A_{\omega}^{p}\) to \(H^{\infty}\). The estimates of those between Bergman spaces remove all the restrictions of a result in [Appl. Math. Comput., **217**(2011), 8115-8125]. As a by-product, we also get an interpolation theorem for Bergman spaces induced by doubling weights.
_Keywords_: Stevic-Sharma type operator; differentiation composition operator; Bergman space; doubling weight.
2010 Mathematics Subject Classification: 30H20, 47B10, 47B35. \(\dagger\) Corresponding author The work was supported by NNSF of China (Nos. 12371131 and 12271328), Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515012117) and Projects of Talents Recruitment of GDUPT(No. 2022rcyj2008), Project of Science and Technology of Maoming (No. 2023417) and STU Scientific Research Initiation Grant (No. NTF23004).
For a radial weight \(\omega\), let \(\omega(S(\lambda))=\int_{S(\lambda)}\omega(z)dA(z)\). Obviously, \(\omega(S(\lambda))\approx(1-|\lambda|)\hat{\omega}(\lambda)\). See [12, 10, 13] and references therein for more properties of doubling weight.
For \(0<p<\infty\) and a given \(\omega\in\hat{\mathcal{D}}\), the Bergman space \(A^{p}_{\omega}\) with doubling weight consists of all functions \(f\in H(\mathbb{D})\) such that
\[\|f\|^{p}_{A^{p}_{\omega}}=\int_{\mathbb{D}}|f(z)|^{p}\omega(z)dA(z)<\infty,\]
where \(dA\) is the normalized area measure on \(\mathbb{D}\). As usual, we denote \(A^{p}_{\alpha}\) as the standard weighted Bergman space induced by the radial weight \(\omega(z)=(\alpha+1)(1-|z|^{2})^{\alpha}\) with \(-1<\alpha<\infty\). Throughout this paper, we assume that \(\hat{\omega}(z)>0\) for all \(z\in\mathbb{D}\). Otherwise \(A^{p}_{\omega}=H(\mathbb{D})\). Let \(H^{\infty}\) denote the bounded analytic function space, i.e.,
\[H^{\infty}=\left\{f\in H(\mathbb{D}):\|f\|_{H^{\infty}}=\sup_{z\in\mathbb{D}}| f(z)|<\infty\right\}.\]
Let \(S(\mathbb{D})\) be the set of all analytic self-maps of \(\mathbb{D}\). For \(n\in\mathbb{N}\cup\{0\},\varphi\in S(\mathbb{D})\), and \(u\in H(\mathbb{D})\), the generalized weighted composition operator \(uD^{(n)}_{\varphi}\) is defined by
\[uD^{(n)}_{\varphi}f=u\left(f^{(n)}\circ\varphi\right),\quad f\in H(\mathbb{D}).\]
The operator \(uD^{(n)}_{\varphi}\) was introduced by Zhu in [27]. The generalized weighted composition operator is also called a weighted differentiation composition operator (see [16, 17, 18]). When \(n=0\), \(uD^{(n)}_{\varphi}\) is the weighted composition operator \(uC_{\varphi}\). In particular, when \(n=0\) and \(u\equiv 1\), \(uD^{(n)}_{\varphi}\) is the composition operator \(C_{\varphi}\). By using the pull-back measure, the first two authors of this paper and Shi [3] estimated the norm and essential norm of weighted composition operators between Bergman spaces induced by doubling weights. At the same time, Liu [9] independently characterized the boundedness and compactness of weighted differentiation composition operator \(uD^{(n)}_{\varphi}:A^{p}_{\omega}\to L^{q}_{\nu}\) when \(0<p,q<\infty\), \(\omega\in\mathcal{D}\) and \(\nu\) is a positive Borel measure on \(\mathbb{D}\). For more discussion on composition operators and weighted composition operators, we refer to [2, 4, 6, 15, 19, 26] and the references therein. When \(u\equiv 1\), \(uD^{(n)}_{\varphi}\) is the differentiation composition operator \(D^{(n)}_{\varphi}\). When \(u\equiv 1\) and \(\varphi(z)=z\), \(uD^{(n)}_{\varphi}\) is the \(n\)-th differentiation operator \(D^{(n)}\). So, the generalized weighted composition operator attracted a lot of attentions since it covers a lot of classical operators. See [16, 17, 18, 27, 28, 29, 30] for further information and results on generalized weighted composition operators on analytic function spaces.
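For readers who prefer a computational view, the action of \(uD^{(n)}_{\varphi}\) can be spelled out symbolically. The choices of \(u\), \(\varphi\), \(f\) and \(n\) below are arbitrary illustrations and are not taken from the results of this paper.

```python
import sympy as sp

z = sp.symbols('z')

def weighted_diff_comp(u, phi, n, f):
    """Return u(z) * f^{(n)}(phi(z)) for symbolic expressions in z."""
    return sp.simplify(u * sp.diff(f, z, n).subs(z, phi))

# Illustrative (hypothetical) symbols of the operator:
u = 1 - z**2          # multiplier u in H(D)
phi = z / 2           # analytic self-map of the unit disc
f = 1 / (1 - z)       # test function analytic on D

print(weighted_diff_comp(u, phi, 1, f))   # u * (f' o phi): the case n = 1
print(weighted_diff_comp(u, phi, 0, f))   # the weighted composition u * (f o phi)
```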
In 2011, Stevic, Sharma and Bhat [19] introduced an operator \(T_{u_{0},u_{1},\varphi}\) as follows.
\[T_{u_{0},u_{1},\varphi}=u_{0}D^{(0)}_{\varphi}+u_{1}D^{(1)}_{\varphi}.\]
This operator and its extension \(\sum_{k=0}^{n}u_{k}D^{(k)}_{\varphi_{k}}\) are called Stevic-Sharma type operators by some authors, where \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\) and \(\{\varphi_{k}\}_{k=0}^{n}\subset S(\mathbb{D})\). In [19], Stevic, Sharma and Bhat characterized the boundedness of \(T_{u_{0},u_{1},\varphi}:A^{p}_{\alpha}\to A^{p}_{\alpha}\) under the assumption
\[u_{0}\in H^{\infty}\ \ \text{or}\ \ \sup_{z\in\mathbb{D}}\frac{|u_{1}(z)|}{1-| \varphi(z)|^{2}}<\infty. \tag{2}\]
Two natural questions are raised.
**Q1.** Whether the condition (2) can be removed?
**Q2.** What about the operator \(u_{0}D^{(0)}_{\varphi_{0}}+u_{1}D^{(1)}_{\varphi_{1}}\) when \(\varphi_{0}\neq\varphi_{1}\)?
See, for example, [5, 7, 21, 22, 23, 24] for some investigations about these operators.
By using Khinchin's inequality, Gersgorin's theorem, and the atomic decomposition of weighted Bergman spaces, we give a positive answer to question **Q1**. Moreover, we extend it to a more general case and completely estimate the norm and essential norm of the operator
\[T_{n,\varphi,\vec{u}}=\sum_{k=0}^{n}u_{k}D^{(k)}_{\varphi}\]
from \(A^{p}_{\omega}\) to \(A^{q}_{\mu}\) with \(\omega,\mu\in\mathcal{D}\) and \(1\leq p,q<\infty\), where \(\vec{u}=(u_{0},u_{1},\cdot\cdot\cdot,u_{n})\) with \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\) and \(\varphi\in S(\mathbb{D})\). The first result of this paper is stated as follows.
**Theorem 1**.: _Suppose \(1\leq p,q<\infty,\omega,\mu\in\mathcal{D}\), \(\varphi\in S(\mathbb{D})\) and \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\). Then,_
\[\|T_{n,\varphi,\vec{u}}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}\approx\sum_{k=0}^{n }\|u_{k}D^{(k)}_{\varphi}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}.\]
_Moreover, if \(T_{n,\varphi,\vec{u}}:A^{p}_{\omega}\to A^{q}_{\mu}\) is bounded, then_
\[\|T_{n,\varphi,\vec{u}}\|_{e,A^{p}_{\omega}\to A^{q}_{\mu}}\approx\sum_{k=0}^{ n}\|u_{k}D^{(k)}_{\varphi}\|_{e,A^{p}_{\omega}\to A^{q}_{\mu}}.\]
By Remark 6 and Theorem B in Section 2, Theorem 1 completely characterizes the norm and essential norm of \(T_{n,\varphi,\vec{u}}:A^{p}_{\omega}\to A^{q}_{\mu}\)
Recall that the essential norm of a bounded operator \(T:X\to Y\) is defined by
\[\|T\|_{e,X\to Y}=\inf\Big{\{}\|T-K\|_{X\to Y};K:X\to Y\text{ is compact}\Big{\}}.\]
Here \(X\) and \(Y\) are Banach spaces. Obviously, \(T\) is compact if and only if \(\|T\|_{e,X\to Y}=0\).
For the question **Q2**, Acharyya and Ferguson [1] characterized the compactness of the operator \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A^{p}_{\alpha}\to H^{\infty}\), where
\[\mathcal{T}_{n,\vec{\varphi},\vec{u}}=\sum_{k=0}^{n}u_{k}D^{(k)}_{\varphi_{k}}.\]
Here \(\vec{u}=(u_{0},u_{1},\cdots,u_{n})\) with \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\) and \(\vec{\varphi}=(\varphi_{0},\varphi_{1},\cdots,\varphi_{n})\) with \(\{\varphi_{k}\}_{k=0}^{n}\subset S(\mathbb{D})\). In this paper, we extend [1, Theorem 2] to the case of Bergman spaces \(A^{p}_{\omega}\) with \(\omega\in\hat{\mathcal{D}}\).
**Theorem 2**.: _Suppose \(n\in\mathbb{N}\cup\{0\}\), \(1\leq p<\infty\), \(\omega\in\hat{\mathcal{D}}\), \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\) and \(\{\varphi_{k}\}_{k=0}^{n}\subset S(\mathbb{D})\). Then,_
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{A^{p}_{\omega}\to H^{\infty}}\approx \sum_{k=0}^{n}\sup_{z\in\mathbb{D}}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{ k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
_Moreover, if \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A^{p}_{\omega}\to H^{\infty}\) is bounded, then_
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{e,A^{p}_{\omega}\to H^{\infty}}\approx\sum_{k=0}^{n}\limsup_{|\varphi_{k}(z)|\to 1}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
By Theorem 2, it is easy to check that
\[\|u_{k}D^{(k)}_{\varphi_{k}}\|_{A^{p}_{\omega}\to H^{\infty}}\approx\sup_{z \in\mathbb{D}}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k}\omega(S(\varphi_{ k}(z)))^{\frac{1}{p}}}\]
and
\[\|u_{k}D^{(k)}_{\varphi_{k}}\|_{e,A^{p}_{\omega}\to H^{\infty}}\approx\limsup _{|\varphi_{k}(z)|\to 1}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k}\omega(S( \varphi_{k}(z)))^{\frac{1}{p}}}.\]
Therefore, Theorem 2 can be stated similarly as Theorem 1.
**Theorem 2\({}^{\prime}\).**_Suppose \(n\in\mathbb{N}\cup\{0\}\), \(1\leq p<\infty\), \(\omega\in\hat{\mathcal{D}}\), \(\{u_{k}\}_{k=0}^{n}\subset H(\mathbb{D})\) and \(\{\varphi_{k}\}_{k=0}^{n}\subset S(\mathbb{D})\). Then,_
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{A^{p}_{\omega}\to H^{\infty}}\approx\sum_{k=0}^{n}\|u_{k}D^{(k)}_{\varphi_{k}}\|_{A^{p}_{\omega}\to H^{\infty}}.\]
_Moreover, if \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A^{p}_{\omega}\to H^{\infty}\) is bounded, then_
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{e,A^{p}_{\omega}\to H^{\infty}}\approx\sum_{k=0}^{n}\|u_{k}D^{(k)}_{\varphi_{k}}\|_{e,A^{p}_{\omega}\to H^{\infty}}.\]
The sufficiency parts of Theorems 1 and 2 are easy to verify. To establish the necessity parts, we need the following interpolation theorem, which is of independent interest. Here and henceforth, \(\delta_{ij}\) is the Dirac function, that is, \(\delta_{ij}=1\) when \(i=j\) and \(\delta_{ij}=0\) when \(i\neq j\).
**Theorem 3**.: _Let \(1\leq p<\infty\), \(n\in\mathbb{N}\cup\{0\}\), \(\omega\in\hat{\mathcal{D}}\). Then there is a positive constant \(C\) such that for all \(\Lambda=\{\lambda_{j}\}_{j=0}^{n}\subset\mathbb{D}\) and \(J\in\{0,1,\cdots,n\}\), there exists \(f_{\Lambda,J}\in A^{p}_{\omega}\) satisfying \(\|f_{\Lambda,J}\|_{A^{p}_{\omega}}\leq C\) and_
\[f^{(j)}_{\Lambda,J}(\lambda_{j})=\frac{\delta_{jJ}}{(1-|\lambda_{j}|^{2})^{j}\omega(S(\lambda_{j}))^{\frac{1}{p}}},\ \ j=0,1,\cdots,n. \tag{3}\]
_Moreover, if \(J\) is fixed, the functions \(\{f_{\Lambda,J}\}\) converge to 0 uniformly on compact subsets of \(\mathbb{D}\) as \(|\lambda_{J}|\to 1\)._
The rest of this paper is organized as follows. In Section 2, we will gather the necessary preliminaries. Sections 3, 4, and 5 are dedicated to the proofs of Theorem 1, Theorem 3, and Theorem 2, respectively.
Throughout this paper, the letter \(C\) will represent constants, which may vary from one occurrence to another. For two positive functions \(f\) and \(g\), we use the notation \(f\lesssim g\) to denote that there exists a positive constant \(C\), independent of the arguments, such that \(f\leq Cg\). Similarly, \(f\approx g\) indicates that \(f\lesssim g\) and \(g\lesssim f\).
## 2. preliminaries
In this section, we state some lemmas which will be used in the proof of main results of this paper. For brief, for any given \(\alpha>0\), let
\[\omega_{[\alpha]}(z)=(1-|z|^{2})^{\alpha}\omega(z),\ \ z\in\mathbb{D}.\]
**Lemma 4**.: _Suppose \(\alpha>0,\omega\in\mathcal{D}\). Then \(\omega_{[\alpha]}\in\mathcal{D}\) and \(\widehat{\omega_{[\alpha]}}\approx\hat{\omega}_{[\alpha]}\)._
Proof.: For brief, let \(\eta=\omega_{[\alpha]}\). Let \(C\) and \(K\) be those in (1). For all \(t\in[0,1)\), we have \(\hat{\eta}(t)\lesssim(1-t)^{\alpha}\hat{\omega}(t)\) and
\[\hat{\eta}(t)\geq\int_{t}^{1-\frac{1-t}{K}}(1-s^{2})^{\alpha}\omega(s)ds\approx (1-t)^{\alpha}\left(\hat{\omega}(t)-\hat{\omega}\bigg{(}1-\frac{1-t}{K}\bigg{)} \right)\gtrsim(1-t)^{\alpha}\hat{\omega}(t).\]
Then, we have \(\hat{\eta}(t)\approx(1-t)^{\alpha}\hat{\omega}(t)\) and then
\[\hat{\eta}(t)\lesssim(1-t)^{\alpha}\hat{\omega}\bigg{(}\frac{1+t}{2}\bigg{)} \approx\hat{\eta}\bigg{(}\frac{1+t}{2}\bigg{)}.\]
Thus, \(\eta\in\hat{\mathcal{D}}\). Since \(\frac{\hat{\eta}(t)}{(1-t)^{\alpha}}\) is essentially decreasing, by Lemma B in [14], \(\eta\in\mathcal{D}\). The proof is complete.
In [12], the authors characterized the Littlewood-Paley formula on Bergman spaces induced by radial weights. For the benefit of readers, we state it as follows.
**Theorem A**.: _Let \(\omega\) be a radial weight, \(0<p<\infty\) and \(k\in\mathbb{N}\). Then, for all \(f\in H(\mathbb{D})\),_
\[\int_{\mathbb{D}}|f(z)|^{p}\omega(z)dA(z)\approx\sum_{j=0}^{k-1}|f^{(j)}(0)|^ {p}+\int_{\mathbb{D}}|f^{(k)}(z)|^{p}(1-|z|^{2})^{kp}\omega(z)dA(z)\]
_if and only if \(\omega\in\mathcal{D}\)._
**Lemma 5**.: _Assume \(1\leq p<\infty,n\in\mathbb{N},\omega\in\mathcal{D}\) and \(Y\) is a Banach space. Let \(T:A^{p}_{\omega_{[np]}}\to Y\) be a bounded linear operator. Then the following statements hold._
\[\|TD^{(n)}\|_{A^{p}_{\omega}\to Y}\approx\|T\|_{A^{p}_{\omega_{[np]}}\to Y},\ \ \|TD^{(n)}\|_{e,A^{p}_{\omega}\to Y}\approx\|T\|_{e,A^{p}_{\omega_{[np]}}\to Y}.\]
Proof.: Let \(\eta=\omega_{[np]}\). Since
\[\|TD^{(n)}\|_{A^{p}_{\omega}\to Y}=\sup_{f\neq 0}\frac{\|TD^{(n)}f\|_{Y}}{\|f\|_{A^{ p}_{\omega}}}=\sup_{f^{(n)}\neq 0}\frac{\|TD^{(n)}f\|_{Y}}{\|f\|_{A^{p}_{\omega}}}, \tag{4}\]
by Theorem A we have
\[\|TD^{(n)}\|_{A^{p}_{\omega}\to Y}\geq\sup_{f\neq 0\atop f(0)=\cdots=f^{(n-1)}(0)=0}\frac{\|TD^{(n)}f\|_{Y}}{\|f\|_{A^{p}_{\omega}}}\approx\sup_{f^{(n)}\neq 0}\frac{\|Tf^{(n)}\|_{Y}}{\|f^{(n)}\|_{A^{p}_{\eta}}}=\|T\|_{A^{p}_{\eta}\to Y},\]
and
\[\|TD^{(n)}\|_{A^{p}_{\omega}\to Y}\lesssim\sup_{f^{(n)}\neq 0}\frac{\|Tf^{(n)}\|_{ Y}}{\|f^{(n)}\|_{A^{p}_{\eta}}}=\|T\|_{A^{p}_{\eta}\to Y}.\]
Therefore,
\[\|TD^{(n)}\|_{A^{p}_{\omega}\to Y}\approx\|T\|_{A^{p}_{\eta}\to Y}. \tag{5}\]
Suppose \(K:A^{p}_{\omega}\to Y\) is compact. Let \((If)(z)=\int_{0}^{z}f(\xi)d\xi\). By Theorem A, \(I^{n}:A^{p}_{\eta}\to A^{p}_{\omega}\) is bounded. So, \(KI^{n}:A^{p}_{\eta}\to Y\) is compact. By (5),
\[\|T\|_{e,A^{p}_{\eta}\to Y}\leq\|T-KI^{n}\|_{A^{p}_{\eta}\to Y}\approx\|TD^{(n) }-KI^{n}D^{(n)}\|_{A^{p}_{\omega}\to Y}.\]
By (4) and Theorem A,
\[\|TD^{(n)}-KI^{n}D^{(n)}\|_{A^{p}_{\omega}\to Y} =\sup_{f^{(n)}\neq 0}\frac{\|(TD^{(n)}-KI^{n}D^{(n)})f\|_{Y}}{\|f\|_{A^{p}_{\omega}}}\] \[\lesssim\sup_{f^{(n)}\neq 0\atop f(0)=\cdots=f^{(n-1)}(0)=0}\frac{\|(TD^{(n)}-KI^{n}D^{(n)})f\|_{Y}}{\|f\|_{A^{p}_{\omega}}}\] \[=\sup_{f^{(n)}\neq 0\atop f(0)=\cdots=f^{(n-1)}(0)=0}\frac{\|(TD^{(n)}-K)f\|_{Y}}{\|f\|_{A^{p}_{\omega}}}\] \[\leq\|TD^{(n)}-K\|_{A^{p}_{\omega}\to Y}.\]
Thus, \(\|T\|_{e,A^{p}_{\eta}\to Y}\lesssim\|TD^{(n)}\|_{e,A^{p}_{\omega}\to Y}\).
Conversely, suppose \(K^{\prime}:A^{p}_{\eta}\to Y\) is compact. By Theorem A, \(D^{(n)}:A^{p}_{\omega}\to A^{p}_{\eta}\) is bounded. So, \(K^{\prime}D^{(n)}:A^{p}_{\omega}\to Y\) is compact. By (5),
\[\|TD^{(n)}\|_{e,A^{p}_{\omega}\to Y}\leq\|TD^{(n)}-K^{\prime}D^{(n)}\|_{A^{p}_ {\omega}\to Y}\approx\|T-K^{\prime}\|_{A^{p}_{\eta}\to Y}.\]
Therefore, \(\|TD^{(n)}\|_{e,A^{p}_{\omega}\to Y}\lesssim\|T\|_{e,A^{p}_{\eta}\to Y}\). The proof is complete.
**Remark 6**.: _If \(1\leq p,q<\infty,\omega,\eta\in\mathcal{D}\), \(k\in\mathbb{N}\cup\{0\}\), for any \(u\in H(\mathbb{D})\) and \(\varphi\in S(\mathbb{D})\), by Lemma 5, we have_
\[\|uD^{(k)}_{\varphi}\|_{A^{p}_{\omega}\to A^{q}_{\eta}}\approx\|uC_{\varphi} \|_{A^{p}_{\omega[k]}\to A^{q}_{\eta}},\ \ \|uD^{(k)}_{\varphi}\|_{e,A^{p}_{\omega}\to A^{q}_{\eta}}\approx\|uC_{\varphi} \|_{e,A^{p}_{\omega[k]}\to A^{q}_{\eta}}.\]
The norm and essential norm of \(uC_{\varphi}:A^{p}_{\omega}\to A^{q}_{\mu}\) were investigated in [3]. To state them, we need some more notations. When \(\omega\in\hat{\mathcal{D}}\) and \(0<p<\infty\), if \(\gamma>0\) is large enough, let
\[f_{\lambda,\gamma,\omega,p}(z)=\left(\frac{1-|\lambda|^{2}}{1-\overline{ \lambda}z}\right)^{\gamma}\frac{1}{\omega(S(\lambda))^{\frac{1}{p}}},\ \lambda,z\in\mathbb{D}.\]
According to [10, Lemma 3.1], \(\|f_{\lambda,\gamma,\omega,p}\|_{A^{p}_{\omega}}\approx 1\). For brevity, we denote \(f_{\lambda,\gamma,\omega,p}\) by \(f_{\lambda,\gamma}\). When \(u\in H(\mathbb{D}),\varphi\in S(\mathbb{D}),0<q<\infty,\mu\in\hat{\mathcal{D}}\), for any measurable set \(E\subset\mathbb{D}\), let
\[\nu_{u,\varphi,q,\mu}(E)=\int_{\varphi^{-1}(E)}|u(z)|^{q}\mu(z)dA(z).\]
Then, \(\|uC_{\varphi}f\|_{A^{q}_{\mu}}=\|f\|_{L^{q}_{\nu_{u,\varphi,q,\mu}}}\). The maximum function of \(\nu_{u,\varphi,q,\mu}\) is defined by
\[M_{\omega}(\nu_{u,\varphi,q,\mu})(z)=\sup_{z\in S(a)}\frac{\nu_{u,\varphi,q, \mu}(S(a))}{\omega(S(a))},\ \ a,z\in\mathbb{D}.\]
**Theorem B**.: _Assume \(\omega,\mu\in\hat{\mathcal{D}}\), \(u\in H(\mathbb{D})\) and \(\varphi\in S(\mathbb{D})\)._
1. _When_ \(0<p\leq q<\infty\)_, the following estimates hold:_ \[\|uC_{\varphi}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\approx\sup_{\lambda\in\mathbb{D}}\int_{\mathbb{D}}|f_{\lambda,\gamma}(\varphi(z))|^{q}|u(z)|^{q}\mu(z)dA(z),\] _and_ \[\|uC_{\varphi}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\approx\limsup_{|\lambda|\to 1}\int_{\mathbb{D}}|f_{\lambda,\gamma}(\varphi(z))|^{q}|u(z)|^{q}\mu(z)dA(z).\]
2. _When_ \(0<q<p<\infty\)_, the following statements are equivalent:_ 1. \(uC_{\varphi}:A_{\omega}^{p}\to A_{\mu}^{q}\) _is bounded;_ 2. \(uC_{\varphi}:A_{\omega}^{p}\to A_{\mu}^{q}\) _is compact;_ 3. \(\|M_{\omega}(\nu_{u,\varphi,q,\mu})\|_{L_{\omega}^{\frac{p}{p-q}}}<\infty\)_._ _Moreover,_ \[\|uC_{\varphi}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\approx\|M_{\omega}(\nu_{ u,\varphi,q,\mu})\|_{L_{\omega}^{\frac{p}{p-q}}}.\]
For a positive number \(\gamma\) and \(j\in\mathbb{N}\), let \((\gamma)_{j}=\gamma(\gamma+1)\cdots(\gamma+j-1)\) and \((\gamma)_{0}=1\). The following lemma is a refinement of a statement in the proof of Theorem 3 in [1].
**Lemma 7**.: _Suppose \(n\in\mathbb{N}\cup\{0\}\) and \(M\geq 1\). There exists a strictly increasing sequence \(\{\gamma_{k}\}_{k=0}^{n}\) such that \(\gamma_{0}\) is large enough and_
\[M+M^{2}\sum_{k\neq j,0\leq k\leq n}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}< \gamma_{j}^{\frac{1}{2}-j}(\gamma_{j})_{j},\ \ j=0,1,\cdots,n. \tag{6}\]
Proof.: Suppose \(\gamma_{k}>n\) for all \(k=0,1,\cdots,n\). Since \(\gamma_{j}^{\frac{1}{2}}\leq\gamma_{j}^{\frac{1}{2}-j}(\gamma_{j})_{j}\) and
\[M+M^{2}\sum_{k\neq j,0\leq k\leq n}\gamma_{k}^{\frac{1}{2}-k}( \gamma_{k})_{j} =M+M^{2}\sum_{k=0}^{j-1}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j} +M^{2}\sum_{k=j+1}^{n}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}\] \[\leq M+M^{2}\sum_{k=0}^{j-1}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k} )_{j}+M^{2}\sum_{k=j+1}^{n}\gamma_{k}^{\frac{1}{2}-k}(2\gamma_{k})^{j}\] \[\leq M+M^{2}\sum_{k=0}^{j-1}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k} )_{j}+n2^{n}M^{2},\]
it is enough to choose \(\{\gamma_{k}\}\) such that
\[M+M^{2}\sum_{k=0}^{j-1}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}+n2^{n}M^{2}< \gamma_{j}^{\frac{1}{2}},\ j=0,1,\cdots,n.\]
When \(j=0\), let \(\gamma_{0}>(M+n2^{n}M^{2})^{2}\). Suppose \(\{\gamma_{k}\}_{k=0}^{j-1}\) is chosen. Then we can choose
\[\gamma_{j}>\left(M+M^{2}\sum_{k=0}^{j-1}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k} )_{j}+n2^{n}M^{2}\right)^{2}.\]
By mathematical induction, we get the desired result. The proof is complete.
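The recursion in the proof is easy to run numerically. The sketch below (with an arbitrary safety factor in the choice of each \(\gamma_{j}\)) constructs such a sequence for \(M=1\), \(n=2\) and then checks inequality (6) directly; it is only an illustration of the argument, not part of the proof.

```python
import numpy as np

def pochhammer(g, j):
    """(g)_j = g (g + 1) ... (g + j - 1), with (g)_0 = 1."""
    out = 1.0
    for i in range(j):
        out *= g + i
    return out

def build_gammas(n, M=1.0):
    """Choose gamma_0 < ... < gamma_n following the recursion in the proof."""
    gammas = []
    for j in range(n + 1):
        prev = sum(M**2 * g**(0.5 - k) * pochhammer(g, j)
                   for k, g in enumerate(gammas))
        bound = (M + prev + n * 2**n * M**2) ** 2
        gammas.append(2.0 * bound + n + 1)   # any value above the bound works
    return gammas

n, M = 2, 1.0
gammas = build_gammas(n, M)

# Direct check of inequality (6) for every j.
for j in range(n + 1):
    lhs = M + M**2 * sum(g**(0.5 - k) * pochhammer(g, j)
                         for k, g in enumerate(gammas) if k != j)
    rhs = gammas[j]**(0.5 - j) * pochhammer(gammas[j], j)
    assert lhs < rhs, (j, lhs, rhs)

print("inequality (6) holds for j = 0,...,n with gammas", gammas)
```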
**Lemma 8**.: _[_1_, Lemma 3.2]_ _Let \(\{\lambda_{j}\}_{j=0}^{n}\subset\mathbb{D}\) and \(\{z_{j}\}_{j=0}^{n}\subset\mathbb{D}\). There exists a constant \(C\) depending only on \(n\) such that there exists a polynomial \(p\) with \(\|p\|_{H^{\infty}}<C\) and \(p^{(j)}(\lambda_{j})=z_{j}\) for all \(j=0,1,2,\cdots,n\)._
As usual, let \(\beta(\cdot,\cdot)\) be the Bergman metric, i.e., for all \(\xi,\eta\in\mathbb{D}\),
\[\beta(\xi,\eta)=\frac{1}{2}\log\frac{1+|\varphi_{\xi}(\eta)|}{1-|\varphi_{\xi}(\eta)|},\ \ \text{where}\ \ \varphi_{\xi}(\eta)=\frac{\xi-\eta}{1-\overline{\xi}\eta}.\]
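Since the constructions below repeatedly compare Bergman-metric distances, a tiny numerical helper may be useful; the sample points are arbitrary.

```python
import numpy as np

def pseudo_hyperbolic(xi, eta):
    """|phi_xi(eta)| = |(xi - eta) / (1 - conj(xi) * eta)| for xi, eta in D."""
    return abs((xi - eta) / (1 - np.conj(xi) * eta))

def bergman_metric(xi, eta):
    r = pseudo_hyperbolic(xi, eta)
    return 0.5 * np.log((1 + r) / (1 - r))

# Points with the same Euclidean separation are farther apart in the Bergman
# metric when they sit close to the boundary of the disc.
print(bergman_metric(0.0, 0.09))    # small
print(bergman_metric(0.9, 0.99))    # noticeably larger
```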
**Lemma 9**.: _[_1_, Lemma 3.5]_ _Let \(c>1,\varepsilon>0,J,M\in\mathbb{N}\) and \(N\in\mathbb{N}\cup\{0\}\) be given. Then there is a constant \(C\) such that for all \(\{z_{j}\}_{j=1}^{J}\subset\mathbb{D}\) and \(\{w_{m}\}_{m=1}^{M}\subset\mathbb{D}\) satisfying_
\[\beta(z_{j},z_{1})<\varepsilon,\ \ \beta(w_{m},z_{1})>c\varepsilon,\ (1\leq j \leq J,\ 1\leq m\leq M),\]
_there exists a function \(f\in H(\mathbb{D})\) satisfying_
\[\|f\|_{H^{\infty}}\leq C,\ \ f^{(n)}(z_{j})=\delta_{0n},\ \ \ f^{(n)}(w_{m})=0\]
_for all \(0\leq n\leq N,1\leq j\leq J,1\leq m\leq M\)._
The following lemma can be obtained by a standard argument, see Lemma 2.10 in [20] for example, and we omit its proof here.
**Lemma 10**.: _Suppose \(0<p,q<\infty\), \(\omega,\mu\in\hat{\mathcal{D}}\). Let \(Y\) be \(A_{\mu}^{q}\) or \(H^{\infty}\). If \(T:A_{\omega}^{p}\to Y\) is bounded, then \(T\) is compact if and only if \(\|Tf_{n}\|_{Y}\to 0\) as \(n\to\infty\) whenever \(\{f_{n}\}\) is bounded in \(A_{\omega}^{p}\) and uniformly converges to 0 on any compact subset of \(\mathbb{D}\) as \(n\to\infty\)._
## 3. proof of Theorem 1
Proof of Theorem 1.: It is obvious that
\[\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim \sum_{k=0}^{n}\|u_{k}D_{\varphi}^{(k)}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\ \ \text{when}\ \ 1\leq p,q<\infty\]
and
\[\|T_{n,\varphi,\vec{u}}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim \sum_{k=0}^{n}\|u_{k}D_{\varphi}^{(k)}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}\ \ \text{when}\ \ 1\leq p\leq q<\infty.\]
Next, we only need to prove the inverse of the above inequalities. We first claim that
\[\|u_{0}D_{\varphi}^{(0)}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim \|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\ \ \text{when}\ \ 1\leq p,q<\infty \tag{7}\]
and
\[\|u_{0}D_{\varphi}^{(0)}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim\|T_{n, \varphi,\vec{u}}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}\ \ \text{when}\ \ 1\leq p\leq q<\infty. \tag{8}\]
Take these for granted for a moment. Let \(\widetilde{T}_{n-1,\varphi,\vec{u}}=\sum_{j=0}^{n-1}u_{j+1}D_{\varphi}^{(j)}\). By Lemmas 4 and 5, we have \(\omega_{[p]}\in\mathcal{D}\) and
\[\|\widetilde{T}_{n-1,\varphi,\vec{u}}\|_{A_{\omega_{[p]}}^{p}\to A_{\mu}^{q}} \approx\Big{\|}\sum_{j=1}^{n}u_{j}D_{\varphi}^{(j)}\Big{\|}_{A_{\omega}^{p} \to A_{\mu}^{q}}=\|T_{n,\varphi,\vec{u}}-u_{0}D_{\varphi}^{(0)}\|_{A_{\omega}^{ p}\to A_{\mu}^{q}}.\]
Then, (7) and Triangle Inequality deduce
\[\|\widetilde{T}_{n-1,\varphi,\tilde{u}}\|_{A^{p}_{\omega_{[p]}}\to A^{q}_{\mu}} \lesssim\|T_{n,\varphi,\tilde{u}}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}.\]
Since \(\omega_{[p]}\in\mathcal{D}\), using Lemma 5 and (7) again, we obtain
\[\|u_{1}D^{(1)}_{\varphi}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}\approx\|u_{1}D^{(0)}_{\varphi}\|_{A^{p}_{\omega_{[p]}}\to A^{q}_{\mu}}\lesssim\left\|\widetilde{T}_{n-1,\varphi,\vec{u}}\right\|_{A^{p}_{\omega_{[p]}}\to A^{q}_{\mu}}\lesssim\|T_{n,\varphi,\vec{u}}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}.\]
Then, mathematical induction deduces
\[\|u_{j}D^{(j)}_{\varphi}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}\lesssim\|T_{n, \varphi,\tilde{u}}\|_{A^{p}_{\omega}\to A^{q}_{\mu}},\ \ \ j=2,3,\cdots,n,\]
and therefore
\[\sum_{j=0}^{n}\|u_{j}D^{(j)}_{\varphi}\|_{A^{p}_{\omega}\to A^{q}_{\mu}} \lesssim\|T_{n,\varphi,\tilde{u}}\|_{A^{p}_{\omega}\to A^{q}_{\mu}}.\]
Similarly, we have
\[\sum_{j=0}^{n}\|u_{j}D^{(j)}_{\varphi}\|_{e,A^{p}_{\omega}\to A^{q}_{\mu}} \lesssim\|T_{n,\varphi,\tilde{u}}\|_{e,A^{p}_{\omega}\to A^{q}_{\mu}}\ \ \text{when}\ \ 1\leq p\leq q<\infty.\]
Here, we omit the proof of the essential norm estimate for \(T_{n,\varphi,\vec{u}}:A^{p}_{\omega}\to A^{q}_{\mu}\) when \(q<p\): in that case, the above proof, Theorem B, Lemma 4 and Lemma 5 ensure that \(T_{n,\varphi,\vec{u}}\) and all the \(u_{k}D^{(k)}_{\varphi}\) are compact whenever \(T_{n,\varphi,\vec{u}}\) is bounded.
It remains to prove (7) and (8). To do this, let \(T_{n,\varphi,\tilde{u}}:A^{p}_{\omega}\to A^{q}_{\mu}\) be bounded and \(\{\gamma_{k}\}_{k=0}^{n}\) be those in Lemma 7 for \(M=1\) and large enough. For any \(\lambda\in\mathbb{D}\) and \(k=0,1,\cdots,n\), let
\[f_{\lambda,\gamma_{k}}(z)=\left(\frac{1-|\lambda|^{2}}{1-\overline{\lambda}z}\right)^{\gamma_{k}}\frac{1}{\omega(S(\lambda))^{\frac{1}{p}}},\ z\in\mathbb{D}.\]
Then we have
\[\|T_{n,\varphi,\tilde{u}}f_{\lambda,\gamma_{k}}\|_{A^{q}_{\mu}}^{q}=\int_{ \mathbb{D}}\left|\sum_{j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{\lambda})^{j} u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}\right|^{q}\left|\frac{1-| \lambda|^{2}}{1-\overline{\lambda}\varphi(z)}\right|^{q\gamma_{k}}\frac{1}{ \omega(S(\lambda))^{\frac{q}{p}}}\mu(z)dA(z). \tag{9}\]
Since \(\{\gamma_{k}\}_{k=0}^{n}\) is increasing, when \(k<n\), we obtain
\[\left|\frac{1-|\lambda|^{2}}{1-\overline{\lambda}\varphi(z)}\right|^{q\gamma_{n}}\leq 2^{q(\gamma_{n}-\gamma_{k})}\left|\frac{1-|\lambda|^{2}}{1-\overline{\lambda}\varphi(z)}\right|^{q\gamma_{k}}. \tag{10}\]
Thus, for all \(k=0,1,2,\cdots,n\),
\[\|T_{n,\varphi,\tilde{u}}f_{\lambda,\gamma_{k}}\|_{A^{q}_{\mu}}^{q}\gtrsim\int_ {\mathbb{D}}\left|\sum_{j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{\lambda})^{j }u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}\right|^{q}\left|\frac{1-| \lambda|^{2}}{1-\overline{\lambda}\varphi(z)}\right|^{q\gamma_{n}}\frac{1}{ \omega(S(\lambda))^{\frac{q}{p}}}\mu(z)dA(z). \tag{11}\]
Let \(\Delta_{j,k}=\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}\), \(j,k=0,1,\cdots,n\), and \(A=(\Delta_{j,k})\). By (6) and Gersgorin's theorem (see [8, Theorem 6.1.1] for example), \(|\det(A)|>1\). So, there exists a sequence
\(\{c_{k}\}_{k=0}^{n}\) such that
\[A\left(\begin{array}{c}c_{0}\\ c_{1}\\ \vdots\\ c_{n}\end{array}\right)=\left(\begin{array}{c}1\\ 0\\ \vdots\\ 0\end{array}\right). \tag{12}\]
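For concreteness, the system (12) can be solved numerically once the \(\gamma_{k}\) from Lemma 7 are fixed. In the sketch below the \(\gamma_{k}\) are illustrative values obtained from the recursive construction with \(M=1\), \(n=2\), and the Gershgorin check mirrors the diagonal dominance supplied by (6).

```python
import numpy as np

def pochhammer(g, j):
    """(g)_j = g (g + 1) ... (g + j - 1), with (g)_0 = 1."""
    out = 1.0
    for i in range(j):
        out *= g + i
    return out

# Illustrative gammas obtained as in Lemma 7 for M = 1, n = 2.
gammas = [163.0, 8.74e6, 1.34e21]
n = len(gammas) - 1

A = np.array([[g**(0.5 - k) * pochhammer(g, j) for k, g in enumerate(gammas)]
              for j in range(n + 1)])

# Gershgorin: every disc centred at A[j, j] with radius sum_{k != j} A[j, k]
# stays in {|w| > 1}, hence |det(A)| > 1 and (12) has a bounded solution.
for j in range(n + 1):
    radius = A[j].sum() - A[j, j]
    assert A[j, j] - radius > 1.0

c = np.linalg.solve(A, np.eye(n + 1)[:, 0])   # the coefficients c_0, ..., c_n
print(c)
```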
**Case (a).**\(1\leq p\leq q<\infty\). Using (12), it is easy to check that
\[\|u_{0}D_{\varphi}^{(0)}f_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q} =\int_{\mathbb{D}}\left[\left(\sum_{k=0}^{n}c_{k}\gamma_{k}^{\frac {1}{2}-k}(\gamma_{k})_{0}\right)u_{0}(z)f_{\lambda,\gamma_{n}}(\varphi(z)) \right]^{q}\mu(z)dA(z)\] \[=\int_{\mathbb{D}}\left[\sum_{j=0}^{n}\left(\sum_{k=0}^{n}c_{k} \gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}\right)\left(\frac{(\overline{ \lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma _{n}}(\varphi(z))\right)\right]^{q}\mu(z)dA(z)\] \[\lesssim\sum_{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\int _{\mathbb{D}}\left|\sum_{j=0}^{n}\left(\frac{(\gamma_{k})_{j}(\overline{ \lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma _{n}}(\varphi(z))\right)\right|^{q}\mu(z)dA(z). \tag{13}\]
Then, (11) implies
\[\|u_{0}D_{\varphi}^{(0)}f_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\lesssim\sum_{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\|T_{n,\varphi,\vec{u}}f_{\lambda,\gamma_{k}}\|_{A_{\mu}^{q}}^{q}\lesssim\left(\sum_{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\right)\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}.\]
By Theorem B, we see that \(u_{0}D_{\varphi}^{(0)}:A_{\omega}^{p}\to A_{\mu}^{q}\) is bounded and
\[\|u_{0}D_{\varphi}^{(0)}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}.\]
If \(K:A_{\omega}^{p}\to A_{\mu}^{q}\) is bounded, similarly to the proof of (13), we get
\[\|(u_{0}D_{\varphi}^{(0)}-K)f_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\] \[=\int_{\mathbb{D}}\left|\left(\sum_{k=0}^{n}c_{k}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{0}\right)\left(u_{0}(z)f_{\lambda,\gamma_{n}}(\varphi(z))-(Kf_{\lambda,\gamma_{n}})(z)\right)\right|^{q}\mu(z)dA(z)\] \[=\int_{\mathbb{D}}\left|\sum_{j=0}^{n}\left(\sum_{k=0}^{n}c_{k}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}\right)\left(\frac{(\overline{\lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma_{n}}(\varphi(z))-(Kf_{\lambda,\gamma_{n}})(z)\right)\right|^{q}\mu(z)dA(z)\] \[\lesssim\sum_{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\int_{\mathbb{D}}\left|\sum_{j=0}^{n}\left(\frac{(\gamma_{k})_{j}(\overline{\lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma_{n}}(\varphi(z))-(\gamma_{k})_{j}(Kf_{\lambda,\gamma_{n}})(z)\right)\right|^{q}\mu(z)dA(z). \tag{14}\]
Since
\[\left|\sum_{j=0}^{n}\left(\frac{(\gamma_{k})_{j}(\overline{ \lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma _{n}}(\varphi(z))-(r_{k})_{j}(Kf_{\lambda,\gamma_{n}})(z)\right)\right|^{q}\] \[\lesssim\left|\sum_{j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{ \lambda})^{j}u_{j}(z)}{(1-\overline{\lambda}\varphi(z))^{j}}f_{\lambda,\gamma _{n}}(\varphi(z))-(Kf_{\lambda,\gamma_{n}})(z)\right|^{q}\left|\frac{1-| \lambda|^{2}}{1-\overline{\lambda}\varphi(z)}\right|^{q(\gamma_{n}-\gamma_{k})}\] \[\quad+\left|\frac{1-|\lambda|^{2}}{1-\overline{\lambda}\varphi(z )}\right|^{q(\gamma_{n}-\gamma_{k})}|(Kf_{\lambda,\gamma_{n}})(z)|^{q}+\sum_{j= 0}^{n}|(\gamma_{k})_{j}|^{q}|(Kf_{\lambda,\gamma_{n}})(z)|^{q},\]
by (14) we have
\[\|(u_{0}D_{\varphi}^{(0)}-K)f_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\lesssim \sum_{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\left(\|(T_{n,\varphi,\vec{u }}-K)f_{\lambda,\gamma_{k}}\|_{A_{\mu}^{q}}^{q}+\|Kf_{\lambda,\gamma_{k}}\|_{A_ {\mu}^{q}}^{q}+\|Kf_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\right).\]
Then, Triangle Inequality deduces
\[\|u_{0}D_{\varphi}^{(0)}f_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\lesssim\sum _{k=0}^{n}|c_{k}|^{q}\gamma_{k}^{\frac{q}{2}-kq}\left(\|(T_{n,\varphi,\vec{u}} -K)f_{\lambda,\gamma_{k}}\|_{A_{\mu}^{q}}^{q}+\|Kf_{\lambda,\gamma_{k}}\|_{A _{\mu}^{q}}^{q}+\|Kf_{\lambda,\gamma_{n}}\|_{A_{\mu}^{q}}^{q}\right).\]
By Lemma 10, we obtain
\[\lim_{|\lambda|\to 1}\|u_{0}D_{\varphi}^{(0)}f_{\lambda,\gamma_{n}}\|_{A_{ \mu}^{q}}^{q}\lesssim\sum_{k=0}^{n}\limsup_{|\lambda|\to 1}\|(T_{n,\varphi, \vec{u}}-K)f_{\lambda,\gamma_{k}}\|_{A_{\mu}^{q}}^{q}\lesssim\|T_{n,\varphi, \vec{u}}-K\|_{A_{\mu}^{p}\to A_{\mu}^{q}}^{q}.\]
By Theorem B and the arbitrariness of \(K\), we have
\[\|u_{0}D_{\varphi}^{(0)}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim\|T_{n, \varphi,\vec{u}}\|_{e,A_{\omega}^{p}\to A_{\mu}^{q}}.\]
**Case (b):**\(1\leq q<p<\infty.\) Let \(\{\lambda_{i}\}_{i=1}^{\infty}\) be the sequence \(\{\xi_{j,l}^{k}\}\) in [13, Theorem 2] (see also [25, Theorem 3.2]) and let \(r_{i}(t)\) be the Rademacher functions. We have the following statements:
1. For any \(a=\{a_{i}\}_{i=1}^{\infty}\in l^{p}\), \(\|g_{a,\gamma_{k},t}\|_{A_{\omega}^{p}}\lesssim\|\{a_{i}\}_{i=1}^{\infty}\|_{ l^{p}}\), where \[g_{a,\gamma_{k},t}(z)=\sum_{i=1}^{\infty}a_{i}r_{i}(t)f_{\lambda_{i},\gamma_{k}}( z),\,\,k=0,1,\cdots,n.\]
2. For any \(g\in A_{\omega}^{\frac{p}{2}}\), there exists \(\{b_{i}\}_{i=1}^{\infty}\in l^{\frac{p}{2}}\) such that \[g(z)=\sum_{i=1}^{\infty}b_{i}\left(\frac{1-|\lambda_{i}|^{2}}{1-\overline{ \lambda}_{i}z}\right)^{2\gamma_{n}}\frac{1}{\omega(S(\lambda_{i}))^{\frac{2}{ p}}},\,\,\,\,\|g\|_{A_{\omega}^{\frac{p}{2}}}\approx\|\{b_{i}\}\|_{l^{ \frac{p}{2}}}.\] (15)
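Khinchin's inequality, invoked in the next chain of estimates, says that \(\int_{0}^{1}|\sum_{i}a_{i}r_{i}(t)|^{q}dt\) is comparable to \((\sum_{i}|a_{i}|^{2})^{q/2}\) with constants depending only on \(q\). A quick numerical illustration with the standard choice \(r_{i}(t)=\mathrm{sign}(\sin(2^{i}\pi t))\) and arbitrary coefficients:

```python
import numpy as np

def rademacher(i, t):
    """r_i(t) = sign(sin(2^i * pi * t)) on [0, 1)."""
    return np.sign(np.sin(2**i * np.pi * t))

rng = np.random.default_rng(0)
a = rng.standard_normal(8)                 # arbitrary finite coefficient sequence
q = 1.5
t = np.linspace(0.0, 1.0, 200000, endpoint=False)

series = sum(a[i - 1] * rademacher(i, t) for i in range(1, len(a) + 1))
lhs = np.mean(np.abs(series) ** q)         # approximates int_0^1 |sum a_i r_i|^q dt
rhs = np.sum(a ** 2) ** (q / 2)

print(lhs, rhs, lhs / rhs)                 # Khinchin: the ratio is controlled by q only
```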
By Fubini's theorem, Khinchin's inequality, (9) and (10), we have
\[\int_{0}^{1}\|T_{n,\varphi,\vec{u}}g_{a,\gamma_{k},t}\|_{A_{\mu}^ {q}}^{q}dt\] \[=\int_{\mathbb{D}}\int_{0}^{1}\left|\sum_{i=1}^{\infty}a_{i}r_{i} (t)(T_{n,\varphi,\vec{u}}f_{\lambda_{i},\gamma_{k}})(z)\right|^{q}dt\mu(z)dA(z)\] \[\approx\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}|a_{i}|^{2}|(T_{ n,\varphi,\vec{u}}f_{\lambda_{i},\gamma_{k}})(z)|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[=\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\left|a_{i}\left(\sum_ {j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{\lambda_{i}})^{j}u_{j}(z)}{(1- \overline{\lambda}_{i}\varphi(z))^{j}}\right)\left(\frac{1-|\lambda_{i}|^{2}}{ 1-\overline{\lambda}_{i}\varphi(z)}\right)^{\gamma_{k}}\frac{1}{\omega(S( \lambda_{i}))^{\frac{1}{p}}}\right|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\]
\[\gtrsim\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\left|a_{i}\left( \sum_{j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{\lambda_{i}})^{j}u_{j}(z)}{(1- \overline{\lambda_{i}}\varphi(z))^{j}}\right)\left(\frac{1-|\lambda_{i}|^{2}}{ 1-\overline{\lambda_{i}}\varphi(z)}\right)^{\gamma_{n}}\frac{1}{\omega(S( \lambda_{i}))^{\frac{1}{p}}}\right|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[=\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}|A_{k,i}(z)|^{2}\right) ^{\frac{q}{2}}\mu(z)dA(z).\]
Here
\[A_{k,i}(z)=a_{i}\left(\sum_{j=0}^{n}\frac{(\gamma_{k})_{j}(\overline{\lambda_{ i}})^{j}u_{j}(z)}{(1-\overline{\lambda_{i}}\varphi(z))^{j}}\right)\left( \frac{1-|\lambda_{i}|^{2}}{1-\overline{\lambda_{i}}\varphi(z)}\right)^{\gamma _{n}}\frac{1}{\omega(S(\lambda_{i}))^{\frac{1}{p}}}.\]
Thus, by the statement (bi),
\[\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}|A_{k,i}(z)|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\lesssim\int_{0}^{1}\|T_{n,\varphi,\vec{u}}g_{a,\gamma_{k},t}\|_{A_{\mu}^{q}}^{q}dt\] \[\lesssim\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\|\{a_{i}\}\|_{l^{p}}^{q}.\]
Recalling that \(\{c_{k}\}_{k=0}^{n}\) was decided by (12), we have
\[\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\left|a_{i}\right|^{2}\left|(u_{0}D_{\varphi}^{(0)}f_{\lambda_{i},\gamma_{n}})(z)\right|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[=\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\left|\sum_{k=0}^{n}c_{k}\gamma_{k}^{\frac{1}{2}-k}A_{k,i}(z)\right|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[\lesssim\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\sum_{k=0}^{n}|A_{k,i}(z)|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[\lesssim\sum_{k=0}^{n}\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}|A_{k,i}(z)|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[\lesssim\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\|\{a_{i}\}\|_{l^{p}}^{q}. \tag{16}\]
For any \(g\in A_{\omega}^{\frac{p}{2}}\), by the statement (bii), there exists \(\{b_{i}\}_{i=1}^{\infty}\in l^{\frac{p}{2}}\) such that (15) holds. Let \(a_{i}=b_{i}^{\frac{1}{2}}\). So, \(\|\{b_{i}\}\|_{l^{\frac{p}{2}}}=\|\{a_{i}\}\|_{l^{p}}^{2}\). By (16), we get
\[\|u_{0}^{2}D_{\varphi}^{(0)}g\|_{A_{\mu}^{\frac{q}{2}}}^{\frac{q}{2}}\leq\int_{\mathbb{D}}\left(\sum_{i=1}^{\infty}\left|a_{i}\right|^{2}\left|(u_{0}D_{\varphi}^{(0)}f_{\lambda_{i},\gamma_{n}})(z)\right|^{2}\right)^{\frac{q}{2}}\mu(z)dA(z)\] \[\lesssim\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\|\{a_{i}\}\|_{l^{p}}^{q}=\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\|\{b_{i}\}\|_{l^{\frac{p}{2}}}^{\frac{q}{2}}\] \[\approx\|T_{n,\varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}^{q}\|g\|_{A_{\omega}^{\frac{p}{2}}}^{\frac{q}{2}}.\]
That is to say, \(u_{0}^{2}D_{\varphi}^{(0)}:A_{\omega}^{\frac{p}{2}}\to A_{\mu}^{\frac{q}{2}}\) is bounded. By Theorem B, \(u_{0}D_{\varphi}^{(0)}:A_{\omega}^{p}\to A_{\mu}^{q}\) is bounded and
\[\|u_{0}D_{\varphi}^{(0)}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}\lesssim\|T_{n, \varphi,\vec{u}}\|_{A_{\omega}^{p}\to A_{\mu}^{q}}.\]
The proof is complete.
## 4. Proof of Theorem 3
Proof of Theorem 3.: For brevity, let \(\upsilon(a)=\omega(S(a))^{\frac{1}{p}}\). By Chapter 1 in [11], there exist \(\alpha,\beta>0\) such that \(\frac{\upsilon(t)}{(1-t)^{\alpha}}\) and \(\frac{\upsilon(t)}{(1-t)^{\beta}}\) are essentially increasing and essentially decreasing on \([0,1)\), respectively. By (4.8) in [26], there is a constant \(M_{0}>4\) such that, whenever \(\beta(z,w)<\frac{1}{2}\),
\[\frac{2}{\sqrt{M_{0}}}<\frac{\upsilon(z)}{\upsilon(w)}<\frac{\sqrt{M_{0}}}{2}.\]
By Lemma 7, we can choose \(\{\gamma_{j}\}_{j=0}^{n}\) large enough such that (6) holds for \(M=M_{0}\). By Proposition 4.5 in [26], there exist \(0<\varepsilon<\frac{1}{4}<R<1\) such that whenever \(|z|>R\), \(\xi,\eta\in D(z,\varepsilon)\) and \(0\leq k\), \(j\leq n\),
\[\frac{1}{\sqrt{M_{0}}}<|\xi|^{n},|\eta|^{n}<1, \tag{17}\]
\[\frac{1}{2}<\frac{(1-|\xi|^{2})^{\gamma_{j}}(1-|\eta|^{2})^{k}}{|1-\overline{ \xi}\eta|^{\gamma_{j}+k}}<2,\]
and then,
\[\frac{1}{\sqrt{M_{0}}}<\frac{\upsilon(\eta)}{\upsilon(\xi)}\frac{(1-|\xi|^{2}) ^{\gamma_{j}}(1-|\eta|^{2})^{k}}{|1-\overline{\xi}\eta|^{\gamma_{j}+k}}<\sqrt {M_{0}}. \tag{18}\]
**Case (a):**\(|\lambda_{J}|>R\) and \(\sup\limits_{0\leq j\leq n}\beta(\lambda_{j},\lambda_{J})<\varepsilon\). Let \(f_{\lambda,\gamma}(z)=\left(\frac{1-|\lambda|^{2}}{1-\overline{\lambda}z}\right)^{\gamma}\frac{1}{\upsilon(\lambda)}\) and
\[f_{\lambda,J}(z)=\sum\limits_{k=0}^{n}b_{k}\gamma_{k}^{\frac{1}{2}-k}f_{ \lambda_{k},\gamma_{k}}(z).\]
Then, the equations (3) can be written as \(Ab=\delta_{J}\), in which
\[a_{jk}=\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}\overline{\lambda_{k}}^{j} \frac{\upsilon(\lambda_{j})}{\upsilon(\lambda_{k})}\frac{(1-|\lambda_{k}|^{2}) ^{\gamma_{k}}(1-|\lambda_{j}|^{2})^{j}}{(1-\overline{\lambda_{k}}\lambda_{j})^ {\gamma_{k}+j}},\]
and
\[b=(b_{0},b_{1},\cdots,b_{n})^{T},\quad\delta_{J}=(\delta_{0J},\ \delta_{1J}, \cdots,\ \delta_{nJ})^{T},\quad A=(a_{jk}).\]
By (6), (17) and (18), for any \(0\leq j\leq n\),
\[1+\sum\limits_{k\neq j,0\leq k\leq n}|a_{jk}|<1+\sqrt{M_{0}}\sum\limits_{k \neq j,0\leq k\leq n}\gamma_{k}^{\frac{1}{2}-k}(\gamma_{k})_{j}<\frac{1}{M_{0} }\gamma_{j}^{\frac{1}{2}-j}(\gamma_{j})_{j}<|a_{jj}|.\]
By Gersgorin's theorem, \(|\det(A)|>1\). Meanwhile, since all elements in \(A\) are bounded independent of \(\{\lambda_{j}\}_{j=0}^{n}\), the elements of adjoint matrix \(A^{*}\) of \(A\) are also bounded. So, there exists a constant \(M_{1}\) independent of \(\{\lambda_{j}\}_{j=0}^{n}\) such that
\(\sum_{k=0}^{n}|b_{k}|<M_{1}\). So, \(\|f_{\Lambda,J}\|_{A_{\omega}^{p}}\lesssim M_{1}\). For a fixed \(J\in\{0,1,\cdots,n\}\) and any given \(\delta\in(0,1)\) and \(0\leq j\leq n\), set
\[E_{j,\delta}=\left\{\lambda_{j}:\Lambda=\{\lambda_{k}\}_{k=0}^{n}\subset\mathbb{ D},|\lambda_{J}|>\delta,\sup_{0\leq i\leq n}\beta(\lambda_{i},\lambda_{J})< \varepsilon\right\}.\]
As \(\delta\) approaches \(1\), \(E_{j,\delta}\) approaches the boundary of \(\mathbb{D}\). Therefore, the functions \(\{f_{\lambda_{j},\gamma_{j}}\}_{\lambda_{j}\in E_{j,\delta}}\) converge to \(0\) uniformly on any compact subset of \(\mathbb{D}\). By the arbitrariness of \(0\leq j\leq n\) and \(\sum_{k=0}^{n}|b_{k}|<M_{1}\), the functions \(\{f_{\Lambda,J}\}\) converge to \(0\) uniformly on any compact subset of \(\mathbb{D}\) as \(|\lambda_{J}|\) approaches \(1\).
**Case (b):**\(|\lambda_{J}|>R\) and \(\sup_{0\leq k\leq n}\beta(\lambda_{k},\lambda_{J})\geq\varepsilon\). Let \(\varepsilon^{\prime}=\frac{\varepsilon}{n+1}\). By Pigeonhole Principle, there exists \(L\in\{0,1,\cdots,n\}\) such that
\[\left\{\lambda_{j}:L\varepsilon^{\prime}\leq\beta(\lambda_{j},\lambda_{J})<( L+1)\varepsilon^{\prime},j=0,1,2,\cdots,n\right\}\]
is empty. Set \(\Lambda_{1}=\{z_{k}\}_{k=0}^{n}\), where
\[z_{k}=\left\{\begin{array}{ll}\lambda_{k},&\mbox{if }\ \beta(\lambda_{k}, \lambda_{J})<L\varepsilon^{\prime},\\ \lambda_{J},&\mbox{if }\ \beta(\lambda_{k},\lambda_{J})\geq(L+1)\varepsilon^{ \prime}.\end{array}\right.\]
By the proof above, we have a function \(f_{\Lambda_{1},J}\) such that \(\|f_{\Lambda_{1},J}\|_{A_{\omega}^{p}}\lesssim M_{1}\) and
\[f_{\Lambda_{1},J}^{(j)}(z_{j})=\frac{\delta_{jJ}}{(1-|z_{j}|^{2})^{j}\nu(z_{j} )},\ 0\leq j\leq n. \tag{19}\]
Let \(w_{1},w_{2},\cdots,w_{n^{\prime}}\) be the elements of \(\{\lambda_{k}\}_{k=0}^{n}\backslash\{z_{k}\}_{k=0}^{n}\). By Lemma 9, there exist a constant \(M_{2}\), independent of \(\{\lambda_{j}\}_{j=0}^{n}\) and \(J\), and a function \(h\in H(\mathbb{D})\) such that for all \(0\leq k,j\leq n\) and \(1\leq i\leq n^{\prime}\), we have
\[\|h\|_{H^{\infty}}<M_{2},\ \ \ h^{(k)}(w_{i})=0,h^{(k)}(z_{j})=\delta_{0k}. \tag{20}\]
Letting \(f_{\Lambda,J}=f_{\Lambda_{1},J}h\), for all \(j=0,1,\cdots,n\), we have
\[f_{\Lambda,J}^{(j)}(z)=\sum_{k=0}^{j}C_{j}^{k}f_{\Lambda_{1},J}^{(k)}(z)h^{(j- k)}(z).\]
When \(\lambda_{j}=z_{j}\), by (19) and (20), we have
\[f_{\Lambda,J}^{(j)}(\lambda_{j})=f_{\Lambda,J}^{(j)}(z_{j})=f_{\Lambda_{1},J}^ {(j)}(z_{j})=\frac{\delta_{jJ}}{(1-|z_{j}|^{2})^{j}\nu(z_{j})}=\frac{\delta_{ jJ}}{(1-|\lambda_{j}|^{2})^{j}\nu(\lambda_{j})};\]
otherwise, \(\lambda_{j}\) could not be \(\lambda_{J}\) and there exists \(w_{i}\) such that \(\lambda_{j}=w_{i}\). By (20),
\[f_{\Lambda,J}^{(j)}(\lambda_{j})=f_{\Lambda,J}^{(j)}(w_{i})=0=\frac{\delta_{jJ }}{(1-|\lambda_{j}|^{2})^{j}\nu(\lambda_{j})}.\]
So, \(f_{\Lambda,J}=f_{\Lambda_{1},J}h\) is the desired function and \(\|f_{\Lambda,J}\|_{A_{\omega}^{p}}\lesssim M_{1}M_{2}\). Moreover, by the proof of case (a) and \(\|h\|_{H^{\infty}}<M_{2}\), the functions \(\{f_{\Lambda,J}\}\) converge to \(0\) uniformly on any compact subset of \(\mathbb{D}\) when \(|\lambda_{J}|\) approaches \(1\).
**Case (c):**\(|\lambda_{J}|\leq R\). By Lemma 8, there is a constant \(M_{3}\) such that, for all \(\{\lambda_{j}\}_{j=0}^{n}\) and \(0\leq J\leq n\), there is a function \(p\in H^{\infty}\) with \(\|p\|_{H^{\infty}}<M_{3}\) and \(p^{(j)}(\lambda_{j})=\frac{1}{2}\delta_{jJ}\). Then \(f_{\Lambda,J}(z)=\frac{2p(z)}{(1-|\lambda_{J}|^{2})^{J}\nu(\lambda_{J})}\) is the desired function.
By the above proof, we see that the functions \(\{f_{\Lambda,J}\}\) converge to \(0\) uniformly on compact subsets of \(\mathbb{D}\) as \(|\lambda_{J}|\to 1\). The proof is complete.
## 5. Proof of Theorem 2
Proof of Theorem 2.: First we consider the norm of \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A_{\omega}^{p}\to H^{\infty}\). By the assumption and Lemma 3 in [31], we see that for any \(f\in A_{\omega}^{p}\) and \(k=0,1,\cdots,n+1\),
\[|f^{(k)}(z)|\lesssim\frac{\|f\|_{A_{\omega}^{p}}}{(1-|z|^{2})^{k} \omega(S(z))^{\frac{1}{p}}},\ z\in\mathbb{D}. \tag{21}\]
After a calculation, by (21) we get
\[\|u_{k}D_{\varphi_{k}}^{(k)}\|_{A_{\omega}^{p}\to H^{\infty}} \lesssim\sup_{z\in\mathbb{D}}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k} \omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
Therefore,
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{A_{\omega}^{p}\to H^{ \infty}}\lesssim\sum_{k=0}^{n}\sup_{z\in\mathbb{D}}\frac{|u_{k}(z)|}{(1-| \varphi_{k}(z)|^{2})^{k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
Conversely, suppose \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A_{\omega}^{p}\to H^{\infty}\) is bounded. By Theorem 3, there exists \(M^{\prime}\) such that, for any \(\lambda\in\mathbb{D}\) and \(J\in\{0,1,\cdots,n\}\), there is a function \(f_{\Lambda,J}\in A_{\omega}^{p}\) satisfying \(\|f_{\Lambda,J}\|_{A_{\omega}^{p}}\leq M^{\prime}\) and
\[f_{\Lambda,J}^{(j)}(\varphi_{j}(\lambda))=\frac{\delta_{jJ}}{(1-|\varphi_{j}( \lambda)|^{2})^{j}\omega(S(\varphi_{j}(\lambda)))^{\frac{1}{p}}},\ j=0,1, \cdots,n.\]
Here, \(\Lambda=\{\varphi_{k}(\lambda)\}_{k=0}^{n}\). Then we have
\[\Big{|}\frac{u_{J}(\lambda)}{(1-|\varphi_{J}(\lambda)|^{2})^{J} \omega(S(\varphi_{J}(\lambda)))^{\frac{1}{p}}}\Big{|}=|(\mathcal{T}_{n,\vec{ \varphi},\vec{u}})f_{\Lambda,J}(\lambda)|\leq\|\mathcal{T}_{n,\vec{\varphi}, \vec{u}}f_{\Lambda,J}\|_{H^{\infty}}\lesssim\|\mathcal{T}_{n,\vec{\varphi}, \vec{u}}\|_{A_{\omega}^{p}\to H^{\infty}}.\]
Therefore,
\[\sum_{k=0}^{n}\sup_{z\in\mathbb{D}}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^ {k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}\lesssim\|\mathcal{T}_{n,\vec{ \varphi},\vec{u}}\|_{A_{\omega}^{p}\to H^{\infty}},\]
as desired.
Next, we estimate the essential norm of \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A_{\omega}^{p}\to H^{\infty}\). Suppose \(\mathcal{T}_{n,\vec{\varphi},\vec{u}}:A_{\omega}^{p}\to H^{\infty}\) is bounded. By the above proof, we see that
\[\sup_{z\in\mathbb{D}}|u_{k}(z)|<\infty,\ k=0,1,2,\cdots,n.\]
For any given \(r\in[0,1)\), let \((K_{r}f)(z)=f(rz)\). By Lemma 10, it is compact on \(A^{p}_{\omega}\). So, \(u_{k}D^{(k)}_{\varphi_{k}}K_{r}:A^{p}_{\omega}\to H^{\infty}\) is also compact. Let \(0<\delta<1\). For \(f\in A^{p}_{\omega}\), we have
\[\|u_{k}D^{(k)}_{\varphi_{k}}f-u_{k}D^{(k)}_{\varphi_{k}}K_{r}f\|_ {H^{\infty}} \leq\left(\sup_{|\varphi_{k}(z)|\leq\delta}+\sup_{\delta<|\varphi_ {k}(z)|<1}\right)|u_{k}(z)D^{(k)}_{\varphi_{k}}f(z)-r^{k}u_{k}(z)D^{(k)}_{r \varphi_{k}}f(z)|\] \[\leq\sup_{|\varphi_{k}(z)|\leq\delta}|u_{k}(z)D^{(k)}_{\varphi_{k }}f(z)-r^{k}u_{k}(z)D^{(k)}_{r\varphi_{k}}f(z)|\] \[\quad+\sup_{\delta<|\varphi_{k}(z)|<1}|u_{k}(z)D^{(k)}_{\varphi_{ k}}f(z)-r^{k}u_{k}(z)D^{(k)}_{r\varphi_{k}}f(z)|\] \[:=I+II.\]
By (21), there exists a constant \(M\) independent of \(f,r,\delta,u_{k},\varphi_{k}\) such that
\[I\leq \sup_{|\varphi_{k}(z)|\leq\delta}(1-r^{k})|u_{k}(z)f^{(k)}(r \varphi_{k}(z))|+\sup_{|\varphi_{k}(z)|\leq\delta}|u_{k}(z)|\left|\int_{r \varphi_{k}(z)}^{\varphi_{k}(z)}f^{(k+1)}(\xi)d\xi\right|\] \[\leq (1-r^{k}+1-r)\sup_{|\varphi_{k}(z)|\leq\delta}\frac{M|u_{k}(z) \|f\|_{A^{p}_{\omega}}}{(1-|\varphi_{k}(z)|^{2})^{k+1}\omega(S(\varphi_{k}(z) ))^{\frac{1}{p}}}\]
and
\[II \leq\sup_{\delta<|\varphi_{k}(z)|<1}|u_{k}(z)f^{(k)}(\varphi_{k}( z))|+\sup_{\delta<|\varphi_{k}(z)|<1}|u_{k}(z)f^{(k)}(r\varphi_{k}(z))|\] \[\leq\sup_{\delta<|\varphi_{k}(z)|<1}\frac{M|u_{k}(z)\|f\|_{A^{p}_ {\omega}}}{(1-|\varphi_{k}(z)|^{2})^{k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p} }}.\]
So, for any given \(\varepsilon>0\), we can choose \(r\in(0,1)\) such that
\[\|u_{k}D^{(k)}_{\varphi_{k}}f-u_{k}D^{(k)}_{\varphi_{k}}K_{r}f\|_ {H^{\infty}}\leq\varepsilon\|f\|_{A^{p}_{\omega}}+\sup_{\delta<|\varphi_{k}(z) |<1}\frac{M|u_{k}(z)\|f\|_{A^{p}_{\omega}}}{(1-|\varphi_{k}(z)|^{2})^{k}\omega (S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
Letting \(\varepsilon\to 0\) and \(\delta\to 1\), we have
\[\|u_{k}D^{(k)}_{\varphi_{k}}\|_{e,A^{p}_{\omega}\to H^{\infty}} \lesssim\limsup_{|\varphi_{k}(z)|\to 1}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k} \omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
Therefore,
\[\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{e,A^{p}_{\omega}\to H^{\infty}}\lesssim\sum_{k=0}^{n}\limsup_{|\varphi_{k}(z)|\to 1}\frac{|u_{k}(z)|}{(1-|\varphi_{k}(z)|^{2})^{k}\omega(S(\varphi_{k}(z)))^{\frac{1}{p}}}.\]
Finally, we prove that
\[\sum_{j=0}^{n}\limsup_{|\varphi_{j}(\lambda)|\to 1}\frac{|u_{j}(\lambda)|}{(1-|\varphi_{j}(\lambda)|^{2})^{j}\omega(S(\varphi_{j}(\lambda)))^{\frac{1}{p}}}\lesssim\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{e,A^{p}_{\omega}\to H^{\infty}}.\]
Suppose \(K:A^{p}_{\omega}\to H^{\infty}\) is compact. By Theorem 3, there exists \(M^{\prime}\) such that, for any \(\lambda\in\mathbb{D}\) and \(J\in\{0,1,\cdots,n\}\), there is a function \(f_{\Lambda,J}\in A^{p}_{\omega}\) satisfying \(\|f_{\Lambda,J}\|_{A^{p}_{\omega}}\leq M^{\prime}\) and
\[f^{(j)}_{\Lambda,J}(\varphi_{j}(\lambda))=\frac{\delta_{jJ}}{(1-|\varphi_{j}( \lambda)|^{2})^{j}\omega(S(\varphi_{j}(\lambda)))^{\frac{1}{p}}},\,\,\,j=0,1, \cdots,n.\]
Here, \(\Lambda=\{\varphi_{k}(\lambda)\}_{k=0}^{n}\). Then we have
\[\Big{|}\frac{u_{J}(\lambda)}{(1-|\varphi_{J}(\lambda)|^{2})^{J} \omega(S(\varphi_{J}(\lambda)))^{\frac{1}{p}}}-(Kf_{\Lambda,J})(\lambda)\Big{|} =|(\mathcal{T}_{n,\vec{\varphi},\vec{u}}-K)f_{\Lambda,J}(\lambda)|\] \[\leq||\mathcal{T}_{n,\vec{\varphi},\vec{u}}f_{\Lambda,J}-Kf_{ \Lambda,J}||_{H^{\infty}}\] \[\lesssim||\mathcal{T}_{n,\vec{\varphi},\vec{u}}-K||_{A^{p}_{ \omega}\to H^{\infty}}.\]
By Lemma 10 and Theorem 3, \(||Kf_{\Lambda,J}||_{H^{\infty}}\to 0\) as \(|\varphi_{J}(\lambda)|\to 1\). Thus,
\[\limsup_{|\varphi_{J}(\lambda)|\to 1}\frac{|u_{J}(\lambda)|}{(1-|\varphi_{J}( \lambda)|^{2})^{J}\omega(S(\varphi_{J}(\lambda)))^{\frac{1}{p}}}\lesssim|| \mathcal{T}_{n,\vec{\varphi},\vec{u}}-K||_{A^{p}_{\omega}\to H^{\infty}}.\]
Since \(K\) and \(J\) are arbitrary, we have
\[\sum_{j=0}^{n}\limsup_{|\varphi_{j}(\lambda)|\to 1}\frac{|u_{j}(\lambda)|}{(1-|\varphi_{j}(\lambda)|^{2})^{j}\omega(S(\varphi_{j}(\lambda)))^{\frac{1}{p}}}\lesssim\|\mathcal{T}_{n,\vec{\varphi},\vec{u}}\|_{e,A^{p}_{\omega}\to H^{\infty}}.\]
The proof is complete.
|
2309.06300 | Quantum memories for squeezed and coherent superpositions in a
driven-dissipative nonlinear oscillator | Quantum oscillators with nonlinear driving and dissipative terms have gained
significant attention due to their ability to stabilize cat-states for
universal quantum computation. Recently, superconducting circuits have been
employed to realize such long-lived qubits stored in coherent states. We
present a generalization of these oscillators, which are not limited to
coherent states, in the presence of different nonlinearities in driving and
dissipation, exploring different degrees. Specifically, we present an extensive
analysis of the asymptotic dynamical features and of the storage of squeezed
states. We demonstrate that coherent superpositions of squeezed states are
achievable in the presence of a strong symmetry, thereby allowing for the
storage of squeezed cat-states. In the weak symmetry regime, accounting for
linear dissipation, we investigate the potential application of these nonlinear
driven-dissipative resonators for quantum computing and quantum associative
memory and analyze the impact of squeezing on their performance. | Adrià Labay-Mora, Roberta Zambrini, Gian Luca Giorgi | 2023-09-12T15:06:08Z | http://arxiv.org/abs/2309.06300v2 | Quantum memories for squeezed and coherent superpositions in a driven-dissipative nonlinear oscillator
###### Abstract
Quantum oscillators with nonlinear driving and dissipative terms have gained significant attention due to their ability to stabilize cat-states for universal quantum computation. Recently, superconducting circuits have been employed to realize such long-lived qubits stored in coherent states. We present a generalization of these oscillators, which are not limited to coherent states, in the presence of different nonlinearities in driving and dissipation, exploring different degrees. Specifically, we present an extensive analysis of the asymptotic dynamical features and of the storage of squeezed states. We demonstrate that coherent superpositions of squeezed states are achievable in the presence of a strong symmetry, thereby allowing for the storage of squeezed cat-states. In the weak symmetry regime, accounting for linear dissipation, we investigate the potential application of these nonlinear driven-dissipative resonators for quantum computing and quantum associative memory and analyze the impact of squeezing on their performance.
## I Introduction
Quantum oscillators with high nonlinearities have recently gained attention due to their promise to perform universal quantum computation [1; 2; 3; 4]. These types of systems benefit from an infinite dimensional Hilbert space, which allows, for instance, quantum error correction (QEC) techniques [5; 6; 7] and fault-tolerant quantum computation to be implemented with no need for ancillary degrees of freedom [8; 9]. Moreover, there have been proposals to use them as a resource for quantum machine learning algorithms where information might be encoded in the amplitude or phase of a squeezed state [10; 11]. Examples include quantum reservoir computing [12; 13] and quantum associative memory [14]. Experimental realizations of these systems have also been carried out in the last decade by several groups that were able to engineer up to five-photon dissipation using superconducting SQUID devices [15; 16; 17; 18], including demonstrations of logical qubits encoded in oscillators of this kind [19; 20].
In all cases studied so far, the exchange of photons with the environment - in the form of dissipation - and the nonlinear driving, have been considered to involve the same non-linearity, i.e. photon processes up to the same degree (where \(n\)-photon driving and dissipation are balanced). In this regime, it is well-known that the ground state (in the case of Kerr oscillators) [15; 21] or the steady state (in the case of dissipative oscillators) [3] is a cat-like superposition or classical mixture (in the presence of single-photon loss) of coherent states [2; 22]. These coherences that appear in the steady state can be traced back to the symmetry of the system which can be weak or strong depending on the system parameters [23].
In this balanced regime, it was recently shown in Ref. [14] that such systems can be used as a quantum associative memory algorithm, as they permit the retrieval of previously stored patterns in the form of coherent states. The results of [14] set a new paradigm for associative memory, as they deviate from the standard approach based on Hopfield networks [24; 25], also adopted in the quantum realm [26; 27; 28; 29; 30; 31; 32; 33; 34]. Still, in all models considered so far, stored memories are encoded in classical-like states.
In this work, we go beyond and explore the possibility of different photon number exchanges between driving and dissipation, which produces squeezed states with different properties depending on the relation between the powers of the nonlinear processes. We first show that this system can be used as a quantum memory for quantum computation, similar to other works where squeezed-cat states can enhance the storage time compared to coherent states [35; 36; 37]. We numerically compute the storage time of quantum states and find under which conditions squeezing can lead to improved performance. Moreover, the results apply to broader scenarios where qubits are replaced by qudits.
Then, we extend the proposal introduced in Ref. [14] to implement a quantum associative memory protocol for pattern discrimination and characterize the system's performance in storing and retrieving patterns encoded in the amplitude or phase of the squeezed states. This extends the range of solutions for quantum associative memory to include bona fide quantum objects that could not be stored in an efficient manner in classical devices.
The article is organized as follows. In Section II we introduce the master equation and review some of its properties. Then, in Section III we determine the type of symmetry of the system, and in Section IV we study the metastable phase that arises in the case of weak symmetry. The following Section V is devoted to characterizing the squeezed states that form the metastable manifold of the system. All this analysis allows us to explore two applications for these oscillators in Section VI and Section VII. In the former, we explore the capability of the system to store quantum information over time to be used in
quantum computation [3]. In the latter, we extend the proposal introduced in Ref. [14] to implement a quantum associative memory protocol for pattern discrimination.
## II The model
The system under study consists of a generalized driven-dissipative nonlinear oscillator described by the GKLS master equation introduced in Ref. [14]
\[\frac{\partial\rho}{\partial t}=-i[H_{n},\rho]+\gamma_{1}\mathcal{D}[\hat{a}] \rho+\gamma_{m}\mathcal{D}[\hat{a}^{m}]\rho\;\equiv\mathcal{L}\rho, \tag{1}\]
where in the Liouvillian superoperator \(\mathcal{L}\) we distinguish three different terms. First, the unitary evolution is described by the Hamiltonian, which in the rotating frame and after the parametric approximation reads
\[H_{n}=\Delta\hat{a}^{\dagger}\hat{a}+i\eta_{n}\left[\hat{a}^{n}e^{i\theta_{0}n }-(\hat{a}^{\dagger})^{n}e^{-i\theta_{0}n}\right]\;. \tag{2}\]
This models an \(n\)-photon drive with \(n\geq 1\), where the detuning between the natural oscillator frequency \(\omega_{0}\) and the frequency of the driving force \(\omega_{s}\) is denoted by \(\Delta=\omega_{0}-\omega_{s}\). This \(n\)-photon parametric process produces squeezing effects for \(n>1\) [38] and will be called the squeezing term in the following. The parameter \(\eta_{n}\) controls the driving strength and \(\theta_{0}\) represents its phase.
The system is also coupled to the environment through two Lindblad dissipators \(\mathcal{D}[\hat{O}]\rho=\hat{O}\rho\hat{O}^{\dagger}-(1/2)\{\hat{O}^{\dagger }\hat{O},\rho\}\). The first is a linear (single-photon) dissipation, \(\hat{O}=\hat{a}\), characterized by a photon-loss rate \(\gamma_{1}\). The second is a nonlinear term with \(m\)-photon exchange rate \(\gamma_{m}\) (\(m\geq 1\)), dissipating photons to the environment in groups of \(m\). A sketch of the different nonlinear processes can be seen in Fig. 1(a).
This type of resonator has been extensively studied in literature when \(n=m\)[22; 23; 39; 40], as mentioned in the introduction. Moreover, superconducting circuits can be used to engineer \(n\)-photon driving and dissipation terms using a single buffer drive by modifying the flux frequency going through a Josephson junction [16; 17; 41]. While a single buffer drive can only implement a photon exchange mechanism of a particular degree \(n\), it may be possible to couple the resonator to two buffer drives, each with \(n\) and \(m\)-photon driving and dissipation terms. Then, by increasing the strength of the desired degree and decreasing the others, one recovers Eq. (1).
Oscillators of this kind have recently been proposed to implement universal quantum computation and fault-tolerant quantum computation due to their capacity to autonomously protect the qubits from bit-flip errors [2; 3; 8; 9]. Although phase flips arising from photon loss still need to be corrected, they can be easily addressed using quantum error correction techniques. To the best of our knowledge, with only \(n\)-photon exchange processes (balanced driving and dissipation), only coherent states can be stored. In the following sections, we will see that the resonator described by Eq. (1) can be used to store squeezed states as well as coherent superpositions of such states. In particular, the type of state that is preserved depends on the symmetry of the system.
In general, the results presented in the following have been obtained by fixing the parameters \(\gamma_{1}=1.0\), \(\gamma_{m}=0.2\), and \(\Delta=0.4\), unless specified otherwise. Also, to simplify the notation, we will typically refer to the master equation with \(n\)-photon driving and \(m\)-photon dissipation as a pair of numbers \((n,m)\).
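For completeness, Eq. (1) is straightforward to set up numerically, for instance with the QuTiP library. The snippet below is a minimal sketch, not the code used to generate the figures: the truncation \(N\), the driving strength and the helper name `build_liouvillian` are illustrative choices, while the rates follow the values quoted above.

```python
import numpy as np
from qutip import destroy, liouvillian, steadystate, expect, num

N = 50  # Fock-space truncation (illustrative)

def build_liouvillian(n, m, eta_n, gamma_1=1.0, gamma_m=0.2, delta=0.4, theta_0=0.0):
    """Liouvillian of Eq. (1): n-photon drive, single- and m-photon dissipation."""
    a = destroy(N)
    H = delta * a.dag() * a + 1j * eta_n * (
        a**n * np.exp(1j * n * theta_0) - a.dag()**n * np.exp(-1j * n * theta_0))
    c_ops = [np.sqrt(gamma_1) * a, np.sqrt(gamma_m) * a**m]
    return liouvillian(H, c_ops)

# example: two-photon driving with three-photon dissipation, (n, m) = (2, 3)
L = build_liouvillian(n=2, m=3, eta_n=5.0)
rho_ss = steadystate(L)                      # solves L rho_ss = 0
print("<n>_ss =", expect(num(N), rho_ss))
```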
## III Symmetry and steady-state structure
In this section, we will discuss the role of symmetry for Eq. (1) and its implications for understanding and controlling their dynamics. We will explore how symmetry can be used to design robust quantum systems and to protect against decoherence and other forms of environmental noise [42; 43]. We remark that although the system is infinite-dimensional, a cut-off on the dimension is introduced to perform the numerical simulation. This approximation guarantees the presence of at least one
Figure 1: (a) Sketch of the driven-dissipative nonlinear oscillator with the three processes involved in the master equation: nonlinear periodic driving with degree \(n\), linear dissipation with rate \(\gamma_{1}\) and nonlinear dissipation of degree \(m\). The driving force pushes the system with strength \(\eta\) and frequency \(\omega_{s}\) that may deviate from the natural oscillator frequency \(\omega_{0}\). The dissipative terms emit photons out of the system at rates \(\gamma_{1}\) and \(\gamma_{m}\) for the single- and multi-photon processes respectively. (b) Wigner distribution of the steady states generated in the weak symmetry regime with \(\gamma_{1}>0\). From left to right: \((n,m)=\{(2,3),(3,4),(4,3)\}\). (c) Wigner distribution of the two steady states (corresponding to even and odd parity eigenstates) present in the strong symmetry regime with \(\gamma_{1}=0\) and \((n,m)=(2,4)\).
steady state by Evans' theorem [44; 45]. Also note that, for \(n=m\), the existence of at least one steady state has been shown analytically in infinite dimension [46; 47].
In accordance with Ref. [23], the system exhibits a \(\mathds{Z}_{n}\) symmetry when \(n=m\), with the symmetry being weak (strong) when \(\gamma_{1}>0\) (\(\gamma_{1}=0\)). This argument can be generalized for \(n\neq m\) where, however, the absence of linear dissipation alone does not guarantee the presence of a strong symmetry, for which a necessary condition is that the Hamiltonian and all the jump operators commute with \(\hat{Z}_{p}=\exp\bigl(-i2\pi\hat{a}^{\dagger}\hat{a}/p\bigr)\) [48; 49]. Hence,
\[[\hat{Z}_{p},\hat{a}^{n}]=[\hat{Z}_{p},\hat{a}^{m}]=0\;\Rightarrow\;p=\gcd(n,m)>1\,, \tag{3}\]
where \(p\) determines the number of steady states in the system. Notably, instances of strong symmetry arise when the dissipation degree, \(m\), is not coprime with \(n\), as exemplified by cases such as \((2,4)\), \((3,6)\), or \((4,6)\). On the other hand, when the two powers are coprime (\(\gcd(n,m)=1\)), there is a weak symmetry \(\mathds{Z}_{n}\) such that \([\mathcal{L},\mathcal{Z}_{n}]=0\), where \(\mathcal{Z}_{n}[\bullet]=e^{-i2\pi\hat{a}^{\dagger}\hat{a}/n}\bullet e^{i2\pi\hat{a}^{\dagger}\hat{a}/n}\).
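These commutation conditions are easy to verify numerically. The sketch below is an illustrative check (arbitrary truncation, operator names our own): it confirms that the strong-symmetry condition holds exactly when \(p=\gcd(n,m)\) divides both degrees, and that the weak-symmetry superoperator commutes with \(\mathcal{L}\) even in the presence of single-photon loss.

```python
import numpy as np
from math import gcd
from qutip import destroy, num, spre, spost, liouvillian

N = 30
a = destroy(N)

def rotation(phi):
    """Phase-space rotation exp(-i phi a'a)."""
    return (-1j * phi * num(N)).expm()

def check(n, m, eta=2.0, gamma1=1.0, gammam=0.2, delta=0.4):
    H = delta * a.dag()*a + 1j*eta*(a**n - a.dag()**n)        # theta_0 = 0
    L = liouvillian(H, [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m])
    # weak symmetry: conjugation by the elementary 2*pi/n rotation commutes with L
    U = rotation(2*np.pi/n)
    Zn = spre(U) * spost(U.dag())
    weak = np.abs((L*Zn - Zn*L).full()).max()
    # strong symmetry (gamma_1 = 0): Z_p commutes with H and a^m; p = 1 is trivial
    p = gcd(n, m)
    Zp = rotation(2*np.pi/p)
    strong = max(np.abs((Zp*H - H*Zp).full()).max(),
                 np.abs((Zp*a**m - a**m*Zp).full()).max())
    print((n, m), "p =", p, " ||[L, Z_n]|| ~", round(weak, 12),
          " strong-symmetry commutators ~", round(strong, 12))

for nm in [(2, 3), (2, 4), (3, 6), (4, 6)]:
    check(*nm)
```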
The symmetry allows us to block-diagonalize the Liouvillian by dividing its matrix into \(n\) (\(p^{2}\)) independent sectors in the case of weak (strong) symmetry. Specifically, for weak symmetry, the Liouvillian can be expressed as \(L=\bigoplus_{\mu=0}^{n-1}B_{\mu}^{W}\), where the steady state is located within the symmetry sector \(\mu=0\) [48; 50]. Conversely, for strong symmetry, the Liouvillian takes the form \(L=\bigoplus_{\mu,\nu=0}^{p-1}B_{\mu\nu}^{S}\). The \(p\) steady states are found in the sectors \(B_{\mu\mu}^{S}\) with \(\mu=0,\ldots,p-1\). The other sectors contain the coherences that will eventually decay in the long time limit.
In Fig. 1, we present illustrative examples of the different steady states produced by Eq. (1). These have been obtained numerically by solving the steady-state equation \(\mathcal{L}\rho_{\rm ss}=0\) [51]. The Wigner distribution of these states allows us to observe quantum features, such as quantum interference, as indicated by the negativity of this quasi-probability distribution [52]. On the one hand, panel (b) shows various scenarios with weak symmetry, characterized by a single steady state. Notably, we observe variations in the shape of the lobes corresponding to different photon exchange powers. On the other hand, panel (c) exhibits the case of \((2,4)\) under strong symmetry, which leads to two steady states in the system. These steady states correspond to even and odd parity cat-states with squeezed states as lobes. A similar situation can be expected for \((3,6)\), where we anticipate three steady states with well-defined symmetry eigenvalues \(\mu=0,1,2\). We note that in these two cases \(\gcd(n,m)=n\), resulting in the number of steady states being equal to the power of the driving. However, special situations arise when \(\gcd(n,m)=p\neq n\), with \(p>1\). For example, in the case of \((4,6)\), although there are four symmetrically distributed squeezed lobes, there are only two steady states due to \(p=2\). Similar to \((2,4)\), these two steady states also have well-defined parity with \(\mu=0,1\). This particular case will be discussed in more detail in Appendix C.
In both cases, for weak and strong symmetry, we can approximate the lobes as squeezed-coherent states
\[\left|\alpha,\xi\right\rangle=D(\alpha)S(\xi)\left|0\right\rangle \tag{4}\]
where \(D(\alpha)=\exp\bigl{(}\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}\bigr{)}\) is the displacement operator with amplitude \(\alpha=r\exp(i\theta)\in\mathds{C}\). Here, \(r\in[0,\infty)\) determines the distance from the origin and \(\theta\in[0,2\pi)\) the displacement angle in phase-space. Then, \(S(\xi)=\exp\left[-(\xi(\hat{a}^{\dagger})^{2}-\xi^{*}\hat{a}^{2})\right]\) is the squeezing operator with squeezing parameter \(\xi=s\exp(i\phi)\in\mathds{C}\). The magnitude \(s\in\mathds{R}\) determines the strength of the squeezing while \(\phi\in[0,\pi)\) is the direction in which these states have a squeezed quadrature. We can relate the squeezing strength \(s\) with the variance of the quadrature as
\[\bigl{\langle}(\Delta X_{\phi})^{2}\bigr{\rangle}=\frac{1}{4}e^{-2s} \tag{5}\]
where \(X_{\phi}=[\hat{a}\exp(-i\phi)+\hat{a}^{\dagger}\exp(i\phi)]/2\).
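The relation between the squeezing strength and the quadrature variance can be checked directly on a numerically generated displaced-squeezed state. Note that QuTiP's `squeeze(N, z)` implements \(\exp[(z^{*}\hat{a}^{2}-z\hat{a}^{\dagger 2})/2]\), a convention that differs from the operator written above by a factor of two in the exponent; in the sketch below \(s\) is the magnitude of \(z\) in QuTiP's convention, for which \(\langle(\Delta X_{\phi})^{2}\rangle=e^{-2s}/4\) along the squeezed direction.

```python
import numpy as np
from qutip import basis, destroy, displace, squeeze, variance

N = 80                       # Fock-space truncation (illustrative)
a = destroy(N)

def displaced_squeezed(alpha, s, phi):
    """D(alpha) S(z) |0> with z = s exp(2i phi), so that the squeezed
    quadrature lies along the direction phi."""
    return displace(N, alpha) * squeeze(N, s * np.exp(2j * phi)) * basis(N, 0)

def quadrature(phi):
    return (a * np.exp(-1j * phi) + a.dag() * np.exp(1j * phi)) / 2

alpha, s, phi = 2.0, 0.6, np.pi / 3
state = displaced_squeezed(alpha, s, phi)
print("numerical  <(dX_phi)^2> :", variance(quadrature(phi), state))
print("analytical exp(-2s)/4   :", np.exp(-2 * s) / 4)
```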
By analyzing the mean-field equation (detailed in Appendix A), we can determine the phase of each lobe in phase space (corresponding to the coherent displacement), given by \(\theta_{j}=\theta_{0}+(2j+1)\pi/n\) for \(j=1,\ldots,n\). Consequently, the amplitudes of the \(n\) lobes composing the steady state are \(\alpha_{j}=r\exp(i\theta_{j})\), \(j=1,\ldots,n\), where \(r\) is approximated by the mean-field amplitude derived in Appendix A. A mean-field approximation does not capture squeezed fluctuations, but looking at the direction of the smallest quadrature of the lobes in Fig. 1(b), we appreciate that when \(n>m\) the states are phase squeezed and when \(n<m\) the states are amplitude squeezed. Thus, taking into account the phase of the lobes \(\theta_{j}\), we get
\[\xi_{j}=s\exp[i2\theta_{j}+i\Theta(n-m)\pi/2]\qquad j=1,\ldots,n \tag{6}\]
where \(\Theta(x)\) is the Heaviside function, which is one for \(x>0\) and vanishes otherwise. The case \(n=m\) corresponds to the previously studied situation with no squeezing (\(s=0\)) [14]. Here, we assume that \(r\) and \(s\) are the same for each lobe due to the rotational symmetry of the system.
The steady state of the system in the presence of weak symmetry is then a classical mixed state of the lobes so
\[\rho_{\rm ss}^{W}=\frac{1}{n}\sum_{j=1}^{n}\left|\alpha_{j},\xi_{j}\right\rangle \!\!\left\langle\alpha_{j},\xi_{j}\right|. \tag{7}\]
Instead, when the system has strong symmetry, we are left with \(p\) steady states, which are coherent superpositions of the lobes. An example of such a state is the one belonging to the \(\mu=0\) symmetry sector
\[\rho_{\rm ss}^{S}=\left|\psi_{\rm ss}^{S}\right\rangle\!\!\left\langle\psi_{\rm ss }^{S}\right|;\qquad\left|\psi_{\rm ss}^{S}\right\rangle=\frac{1}{\sqrt{n}}\sum_ {j=1}^{n}\left|\alpha_{j},\xi_{j}\right\rangle. \tag{8}\]
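Both forms are simple to assemble numerically from displaced-squeezed lobes. The sketch below uses illustrative values of \(r\) and \(s\) (in practice they come from the mean-field analysis of Appendix A and the fits of Sec. V) and builds the weak-symmetry mixture and the \(\mu=0\) superposition in QuTiP's squeezing convention.

```python
import numpy as np
from qutip import basis, displace, squeeze, ket2dm

N = 80  # truncation (illustrative)

def lobe(r, theta, s, phase_squeezed):
    """Squeezed-coherent lobe: amplitude r e^{i theta}; the squeezed quadrature is
    tangential (phase squeezing, n > m) or radial (amplitude squeezing, n < m)."""
    direction = theta + (np.pi / 2 if phase_squeezed else 0.0)
    z = s * np.exp(2j * direction)       # QuTiP convention for squeeze()
    return displace(N, r * np.exp(1j * theta)) * squeeze(N, z) * basis(N, 0)

def approximate_steady_states(n, r, s, phase_squeezed=False, theta0=0.0):
    thetas = [theta0 + (2*j + 1) * np.pi / n for j in range(n)]
    lobes = [lobe(r, th, s, phase_squeezed) for th in thetas]
    rho_weak = ket2dm(lobes[0]) / n                  # Eq. (7)
    psi_strong = lobes[0]                            # Eq. (8), mu = 0 sector
    for psi in lobes[1:]:
        rho_weak += ket2dm(psi) / n
        psi_strong += psi
    return rho_weak, psi_strong.unit()

rho_w, psi_s = approximate_steady_states(n=2, r=3.0, s=0.3)
print("purity of the weak-symmetry mixture:", (rho_w * rho_w).tr().real)
```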
Throughout the rest of the article, we will work mainly in the weak symmetry regime with \(\gamma_{1}>0\) which corresponds to the most physical scenario.
## IV Metastability
As discussed in [50; 53], in open quantum systems, metastability typically arises as a consequence of a separation between two consecutive eigenvalues of the Liouvillian, i.e. \(\tau_{l}\gg\tau_{l+1}\) where \(\tau_{l}^{-1}=-\operatorname{Re}\lambda_{l}\) (see Appendix B for definitions and more detail about the Liouvillian formalism) and is closely related to the emergence of quantum entrainment and dissipative phase transitions [54]. In the presence of metastability, fast dynamics (\(t<\tau_{l+1}\)) is well separated from the slow metastable phase (\(t>\tau_{l+1}\)). After the relatively fast transient \(\tau_{l+1}\), the system decays into the metastable manifold whose size depends on the number of slow decaying modes. The emergence of a metastable phase, which isolates \(l\) modes from the rest, enables us to confine the system dynamics within this metastable manifold spanned by the right eigenmodes \(\{R_{j}\}_{j=1}^{l}\) of the Liouvillian [54; 55]. Specifically, after the initial decay (\(t>\tau_{l+1}\)), the system's state can be expressed as a complex linear combination of only the aforementioned eigenmodes. Finally, a decay to the steady state occurs for times \(t>\tau_{2}\).
Metastability is seen in the Liouvillian of Eq. (1) when there is linear dissipation (weak symmetry) and the nonlinear powers are balanced (\(n=m\)) [54; 23; 14]. We will, in the following, remove the latter condition while we stay in the weak symmetry limit (\(\gamma_{1}>0\)). Given the large number of parameters and regimes of the model, for the sake of clarity, we will consider Liouvillians with at most four-photon driving exchange (\(n\)) and we will modify the power of the dissipation (\(m\)) to be above and below that of the driving. Moreover, we will compare with the coherent-state situation (\(n=m\)) to study in which situation squeezing can improve the performance of certain applications.
One of the first properties to analyze is the length of the metastable phase. For that, we recall that in the balanced situation, the separation in the Liouvillian spectrum occurs between the eigenvalues \(n\) and \(n+1\) where \(n\) is the degree of the driving term [14]. Furthermore, the separation between the two eigenvalues increases for large \(\eta_{n}\) and small \(\gamma_{n}\). Both of these properties remain in the unbalanced situation (\(n\neq m\)) but the scaling of the separation with respect to the ratio \(\eta_{n}/\gamma_{m}\) is different for each case (see Appendix A). Consequently, in order to compare the different cases, we will use the parameter \(\langle\hat{n}\rangle_{\rm ss}=\operatorname{tr}\hat{n}\rho_{\rm ss}\) corresponding to the mean photon number of the \(n\) lobes forming the steady state (Eq. (7)).
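The spectral separation itself is obtained by diagonalizing the Liouvillian. A minimal sketch (dense diagonalization, so the truncation is kept deliberately small; parameter values are illustrative) is:

```python
import numpy as np
from qutip import destroy, liouvillian

def spectral_ratio(n, m, eta_n, gamma1=1.0, gammam=0.2, delta=0.4, N=30):
    """Ratio Re(lambda_n)/Re(lambda_{n+1}) for the Liouvillian of Eq. (1)."""
    a = destroy(N)
    H = delta * a.dag()*a + 1j*eta_n*(a**n - a.dag()**n)      # theta_0 = 0
    L = liouvillian(H, [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m])
    evals = np.linalg.eigvals(L.full())
    evals = evals[np.argsort(-evals.real)]       # lambda_1 ~ 0 first
    # small ratio <=> well-separated slow manifold of n metastable modes
    return evals[n - 1].real / evals[n].real

for eta in [1.0, 3.0, 6.0]:
    print("eta_n =", eta, " ratio =", spectral_ratio(2, 3, eta))
```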
In Fig. 2(a,b), we show the ratio between the real parts of the eigenvalues \(\lambda_{n}\) and \(\lambda_{n+1}\) which relates to the Liouvillian separation. The smaller the ratio, the longer the metastable phase. In this sense, we see the ratio decreasing exponentially with the photon number, as \(\operatorname{Re}\lambda_{n}\to 0^{-}\) and \(\operatorname{Re}\lambda_{n+1}\to-\infty\). The behavior is the same for all powers (including \(n=4\) which is not shown and should remain for higher values) but the slope of the curves changes drastically. In general, for \(n>2\), having different power between the driving and the dissipation reduces the eigenvalue separation, leading to a shorter metastable phase. This is not the case for \(n=2\) where both situations with \(m>n\) (\(m=3\) and \(m=4\)) result in a larger separation for the same mean photon number.
The slope of the curves can be used to better compare the different scenarios. For that, we calculate the scaling factor \(k\) of the separation with the mean photon number in Fig. 2(c). This has been obtained from an exponential fit of the lines in the left panels as \(a\,10^{b\langle\hat{n}\rangle_{\rm ss}}\), where \(a\) and \(b\) are the fit parameters that relate to the scale factor as \(k=10^{b}\). We can clearly see that squeezing in the presence of two lobes improves the metastable time. The worst scaling occurs for \(n=4\) and \(m=6\) which is related to the different distribution of the Liouvillian spectra. More details can be found in Appendix C. Also, although the case \((2,1)\) is shown in this type of figure, it has not been studied since the single steady state is a squeezed-vacuum state with no metastability.
The appearance of the metastable phase freezes the dynamics of the system, which can then be described by only the slowest \(n\) modes. However, the eigenmodes \(\{R_{j}\}_{j=2}^{n}\) themselves are not valid quantum states since they are traceless. The quantum states that span the metastable manifold are known as _extreme metastable states_ which we will denote as \(\{\mu_{j}\}_{j=1}^{n}\). They can be constructed using the extreme eigenvalues of the left eigenmodes \(\{L_{j}\}_{j=1}^{n}\) as \(\mu_{j}=\sum_{a=1}^{n}c_{a}^{M/m}R_{a}\) where \(c_{a}^{M}\) and \(c_{a}^{m}\) are the maximum and minimum eigenvalues of \(L_{a}\). Of course, the contribution of the steady state (\(a=1\)) is maximal since \(L_{1}=\mathds{I}\) so \(c_{1}^{M}=c_{1}^{m}=1\). This gives the unit trace condition to the extreme metastable states. The other coefficients can be chosen as those that minimize the classicality condition [56]. Notably, we found that the combination of eigenmodes is the same for all values of \(m\) (for a fixed driving degree \(n\)). The actual expression can be seen in the supplemental material of Ref. [14].
Using this method, we isolate the \(n\) lobes that form the
Figure 2: (a,b) Ratio between the eigenvalues \(\lambda_{n}\) and \(\lambda_{n+1}\) defining the Liouvillian separation. (c) Scale factor \(k\) (see main text) of the spectral separation with the photon number, obtained by fitting the lines in (a,b) to an exponential function.
steady state (see Fig. 1) which correspond identically to the \(n\) extreme metastable states. In this way, the steady state can be reconstructed as \(\rho_{\rm ss}^{W}=(1/n)\sum_{j}\mu_{j}\) like in Eq. (7).
## V Characterization of squeezed states
We have seen in Section III that the steady state of the oscillator is formed by \(n\) symmetrically distributed lobes which are, depending on the symmetry, entangled (strong) or mixed (weak). In both cases, the number of lobes is determined by the squeezing degree \(n\). The power of the dissipation, however, modifies the shape of such states. While it is well known that coherent states can be obtained for equal non-linear powers (\(n=m\)) [30; 39], for \(n\neq m\), the states become squeezed. Squeezing is a well-known quantum phenomenon in which quantum states have quantum fluctuations below the shot noise level of coherent states in one quadrature of the field. It is related to sub-Poissonian statistics, characterized by Mandel's \(\mathcal{Q}\) parameter [57; 58]
\[\mathcal{Q}=\frac{\left\langle(\Delta\hat{n})^{2}\right\rangle-\langle\hat{n} \rangle}{\langle\hat{n}\rangle}. \tag{9}\]
Quantum states can be classified into sub-Poissonian (\(-1\leq\mathcal{Q}<0\)) and super-Poissonian (\(\mathcal{Q}>0\)). Coherent states have \(\mathcal{Q}=0\) since their photon number follows a Poisson distribution, whose mean is equal to its variance.
By considering the steady state of the system obtained numerically by solving \(\mathcal{L}\rho_{\rm ss}=0\) in the weak symmetry regime (\(\gamma_{1}=1\)), we proceed to evaluate the Mandel \(\mathcal{Q}\) parameter. It is noteworthy that we can compute \(\mathcal{Q}\) directly from the steady state itself, i.e. without isolating the individual lobes. This holds because the operator \(\hat{n}\) commutes with the symmetry operator \(\hat{Z}_{n}\).
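As an illustration, Eq. (9) can be evaluated directly on the numerical steady state; the sketch below uses illustrative driving strengths and an arbitrary truncation.

```python
import numpy as np
from qutip import destroy, num, steadystate, expect, variance

def mandel_Q(n, m, eta_n, gamma1=1.0, gammam=0.2, delta=0.4, N=50):
    """Mandel Q parameter, Eq. (9), on the steady state of Eq. (1)."""
    a = destroy(N)
    H = delta * a.dag()*a + 1j*eta_n*(a**n - a.dag()**n)      # theta_0 = 0
    rho_ss = steadystate(H, [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m])
    nbar = expect(num(N), rho_ss)
    return (variance(num(N), rho_ss) - nbar) / nbar, nbar

for (n, m, eta) in [(2, 2, 1.0), (2, 3, 6.0), (3, 4, 20.0)]:
    Q, nbar = mandel_Q(n, m, eta)
    print(f"(n,m)=({n},{m})  <n>_ss = {nbar:.2f}  Q = {Q:.3f}")
```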
In Fig. 3, we study the cases where the dissipation degree \(m\) is one below (a-c) or above (b-d) the driving power \(n\). First, in panels (a) and (b), we plot the evolution of \(\mathcal{Q}\) as the mean photon number of the steady state increases. We recall that the mean photon number is related to the ratio \(\eta_{n}/\gamma_{m}\) as obtained from the mean-field analysis (Eq. (10)). More details on this can be found in Appendix A.
These figures illustrate two distinct scenarios: for \(m=n-1\), the states exhibit super-Poissonian behavior, while for \(m=n+1\), the states display sub-Poissonian statistics (for \(\langle\hat{n}\rangle_{\rm ss}>5\)). This distinction becomes evident when examining panels (c) and (d), which depict the photon number probability distribution for the steady states represented in the inset figures (a) and (b), respectively. The probability distribution of a coherent state with the same mean photon number \(\langle\hat{n}\rangle_{\rm ss}\) as the squeezed state is also included as a red-dashed line for comparison. We find that the states arising from \(m=n+1\) correspond to amplitude-squeezed states, characterized by sub-Poissonian statistics. Importantly, this behavior extends to other values of \(m\) where \(m>n\). For the case \(m=n-1\), however, we cannot draw definitive conclusions about the classicality of the states. It is the analysis of the Wigner distribution that shows that they exhibit phase squeezing, suggesting the need for alternative techniques to accurately determine their nature.
In Figure 3(e), we present the attained values of the Mandel parameter for large mean photon numbers (\(\langle\hat{n}\rangle_{\rm ss}\gg 1\)). These values are computed as the average of \(\mathcal{Q}\) when \(\langle\hat{n}\rangle_{\rm ss}\in[20,36]\)1. Additionally, we include the corresponding values for the case of \(n=m\), representing coherent states, and \(m=n+2\), which also leads to sub-Poissonian statistics. Observing the figure, we can discern that the degree of sub-Poissonianity diminishes as \(n\) increases, while it intensifies with growing \(m\).
Footnote 1: The variance of this mean vanishes in most cases since \(\mathcal{Q}\) is constant for these large values of the mean photon number.
Although the Mandel \(\mathcal{Q}\) parameter allows us to distinguish between states with sub- and super-Poissonian statistics, it does not provide information about the extent of squeezing along directions other than the amplitude. Concretely, for the phase-squeezed states we encounter when \(n>m\), this needs to be extracted from the variance of the quadrature operator, Eq. (5). The angle \(\phi\) is given by the direction of minimal quadrature fluctuations, which can be related to the angle of the lobes in phase space as in Eq. (6).
To analyze the squeezing properties of the metastable states, we computed the variance of the quadrature operator along the direction of minimal squeezing for each state. The obtained values, presented in Figure 4(a), represent the average variance over all lobes. It is important to note that, due to the rotational symmetry of the system, the quadrature variance and the associated squeezing parameter (\(s\)) are the same for all lobes. Moreover, as the lobes may deviate from pure coherent squeezed states, we performed a fitting procedure to determine the parameter \(s\) by matching the steady states with the mixed state given in Eq. (7). The results of the fitting process are depicted as markers in the same figure. This fitting approach helps to determine the nature of the extreme metastable phases since a lower variance than for coherent states in a given direction does not necessarily indicate that the states are squeezed coherent states. In this way, we can assess how closely the numerical steady states resemble the expected mixed-state superposition of symmetrically distributed squeezed coherent states.
In all cases examined, we find that the quadrature variance is smaller than that of coherent states (\(\langle(\Delta X_{\rm coh})^{2}\rangle=0.25\)). The lowest quadrature variance is observed in the case \((3,2)\). However, it should be noted that the metastability window is practically non-existent for small values of \(\langle\hat{n}\rangle_{\rm ss}\) (refer to Fig. 2). In this regime, the deviations from classical behavior are significant, indicating that the dynamics cannot be adequately described by considering
only the first \(n\) modes. Concretely, in Fig. 4(d), we show the Wigner distribution of the extreme metastable state corresponding to \(\theta_{1}=\pi\), constructed using the extreme eigenvalues as explained in Sec. IV. This state, for a small photon number, contains negativities and lacks a well-defined direction of squeezing. For larger amplitudes, it becomes possible to accurately construct the lobes and calculate the quadrature variance, which reaches a value of \(0.15\) (corresponding to \(-2.21\,\mathrm{dB}\)). Additionally, panel (e) shows the complementary scenario \((3,4)\). The Wigner distributions for various mean photon numbers (\(\left\langle\hat{n}\right\rangle_{\mathrm{ss}}=4,8,12\) and \(16\)) are displayed. Unlike the previous case, the lobe can already be discerned for the smallest photon number. However, a significant discrepancy is observed when comparing the quadrature fluctuations obtained from the lobe (represented by the pink bar in panel (b)) with those obtained using the fitting procedure (indicated by the square marker). This discrepancy suggests that the state is not accurately described by Eq. (4) until \(\left\langle\hat{n}\right\rangle_{\mathrm{ss}}\geq 8\).
In the other cases, the fit reaches a close value when compared to the direct computation of the quadrature fluctuations. Similar to our observation in Fig. 3, where the Mandel parameter reaches a stable value for large photon numbers, we find that the quadrature variance also stabilizes as the photon number increases. We plot this convergence value in panel (c) of the same figure, obtained by averaging \(\left\langle(\Delta X)^{2}\right\rangle\) for \(\left\langle\hat{n}\right\rangle_{\mathrm{ss}}=20,\ldots,30\). As expected, coherent states are only obtained when \(n=m\), indicating a balance between driving and dissipation. However, we can consistently generate squeezed states when the driving and dissipation degrees differ. Furthermore, it is worth noting that the magnitude of the squeezing is solely determined by the powers \(n\) and \(m\) and remains invariant under changes in the oscillator parameters.
Figure 4: (a,b) Quadrature variance of the extreme metastable states for the phase (a) and amplitude (b) squeezed states. The squeezing angle corresponds to the phase of each lobe in phase space, with an extra phase factor of \(\pi/2\) for amplitude squeezed states. The final value shown for each photon number is the average over the different lobes. In markers, the quadrature squeezing obtained from fitting the numerical steady-state with Eq. (7). (c) Quadrature variance \(\left\langle(\Delta X)^{2}\right\rangle\) on average for large \(\left\langle\hat{n}\right\rangle_{\mathrm{ss}}\in[20,30]\). (d,e) Wigner representation of the metastable state \(\mu_{1}\) (\(\theta_{1}=\pi\)) for \(n=3\). The mean photon number of the lobes, from left to right, corresponds to \(\left\langle\hat{n}\right\rangle_{\mathrm{ss}}=4,8,12\) & \(16\). We appreciate the phase squeezing in (d) and the amplitude squeezing in (e) which leads to the phase factor \(\pi/2\).
Figure 3: (a,b) Evolution of the Mandel \(\mathcal{Q}\) parameter for increasing value of the average photon number. We distinguish between dissipative powers below the driving degree (a) and above (b). (c, d) Probability distribution of the steady state marked in crosses on the respective left figure. The solid black line corresponds to the distribution of a steady state as a mixture of \(n\) squeezed states. The dashed red line corresponds to the probability distribution of a coherent state with the mean photon number shown. The super- and sub-Poissonian character of the states are identified given the larger and smaller variance of the bars in (c) and (d) respectively. (e) Mandel \(\mathcal{Q}\) parameter for large average photon number.
## VI Memory lifetime
In this section, we investigate the feasibility of using the resonator as a storage medium for squeezed and cat states. We focus on evaluating two key properties: the bit-flip time and the phase-flip time. These measures are commonly employed in quantum computation to assess the memory's ability to retain information over time [2; 3; 9; 43]. The bit-flip (or relaxation) time is the time it takes for a lobe to decay to the steady state, while the phase-flip (or dephasing) time is the time it takes for coherences to vanish. We will straightforwardly extend the meaning of these two quantities to qudits without imposing a particular qubit encoding in each case (see Ref. [3] for mappings taking states generated with \(n=4\) to a 2d Bloch sphere).
In the context of bosonic memories, the bit-flip time is defined as the decay time required for a lobe \(\ket{\psi_{k}}\) to lose all information about its initial state, with the system reaching the fully mixed state in the lobe basis, \(\rho_{\text{ss}}=(1/n)\sum_{k}\ket{\psi_{k}}\!\bra{\psi_{k}}\). Here, we assume the extreme metastable phases form an \(n\)-dimensional computational basis. For coherent states (\(n=m\)), experimental studies have demonstrated an exponential increase in the bit-flip time as a function of the mean photon number. Conversely, the phase-flip error rate grows linearly with the photon number as \(2\langle\hat{n}\rangle/T_{1}\), where \(T_{1}\) is the resonator lifetime [59; 4].
_Bit-flip._ To calculate the bit-flip time, we compute the full master equation evolution for each state in the computational basis, i.e. \(\{\mu_{k}\}_{k=1}^{n}\). At each time step, we measure the expectation value of \(\hat{a}\), which asymptotically approaches zero as \(\text{Tr}(\hat{a}\rho_{\text{ss}})=0\). Thus, the bit-flip time, denoted as \(T_{\text{bf}}\), is obtained by fitting the imaginary part of \(\langle\hat{a}\rangle\) to an exponentially decaying function \(\exp(-t/T_{\text{bf}})\). In most cases, this decay time is expected to be determined by the Liouvillian spectral gap, \(\tau_{2}^{-1}=-\operatorname{Re}\lambda_{2}\), which sets the decay time \(\tau_{2}\) of the slowest eigenmode. It should be noted, however, that phenomena such as skin effects or many-body localization can lead to longer timescales in certain systems [60; 61; 62].
To explore the scaling behavior with increasing lobe separation, we perform this fitting procedure for several values of the mean photon number \(\bra{\hat{n}}_{\text{ss}}\). In our analysis, we fix the dimension of the Hilbert space to \(\dim\mathcal{H}=50\) to ensure an accurate representation of the dynamics over a range of mean photon numbers from 2 to 20.
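A sketch of this procedure is shown below. It uses a coherent state at the mean-field lobe position as a stand-in for the extreme metastable state, and illustrative parameters giving \(\langle\hat{n}\rangle_{\rm ss}\approx 9\), comparable to Fig. 5(d); the time window must in general be adapted to the expected decay time.

```python
import numpy as np
from scipy.optimize import curve_fit
from qutip import destroy, coherent, mesolve

N, n, m = 45, 3, 4
gamma1, gammam, delta, eta = 1.0, 0.2, 0.4, 32.4

a = destroy(N)
H = delta * a.dag()*a + 1j*eta*(a**n - a.dag()**n)            # theta_0 = 0
c_ops = [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m]

r0 = (2*n*eta / (m*gammam))**(1/(2*m - n))                    # mean-field amplitude
psi0 = coherent(N, r0*np.exp(1j*5*np.pi/3))                   # stand-in for one lobe

tlist = np.linspace(0, 300, 600)
result = mesolve(H, psi0, tlist, c_ops, e_ops=[a])
signal = np.imag(result.expect[0])

decay = lambda t, A, T: A * np.exp(-t / T)
(A_fit, T_bf), _ = curve_fit(decay, tlist, signal, p0=(signal[0], 50.0))
print("bit-flip time T_bf ~", T_bf, "(units of 1/gamma_1)")
```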
The results are presented in Fig. 5, where panels (a-c) depict the bit-flip time \(T_{\text{bf}}\) as a function of the mean photon number of the steady state. Each data point represents a fitting procedure applied to the decay time of \(\langle\hat{a}\rangle\) obtained from a full master equation evolution, using the lobes as initial states. Additionally, panel (d) showcases three specific trajectories with \(\langle\hat{n}\rangle=9\) and different driving degrees (\(n=2,3,4\)), resulting in bit-flip times on the order of \(10^{5}\), \(10^{2}\), and \(1\) respectively. It is important to note that the error in estimating the bit-flip time is less than \(10^{-3}\) in all cases, with a correlation coefficient between the data and the fitting function \(r^{2}\sim 1-10^{-6}\).
In these figures, we also include the decay time associated with the spectral gap as lines with a different style for each value of \(m\) (dotted for \(n-1\), solid for \(n\), dashed for \(n+1\), and dash-dotted for \(n+2\)). We appreciate that the bit-flip time obtained from the full master equation evolution coincides with the decay time of the spectral gap. Hence, the evolution can be fully understood from the Liouvillian eigenvalues.
We note that in most cases \(T_{\text{bf}}\) grows exponentially with the mean photon number. This behavior is well known and has been experimentally demonstrated for coherent states (\(n=m=2\)) [4], and we show that it remains valid for squeezed states with \(n\neq m\). In panel (e) of Fig. 5, we can see the scale factor \(K\) of \(T_{\text{bf}}\) with the mean photon number obtained by exponentially fitting the
Figure 5: (a-c) Logarithmic plot of the bit-flip time over the mean photon number of the steady state. The markers are obtained by fitting the time evolution of \(\operatorname{Im}\bra{\hat{a}}\) to an exponentially decaying function (error bars are smaller than marker size). The lines correspond to the decay time of the spectral gap \(\lambda_{2}\). (d) Example full master equation evolution of \(\operatorname{Im}\bra{\hat{a}}\) (y-axis) for an initial state corresponding to the lobe \(\ket{\psi_{0}}\) with \(\bra{\hat{n}}=9\) (\(\dim\mathcal{H}=50\)). Each line corresponds to a different driving degree \(n\) where we fixed the relation with the dissipation degree to \(m=n+1\). We obtain a bit-flip time of \(1.41\times 10^{5}\), 70 and 1.1 (units of \(\gamma_{1}\)) for \(n=2,3,4\) respectively. (e) The scale factor of the bit-flip with respect to the number of photons, obtained by fitting the data in (a-c) to an exponential function.
data points in the left panels to \(T_{\text{bf}}=xK^{\left<\hat{n}\right>}\). Notably, for \(n=2\), the presence of squeezing can significantly enhance the bit-flip time of the resonator, with a scale factor of \(K=6.4\) for \(m=3\). This is in agreement with previous results where squeezing can enhance the storage time of a qubit [35; 36; 37]. In general, however, squeezing is counter-productive in both amplitude and phase. The longest bit-flip time for \(n>2\) is obtained when both nonlinearities are equal. The significant reduction in storage time observed is a consequence of the lobes becoming increasingly indistinguishable as their number grows. In other words, a higher photon number is required for resolving the lobes with the same precision as in the case of \(n=2\). This effect is further amplified by the presence of amplitude squeezing, which diminishes the separation between the lobes.
_Phase-flip._ We proceed to evaluate the phase-flip time, which quantifies the duration for a superposed state to lose its coherences. In our analysis, we adopt a similar approach as for the bit-flip time. We consider the \(n\)-cat states \(\{\,|C_{\mu}^{(n)}\rangle\}_{\mu=1}^{n}\), derived from the \(n\) lobes, as our initial states. These \(n\)-cat states exhibit well-defined parities, denoted by \(\mu\), which correspond to the symmetry eigenvalues associated with each state. By investigating the expectation value over time of the projector \(\hat{P}_{\mu}=\sum_{a=0}^{\lfloor D/n\rfloor}|an+\mu\rangle\!\langle an+\mu|\) associated with the symmetry sector \(\mu\), we observe that the expectation value progressively decays. Eventually, when the probability of finding the state within that sector reaches \(1/n\) (fully mixed state), all coherences are lost. The fitting of this decay to \(\langle\hat{P}_{\mu}\rangle=[(n-1)\exp(-\Gamma_{\text{pf}}t)+1]/n\) determines the phase-flip error rate \(\Gamma_{\text{pf}}\).
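The sector projector and the fit are equally simple to implement; the sketch below uses the balanced case \((2,2)\) for concreteness (illustrative parameters, and the time window must be adapted to the expected dephasing rate), and the same procedure applies to \(n\neq m\).

```python
import numpy as np
from scipy.optimize import curve_fit
from qutip import basis, destroy, coherent, mesolve

N, n, m = 40, 2, 2
gamma1, gammam, delta, eta = 1.0, 0.2, 0.4, 1.0

a = destroy(N)
H = delta * a.dag()*a + 1j*eta*(a**n - a.dag()**n)            # theta_0 = 0
c_ops = [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m]

def sector_projector(mu):
    """P_mu = sum_a |a n + mu><a n + mu|."""
    P = 0 * basis(N, 0).proj()
    for k in range(mu, N, n):
        P += basis(N, k).proj()
    return P

r0 = (2*n*eta / (m*gammam))**(1/(2*m - n))
cat_even = (coherent(N, 1j*r0) + coherent(N, -1j*r0)).unit()  # ~ |C_0>, even parity

tlist = np.linspace(0, 0.5, 500)
result = mesolve(H, cat_even, tlist, c_ops, e_ops=[sector_projector(0)])

model = lambda t, G: ((n - 1)*np.exp(-G*t) + 1) / n
(G_pf,), _ = curve_fit(model, tlist, np.real(result.expect[0]), p0=(10.0,))
print("phase-flip rate Gamma_pf ~", G_pf, "(units of gamma_1)")
```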
An example can be seen in Fig. 6(a) for \(n=2\) where we take as the initial state the even cat-state \(|C_{0}\rangle=(|\alpha\rangle+|-\alpha\rangle)/\sqrt{2}\) that populates only the even energy levels of the oscillator. Then, the evolution of the even parity operator \(\hat{P}_{0}\) indicates the decay of the coherence, which is lost at times \(0.01\gamma_{1}\) and \(1\gamma_{1}\) for \(m=3\) and \(m=2\) respectively. The Wigner representation of the states at particular times is shown in panels (b) and (c) for the two dissipation degrees. The presence of coherence can be appreciated in the negative fringes in the center of the Wigner distribution which decay faster in the presence of squeezing.
The values of the phase-flip error rate \(\Gamma_{\text{pf}}=1/T_{pf}\) for the two trajectories in Fig. 6(a) are \(\Gamma_{\text{pf}}^{m=2}=(1.71472\pm 0.00016)\gamma_{1}\) and \(\Gamma_{\text{pf}}^{m=3}=(316.82\pm 0.20)\gamma_{1}\) so the cat state stored in the balanced situation remains in memory longer than in the unbalanced case.
A more complete analysis for other combinations of \(n\) and \(m\) is presented in Fig. 7, where we also compute the phase-flip rate for different values of the mean photon number. We find that cat-states survive longer in a resonator with \(n=m\). We also find a consistent linear relationship between the phase-flip error rate and the mean photon number similar to the cases \(n=m\), indicating that states with more photons are more sensitive to noise. However, the slope of the curves differs drastically depending on the relation between the two degrees \(n\) and \(m\). In Fig. 7(c), we show the slope \(y\) obtained from a linear fit \(\Gamma_{\text{pf}}=x+y\left<\hat{n}\right>\) of the lines in panels (a,b) (as well as from the data obtained for the lines not shown). Essentially, when \(n\) and \(m\) are coprime, the phase-flip error rate increases very fast with the mean photon number with slopes of up to four orders of magnitude larger than the corresponding coherent-state cases. Instead, if \(\text{gcd}(n,m)>1\), we obtain a much smaller scaling comparable to the situations with \(n=m\). For instance, for a
Figure 7: (a-b) Phase flip decay rate as a function of the mean photon number for \(n=2,3\) and \(4\). The markers are obtained by fitting the decay of the corresponding block observable \(\hat{P}_{\mu}\) to an exponential function as in Fig. 6.
Figure 6: (a) Full master equation evolution of the parity operator for a cat-state with even parity and \(\left<\hat{n}\right>=9\) in a resonator with driving degree \(n=2\) and dissipation degree \(m=2\) (solid red) and \(m=3\) (dotted blue). The phase flip error rate calculated by fitting the lines to an exponentially decaying function is shown in the legend. (b,c) Wigner distribution of the states at times corresponding, from left to right, to the vertical dashed lines in (a).
driving degree of 2, we obtain a slope of \(2\gamma_{1}\) for both \(m=2\) and \(m=4\). This result is compatible with the theoretical scaling of the dephasing rate [4]. Hence, even though we are in the weak-symmetry regime where cat-states are not the steady states of the system, during the metastability window, an effective decoherence-free subspace appears in the metastable manifold that freezes the dynamics until the decay to the single steady state [63]. The fact that only the linear-dissipative term mixes the symmetry blocks allows us to maintain coherent superpositions for a longer time. Another example of this phenomenon, which is not shown in Fig. 7, is the case \(n=3\) and \(m=6\). There, we obtain a slope of the dephasing rate equal to \(1.84\pm 0.03\) which is close to the scaling found for \(n=m=3\).
The most notable situation is when \(n=4\) and \(m=6\) which has \(\gcd(4,6)=2\). Thus, the corresponding strong-symmetric case (\(\gamma_{1}=0\)) has only two steady states consisting of a combination of even and odd 4-cat states. Consequently, if we were to directly compute the phase-flip rate for the four 4-cat states, we would see a very fast decay of these states to one of the two stable steady states. The chosen state depends on the symmetry eigenvalue of the initial one, that is, the 4-cat states with eigenvalue 0 and 2 (1 and 3) converge to the steady state with even (odd) parity. In the weak-symmetry regime (\(\gamma_{1}>0\)), the number of metastable states depends on the mean photon number \(\left\langle\hat{n}\right\rangle_{\rm ss}\): two for small values coinciding with the strong-symmetric steady states and four for large values corresponding to the lobes. Hence, this resonator, for the particular values of \(\left\langle\hat{n}\right\rangle_{\rm ss}\) considered, is much more useful to store 2-dimensional qubits. Indeed, in Fig. 7(c), the slope of the phase-flip error rate corresponds to the storage of even and odd parity states. For more details on this particular case see Appendix C.
## VII Quantum associative memory
In our previous work [14], we demonstrated the applicability of these oscillators in pattern classification within the framework of quantum associative memory, focusing on the balanced configuration \(n=m\). By leveraging the metastable phase, where the lobes function as attractors of the system dynamics, we successfully discriminated initial states into \(n\) coherent states, which served as the memories for storing information. Building upon this approach, we now extend it to incorporate squeezed states, enabling the encoding of information within the four degrees of freedom of a squeezed-coherent state, Eq. (4). The encoded input state most closely resembles one of the \(n\) memories, which we label \(\bar{k}\). During the metastable phase, the system dynamics will converge with a high probability to this desired memory provided the overlap of the input with the other lobes is negligible. Then, a measurement adapted to phase-shifted squeezed states allows us to extract the lobe \(k\) to which the system has converged and to evaluate the probability \(P[\bar{k}=k]\) that it went to the correct memory.
For our numerical simulations, we consider an initial squeezed-coherent state \(\left|\beta,\zeta\right\rangle\), where \(\left|\beta\right|^{2}/\left\langle\hat{n}\right\rangle_{\rm ss}\in[1/2,2]\), \(\left|\zeta\right|\in[0,1]\), and the phases are randomly chosen from the intervals \([0,2\pi]\) and \([0,\pi]\) for each respective complex value. To determine the closest memory lobe, we calculate \(\bar{k}=\mathrm{argmin}_{k=1,\ldots,n}\|\mu_{k}-|\beta,\zeta\rangle\!\langle\beta,\zeta|\,\|\), where \(\mu_{k}\) represents the \(k\)-th memory state.
Using the squeezed-coherent state as the initial state, we perform a Monte Carlo simulation based on the master equation given in Eq. (1). The simulation is run for a time long enough to ensure the state penetrates into the metastable transient. At this point, we measure the state using the positive operator-valued measure (POVM) \(\left\{\Pi_{j}=|\alpha_{j},\xi_{j}\rangle\!(\alpha_{j},\xi_{j}|)\right\}_{j=1}^ {n}\) and \(\Pi_{?}=\mathds{I}-\sum_{j}\Pi_{j}\) that is used for phase-shifted squeezed state discrimination and represents a natural extension to the POVM used in [14].
The success probability of correctly identifying the lobe for a single trajectory is given by
\[P[\bar{k}|\tau_{n}<t<\tau_{2}]=\frac{1}{\tau_{2}-\tau_{n}}\int_{\tau_{n}}^{ \tau_{2}}dt\,\mathrm{tr}\,\Pi_{\bar{k}}\rho(t)\, \tag{10}\]
where \(\rho(t)\) is the density matrix at time \(t\). To obtain reliable statistics, we repeat the entire procedure for different initial states, generating 500 realizations, and calculate the average success probability over these realizations.
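A stripped-down version of one such run is sketched below. For simplicity it uses the ensemble-averaged master equation instead of individual Monte Carlo trajectories, approximates the memories by squeezed-coherent lobes at the mean-field positions, and picks an illustrative metastable time window; all parameter values are for illustration only.

```python
import numpy as np
from qutip import basis, destroy, displace, squeeze, mesolve, ket2dm, expect

rng = np.random.default_rng(1)

N, n, m = 45, 2, 3
gamma1, gammam, delta, eta = 1.0, 0.2, 0.4, 6.0

a = destroy(N)
H = delta*a.dag()*a + 1j*eta*(a**n - a.dag()**n)              # theta_0 = 0
c_ops = [np.sqrt(gamma1)*a, np.sqrt(gammam)*a**m]

# memories: amplitude-squeezed lobes (m = n + 1) at the mean-field positions
r0, s0 = (2*n*eta/(m*gammam))**(1/(2*m - n)), 0.2
thetas = [(2*j + 1)*np.pi/n for j in range(n)]
memories = [displace(N, r0*np.exp(1j*th)) * squeeze(N, s0*np.exp(2j*th)) * basis(N, 0)
            for th in thetas]
povm = [ket2dm(psi) for psi in memories]      # Pi_? = 1 - sum_j Pi_j is not needed here

# random input pattern |beta, zeta>
beta = np.sqrt(rng.uniform(0.5, 2.0)) * r0 * np.exp(1j*rng.uniform(0, 2*np.pi))
zeta = rng.uniform(0, 1.0) * np.exp(1j*rng.uniform(0, np.pi))
psi_in = displace(N, beta) * squeeze(N, zeta) * basis(N, 0)
k_bar = int(np.argmin([(ket2dm(psi) - ket2dm(psi_in)).norm() for psi in memories]))

t_meta = np.linspace(2.0, 20.0, 50)           # illustrative window inside metastability
rho_t = mesolve(H, psi_in, t_meta, c_ops).states
p_success = np.mean([expect(povm[k_bar], rho) for rho in rho_t])
print("closest memory:", k_bar, "  success probability ~", p_success)
```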
In Fig. 8, we show the success probability for an oscillator with mean photon number \(\left\langle\hat{n}\right\rangle=4,8,12\) and 16. The error bars correspond to the standard deviation of the success probability averaged over the 500 trajectories. We include a horizontal black line that represents the success probability obtained by randomly guessing the lobe, that is, \(1/n\), which corresponds to the state being a statistical mixture of all lobes (see Eq. (7)).
In general, we can see that the success probability tends to increase as the mean number of photons increases. This is expected since the discrimination between the lobes improves the further they are from the center and from each other. Moreover, in the large amplitude regime, the metastable states are well approximated by squeezed states as we saw in Section V. Thus, the POVM used for state discrimination of squeezed states is optimal.
Another characteristic is that patterns made of squeezed states (\(n\neq m\)) have a lower success probability than coherent states (\(n=m\)) except for \(n=2\) where the cases \(m>n\) outperform the latter. This is the same behavior we saw previously in Section IV and Section VI. These two cases with \(m=3\) and \(m=4\) had a longer metastable phase which led to longer bit-flip error times. In this case, the squeezed lobes achieve a success probability near one for \(\left\langle\hat{n}\right\rangle\geq 8\) with the coherent state results slightly below them. However, this trend changes when considering different values of \(n\). Panel (b) of the figure reveals that when \(m=n+1\) (amplitude-squeezed states), a higher success probability is obtained compared to other squeezing cases. On the other hand, panel (c) shows that the highest success probability (excluding the \(n=m\) scenario) is observed when \(m=n-1\) (phase-squeezed states). This discrepancy arises because amplitude-squeezed lobes
exhibit a smaller distance between them when \(n=4\) compared to the case of \(n=3\), as more states must fit within a fixed photon number. The least favorable scenario occurs in \((4,6)\), where the success probability is comparable to random guessing because the lobes are not the metastable states3. Also, when the driving and dissipation degrees are \(3\) and \(2\) respectively, the lobes are not well characterized by squeezed states unless \(\left\langle\hat{n}\right\rangle\geq 16\) (see Section V) which reduces the success probability for smaller mean photon number. Nevertheless, the highest success probability is reached for coherent states with \(3\) and \(4\) patterns.
Footnote 3: As explained in Appendix C, the four lobes are not the metastable phases so a different encoding would be needed to encode only two patterns in the even and odd parity states spanning the metastable manifold.
It should be noted that the cut-off set for the amplitude and squeezing of the initial states also affects the performance of the associative memory. In this study, we have limited the minimum amplitude of the initial states, setting it to \(\sqrt{\left\langle\hat{n}\right\rangle/2}\). This criterion serves to exclude from the analysis states which are equally spaced from all the lobes and lead to a success probability of \(1/n\). Nevertheless, one can note in Fig. 8 that the whiskers and outlier points reach the random-guessing success probability on several occasions. Similarly, the maximum squeezing has been limited to \(1\) (\(-8.68\,\mathrm{dB}\)), which already allows for the application of quantum key distribution protocols [64] but may negatively affect the performance depending on its direction. For instance, an amplitude-squeezed state with a large squeezing parameter might overlap with two lobes.
We recall that in associative memories, the storage capacity is defined as the number of memories stored in the system over its dimension. This quantity has a classical bound of \(\alpha_{c}=0.138\) for an all-to-all network of binary neurons (commonly known as a Hopfield neural network). This limit has not yet been surpassed by the quantum version of the Hopfield network made of spin-\(1/2\) units [65; 66] although quantum systems promise to store an exponentially large number of patterns [67]. In Ref. [14], we showed that by optimizing the amplitude of the lobes one can overcome the classical bound and reduce the dimension of the Hilbert space needed to store a given number of patterns. In the presence of squeezed states, the storage capacity can be enhanced for amplitude-squeezed states as they can be described using a smaller Hilbert space. However, the capacity to distinguish the patterns is highly affected by small mean photon numbers. Hence, the combination of the two factors makes coherent-state storage more optimal in terms of storage capacity for \(n>2\).
## VIII Conclusions
In this work, we studied several dynamical properties of a quantum oscillator with driving and dissipative terms that exchange photons with the environment in packets of \(n\) and \(m\) particles. This leads to the possibility of obtaining steady states with symmetrically phase-distributed lobes that can be characterized as squeezed-coherent states, especially for high driving strength and small nonlinear dissipation rate. We have seen that a higher driving degree leads to phase-squeezed states while a higher dissipation degree leads to amplitude-squeezed lobes.
In terms of the metastable phase, we have seen that the lobes are well approximated by squeezed-coherent states when the driving degree is equal to or larger than the dissipation degree. This is the case for coherent states (\(n=m\)) and phase-squeezed states (\(n=m+1\)). In the case of amplitude-squeezed states (\(n=m-1\)), the lobes are not well approximated by squeezed states unless the mean photon number is large enough. This is because the lobes are closer to each other, so a larger mean photon number is needed to distinguish them.
We have analyzed two applications of the oscillator. The first one is the storage of quantum states where we have characterized the bit and phase flip times. We have seen that the bit flip time for squeezed states is longer
Figure 8: (a-c) Success probability for measuring the correct lobe starting from an initial squeezed-coherent state with random amplitude and squeezing parameter. Each box is obtained from an average of \(500\) Monte Carlo trajectories with different initial states. The lower and upper sides of the box correspond to the first and third quartiles, respectively and the inner line denotes the mean probability of success. The position of the whiskers is \(1.5\) of the interquartile range, points lying outside this range are shown as dots. The horizontal black line corresponds to the success probability of random guessing the lobe, i.e. \(1/n\).
than for coherent states for \(n=2\). This is because the lobes are further apart from each other and the separation in the Liouvillian spectrum is larger. Moreover, while the phase-flip error rate is expected to grow linearly with the mean photon number, we have seen that the slope is much smaller when the two integers \(n\) and \(m\) are not coprime. Hence, coherences between states disappear at the same or a smaller rate than for coherent states.
The second application is quantum associative memory where, building on the results of Ref. [14], we analyzed the possibility of storing and retrieving genuine quantum states and computed the success probability of pattern discrimination for the different driving and dissipative degrees. We have seen that, in general, amplitude-squeezed states are a better option than phase-squeezed states to store the patterns. Nevertheless, patterns encoded in coherent states allow attaining a high success probability for a smaller mean photon number (system size). This shows the possibility of using such memories for practical purposes: for instance, in discrete-modulated continuous-variable quantum key distribution [68; 69], the information encoded in squeezed-coherent states can be distorted by the transmission channel [70], and quantum associative memory can be used to retrieve the original information.
Our primary emphasis has centered on exploring how squeezing could potentially be employed to improve the capabilities of our driven-dissipative oscillator in storing and retrieving information problems. The resonator, however, has the potential to be used for more tasks including quantum error correction [1; 5; 36] or holonomic quantum control [71].
Finally, we analyzed in detail the properties of the oscillator necessary to realize the aforementioned applications. Nevertheless, the rich dynamical features of this system go far beyond the ones presented here. The presence of dissipative phase transitions found in the balanced model [23], exceptional points in the Liouvillian spectrum [72; 54] or symmetry breaking [73; 42] might be of further interest in the generalized model.
###### Acknowledgements.
We are grateful to E. Fiorelli for suggestions and discussions. We acknowledge the Spanish State Research Agency, through the Maria de Maeztu project CEX2021-001164-M funded by the MCIN/AEI/10.13039/501100011033 and through the QUARESC project (PID2019-109094GB-C21/AEI/10.13039/501100011033). We also acknowledge funding by CAIB through the QUAREC project (PRD2018/47). The CSIC Interdisciplinary Thematic Platform (PTI) on Quantum Technologies in Spain is also acknowledged. GLG is funded by the Spanish Ministerio de Educacion y Formacion Profesional/Ministerio de Universidades and co-funded by the University of the Balearic Islands through the Beatriz Galindo program (BG20/00085). ALM is funded by the University of the Balearic Islands through the project BGRH-UIB-2021.
## Appendix A Mean field
The evolution of the expectation value of the operator \(\hat{a}\) can be used to characterize the mean-field dynamics. For that, we compute from the master equation Eq. (1) \(\dot{\alpha}=\operatorname{tr}[\hat{a}\,\dot{\rho}]\) and approximate \(\langle\hat{a}^{x}(\hat{a}^{\dagger})^{y}\rangle\sim\langle\hat{a}\rangle^{x}\langle\hat{a}^{\dagger}\rangle^{y}\). This leads to the following equation [14]
\[\dot{\alpha}=-\frac{\gamma_{1}}{2}\alpha-i\Delta\alpha-n\eta(\alpha^{*})^{n-1 }e^{-in\theta}-\frac{m}{2}\gamma_{m}|\alpha|^{2(m-1)}\alpha\, \tag{10}\]
which becomes exact in the thermodynamic limit of a large number of excitations (\(|\alpha|^{2}\rightarrow\infty\)) [23].
The roots of the previous equation determine the fixed points of the system. Defining \(\alpha=Re^{i\phi}\), we obtain the mean field amplitude
\[R^{2m-n}=\frac{2n\eta_{n}}{m\gamma_{m}}. \tag{11}\]
of the \(n\) symmetrically distributed lobes forming the steady state. This equation is only valid for \(2m>n\), which covers the cases studied in this paper, and only for large amplitudes \(R\). Under these conditions, the \(n\) fixed points are symmetrically distributed with angles \(\theta_{j}=(2j+1)\pi/n\) where \(j=1,\ldots,n\).
A general solution for the amplitude \(R\) does not exist for all powers \(n\) and \(m\) but, in some cases, we can get a more accurate description by fixing one of the two exponents. Concretely, for \(n=2\) and \(m>1\) we have
\[R^{2m-2}=\frac{2}{m\gamma_{m}}\left[\sqrt{(2\eta)^{2}+\Delta^{2}}-\frac{\gamma _{1}}{2}\right] \tag{12}\]
and for \(m=n-1\) we get
\[R^{2n-4}=\frac{1}{[(n-1)\gamma_{m}]^{2}}\left\{\left[2(n\eta)^{2}-(n-1)\gamma _{m}\gamma_{1}\right]+\sqrt{\left[2(n\eta)^{2}-(n-1)\gamma_{m}\gamma_{1} \right]^{2}-\left[(n-1)\gamma_{m}\right]^{2}\left[\gamma_{1}^{2}+(2\Delta)^{2} \right]}\right\}\quad. \tag{13}\]
This last expression is especially useful in the case \((3,2)\) where the approximation in Eq. (11) fails in the regimes considered in this work.
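As a numerical illustration of the amplitude relations above, the short sketch below evaluates the lobe amplitude from Eq. (11) (or the refined \(n=2\) expression of Eq. (12)) together with the lobe phases; the parameter values are assumptions chosen only for demonstration.

```python
import numpy as np

def lobe_amplitude(n, m, eta, gamma_m, gamma_1=0.0, delta=0.0):
    """Mean-field lobe amplitude R from the fixed-point conditions above.

    Uses the large-amplitude result R**(2m-n) = 2*n*eta/(m*gamma_m), and the
    refined n=2 expression including gamma_1 and delta when applicable.
    """
    if n == 2 and m > 1:
        rhs = (2.0 / (m * gamma_m)) * (np.sqrt((2 * eta) ** 2 + delta ** 2) - gamma_1 / 2)
        return rhs ** (1.0 / (2 * m - 2))
    # generic large-amplitude approximation (valid for 2m > n)
    return (2 * n * eta / (m * gamma_m)) ** (1.0 / (2 * m - n))

# illustrative parameters (assumed): four-photon driving and four-photon dissipation
n, m, eta, gamma_m = 4, 4, 1.0, 0.05
R = lobe_amplitude(n, m, eta, gamma_m)
phases = (2 * np.arange(1, n + 1) + 1) * np.pi / n   # lobe angles theta_j
print(f"lobe amplitude R = {R:.3f}, mean photon number ~ {R**2:.2f}")
print("lobe phases (rad):", np.round(phases, 3))
```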
The stability of the fixed points can be analyzed easily for \(\gamma_{1}=\Delta=0\). In this case, the Jacobian matrix of the
system is
\[J[R,\phi]=\begin{pmatrix}-\frac{1}{2}\gamma_{1}-\frac{m(2m-1)}{2}\gamma_{m}R^{2m-2 }-n(n-1)\eta R^{n-2}\cos n\phi&n^{2}\eta R^{n-1}\sin n\phi\\ n(n-2)\eta R^{n-3}\sin n\phi&n\eta R^{n-2}\cos n\phi\end{pmatrix} \tag{30}\]
so if \(n-2m<0\) the eigenvalues are negative, leading to \(n\) stable fixed points. This is the case for all pairs of \((n,m)\) presented in this paper.
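The stability criterion can be checked numerically with the Jacobian of Eq. (30); the sketch below evaluates its eigenvalues at one of the lobes, where \(n\phi=\pi\), for assumed parameter values.

```python
import numpy as np

def jacobian(R, phi, n, m, eta, gamma_m, gamma_1=0.0):
    """Jacobian of the mean-field flow in polar coordinates (R, phi), cf. Eq. (30)."""
    c, s = np.cos(n * phi), np.sin(n * phi)
    return np.array([
        [-0.5 * gamma_1 - 0.5 * m * (2 * m - 1) * gamma_m * R ** (2 * m - 2)
         - n * (n - 1) * eta * R ** (n - 2) * c,      n ** 2 * eta * R ** (n - 1) * s],
        [n * (n - 2) * eta * R ** (n - 3) * s,        n * eta * R ** (n - 2) * c],
    ])

# illustrative check (assumed parameters); the lobe sits at n*phi = pi, i.e. cos(n*phi) = -1
n, m, eta, gamma_m = 4, 4, 1.0, 0.05
R = (2 * n * eta / (m * gamma_m)) ** (1.0 / (2 * m - n))
J = jacobian(R, np.pi / n, n, m, eta, gamma_m)
print("eigenvalues:", np.linalg.eigvals(J))   # both negative for this example -> stable lobe
```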
It is important to note that the driving strength \(\eta\) necessary to achieve a steady state with mean photon number \(\left\langle\hat{n}\right\rangle_{\text{ss}}\) differs largely between the nonlinear degrees \((n,m)\). In general, as can be seen in Fig. A1, a two-orders-of-magnitude increase in \(\eta\) is needed for each added photon loss in the nonlinear dissipation degree. The effect is similar to increasing \(\gamma_{m}\) (see Eq. (11)): photons are lost more rapidly, and so a higher \(\eta\) is necessary to stabilize the system [74].
Given that the value of \(\eta\) changes abruptly for the different combinations of \((n,m)\), we will use \(\left\langle\hat{n}\right\rangle_{\text{ss}}\) to show the results. This allows us to use the same scale for all nonlinear degrees but we must note that the actual parameter being changed is \(\eta\).
## Appendix B Liouville formalism
The Liouvillian analysis offers a systematic approach to studying the dynamics of a driven-dissipative system. It encompasses the eigenspectrum, which comprises the right and left Liouvillian eigenmatrices along with their corresponding eigenvalues:
\[\mathcal{L}R_{j}=\lambda_{j}R_{j},\quad\mathcal{L}^{\dagger}L_{j}^{\dagger}= \lambda_{j}^{*}L_{j}^{\dagger}, \tag{31}\]
with normalization \(\text{Tr}[L_{j}^{\dagger}R_{k}]=\delta_{jk}\). Let us now assume the existence of at least one steady state \(\rho_{\text{ss}}\); this condition is universally satisfied in finite-dimensional systems. However, in the case of an infinite-dimensional Hilbert space, it is necessary to explicitly verify the existence of such a steady state \(\rho_{\text{ss}}\)[44; 45]. The stationary state (or states) will correspond to \(\lambda_{1}=0\), while the rest of the eigenvalues have a non-positive real part, which we use to order them: \(\text{Re}[\lambda_{1}]\geq\text{Re}[\lambda_{2}]\geq\text{Re}[\lambda_{3}]\dots\). The left and right eigenvectors can be used to decompose the dynamics of the state of the system as
\[\hat{\rho}(t)=\hat{\rho}_{\text{ss}}+\sum_{j>1}\text{Tr}[L_{j}\hat{\rho}(0)]R_ {j}e^{\lambda_{j}t}, \tag{32}\]
The real and imaginary parts of the eigenvalues will correspond to the decay rate and the frequency, respectively:
\[\varepsilon_{j}=\text{Im}[\lambda_{j}],\quad\tau_{j}^{-1}=-\text{Re}[\lambda _{j}]. \tag{33}\]
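A minimal sketch of this Liouvillian analysis using QuTiP is given below. The Hamiltonian form \(H=\Delta\hat{a}^{\dagger}\hat{a}+\eta(\hat{a}^{\dagger n}+\hat{a}^{n})\), the Fock cutoff and all parameter values are assumptions made here for illustration, not the exact model of Eq. (1).

```python
import numpy as np
from qutip import destroy, liouvillian

# Illustrative model (assumed): n-photon driving with one- and m-photon losses
N, n, m = 30, 2, 2
eta, gamma_1, gamma_m, delta = 1.0, 0.05, 0.2, 0.0

a = destroy(N)
H = delta * a.dag() * a + eta * (a.dag() ** n + a ** n)
c_ops = [np.sqrt(gamma_1) * a, np.sqrt(gamma_m) * a ** m]

L = liouvillian(H, c_ops)                      # the superoperator appearing in Eq. (31)
evals = np.linalg.eigvals(L.full())
evals = evals[np.argsort(-evals.real)]         # Re[lambda_1] >= Re[lambda_2] >= ...
print("leading eigenvalues:", np.round(evals[:4], 4))
print("decay times tau_j = -1/Re(lambda_j):", np.round(-1.0 / evals[1:4].real, 2))
```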
## Appendix C Four-photon driving and six-photon dissipation
The oscillator with \(n=4\) and \(m=6\) behaves in a different manner from the other scenarios considered with four lobes. The difference is explained by the relation between the two nonlinear degrees which have \(p=\text{gcd}(4,6)=2\) so \(p\neq n\). Hence, in the absence of linear dissipation, a strong symmetry arises with only two steady states of even and odd parity. In contrast, for instance, to the \(n=m=4\) case where there are four steady states corresponding to 4-cat states each having a different symmetry eigenvalue.
Notably, this also has consequences in the weak symmetry case. Two distinct regimes can be appreciated in Fig. A2 from the distribution of the Liouvillian spectrum. Initially, for a small mean photon number, the 4th and 5th eigenvalues are close to each other and the largest eigenvalue separation occurs between \(\lambda_{2}\) and \(\lambda_{3}\). Then, at around \(\left\langle\hat{n}\right\rangle\approx 20\), a high order exceptional point [75; 76] occurs between the eigenvalues \(\lambda_{2}\), \(\lambda_{3}\) and \(\lambda_{4}\) that makes the first two complex conjugate (\(\lambda_{2}=\lambda_{3}^{*}\)) and the last one real. Beyond the exceptional point, the separation between the 4th and 5th eigenvalues becomes larger, showing hints that a four-dimensional metastable phase may arise for a very large photon number. This makes sense since for small \(\left\langle\hat{n}\right\rangle\) the driving strength \(\eta\) and the nonlinear dissipation rate \(\gamma_{m}\) are comparable, so the symmetry that dominates is \(\mathds{Z}_{2}\). For larger \(\left\langle\hat{n}\right\rangle\), the driving strength is much larger than the nonlinear dissipation rate, so the symmetry that dominates is \(\mathds{Z}_{4}\). In all cases, however, the spectral separation is small compared to the other cases with \(n=m\).
As we work in the regime of small mean photon number (\(\langle\hat{n}\rangle<20\)), only two metastable states are present. To characterize these states we start by defining \(\{\ket{\psi_{j}}\}_{j=0}^{3}\) as the squeezed-coherent states describing the four symmetrically distributed lobes. Then, the four 4-cat states can be written as
\[\ket{\pi_{0}} =\frac{1}{2}\left(\ket{\psi_{0}}+\ket{\psi_{1}}+\ket{\psi_{2}}+ \ket{\psi_{3}}\right) \tag{10a}\] \[\ket{\pi_{1}} =\frac{1}{2}\left(\ket{\psi_{0}}-i\ket{\psi_{1}}-\ket{\psi_{2}}+ i\ket{\psi_{3}}\right)\] (10b) \[\ket{\pi_{2}} =\frac{1}{2}\left(\ket{\psi_{0}}-\ket{\psi_{1}}+\ket{\psi_{2}}- \ket{\psi_{3}}\right)\] (10c) \[\ket{\pi_{3}} =\frac{1}{2}\left(\ket{\psi_{0}}+i\ket{\psi_{1}}-\ket{\psi_{2}}- i\ket{\psi_{3}}\right) \tag{10d}\]
Each of these states \(\{\ket{\pi_{j}}\}\) has only Fock levels \(\ket{a}\) with \(a\mod 4=j\). Combining the two 4-cat states with even and odd parity we obtain
\[\ket{C_{\pm}^{\text{even}}} =\frac{1}{\sqrt{2}}\left(\ket{\psi_{0}}\pm\ket{\psi_{2}}\right) \tag{10a}\] \[\ket{C_{\pm}^{\text{odd}}} =\frac{1}{\sqrt{2}}\left(\ket{\psi_{1}}\pm\ket{\psi_{3}}\right) \tag{10b}\]
from which we can construct the two metastable states
\[\mu_{0} =\frac{1}{2}\left(\ket{C_{+}^{\text{even}}}\!\!\bra{C_{+}^{\text{ even}}}+\ket{C_{+}^{\text{odd}}}\!\!\bra{C_{+}^{\text{odd}}}\right) \tag{11a}\] \[\mu_{1} =\frac{1}{2}\left(\ket{C_{-}^{\text{even}}}\!\!\bra{C_{-}^{\text {even}}}+\ket{C_{-}^{\text{odd}}}\!\!\bra{C_{-}^{\text{odd}}}\right) \tag{11b}\]
where \(\mu_{0}\) (\(\mu_{1}\)) has even (odd) parity and corresponds to the steady states of the system in the limit \(\gamma_{1}\to 0\).
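For concreteness, the following QuTiP sketch builds the metastable states of Eq. (11) from four coherent lobes; using plain (unsqueezed) coherent states and the chosen lobe positions and amplitude are simplifying assumptions for illustration.

```python
import numpy as np
from qutip import coherent, ket2dm

# The squeezed lobes are approximated by plain coherent states (a simplifying assumption)
N, alpha0 = 40, 2.0
psi = [coherent(N, alpha0 * np.exp(1j * j * np.pi / 2)) for j in range(4)]

c_even_p = (psi[0] + psi[2]).unit()   # |C_+^even>
c_even_m = (psi[0] - psi[2]).unit()   # |C_-^even>
c_odd_p  = (psi[1] + psi[3]).unit()   # |C_+^odd>
c_odd_m  = (psi[1] - psi[3]).unit()   # |C_-^odd>

mu0 = 0.5 * (ket2dm(c_even_p) + ket2dm(c_odd_p))   # even-parity metastable state
mu1 = 0.5 * (ket2dm(c_even_m) + ket2dm(c_odd_m))   # odd-parity metastable state
print("purity of mu0:", (mu0 * mu0).tr())          # ~0.5 for a balanced rank-2 mixture
```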
Fig. A3 shows the time evolution of two initial states: \(\ket{\pi_{0}}\) and \(\mu_{0}\). The former is an even parity 4-cat state containing only Fock states with a photon number multiple of 4. The latter, instead, is the metastable state with even parity (it is spanned by all even Fock states). In panel (a) we plot the expectation value of the parity operator \(\hat{P}_{2}=\sum_{a}\,\ket{2a+2}\!\!\bra{2a+2}\). This operator allows us to see that the cat states (which are not metastable states) decay towards one of the two metastable states specified in Eq. (11). That is, initially the state \(\ket{\pi_{0}}\) has no component on the subspace spanned by the Fock states equal to 2 modulo 4, but the state rapidly converges to \(\langle\hat{P}_{2}\rangle=1/2\), as expected for a state with all the even Fock modes populated. On the other hand, the metastable state \(\mu_{0}\) remains in the same state for a longer time. Hence, the parity of the initial cat state determines to which metastable state it converges.
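The parity-type expectation value \(\langle\hat{P}_{2}\rangle\) discussed above can be evaluated as in the following sketch; the Fock cutoff, lobe amplitude and lobe positions are assumed values for illustration.

```python
import numpy as np
from qutip import basis, coherent, expect

N, alpha0 = 40, 2.0   # Fock cutoff and lobe amplitude (assumed)

# Projector onto Fock states |2>, |6>, |10>, ..., i.e. photon numbers equal to 2 mod 4
P2 = basis(N, 2) * basis(N, 2).dag()
for k in range(6, N, 4):
    P2 = P2 + basis(N, k) * basis(N, k).dag()

# 4-cat state |pi_0> built from four coherent lobes
lobes = [coherent(N, alpha0 * np.exp(1j * j * np.pi / 2)) for j in range(4)]
pi0 = (lobes[0] + lobes[1] + lobes[2] + lobes[3]).unit()
print("<P2> for |pi_0>:", expect(P2, pi0))   # ~0: only Fock levels multiple of 4 populated
```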
This has consequences in both applications we studied: storage and quantum associative memory. First, in terms of storage of quantum states, the memory should be regarded as a two-dimensional system where the two metastable states are the computational-basis states. Thus, the bit- and phase-flip times should be calculated accordingly. Second, in terms of quantum associative memory, the number of patterns that can be efficiently stored is only 2. Trying to store four patterns in the squeezed lobes leads to a success probability close to \(1/4\), because the states rapidly decay to the manifold \(\{\mu_{0},\mu_{1}\}\) and the subsequent state discrimination is not capable of distinguishing between the two states. Hence, a different encoding would be needed. For instance, Fock states with even (odd) parity converge to metastable states with even (odd) parity; we can use this property to encode the initial states, which allows us to use the system for pattern discrimination between two memories.
|
2301.13649 | Studies of New Physics in $B^0_q-\bar{B}^0_q$ Mixing and Implications
for Leptonic Decays | The phenomenon of $B^0_q$-$\bar{B}^0_q$ mixing ($q=d,s$) provides a sensitive
probe for physics beyond the Standard Model. We have a careful look at the
determination of the Unitarity Triangle apex, which is needed for the Standard
Model predictions of the $B_q$ mixing parameters, and explore how much space
for New Physics is left through the current data. We study the impact of
tensions between inclusive and exclusive determinations of the CKM matrix
elements $|V_{ub}|$ and $|V_{cb}|$, and focus on the $\gamma$ angle extraction.
We present various future scenarios and discuss the application of these
results for leptonic rare $B$ decays, which allows us to minimise the CKM
parameter impact in the New Physics searches. Performing future projections, we
explore and illustrate the impact of increased precision on key input
quantities. It will be exciting to see how more precise data in the future
high-precision era of flavour physics can lead to a much sharper picture. | Kristof De Bruyn, Robert Fleischer, Eleftheria Malami, Philine van Vliet | 2023-01-31T14:08:17Z | http://arxiv.org/abs/2301.13649v1 | # Studies of New Physics in \(B^{0}_{q}-\bar{B}^{0}_{q}\) Mixing and Implications for Leptonic Decays
###### Abstract:
The phenomenon of \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing (\(q=d,s\)) provides a sensitive probe for physics beyond the Standard Model. We have a careful look at the determination of the Unitarity Triangle apex, which is needed for the Standard Model predictions of the \(B_{q}\) mixing parameters, and explore how much space for New Physics is left through the current data. We study the impact of tensions between inclusive and exclusive determinations of the CKM matrix elements \(|V_{ub}|\) and \(|V_{cb}|\), and focus on the \(\gamma\) angle extraction. We present various future scenarios and discuss the application of these results for leptonic rare \(B\) decays, which allows us to minimise the CKM parameter impact in the New Physics searches. Performing future projections, we explore and illustrate the impact of increased precision on key input quantities. It will be exciting to see how more precise data in the future high-precision era of flavour physics can lead to a much sharper picture.
Introduction
The phenomenon of \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing (where \(q=d,s\)) arises only from loop processes in the Standard Model (SM) and is sensitive to possible New Physics (NP) contributions, which could enter the loop topologies or even arise at the tree level, for instance in \(Z^{\prime}\) models. Associated with the mixing phenomenon are the mixing parameters and the CP-violating phases, for which we have impressive experimental data. In this presentation, we follow Ref. [1] and explore the space allowed for NP by current measurements and the state-of-the-art parameters. In addition, we point out interesting connections to the studies of leptonic rare \(B\) decays.
In order to determine the parameter space of possible NP effects to \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing, we have to compare the SM predictions of the mixing parameters with the corresponding experimental values. For these SM predictions, a careful analysis of the Unitarity Triangle (UT) apex is required. We pay special attention to the different determinations of the Cabibbo-Kobayashi-Maskawa (CKM) parameters and the tensions that arise between the extractions of the \(|V_{ub}|\) and \(|V_{cb}|\) matrix elements through inclusive and exclusive semileptonic \(B\) meson decays. These longstanding tensions have a profound impact on the whole analysis.
## 2 Unitarity Triangle
Using the parametrisation of the Particle Data Group (PDG), the UT apex is given as [2]:
\[R_{b}\ e^{i\gamma}=\bar{\rho}+i\bar{\eta}\,\qquad\bar{\rho}\equiv\left[1-( \lambda^{2}/2)\right]\rho\,\qquad\bar{\eta}\equiv\left[1-(\lambda^{2}/2)\right]\eta. \tag{1}\]
Here, \(\rho\), \(\eta\) and \(\lambda\) are the Wolfenstein parameters [3, 4], \(R_{b}\) is the side from the origin to the apex of the UT, defined with the help of the CKM matrix elements \(\lambda\equiv|V_{us}|,|V_{ub}|\) and \(|V_{cb}|\) as:
\[R_{b}\equiv\left(1-\frac{\lambda^{2}}{2}\right)\frac{1}{\lambda}\left|\frac{V _{ub}}{|V_{cb}|}\right|=\sqrt{\bar{\rho}^{\,2}+\bar{\eta}^{\,2}}\, \tag{2}\]
and \(\gamma\equiv\arg\left(-V_{ud}V_{ub}^{*}/V_{cd}V_{cb}^{*}\right)\) is the angle between the \(R_{b}\) side and the UT basis.
### Determining the UT Apex Utilising \(\gamma\) and \(R_{b}\)
In this subsection, we work in the SM and are interested in obtaining the UT apex in a way that is not affected by possible NP in \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing. One way of determining the apex is utilising the side \(R_{b}\) and the angle \(\gamma\), which can both be determined from decays that proceed only via tree decays. The value of \(\gamma\) can be determined either from \(B\to DK\) decays or from a \(B\to\pi\pi,\ \rho\pi,\ \rho\rho\) isospin analysis.
More specifically, one option is to use the time-dependent \(B^{0}_{s}\to D_{s}^{\mp}K^{\pm}\) system, where mixing-induced CP violation plays a key role. Through interference effects caused by \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing, the CP asymmetry parameters allow the determination of \(\phi_{s}+\gamma\), where \(\phi_{s}\) is the \(B^{0}_{s}\)-\(\bar{B}^{0}_{s}\) mixing phase. Since \(\phi_{s}\) is determined through the \(B^{0}_{s}\to J/\psi\phi\) channel, including penguin corrections [5, 6], \(\gamma\) can be obtained in a theoretically clean way [7, 8]. However, the surprisingly large value arising in this case still needs to be further explored. An alternative way of getting the \(\gamma\) value is using the time-independent \(B\to DK\) transitions, where the sensitivity to \(\gamma\) comes from direct CP violation [9]. Last but not least, another interesting system is provided by \(B\to\pi\pi,\ \rho\pi,\ \rho\rho\) modes [10, 11],
which usually are used to determine \(\alpha\) from an isospin analysis. Actually this value corresponds to \(\gamma\) when we use the \(B^{0}_{d}\)-\(\bar{B}^{0}_{d}\) mixing phase \(\phi_{d}\), determined from \(B^{0}_{d}\to J/\psi K^{0}\)[5, 6], taking penguin effects into account. Thus, we can convert the result \(\phi_{d}+2\gamma\) into \(\gamma\). The value from the latter case is in good agreement with the one coming from \(B\to DK\) modes. Therefore, for our analysis, we average these two results [1]:
\[\gamma_{\rm avg}=(68.4\pm 3.4)^{\circ}. \tag{3}\]
Regarding \(R_{b}\) there are tensions between the various theoretical and experimental approaches. Even though there are different determinations of the \(|V_{us}|\) element and the tensions between them are intriguing, they only have a negligible impact on NP studies in neutral \(B_{q}\) mixing. Thus, we choose to work with the value \(|V_{us}|=0.22309\pm 0.00056\)[12, 13]. Contrary to the \(|V_{us}|\) case, the deviations between determinations of \(|V_{ub}|\) and \(|V_{cb}|\) from inclusive and exclusive semileptonic \(B\) decays, which are given as follows [14, 15]:
\[|V_{ub}|_{\rm incl}=(4.19\pm 0.17)\times 10^{-3}\,\ \ \ \ \ |V_{ub}|_{\rm excl }=(3.51\pm 0.12)\times 10^{-3}\,\ \ \ \ \ \ {\rm differing\ by\ 3.9\ \sigma}, \tag{4}\] \[|V_{cb}|_{\rm incl}=(42.16\pm 0.50)\times 10^{-3}\,\ \ \ \ |V_{cb}|_{\rm excl }=(39.10\pm 0.50)\times 10^{-3}\,\ \ \ \ {\rm differing\ by\ 4.3\ \sigma}, \tag{5}\]
have a significant impact on the allowed parameter space for NP in \(B^{0}_{q}\)-\(\bar{B}^{0}_{q}\) mixing. Trying to understand and resolve these tensions, another case is studied in the literature [15, 16, 17, 18], which is a hybrid scenario combining the exclusive \(|V_{ub}|\) with the inclusive \(|V_{cb}|\) determination. Therefore, we consider for the rest of our analysis all these three cases. The corresponding \(R_{b}\) results are:
\[R_{b,{\rm incl}}=0.434\pm 0.018\,\ \ \ \ \ \ \ R_{b,{\rm excl}}=0.392\pm 0.014\,\ \ \ \ \ \ R_{b,{\rm hybrid}}=0.364\pm 0.013. \tag{6}\]
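As a quick cross-check of Eq. (2) and the numbers in Eqs. (4)-(6), the following sketch evaluates \(R_{b}\) from the quoted central values of \(|V_{ub}|\), \(|V_{cb}|\) and \(\lambda\); the uncertainties are omitted for brevity.

```python
def R_b(Vub, Vcb, lam=0.22309):
    """UT side R_b = (1 - lam^2/2) * |V_ub| / (lam * |V_cb|), cf. Eq. (2)."""
    return (1.0 - lam ** 2 / 2.0) * Vub / (lam * Vcb)

inputs = {                        # (|V_ub|, |V_cb|) central values from Eqs. (4)-(5)
    "incl.":  (4.19e-3, 42.16e-3),
    "excl.":  (3.51e-3, 39.10e-3),
    "hybrid": (3.51e-3, 42.16e-3),
}
for label, (vub, vcb) in inputs.items():
    print(f"R_b ({label}) = {R_b(vub, vcb):.3f}")
# reproduces the central values R_b = 0.434, 0.392, 0.364 quoted in Eq. (6)
```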
Making a fit to \(R_{b}\) and \(\gamma\), the UT apex is determined for the inclusive, exclusive and hybrid cases; the results are shown in Fig. 1 [1].
### Determining the UT Apex Utilising \(R_{b}\) and \(R_{t}\)
In this case, only information on the two UT sides \(R_{b}\) and \(R_{t}\) is required, without needing any information from \(\gamma\). However, in order to get \(R_{t}\), we have to assume SM expressions for the mixing parameters \(\Delta m_{d}\) and \(\Delta m_{s}\). The numerical predictions are given in [1].
The side \(R_{t}\) can be written as
\[R_{t}=\frac{1}{\lambda}\left|\frac{V_{td}}{V_{ts}}\right|\left|1-\frac{\lambda ^{2}}{2}\left(1-2\bar{\rho}\right)\right|+\mathcal{O}\left(\lambda^{4}\right)\, \tag{11}\]
where
\[\left|\frac{V_{td}}{V_{ts}}\right|=\xi\sqrt{\frac{m_{B_{s}}\Delta m_{d}^{\rm SM }}{m_{B_{d}}\Delta m_{s}^{\rm SM}}}. \tag{12}\]
Here the SU(3)-breaking parameter \(\xi\) is the ratio of bag parameters and decay constants of the \(B_{d}\) and the \(B_{s}\) systems that can be calculated on the lattice. The advantage of the ratio is that uncertainties cancel, making it cleaner than using individual results.
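A numerical sketch of Eqs. (11) and (12) is given below. All inputs (\(\xi\), the meson masses, the mass differences that stand in for the SM predictions, and \(\bar{\rho}\)) are illustrative assumptions made for this sketch and are not the values adopted in Ref. [1].

```python
import numpy as np

lam = 0.22309
# Illustrative inputs (assumed for this sketch):
xi = 1.206                   # SU(3)-breaking ratio from lattice QCD
mBs, mBd = 5366.9, 5279.7    # meson masses in MeV
dmd, dms = 0.5065, 17.765    # mass differences in ps^-1 (measured values as stand-ins)
rho_bar = 0.16               # apex coordinate, only a small correction in Eq. (11)

Vtd_over_Vts = xi * np.sqrt(mBs * dmd / (mBd * dms))                       # Eq. (12)
R_t = (Vtd_over_Vts / lam) * abs(1.0 - 0.5 * lam ** 2 * (1.0 - 2.0 * rho_bar))  # Eq. (11)
print(f"|V_td/V_ts| = {Vtd_over_Vts:.3f},  R_t = {R_t:.3f}")
```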
Making a fit to the \(R_{b}\) and \(R_{t}\) sides, we obtain [1]:
\[{\rm Incl.} \bar{\rho}=0.180\pm 0.014\, \bar{\eta}=0.395\pm 0.020\, \tag{13}\] \[{\rm Excl.} \bar{\rho}=0.163\pm 0.013\, \bar{\eta}=0.357\pm 0.017\,\] (14) \[{\rm Hybrid} \bar{\rho}=0.153\pm 0.013\, \bar{\eta}=0.330\pm 0.016. \tag{15}\]
We note that the UT apex determinations relying on \(\gamma\) are a factor 2 less precise than those without information from \(\gamma\). However, the determination through \(R_{b}\) and \(R_{t}\) requires the SM expressions of \(\Delta m_{d}\) and \(\Delta m_{s}\), and thus ignores possible NP contributions in \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing.
Figure 1: Determination of the UT apex from the \(R_{b}\) and \(\gamma\) measurements for the inclusive (left), exclusive (right) and hybrid (bottom) case [1].
## 3 NP in \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing
Neutral \(B_{q}\)-meson mixing is a sensitive probe of NP. In order to quantify its impact, we introduce the NP parameters \(\kappa_{q}\), which describe the size of the NP effects, and \(\sigma_{q}\), which are phases accounting for additional CP-violating effects. The generalised expressions of the mixing parameters take the following form [19]:
\[\Delta m_{q} = \Delta m_{q}^{\rm SM}\left|1+\kappa_{q}e^{i\sigma_{q}}\right|\, \tag{16}\] \[\phi_{q} = \phi_{q}^{\rm SM}+\phi_{q}^{\rm NP}=\phi_{q}^{\rm SM}+\arg\left(1 +\kappa_{q}e^{i\sigma_{q}}\right). \tag{17}\]
This is a model-independent parametrisation. Utilising these relations, we explore two different NP scenarios; the first one is the most general case and the second one assumes Flavour Universal NP (FUNP) [1].
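The following sketch evaluates the generalised mixing observables of Eqs. (16)-(17) for a given NP point \((\kappa_{q},\sigma_{q})\); the SM-like input values used below are illustrative assumptions, not the predictions of Ref. [1].

```python
import numpy as np

def mixing_with_np(dm_sm, phi_sm, kappa, sigma):
    """Generalised mixing observables of Eqs. (16)-(17) for NP parameters (kappa, sigma)."""
    z = 1.0 + kappa * np.exp(1j * sigma)
    return dm_sm * abs(z), phi_sm + np.angle(z)

# Illustrative inputs (assumed): SM-like Delta m_s and phi_s, probed with a 10% NP
# amplitude and a 90-degree NP phase.
dm, phi = mixing_with_np(dm_sm=17.7, phi_sm=-0.037, kappa=0.1, sigma=np.pi / 2)
print(f"Delta m_s = {dm:.2f} ps^-1, phi_s = {phi:.3f} rad")
```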
Let us firstly discuss the general case, namely Scenario I. The only assumption here is that there is no NP in the angle \(\gamma\) and \(R_{b}\). The determination from \(R_{b}\) and \(\gamma\) does not rely on information from mixing. We make use of this determination to obtain the UT apex, which we then need for getting the SM predictions for the mixing parameters \(\Delta m_{q}\) and \(\phi_{q}\). Comparing them with their measured values, we can constrain the NP parameters. Here, the NP parameters \((\kappa_{d},\sigma_{d})\) and \((\kappa_{s},\sigma_{s})\) are determined independently from each other.
In the second case, Scenario II, we have the FUNP assumption, where we consider that the NP contributions are equal in the \(B_{d}\) and \(B_{s}\) systems, thus \((\kappa_{d},\sigma_{d})=(\kappa_{s},\sigma_{s})\). This is not a Minimal Flavour Violation scenario but it can be realised in NP models with \(U(2)\) symmetry [20, 21]. The UT apex fit relies on \(R_{b}\) and \(R_{t}\), without using \(\gamma\) information; therefore, possible NP in the angle \(\gamma\) will not affect the findings. Comparing the two scenarios, we have a test of the FUNP assumption and we see the impact of the assumptions on the constraints on the parameter space of NP in mixing. Fig. 2 illustrates this comparison of the two fits for \(\kappa_{q}\) and \(\sigma_{q}\) for the inclusive, the exclusive and the hybrid cases.
Figure 2: Comparing Scenario I and Scenario II fits for \(\kappa_{q}\) and \(\sigma_{q}\) for the inclusive (left), exclusive (right) and hybrid (bottom) case [1].
## 4 Rare Leptonic Decays \(B_{q}^{0}\to\mu^{+}\mu^{-}\)
The tensions between the CKM matrix elements have an impact not only on the UT apex determination and possible NP in \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing but also on the branching ratios of rare decays. A key example is the leptonic \(B_{q}^{0}\to\mu^{+}\mu^{-}\) transition. These modes are pure loop processes and helicity suppressed in the SM. This helicity suppression could be lifted by new scalar and pseudoscalar contributions, therefore putting these decays in an outstanding position to probe NP in this sector. As these are decays of neutral \(B\) mesons, \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing enters and leads to subtleties concerning the measurement of the experimental branching ratio and comparison with the theoretical prediction [22]. However, NP in \(B_{s}^{0}\)-\(\bar{B}_{s}^{0}\) mixing is included through the experimental values of the mixing parameters.
The SM predictions require information on \(|V_{ts}|\) which we determine through \(|V_{cb}|\), which again depends on inclusive and exclusive determinations. In order to minimise the dependence on \(|V_{cb}|\) and the UT apex, we create the following ratio with the \(B_{s}\) mass difference \(\Delta m_{s}\)[23, 24, 25]:
\[{\cal R}_{s\mu}\equiv\bar{\cal B}(B_{s}\to\mu^{+}\mu^{-})/\Delta m_{s}\;. \tag{18}\]
Using this ratio, we can eliminate the leading dependence on the CKM elements but we have to correct for the possible NP contributions to \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing. This is now possible following our analysis in [1].
So, we include NP effects in \(\Delta m_{s}\) and then we can use the ratio \({\cal R}_{s\mu}\) to constrain NP in the scalar and pseudoscalar sector. We obtain the generalised expression:
\[{\cal R}_{s\mu}={\cal R}_{s\mu}^{\rm SM}\times\frac{1+{\cal A}_{\Delta\Gamma_{s}}^{\mu\mu}y_{s}}{1+y_{s}}\frac{|P_{\mu\mu}^{s}|^{2}+|S_{\mu\mu}^{s}|^{2}}{\sqrt{1+2\kappa_{s}\cos\sigma_{s}+\kappa_{s}^{2}}}\;, \tag{19}\]
with \(P_{\mu\mu}^{s}\equiv|P_{\mu\mu}^{s}|e^{i\varphi_{P}}\), \(S_{\mu\mu}^{s}\equiv|S_{\mu\mu}^{s}|e^{i\varphi_{S}}\), where \(\varphi_{P}\), \(\varphi_{S}\) are CP-violating phases, and the observable \({\cal A}_{\Delta\Gamma_{s}}^{\mu\mu}\) in terms of the NP phase \(\phi_{s}^{\rm NP}\):
\[{\cal A}_{\Delta\Gamma}^{\mu\mu}=\frac{|P_{\mu\mu}^{s}|^{2}\cos(2\varphi_{P}- \phi_{s}^{\rm NP})-|S_{\mu\mu}^{s}|^{2}\cos(2\varphi_{S}-\phi_{s}^{\rm NP})}{| P_{\mu\mu}^{s}|^{2}+|S_{\mu\mu}^{s}|^{2}}\;. \tag{20}\]
The ratio \({\cal R}_{s\mu}\) depends on the CKM matrix elements only through the NP parameters \(\kappa_{q}\) and \(\sigma_{q}\), determined as described above. We therefore obtain another constraint on the scalar and pseudoscalar contributions. The same strategy can be applied to the \(B_{d}^{0}\to\mu^{+}\mu^{-}\) channel once accurate measurements of its branching ratio become available in the future.
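A small sketch of Eqs. (19)-(20) is shown below; it returns the factor that multiplies \({\cal R}_{s\mu}^{\rm SM}\) for given (pseudo)scalar coefficients and mixing parameters. The value used for \(y_{s}\) is an illustrative assumption.

```python
import numpy as np

def A_DeltaGamma(P, S, phi_s_np):
    """CP observable of Eq. (20) for complex P = |P|e^{i phi_P} and S = |S|e^{i phi_S}."""
    num = (abs(P) ** 2 * np.cos(2 * np.angle(P) - phi_s_np)
           - abs(S) ** 2 * np.cos(2 * np.angle(S) - phi_s_np))
    return num / (abs(P) ** 2 + abs(S) ** 2)

def R_smu_over_SM(P, S, y_s, kappa_s, sigma_s, phi_s_np):
    """NP correction factor multiplying R_smu^SM in Eq. (19)."""
    A = A_DeltaGamma(P, S, phi_s_np)
    return ((1.0 + A * y_s) / (1.0 + y_s)
            * (abs(P) ** 2 + abs(S) ** 2)
            / np.sqrt(1.0 + 2.0 * kappa_s * np.cos(sigma_s) + kappa_s ** 2))

# SM-like limit (P = 1, S = 0, no NP in mixing) returns 1; y_s is an assumed value
print(R_smu_over_SM(P=1.0, S=0.0, y_s=0.062, kappa_s=0.0, sigma_s=0.0, phi_s_np=0.0))
```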
## 5 Future Prospects and Final Remarks
It will be important in the future to achieve improved precision on the NP parameters \(\kappa_{q}\) and \(\sigma_{q}\). In order to get a feeling of the prospects, we assume a hypothetical reduction of 50% on each
one of the three input parameters, which are \(|V_{cb}|\), the lattice calculations and the UT apex [1]. We obtain interesting findings, which of course depend on these assumptions. In our studies, we demonstrate that in the \(B_{d}\)-system the apex acts as a limiting factor, and in order to fully exploit the potential of this system, progress on the UT apex has to be made. On the other hand, in the \(B_{s}\)-system we do not have this situation as the SM prediction of \(\phi_{s}\) is more robust. Therefore, searches for NP in \(B_{s}^{0}\)-\(\bar{B}_{s}^{0}\) mixing are more promising than in the \(B_{d}\)-system, but it is of key importance to constrain NP in both systems as much as possible.
Another essential future prospect is related to the angle \(\gamma\). Improved precision on the input measurements might lead to significant discrepancies between the different \(\gamma\) determinations due to NP effects. In this case, averaging over the different results, as we did in this analysis, would no longer be justified. Therefore, the UT should then be revisited. Independent information from additional observables would be necessary to resolve such a situation. Exciting new opportunities might come up to search for NP, both in \(\gamma\) and in \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing, which is strongly correlated with the UT apex coordinates.
Last but not least, the branching ratios of the \(B_{q}^{0}\to\mu^{+}\mu^{-}\) decays might offer interesting opportunities. The ratio of the branching fractions between \(B_{d}^{0}\to\mu^{+}\mu^{-}\) and \(B_{s}^{0}\to\mu^{+}\mu^{-}\) can provide an alternative way to determine the UT side \(R_{t}\). Another useful application for the ratio of the branching fractions between these channels is the quantity [26]:
\[U_{\mu\mu}^{ds}\propto\left|\left|\frac{V_{ts}}{V_{td}}\right|^{2}\frac{ \bar{\mathcal{B}}(B_{d}\to\mu^{+}\mu^{-})}{\bar{\mathcal{B}}(B_{s}\to\mu^{+} \mu^{-})}\right|^{1/2} \tag{21}\]
which requires knowledge of \(R_{t}\) and offers a very powerful test of the SM, where \(U_{\mu\mu}^{ds}=1\).
In the future, \(B_{q}^{0}\)-\(\bar{B}_{q}^{0}\) mixing will remain a key element for constraining NP. It will be exciting to see how more precise data in the high-precision era of flavour physics ahead of us can lead to a much sharper picture.
## Acknowledgements
We would like to thank the DISCRETE 2022 organisers for the invitation and for giving us the opportunity to present our studies. This research has been supported by the Netherlands Organisation for Scientific Research (NWO). PvV acknowledges support from the DFG through the Emmy Noether research project 400570283, and through the German-Israeli Project Cooperation (DIP).
|
2309.09755 | Coherent Tunneling and Strain Sensitivity of an All Heusler Alloy
Magnetic Tunneling Junction: A First-Principles Study | Half-metallic Co-based full Heusler alloys have captured considerable
attention of the researchers in the realm of spintronic applications, owing to
their remarkable characteristics such as exceptionally high spin polarization
at Fermi level, ultra-low Gilbert damping, and high Curie temperature. In this
comprehensive study, employing density functional theory, we delve into the
stability and electron transport properties of a magnetic tunneling junction
(MTJ) comprising a Co$_2$MnSb/HfIrSb interface. Utilizing a standard model
given by Julliere, we estimate the tunnel magnetoresistance (TMR) ratio of this
heterojunction under external electric field, revealing a significantly high
TMR ratio (500%) that remains almost unaltered for electric field magnitudes up
to 0.5 V/A. In-depth investigation of K-dependent majority spin transmissions
uncovers the occurrence of coherent tunneling for the Mn-Mn/Ir interface,
particularly when a spacer layer beyond a certain thickness is employed.
Additionally, we explore the impact of bi-axial strain on the MTJ by varying
the in-plane lattice constants between -4% and +4%. Our spin-dependent
transmission calculations demonstrate that the Mn-Mn/Ir interface manifests
strain-sensitive transmission properties under both compressive and tensile
strain, and yields a remarkable three-fold increase in majority spin
transmission under tensile strain conditions. These compelling outcomes place
the Co2MnSb/HfIrSb junction among the highly promising candidates for nanoscale
spintronic devices, emphasizing the potential significance of the system in the
advancement of the field. | Joydipto Bhattacharya, Ashima Rawat, Ranjit Pati, Aparna Chakrabarti, Ravindra Pandey | 2023-09-18T13:31:27Z | http://arxiv.org/abs/2309.09755v1 | Coherent Tunneling and Strain Sensitivity of an All\(-\)Heusler Alloy Magnetic Tunneling Junction: A First-Principles Study
###### Abstract
Half-metallic Co-based full Heusler alloys have captured considerable attention of the researchers in the realm of spintronic applications, owing to their remarkable characteristics such as exceptionally high spin polarization at Fermi level, ultra-low Gilbert damping, and high Curie temperature. In this comprehensive study, employing density functional theory, we delve into the stability and electron transport properties of a magnetic tunneling junction (MTJ) comprising a Co\({}_{2}\)MnSb/HfIrSb interface. Utilizing a standard model given by Julliere, we estimate the tunnel magnetoresistance (TMR) ratio of this heterojunction under external electric field, revealing a significantly high TMR ratio (\(\approx 500\%\)) that remains almost unaltered for electric field magnitudes up to 0.5 V/A. Indepth investigation of K-dependent majority spin transmissions uncovers the occurrence of coherent tunneling for the Mn-Mn/Ir interface, particularly when a spacer layer beyond a certain thickness is employed. Additionally, we explore the impact of bi-axial strain on the MTJ by varying the in-plane lattice constants between -4% and +4%. Our spin-dependent transmission calculations demonstrate that the Mn-Mn/Ir interface manifests strain-sensitive transmission properties under both compressive and tensile strain, and yields a remarkable three-fold increase in majority spin transmission under tensile strain conditions. These compelling outcomes place the Co\({}_{2}\)MnSb/HfIrSb junction among the highly promising candidates for nanoscale spintronic devices, emphasizing the potential significance of the system in the advancement of the field.
## I Introduction
In recent years, significant advancements have been made in controlling spin-dependent tunneling between two ferromagnetic electrodes separated by an insulating barrier. These developments have a profound impact on various magnetic data storage technologies, particularly due to the observation of exceptionally high tunneling magnetoresistance (TMR) values [1; 2; 3]. Initially, the ability to achieve substantially high TMR in magnetic tunneling junctions (MTJs) was limited by the use of amorphous tunnel barriers. However, the landscape has since evolved significantly, thanks to the theoretical predictions [4; 5; 6; 7] and subsequent experimental realization of epitaxial MTJs [8; 9].
The fabrication of epitaxial Co-based MTJs, which exploit the coherent electronic tunneling phenomenon to produce large TMR, was a major breakthrough in the field of MTJs. TMR values of up to 220% at room temperature and 300% at low temperatures for a CoFeB/MgO/CoFeB based MTJ were reported by Parkin and his co-workers in 2004. [10] To date, the highest TMR ratio in Heusler-alloy-based MTJs has been observed for the Co\({}_{2}\)MnSi/MgO/Co\({}_{2}\)MnSi junction, which produced a TMR ratio of 1995% at 4 K [11], reaching up to 2610% with a Mn-rich and highly Fe-doped electrode. [12] However, from ab-initio theory based calculations, TMR ratios of about \(10^{5}-10^{8}\) have been reported for MTJs with half-metallic electrodes. [13; 14; 15; 16; 17; 18; 19]
In this context, the half-metallic (HM) materials, exhibiting metallic behavior for one of the spin-up and spin-down channels and semiconducting behavior for the other, have long been expected to work well as a spin filter or spin-injecting source capable of generating highly spin-polarized currents. [20; 21] Among the various HM materials that have been explored, HM Heusler alloys are regarded as some of the most promising electrode materials, due to their low Gilbert damping factor, high Curie temperature and reasonably good lattice matching with the traditionally used semiconductor substrates, \(e.g.\) MgO, GaAs etc. [5; 6; 7] Since the prediction of the first HM Heusler alloy by de Groot \(et.al\)[22], many HM Heusler alloy materials have been proposed from first-principles calculations [23; 24; 25; 26] and many of these materials were discovered experimentally as well. [27; 28; 29] This gives us a wide range of materials to choose from, whose suitability depends on their electronic and geometric properties.
These HM materials, being close to 100% spin-polarized at the Fermi level (E\({}_{F}\)), provide an enormous advantage over other ferromagnetic electrode materials, leading to their wide application in spintronic devices. However, interestingly, most of the half metals lose their unique character (\(i.e.\sim\)100% spin polarization at E\({}_{F}\)) when they are embedded into heterostructures constructed for the purpose of achieving a high TMR ratio or efficient spin injection into semiconductor spacer layers. [30] So far, there are quite a few studies on the electronic properties of heterojunction interfaces based on first-principles calculations. [5; 6; 7; 31] It is known that HM properties almost always get affected and the spin-polarized character is often completely lost at interfaces. However, there are a few theoretical exceptions. For example,
NiMnSb/CdS, zinc-blende CrAs/GaAs and all-Heusler interfaces such as Co\({}_{2}\)MnSi/Fe\({}_{2}\)TiSi, Co\({}_{2}\)MnSb/TiCoSb and CoFeTiSi/Fe\({}_{2}\)TiSi [31; 32; 33; 34; 35; 36].
## II Methodology
To perform electronic structure calculations, we utilize the density functional theory (DFT) based Vienna Ab-initio Simulation Package (VASP) [46], using the projector augmented wave method implemented in VASP. [47] We employ the Perdew-Burke-Ernzerhof [48] generalized gradient approximation (GGA) for the exchange-correlation (XC) functional. An energy cut-off of 500 eV has been used to expand the plane waves. The Brillouin zone was sampled using the Monkhorst-Pack scheme [49] with a mesh of 17\(\times\)17\(\times\)1 k-points. A convergence criterion for energy in the self-consistent-field cycle of 10\({}^{-6}\) eV is adopted. To optimize the geometry of the heterojunction systems, we fix the in-plane lattice constant of the electrode material. The total force tolerance on each atom is set to be below 0.02 eV/A. For calculating the density of states, we use the tetrahedron method of integration scheme, implemented in the VASP package, with a mesh of 21\(\times\)21\(\times\)1 k-points.
The value of spin polarization at E\({}_{F}\) (SP) has been calculated as follows:
\[SP=\frac{n^{\uparrow}(E_{F})-n^{\downarrow}(E_{F})}{n^{\uparrow}(E_{F})+n^{ \downarrow}(E_{F})}\times 100\%\]
Here n\({}^{\uparrow}\)(\(E_{F}\)) and n\({}^{\downarrow}\)(\(E_{F}\)) correspond to the majority and minority spin density of states (DOS) at E\({}_{F}\), respectively.
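As a trivial illustration of this definition, the following snippet evaluates SP from majority and minority DOS values at E\({}_{F}\); the numerical inputs are assumed for demonstration and are not results of this work.

```python
def spin_polarization(n_up, n_dn):
    """Spin polarization at E_F (in %) from the majority/minority DOS, as defined above."""
    return 100.0 * (n_up - n_dn) / (n_up + n_dn)

# illustrative DOS values at E_F in states/eV (assumed)
print(f"SP = {spin_polarization(1.8, 0.35):.1f} %")
```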
To investigate the magnetoelectric (ME) property and to understand the effect of spin injection bias on the TMR ratio, we have incorporated a transverse electric field in our calculations. We have considered a series of external electric fields ranging from 0.01 V/A to 0.50 V/A in the direction perpendicular to the heterojunction interfaces. For the electric-field-dependent calculations, 15 A of vacuum has been added along the (001) direction, with the same in-plane lattice parameter as obtained for the interface described above. To mitigate any artificial Coulomb interaction resulting from the external electric field, a dipole correction has been incorporated.
In order to calculate the TMR ratio for each applied electric field, we make use of the model given by Julliere, which is based on the two-current model [50].
Since this model assumes that spin is conserved in tunneling in an MTJ, the TMR ratio can be calculated as follows:
\[TMR=\frac{G_{P}-G_{AP}}{G_{AP}} \tag{1}\]
where G\({}_{P}\) and G\({}_{AP}\) can be calculated from the projected density of states (PDOS) of the bottom and top magnetic contact in both parallel (P) and anti-parallel (AP) configurations as given by,
\[G_{P}=\frac{e^{2}}{h}(n_{bottom}^{\uparrow}n_{top}^{\uparrow}+n_{bottom}^{ \downarrow}n_{top}^{\downarrow}) \tag{2}\]
\[G_{AP}=\frac{e^{2}}{h}(n_{bottom}^{\uparrow}n_{top}^{\downarrow}+n_{bottom}^{ \downarrow}n_{top}^{\uparrow}) \tag{3}\]
where \(n_{bottom}^{\uparrow/\downarrow}\) and \(n_{top}^{\uparrow/\downarrow}\) represent the PDOS values of the majority/minority spin carriers for the bottom and top magnetic contacts at E\({}_{F}\), respectively, and \(e\) and \(h\) are the electronic charge and Planck's constant, respectively.
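A minimal sketch of Eqs. (1)-(3) is given below; the \(e^{2}/h\) prefactor cancels in the ratio, and the interface PDOS values used here are illustrative assumptions, not results of this work.

```python
def julliere_tmr(n_up_bottom, n_dn_bottom, n_up_top, n_dn_top):
    """Optimistic TMR ratio from Julliere's two-current model, Eqs. (1)-(3)."""
    g_p  = n_up_bottom * n_up_top + n_dn_bottom * n_dn_top   # parallel configuration
    g_ap = n_up_bottom * n_dn_top + n_dn_bottom * n_up_top   # anti-parallel configuration
    return (g_p - g_ap) / g_ap

# illustrative interface PDOS values at E_F (assumed)
print(f"TMR = {100 * julliere_tmr(1.6, 0.3, 1.6, 0.3):.0f} %")
```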
Furthermore, the first-principles calculations of ballistic conductance have been carried out using the PWCOND code [51] as implemented in the Quantum ESPRESSO (QE) package [52]. After obtaining the converged geometry using VASP, we calculate the spin-dependent transmission, \(T^{\sigma}(k_{||},E)\), using the method proposed by Choi and Ihm [53], also using GGA exchange-correlation functionals [48]. The spin-dependent tunneling conductance is obtained by using the Landauer-Buttiker formula [51]:
\[G^{\sigma}=\frac{e^{2}}{h}\sum_{k_{||}}T^{\sigma}(k_{||},E_{F})\]
where, \(\sigma(=\uparrow,\downarrow)\), is the spin index and \(T^{\sigma}(k_{||},E_{F})\) is the spin-dependent transmission coefficient at the energy \(E_{F}\), with \(k_{||}=(k_{x},k_{y})\). We set the wave function and charge density cut-off energy to 60 and 600 Ry, respectively, and use a 10\(\times\)10\(\times\)1 k-point mesh for the heterojunction calculations. All calculations are converged to an accuracy of 10\({}^{-8}\) Ry. We resolve the transmission with a large k-grid in the \(x\) and \(y\) directions (100 \(\times\) 100) to accurately capture fine spikes in transmission. To reduce the 2D plane wave basis set, we use an energy window of 45 Ry. More information about the method for calculating ballistic conductance can be found in Ref.[53].
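The following sketch illustrates the Landauer-Buttiker sum over the transverse \(k_{||}\) grid; the Gaussian, \(\Gamma\)-centred transmission map is an assumed stand-in for the actual PWCOND output, and the per-k-point normalisation is a convention chosen for this sketch.

```python
import numpy as np

def landauer_conductance(T_k, e2_over_h=1.0):
    """Spin-resolved conductance G = (e^2/h) * sum_{k||} T(k||, E_F), given here per k-point
    of the grid (i.e. averaged over the k|| mesh)."""
    return e2_over_h * np.mean(T_k)

# illustrative transmission map on a 100 x 100 k|| grid (assumed Gaussian peak at Gamma)
kx, ky = np.meshgrid(np.linspace(-0.5, 0.5, 100), np.linspace(-0.5, 0.5, 100))
T_majority = np.exp(-(kx ** 2 + ky ** 2) / 0.01)     # coherent, Gamma-centred tunneling
print("G_majority (e^2/h per k-point):", landauer_conductance(T_majority))
```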
The Crystal Orbital Hamilton Population (COHP) analysis has been carried out using the LOBSTER package [54; 55]. The pbeVaspFit2015 basis with the following basis functions: Co: 3d, 4s, and 3p; Mn: 4s, 3d and 4p; Sb: 5s, 4d and 5p; Hf: 5d, 6s and 5p; Ir: 5d, 6s and 4f, has been used for the orbital projection of plane waves. The wavefunctions are obtained from the DFT calculations.
## III Results \(\&\) Discussion
### Electronic and Magnetic properties of Bulk and Surfaces (001) of Co\({}_{2}\)MnSb Alloy
The HM nature of the bulk is confirmed through comprehensive electronic structure calculations, as presented in Table S1 and Fig. S1.[56] Analysis reveals the presence of \(\Delta_{1}\) and \(\Delta_{5}\) symmetric bands within the majority spin channel along the transport direction (from \(\Gamma\) to X shown in Fig. S1[56]). These bands play a crucial role in facilitating efficient spin-dependent symmetry filtering transport. Supplementary Information [56] gives details of
the electronic properties of the bulk electrode.
We further investigate the impact of bi-axial strain on the electronic and magnetic properties of the bulk electrode. Tensile strain preserves the half-metallic (HM) behavior, while compressive strain destroys it (Table S2, Fig. S2[56]). The band dispersion and orbital character along the transport direction remain largely unaffected by strain (Fig. S3, S4, S5[56]). However, under compressive strain, we observe the appearance of \(\Delta_{1}\) symmetric bands in the minority spin states (Fig. S3[56]), which are likely to influence the transmission behavior and lead to a reduction of the TMR ratio.
To investigate the interfacial properties of the ferromagnetic/semiconductor heterostructure, we initially examined the free-standing surface slabs of Co\({}_{2}\)MnSb (001). Three different atomic terminations were considered: Co-Co, Mn-Sb, and Mn-excess Mn-Mn, each consisting of 17 diatomic layers with a 15 A vacuum along the z direction to prevent any interaction between the periodically repeated slabs along the (001) direction. The energetic, electronic, and magnetic properties of these surfaces are presented in Table S3.[56] Analysis of the projected density of states confirmed the preservation of the HM property in the case of the Mn-Sb and Mn-Mn terminated surfaces (Fig S8[56]). As Heusler alloys with high surface SP are desirable for spintronic devices, we selected these two surface terminations for further investigations on the heterostructures with semiconductors.
### Bulk properties of HfIrSb Alloy
In the MgAgAs-type crystal structure of HfIrSb, there are four interpenetrating FCC sublattices: a rock-salt structure formed by a lattice of Hf atoms and a lattice of Sb atoms, and a lattice of Ir atoms occupying the center of every other Hf4Sb4 cube. The remaining Hf4Sb4 cubes have vacant centers. The calculated lattice constant and band gap of HfIrSb (Table S1[56]) agree well with other calculations performed with the GGA XC term.[57, 58] The lattice constant of HfIrSb is about 5% larger than that of Co\({}_{2}\)MnSb. This difference has implications for the local bonding between atoms, as well as for the resulting electronic and transport properties, which will be discussed later.
Fig. 1 depicts the atom projected band structure along the high symmetry path in the 1st Brillouin zone, along with the total DOS for HfIrSb. The compound has a direct bandgap of approximately 0.89 eV at the \(\Gamma\) point, whereas the experimental band gap is found to be 1.3 eV.[57] The atom-projected band structure suggests that the top of the valence band is mainly composed of contributions from Hf and Ir atoms, whereas the conduction band is predominantly due to Sb atoms. Since Hf and Ir have \(d\) electrons in the valence, the valence band is expected to be mostly \(d\) electron derived.
Further, it is crucial to understand the symmetries of the conduction band minimum (CBM) and valence band maximum (VBM) of the spacer material, because we expect that the contribution of electrons to transport also differs for the different symmetries. As can be seen from the orbital projected band-structure in Fig. 1(d) to (f), there are two valence bands along the \(\Gamma\) to X direction that touch the E\({}_{F}\). One has predominantly \(\Delta_{1}\) and \(\Delta_{2}\) orbital symmetry (Fig. 1(d), (e)) whereas the other shows predominant \(\Delta_{5}\) orbital character (Fig. 1 (f)). However, due to the presence of \(sp\) electrons in the valence shell of Sb atoms, the CBM exhibits dominant \(\Delta_{1}\) symmetry. This suggests both the \(\Delta_{1}\) and \(\Delta_{2}\) states are likely to contribute to tunneling. We extend our investigation to examine the impact of spin-orbit coupling (SOC) interaction on the electronic band structure of HfIrSb. In Figure S6[56], the band structure of HfIrSb with SOC is presented. The introduction of SOC breaks the degeneracy of the valence bands at the \(\Gamma\) point, leading to a reduction of the band gap to 0.67 eV. Furthermore, we observe the splitting of valence bands as we move away from \(\Gamma\). Notably, the conduction bands near the \(\Gamma\) point exhibit a Zeeman-like spin splitting. Analyzing the orbital-projected bands (Figure S6 (b) - (d)[56]) incorporating SOC, we discern that while the CBM is predominantly of \(\Delta_{1}\) orbital character, the valence band maximum (VBM) is primarily characterized by the \(\Delta_{5}\) orbital.
The periodicity of the bulk crystal requires that the Bloch k-vectors are real, but in metal-semiconductor (or metal-insulator) interfaces, the metal-induced gap states (MIGS) play a crucial role. These states are itinerant in the metallic electrodes and exponentially decaying in the insulator. Their solutions with complex-k vectors result in complex band structures for the insulators.[59] In this study, we compare the complex band structures of HfIrSb along the (001) direction, which is the propagation direction of electrons in the heterojunctions.
To investigate the tunneling behavior in HfIrSb, we perform band structure calculations of the spacer material. Both real and complex K-values at k\({}_{||}=\Gamma\) were considered, as depicted in Fig. 1(c). Understanding the decay rate in the barrier layer, where the complex band energies intersect with the E\({}_{F}\), represented by Im(k), is crucial for comprehending how tunneling electrons approach the barrier layer perpendicular to the surface. A lower decay rate suggests that the electrons travel a shorter effective distance before encountering the barrier layer[59]. For HfIrSb, we observe that the complex bands intersected at the E\({}_{F}\) at a specific k-point in the complex region, resulting in a lower decay rate of Im(k) = 0.11\(\frac{2\pi}{a}\) at the \(\Gamma\) point (Fig. 1(c)), in contrast to the case of MgO (0.21\(\frac{2\pi}{a}\)).[31] In our previous study on a MTJ with TiCoSb as spacer layer, which is an indirect band gap semiconductor, we have found decay rates of 0.25\(\frac{2\pi}{a}\) at the \(\Gamma\) point and 0.14\(\frac{2\pi}{a}\) at the X point.[31] Therefore, we anticipate a larger \(\Gamma\)-centric tunneling in the case of the HfIrSb spacer layer compared to TiCoSb and MgO[34].
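To make the role of the decay rate concrete, the sketch below gives a crude single-band estimate of the evanescent attenuation \(\exp(-2\,{\rm Im}(k)\,d)\) across a few spacer unit cells for the quoted \(\Gamma\)-point decay rates; this is not the PWCOND transmission, and the three-cell thickness is an assumption for illustration.

```python
import numpy as np

def barrier_attenuation(im_k_2pi_over_a, n_cells):
    """Rough attenuation exp(-2*Im(k)*d) across n_cells unit cells of the spacer.

    With Im(k) given in units of 2*pi/a and d = n_cells*a, the lattice constant a cancels.
    """
    return np.exp(-2.0 * im_k_2pi_over_a * 2.0 * np.pi * n_cells)

for label, rate in [("HfIrSb (Gamma)", 0.11), ("MgO (Gamma)", 0.21)]:
    print(f"{label}: attenuation over 3 cells ~ {barrier_attenuation(rate, 3):.1e}")
```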
### Co\({}_{2}\)MnSb/HfIrSb/Co\({}_{2}\)MnSb Heterojunction
#### iii.3.1 Electronic and Magnetic properties of the Heterojunction
**Geometric Structure and Stability:** In this section, we examine the electronic and magnetic properties of Co\({}_{2}\)MnSb/HfIrSb/Co\({}_{2}\)MnSb MTJ having different interfaces. Two atomic terminations for HfIrSb along (001) crystal orientation were considered: Ir termination and Hf-Sb termination. The original structures along (001) crystal orientation included Mn-Sb/Ir, Mn-Mn/Ir and Mn-Sb/Hf-Sb, Mn-Mn/Hf-Sb interfacial terminations, which can be divided into two groups: one where interfacial atoms sit on top of Mn or Sb atoms (Top), and the other where interfacial atoms are located in the bridge site between Mn and Sb atoms (Hollow) (shown in Fig. S7[56]). Table 1 shows the surface free energy for all the various interfaces considered for the present study. For both terminations, the interface with Ir atoms sitting on the hollow side was found to be energetically favorable. Our DFT calculations were performed by fixing the in-plane lattice constant to the electrode lattice constant, and allowing the hetero-junctions to relax in the z-direction. We prepare the supercell of multilayer containing 17 atomic layers (ML) of Co\({}_{2}\)MnSb and 13 ML of HfIrSb for the Mn-Sb and Mn-Mn terminated interfaces, respectively. The selection of the semiconductor layer thickness has been made judiciously, taking into account the smaller band-gap of HfIrSb in comparison to widely used semiconductors like MgO. This choice enables it to serve as an effective barrier layer while providing a reasonably high conductance. We observe that the off-stoichiometric interface Mn-Mn/Ir had slightly lower surface free energy than the stoichiometric Mn-Sb/Ir interface (Table1), and the bond distances at the interface are found to be similar to the Mn-Ir (2.69 A) and Ir-Sb (2.70 A) bond lengths in the bulk IrMnSb[60], as shown in Table 1.
**Charge Density Difference (CDD):** In order to understand the chemical bonding at the interface, it is crucial to have a comprehensive understanding of the charge transfer in the system. The transfer of charge (\(\Delta\rho\)) at the interface can be visualized in three dimensions, as well as two dimensions, as shown in Fig. 2. The calculation of \(\Delta\rho\) is carried out by subtracting the spatial charge densities of the electrode and the spacer layers from that of the whole heterostructure, represented by \(\rho^{Electrode}\), and \(\rho^{Spacer}\), and \(\rho^{MTJ}\), respectively.
\(\Delta\rho=\rho^{MTJ}-\rho^{Electrode}-\rho^{Spacer}\)
The value of \(\Delta\rho\) is positive in the yellow-colored regions and negative in the blue-colored regions. A positive value indicates an accumulation of electronic charge, while a negative value signifies depletion of electronic charge. In Fig. 2(a) and (b), it is observed that charge is mostly transferred from the interface Mn and Sb atoms and is accumulated around the interfacial region between Mn-Sb and Ir planes. In Figure S10[56], we present the charge density difference (\(\Delta\rho\)) at two distinct interfaces (Mn-Sb/Ir and Mn-Mn/Ir), taking into account the influence of spin-orbit coupling (SOC). Remarkably, we find that the charge distribution at the interfaces remains largely unchanged with the inclusion of SOC. However, a notable disparity is observed at the Mn-Mn/Ir interface, revealing an accumulation of charges around the adjacent Hf atom in the subsequent Hf-Sb plane, a phenomenon absent in the absence of SOC.
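A minimal sketch of how such a charge density difference can be evaluated on the real-space grid is given below; the grid shape is an assumption, and the random arrays stand in for densities parsed from the actual CHGCAR files of the heterojunction, electrode and spacer calculations.

```python
import numpy as np

def planar_averaged_cdd(rho_mtj, rho_electrode, rho_spacer, axis=2):
    """Planar-averaged charge density difference along the stacking direction.

    The three densities are real-space arrays on the same FFT grid; averaging over
    the in-plane axes leaves Delta rho(z) = rho_MTJ - rho_electrode - rho_spacer.
    """
    delta = rho_mtj - rho_electrode - rho_spacer
    in_plane = tuple(i for i in range(delta.ndim) if i != axis)
    return delta.mean(axis=in_plane)

# illustrative random grids (stand-ins for parsed charge densities)
shape = (48, 48, 240)
rho_mtj, rho_el, rho_sp = (np.random.rand(*shape) for _ in range(3))
print(planar_averaged_cdd(rho_mtj, rho_el, rho_sp).shape)   # -> (240,)
```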
**Magnetic Properties :** In Fig. 2(c), the magnetic moments of the interface Mn atoms are shown across the junction for both interfaces. It is observed that the magnetic moments of the interface Mn atoms exhibit a sudden jump. Moreover, for the Mn-Mn/Ir interface, the magnetic moments of the two interface Mn atoms (Mn\({}_{1}\) and Mn\({}_{2}\)) are different. The difference magnetization density plots in Fig. 2 (d) and (e) indicate that the change in magnetization is mainly localized at the interface Mn atom, while the interface Ir atom acquires a small magnetic moment (-0.1 \(\mu_{B}\)). Additionally, the difference magnetization density supports the inequivalence of the magnetization density of the Mn\({}_{1}\) and Mn\({}_{2}\) atoms for the Mn-Mn/Ir interface.
It should be emphasized that the difference magnetization density around the interface Mn atoms arises primarily due to the localized d\({}_{yz,xz}\) orbitals. Furthermore, the small magnetic moment of the Ir atom at the interface, induced by the proximity effect of the magnetic layer, arises mainly from the out-of-plane d\({}_{z^{2}}\) orbitals, as shown in Fig. 2 (d), (e).
Figure 1: The electronic properties of the HfIrSb spacer material are presented. Panel (a) shows the atom-projected band structure, while panel (b) shows DOS. Panel (c) depicts the complex band structure along the \(\Gamma\) to X direction. Finally, panels (d)-(f) display the orbital-projected band structures. Here \(\Delta_{1}\), \(\Delta_{2}\), \(\Delta_{5}\) represent the d\({}_{z^{2}}\), (d\({}_{x^{2}-y^{2}},d_{xy}\)) and (d\({}_{xz},d_{yz}\)) orbital characters, respectively.
**Bader Charge and COHP Bond Analysis:** The net charge transfer at the Co\({}_{2}\)MnSb/HfIrSb interfaces, as calculated from Bader charge analysis [61], is shown in Fig. S9 [56]. For both interfaces we observe that the Ir atom at the interface loses a significant amount of charge (-0.27e) compared to the interface Mn (-0.14e) and Sb (-0.07e) atoms, which gets accumulated in the interface region. A similar trend can also be observed from our Mulliken and Loewdin charge analysis (Table S4 [56]) as obtained from the LOBSTER package [55]. Additionally, our orbital-decomposed charge density analysis, facilitated by the LOBSTER package [55], highlights that out-of-plane orbitals (\(s\) and \(d_{z^{2}}\) for Ir and \(p_{z}\) for Sb) witness the most significant charge losses, whereas for Mn, it is the in-plane (\(d_{x^{2}-y^{2}},\ d_{xz}\)) orbital that experiences substantial charge loss. These findings corroborate well with our charge density and magnetization density difference analysis presented in Fig. 2.
To gain deeper insights into the interface bonding, we conducted the Crystal Orbital Hamilton Population (COHP) analysis between Mn-Ir and Sb-Ir atom pairs, as depicted in Fig. 3. Additionally, in Fig. 3, we present integrated COHP values (ICOHP) between these atom pairs, serving as an indicator of bond strength by integrating the COHP values up to the Fermi energy (E\({}_{F}\)). For the Mn-Sb/Ir interface (Fig. 3 (a)), the COHP analysis between the Mn-Ir atoms reveals the presence of anti-bonding states near E\({}_{F}\) in the majority spin channel, which lowers the bonding interaction. However, the situation is different for the minority-spin channel, where the whole valence band (VB) has bonding interaction. Similar observations have also been made in previous bonding analyses involving 3\(d\) transition metal atoms [62; 63; 64]. Because of the exchange hole, the majority spin orbitals are more spatially contracted than the minority spin orbitals and contribute less to the bonding interactions [62]. The COHP bonding analysis between the Ir-Sb pair mostly revealed bonding interaction in the valence band for both spin channels, apart from the presence of a small anti-bonding interaction in the minority spin channel in the VB away from E\({}_{F}\), which is compensated by the strong bonding character deeper in energy. The ICOHP values in Fig. 3 indicate that the Sb-Ir (ICOHP: -2.35 eV) bonding is significantly stronger than the Mn-Ir (ICOHP: -0.79 eV) bonding at the interface.
In the case of the Mn-Mn/Ir interface, the COHP analysis uncovers anti-bonding states in the majority spin channel near E\({}_{F}\) for both the Mn\({}_{1}\)-Ir and Mn\({}_{2}\)-Ir bonds, contributing to even weaker bonding compared to the Mn-Sb/Ir interface. In contrast, the minority spin channel contributes exclusively to the bonding interaction below E\({}_{F}\) (Fig. 3(b)).
The weaker covalent bonding at the Mn-Mn/Ir interface compared to the Mn-Sb/Ir interface, as indicated by our COHP bonding analysis, is further supported by the Bader charge analysis, which suggests relatively less charge transfer at the Mn-Mn/Ir interface compared to the Mn-Sb/Ir interface.
**Electronic Density of States \(\&\) Band Structure:** To investigate the influence of interfacial interactions on the electronic behavior, we analyze the projected DOS of the interface atoms, as depicted in Fig. 4. The atom-projected DOS of the interfacial Mn atoms for both the Mn-Sb/Ir and Mn-Mn/Ir interfaces are shown in Fig. 4 (a) and (d), respectively. Our analysis reveals that the half-metallic character of the bulk is disrupted for both interfaces. However, the Mn-Sb/Ir interface still exhibits a high degree of spin polarization (approximately 68%). The DOS of the two inequivalent interface Mn atoms for the Mn-Mn/Ir interface differ significantly from each other. Specifically, the Mn\({}_{1}\) atom displays a high majority spin density at around -1 eV, while the Mn\({}_{2}\) atom exhibits a density of states shifted towards the higher binding energy side (approximately -1.8 eV). This results in a higher exchange splitting energy and thus a larger magnetic moment for the Mn\({}_{2}\) atom.
We show the various \(d\)-orbital contributions of the Mn atom for the Mn-Sb/Ir and Mn-Mn/Ir interfaces in Fig. 4(c) and (f), respectively. Here, d\({}_{1}\), d\({}_{2}\), d\({}_{3}\) correspond to the d\({}_{z^{2}}\), (d\({}_{xy}\), d\({}_{x^{2}-y^{2}}\)), and (d\({}_{yz}\), d\({}_{xz}\)) orbitals of the interface Mn atoms, respectively. Though the formation of the heterojunction breaks the periodicity along the z-direction, the d\({}_{1}\) orbital retains the half-metallic character of the bulk. However, the d\({}_{2}\) and d\({}_{3}\) orbitals are primarily responsible for the destruction of the half-metallicity at the interface. There is a significant difference between the majority spin DOS at E\({}_{F}\) for the two interfaces. For the Mn-Sb/Ir interface, there is a dominant contribution from the d\({}_{3}\) orbitals (Fig. 4(c)), while for the Mn-Mn/Ir interface, both the d\({}_{2}\) and d\({}_{3}\) orbitals have a significant contribution. Since the orbital characters of the traveling electrons play a crucial role in transmission, one would expect differences in the spin-transmission behavior between these two interfaces, which are discussed in the subsequent section.
The Ir atom at the interface becomes metallic for both interfaces, while the semiconducting behavior persists in the bulk region (Fig. 4 (b), (e)). Consequently, the interface Ir atoms acquire a small magnetic moment (approximately -0.1 \(\mu_{B}\)) for both interfaces.
Figure S11 [56] shows a comparison of the total DOS of the interface atoms for each interface of the Co\({}_{2}\)MnSb/HfIrSb heterojunction, calculated with and without the effect of SOC. As depicted in Figure S11 [56], the DOS around E\({}_{F}\) is hardly affected by SOC. Therefore, we do not include the effect of SOC in the remainder of our discussion.
To achieve a high TMR ratio, it is essential to minimize the current passing through the barrier when the magnetization of the electrodes is anti-parallel. When perfect half-metallic electrodes are used, spin flipping and tunneling to or from an interfacial state can produce current in an anti-parallel configuration. The tunneling probability of carriers in various bands can vary significantly depending on their band symmetry, as demonstrated in the literature.[5; 7] Electrons in states with \(\Delta_{1}\) orbital symmetry exhibit weak decay within the barrier material, whereas the transmission of electrons in other symmetry states is exponentially suppressed. Our prior research[31] illustrated the presence of \(\Delta_{1}\) symmetric bands in the majority spin channel for Co\({}_{2}\)MnSb. This suggests that it is easy for majority spin electrons to tunnel through the barrier in a parallel spin configuration, which is a prerequisite for achieving a high TMR.
Nonetheless, it is equally important to reduce the tunneling rate into minority interface states to suppress the current for anti-parallel magnetization. We present the band structure of minority spin states for the heterojunction with Mn-Sb/Ir and Mn-Mn/Ir interfaces in Fig. 5. In Fig. 5(b) and (e), we demonstrate the contribution of interface atoms (i.e., Mn, Sb, and Ir) to the minority spin bands. The orbital projected band structures of the interface atoms in Fig. 5(c) imply that the minority spin conduction bands for the Mn-Sb/Ir interface have a dominant \(\Delta_{1}\) character, suggesting a larger transmission for the minority states. Conversely, for the Mn-Mn/Ir interface (Fig. 5(f)), mostly in-plane d orbitals dominate the minority spin states near E\({}_{F}\), leading to poor coupling with the \(\Delta_{1}\) type bands of the HfIrSb spacer material.
#### iii.3.2 TMR Ratio and the Effect of Electric Field
Next, we aim to investigate the influence of an external electric field on the TMR ratio of the heterojunction. Our objective is to explore the potential for achieving electrical control of magnetic tunnel junctions (MTJs). The TMR ratio has been calculated according to Eq. (1),
\begin{table}
\begin{tabular}{c c c c c c c} Surface termination & Atomic position & Interfaced & Surface free energy (in eV/Å\({}^{2}\)) & Bond Type & Bond-length (in Å) & Interface SP (in \%) \\ \hline Mn-Sb & Top & MnSb—Ir & -6.8106 & & & \\ & & MnSb—HfSb & -6.8122 & & & \\ & Hollow & **MnSb—Ir** & -6.9206 & Mn-Ir, Sb-Ir & 2.69, 2.70 & 68 \\ & & MnSb—HfSb & -6.8482 & & & \\ Mn-Mn & Top & MnMn—Ir & -6.9141 & & & \\ & & MnMn—HfSb & -6.8312 & & & \\ & Hollow & **MnMn—Ir** & -6.9825 & Mn1-Ir, Mn2-Ir & 2.60, 2.57 & 48 \\ & & MnMn—HfSb & -6.7456 & & & \\ \end{tabular}
\end{table}
Table 1: The calculated interface free energies of various optimized Co\({}_{2}\)MnSb/HfIrSb interface heterostructures are presented, along with the corresponding bond lengths between interfacial atoms and the observed spin polarization (SP) at the interface of the heterostructure showing the lowest surface free energy for each termination.
Figure 2: The charge density difference (\(\Delta\rho\)) of the heterostructure is plotted in two forms: a 3D visualization and a 2D visualization in the yz plane. Panels (a) and (b) show the \(\Delta\rho\) at the Mn-Sb/Ir interface and the Mn-Mn/Ir interface, respectively, where blue and yellow colors in the 3D visualization indicate negative and positive \(\Delta\rho\), respectively. The isosurface value is set to 0.0005 e/Å\({}^{3}\) for both cases. Panel (c) presents the magnetic moment of the Mn atoms across the Co\({}_{2}\)MnSb/HfIrSb heterojunction for both the Mn-Sb/Ir and Mn-Mn/Ir interfaces. Panels (d) and (e) depict the magnetization density difference projected onto the 2D yz plane for the Mn-Sb/Ir and Mn-Mn/Ir interfaces, respectively.
based on the standard model given by Julliere [50]. As depicted in Fig. 6, we observe that the TMR values for both the Mn-Sb/Ir and Mn-Mn/Ir interfaces exhibit minimal sensitivity to the external electric field. Significant TMR values are observed for both interfaces. However, we do not observe any magnetoelectric coupling, as has been reported in some other MTJs.[65, 66, 67] Additionally, the application of an external electric field does not significantly affect the magnetic moments at the interfaces. Instead, we observe a proximity effect where the interface Ir atoms acquire small magnetic moments (approximately 0.10 \(\mu_{B}\)), and the direction of these moments depends on the magnetization direction of the adjacent Mn atoms. In Figure S12 [56] we have further shown the charge density difference (\(\Delta\rho\)) of the Co\({}_{2}\)MnSb/HfIrSb heterojunction, featuring the Mn-Sb/Ir interface, in the presence and absence of an external electric field. This clearly shows that the charge distribution at the interface remains unaffected under the application of an external electric field. We only observe differences in charge at the layers that are exposed to the vacuum. The absence of electric field control over the magnetic properties in the all-Heusler alloy junctions can be attributed to the weaker covalent bonding between the interface atoms, as suggested by the negligible interface buckling observed. This weaker bonding stands in contrast to other Heusler alloy and oxide-based magnetic tunneling junctions, which exhibit greater electric field control.[67]
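For orientation, a minimal sketch of the Julliere estimate is given below. It assumes the standard two-current expression TMR \(=2P_{1}P_{2}/(1-P_{1}P_{2})\) and uses the interface spin polarizations of Table 1 purely as illustrative inputs; the TMR values discussed in the text are obtained from the full transport calculations rather than from this estimate alone.

```python
# Minimal sketch: Julliere two-current estimate of the TMR ratio.
# The spin polarizations below are the interface values from Table 1 (68%, 48%)
# and are used purely for illustration.

def julliere_tmr(p1, p2):
    """Julliere estimate: TMR = (R_AP - R_P)/R_P = 2*P1*P2 / (1 - P1*P2)."""
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

for label, sp in [("Mn-Sb/Ir", 0.68), ("Mn-Mn/Ir", 0.48)]:
    # identical electrodes on both sides of the barrier, so P1 = P2 = sp
    print(f"{label}: TMR ~ {100.0 * julliere_tmr(sp, sp):.0f} %")
```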
#### iii.3.3 Spin-Transport Properties
Our focus now turns to examining the spin-resolved transport properties of the heterojunction. We present
Figure 4: The local density of states (LDOS) for the interfacial Mn, Sb, and Ir atoms are shown in panels (a), (b), and (d), (e), respectively, for the heterojunctions with Mn-Sb/Ir and Mn-Mn/Ir interfaces. The LDOS in the bulk region is also presented as a reference using filled curves in each figure. Panels (c) and (f) display the orbital projected DOS of the interfacial Mn atom for the heterojunctions with Mn-Sb/Ir and Mn-Mn/Ir interfaces, respectively, with d\({}_{1}\), d\({}_{2}\), d\({}_{3}\) denoting d\({}_{z^{2}}\), (d\({}_{xy}\), d\({}_{x^{2}-y^{2}}\)), and (d\({}_{yz}\), d\({}_{xz}\)), respectively.
Figure 5: (a) Minority spin band structure for the considered heterojunctions with the Mn-Sb/Ir (Mn-Mn/Ir) interface on the top (bottom) panel along the X-\(\Gamma\)-M high symmetry directions of the 2D Brillouin zone; (b), (c) represent the atomic and orbital contributions of the interfacial atoms to the minority spin band structures, respectively. Here \(\Delta_{1}\), \(\Delta_{2}\) and \(\Delta_{5}\) represent the (s, p\({}_{z}\), d\({}_{z^{2}}\)), (d\({}_{xy}\), d\({}_{x^{2}-y^{2}}\)), and (p\({}_{x}\), p\({}_{y}\), d\({}_{yz}\), d\({}_{xz}\)) orbitals, respectively.
Figure 3: COHP analysis of the bonds between the interface atoms for the Co\({}_{2}\)MnSb/HfIrSb heterojunction: (a) Mn-Sb/Ir; (b) Mn-Mn/Ir interface, respectively. E\({}_{F}\) has been set to 0 eV. Positive (negative) values on the x-axis indicate a bonding (anti-bonding) feature. The sign \(\uparrow\)(\(\downarrow\)) indicates the majority (minority) spin contribution to the bonding.
the results for the parallel (P) and anti-parallel (AP) spin alignments in Fig. 7. Fig. 7(a) shows the majority spin transmittance (in log scale) as a function of spacer layer thickness for both interfaces. The exponential decay of the transmittance with spacer layer thickness confirms the tunneling behavior of the heterojunction[34; 68]. Figures 7(b) and (c) illustrate the energy-dependent spin transmission for the heterojunction with Mn-Sb/Ir and Mn-Mn/Ir interfaces, in the P and AP spin configurations of the electrodes, respectively. Upon examining the majority spin transmission for both heterojunctions, two distinct features emerge. Firstly, in the energy range from \(\approx\) -0.61 eV to 0.15 eV, the transmission decays, suggesting the tunneling of electrons with energies lower than the barrier. Secondly, a sudden drop in transmission occurs around 1 eV, since the \(\Delta_{1}\) band of the Co\({}_{2}\)MnSb electrode, which is the primary contributor to electron tunneling, extends up to just below 1 eV (see Fig. S1(d)[56]). Beyond this energy, it is mostly the bands with \(\Delta_{5}\) symmetry that contribute to the tunneling. For both spin channels and magnetic configurations in Fig. 7(b), (c), E\({}_{F}\) of the Co\({}_{2}\)MnSb/HfIrSb/Co\({}_{2}\)MnSb junction lies close to the valence band edge of the HfIrSb bandgap. Due to the HM nature of the electrode, we do not observe any transmission at E\({}_{F}\) for the minority spin channel, nor for either spin channel in the AP configuration of the electrodes. For the AP configuration, as expected, the transmission coefficients are nearly (not exactly, because the inversion symmetry is broken) spin-degenerate. The transmission behavior that we have described indicates a significant level of spin-filtering in the junction.
In Fig. 8, we present the k-dependent majority spin transmission in the 2D Brillouin zone at E\({}_{F}\) for the two heterojunctions with varying spacer layer thicknesses. Our analysis of the k-dependent transmission reveals that the transmission predominantly occurs around the lowest decay point, which is located at \(\Gamma\) for both interfaces. However, there are notable differences in the transmission profiles for the two interfaces. For the Mn-Sb/Ir interface, the transmission profile exhibits 2-fold rotational symmetry, whereas for the Mn-Mn/Ir interface, it shows a combination of 4-fold and 2-fold rotational symmetry. This distinction suggests that the majority spin carriers for these two interfaces have different orbital symmetries. We have shown the orbital projected DOS for the majority spin states at E\({}_{F}\) in Fig. 4(c) and (f), which indicates that the d\({}_{xz,yz}\) orbitals have a significant contribution for the Mn-Sb/Ir interface, while for Mn-Mn/Ir, the d\({}_{xz,yz}\) as well as the d\({}_{x^{2}-y^{2},xy}\) orbitals play an important role. This observation explains the difference in the transmission profiles for the two cases, where the heterostructure with the Mn-Sb/Ir interface exhibits majority spin transmission dominated by electrons with \(\Delta_{5}\) orbital symmetry, while for Mn-Mn/Ir, the \(\Delta_{2}\) and \(\Delta_{5}\) states dominate. Previous studies have established that electrons with \(\Delta_{2}\) orbital symmetric states decay faster than those with \(\Delta_{5}\) states inside the barrier region. This is further supported by the variation of the absolute square of the scattering wavefunction at E\({}_{F}\) as a function of heterojunction layer thickness (as depicted in Figure S13 [56]), which shows that for the Mn-Mn/Ir interface the scattering states inside the barrier decay faster than for the Mn-Sb/Ir interface. Consequently, we observe that the majority spin transmittance with the Mn-Mn/Ir interface is smaller than that with the Mn-Sb/Ir interface (Fig. 7(a)). Additionally, we notice that with increasing barrier thickness, the transmission of tunneling electrons with finite k\({}_{||}\) values is highly suppressed due to their faster decay compared to those with k\({}_{||}\) = 0, and the transmission is mostly centered around \(\Gamma\) (Fig. 8 (c) and (f)). We further observe that with increasing spacer layer thickness, the transmission due to the \(\Delta_{2}\) orbital symmetric bands at the Mn-Mn/Ir interface is suppressed, while the transmission due to the \(\Delta_{5}\) orbital symmetric bands persists (Fig. 8(e)).
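Since the transmittance decays exponentially with barrier thickness, \(T(d)\propto\exp(-2\kappa_{d}\,d)\), the decay constant can be read off from data such as Fig. 7(a). A minimal sketch of such an extraction is given below; the thickness and transmittance arrays are placeholders to be replaced by values read off the figure, not the computed ones.

```python
import numpy as np

# Placeholder data: spacer thickness (monolayers) and majority-spin
# transmittance at E_F; replace with values read off Fig. 7(a).
thickness_ml = np.array([9.0, 13.0, 17.0])
transmittance = np.array([1e-3, 1e-5, 1e-7])

# For tunneling, T(d) ~ T0 * exp(-2*kappa*d), so ln T is linear in d.
slope, intercept = np.polyfit(thickness_ml, np.log(transmittance), 1)
kappa_per_ml = -slope / 2.0
print(f"decay constant ~ {kappa_per_ml:.2f} per monolayer")
```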
#### iii.3.4 Effect of Strain Engineering on spin-transport properties
Strain engineering offers a promising approach to control the electronic and magnetic properties of a material, which is essential for improving the performance of spintronic devices. Several studies in the literature demonstrate that modifying the TMR and magneto-crystalline anisotropy of MTJs using strain engineering is feasible.[39; 69] It is also reported that strain-mediated switching mechanisms can reduce the energy barrier
Figure 6: The change in tunnel magnetoresistance (TMR) ratio of the heterojunction system as a function of an applied external electric field is depicted. The schematic representation of the heterojunction includes a top and bottom magnetic layer. The left (right) panel illustrates the ferromagnetic (anti-ferromagnetic) alignment of spins within the magnetic layer. The orientation of magnetization is denoted by black and blue arrows, while the red arrow indicates the direction of the electric field.
between parallel and antiparallel states.[41; 44] However, previous research on the effect of strain on Heusler alloy-based MTJs mainly focuses on half-metallicity, spin polarization, and magnetic moment. In this study, we also aim to investigate the impact of strain engineering on the spin-transmission properties of the heterojunction. This is especially relevant because the selected materials in the present study exhibit approximately +5% lattice mismatch, indicating that an investigation of the effect of strain on the transport properties might be crucial.
To achieve strain engineering, we chose a heterojunction with an Mn-Mn/Ir interface that exhibits lower surface free energy (Table 1). Further analysis of the electronic properties of this heterostructure at the interface reveals competing contributions from \(\Delta_{2}\) and \(\Delta_{5}\) states in the majority spin states, indicating that the interface might show sensitivity to the bi-axial strain. Therefore, we investigate the majority and minority spin transmission of the heterojunction with Mn-Mn/Ir interface over the 2D Brillouin zone under bi-axial strain ranging from -4% to 4% as shown in Fig. 9 and 10.
The bi-axial strain was applied by fixing the in-plane lattice constant and allowing the volume of the heterojunction to relax. Our results show that bi-axial strains have a significant effect on the majority spin-transmission property, with a change in the transmission profile from 4-fold rotational symmetry to 2-fold rotational symmetry as the bi-axial strain changes from compressive to tensile (Fig. 9). This implies that the \(\Delta_{2}\) bands dominate the majority spin transmission under compressive strain, while \(\Delta_{5}\) bands dominate under tensile strain. Additionally, we observe a three-fold increase in the conductance of the majority spin channel from 2.26 \(\times\) 10\({}^{-5}\) (\(\frac{e^{2}}{h}\)) to 6.94 \(\times\) 10\({}^{-5}\) (\(\frac{e^{2}}{h}\)) as the bi-axial strain changes from -4% to 4%, which is consistent with the weaker decay rate of the \(\Delta_{5}\) bands inside the barrier compared to the \(\Delta_{2}\) bands.
Under compressive bi-axial strain, the HM property of the electrode is disrupted, resulting in the observation of transmission due to minority spin states (Fig. 10). Specifically, for 4% compressive strain, transmission spots are detected around the \(\Gamma\) point and at the corners of the two-dimensional Brillouin zone (Fig. 10). However, these transmission spots diminish in size as the compressive strain decreases, and at 1% compressive strain, the minority transmission becomes completely absent. Notably, under compressive strain, a considerable amount of minority spin transmittance is observed, ranging from approximately 10\({}^{-5}\) to 10\({}^{-6}\). This can be attributed to the presence of \(\Delta_{1}\) symmetric bands in the minority spin states of Co\({}_{2}\)MnSb under compressive strain (refer to Fig. S3 in the Supplementary Information[56]). Consequently, these effects will lead to a further decrease of the TMR ratio of the MTJ under compressive strain.
Figure 8: Dependence of the majority spin transmission (values plotted in log scale) over the 2D Brillouin zone at E\({}_{F}\) in the parallel spin configuration with the Mn-Sb/Ir (Mn-Mn/Ir) interface in the top (bottom) panel. (a), (b), (c) ((d), (e), (f)) are for 9 ML, 13 ML, 17 ML of the HfIrSb layer, respectively.
Figure 7: (a) Transmittance in parallel magnetization case for Co\({}_{2}\)MnSb/HfIrSb/Co\({}_{2}\)MnSb junction with varying thicknesses of HfIrSb layers; (b), (c) Energy-dependent spin-resolved transmission coefficients for the same with 13 ML of HfIrSb layer with Mn-Sb/Ir and Mn-Mn/Ir interface, top (bottom) panel is for parallel (anti-parallel) spin configuration of the electrodes, respectively. All the graphs are plotted in log scale. In panels (b), (c) E\({}_{F}\) has been set to zero.
Figure 11: The atom and orbital projected band structure of bulk HfIrSb under strain is investigated, specifically for (a) -4% (compressive) strain and (b) +4% (tensile) strain. In this analysis, the symbols \(\Delta_{1}\), \(\Delta_{2}\), and \(\Delta_{5}\) represent the following orbital compositions, respectively: (s, p\({}_{z}\), d\({}_{z^{2}}\)), (d\({}_{xy}\), d\({}_{x^{2}-y^{2}}\)), and (p\({}_{x}\), p\({}_{y}\), d\({}_{yz}\), d\({}_{xz}\)).
Figure 10: Dependence of the minority spin transmission (values given in log scale) over the 2D Brillouin zone at E\({}_{F}\) in the parallel spin configuration with the Mn-Mn/Ir interface corresponding to an applied compressive bi-axial strain of -4% to -2%; beyond that, no transmission is observed for the minority spin states.
Figure 9: Dependence of the majority spin transmission (values plotted in log scale) over the 2D Brillouin zone at E\({}_{F}\) in the parallel spin configuration with the Mn-Mn/Ir interface corresponding to an applied compressive (-4% to -1%) and tensile (+1% to +4%) bi-axial strain. The spacer layer thickness has been chosen to be 13 ML.
In order to investigate the underlying mechanism of such orbital-sensitive majority transmission, we have conducted a detailed analysis of the electronic properties of the bulk electrode and spacer material. Our previous discussion has revealed that bi-axial strain induces changes in the spin polarization of the electrode, resulting in the loss of its HM property under compressive bi-axial strain (as shown in Fig. S2 [56]). However, we have not observed any significant changes in the orbital character of the bands over the entire range of applied strain along the transport direction (\(\Gamma\) to X (Z)), as depicted in Figures S3 and S4 [56]; only a somewhat rigid shift of the bands can be seen.
Additionally, we have demonstrated the effect of bi-axial strain on the electronic properties of the bulk spacer material HfIrSb, as illustrated in Fig. 11. Specifically, under -4% bi-axial strain, HfIrSb becomes an indirect bandgap semiconductor, with an increase in the bandgap of 1.25 eV compared to the unstrained structure, and the degeneracy of the valence bands at the \(\Gamma\) point is also lifted. Furthermore, the atom-projected band structure analysis suggests that both the VBM at \(\Gamma\) and the CBM at the M point have a dominant contribution from the Hf atoms (as shown in Fig. 11(a)), which is unlike the case of the unstrained structure (as shown in Fig. 1). These changes are also reflected in the orbital-projected band structure (as presented in Fig. 11 (a)), where we observe that the VBM and CBM have \(\Delta_{2}\) and \(\Delta_{5}\) orbital characters, respectively.
On the other hand, under +4% bi-axial strain, we observe that the CBM is mostly Sb-atom derived and the VBM has contributions from both Hf and Ir atoms (as illustrated in Fig. 11(b)). The orbital-projected band structure shows that the VBM is mostly dominated by the \(\Delta_{5}\) orbitals. These comprehensive differences in the electronic properties of the spacer layer under compressive and tensile bi-axial strain lead to the orbital sensitivity of the majority spin transmission.
Finally, we analyze the results corresponding to the \(d\) orbitals of the interfacial Mn atoms of the heterojunction with the Mn-Mn/Ir interface under -4% (compressive) and +4% (tensile) strain (Fig. S14 [56]). Our results clearly indicate that under -4% strain, the \(d_{x^{2}-y^{2},xy}\) orbitals of the Mn atoms have a significant contribution to the majority spin states at the Fermi level, whereas it is the \(d_{xz,yz}\) orbitals of the Mn atom that contribute more for +4% strain. These observations collectively explain the orbital sensitivity of the majority spin transmission under strain.
Our comprehensive observations indicate that the transmission properties in this heterojunction can be effectively tuned by strain. A significant discovery from our calculations is that tensile strain enhances the transmission of the majority spin states, while it considerably hinders the transmission of minority spin states. Considering the lattice mismatch of +5% between our electrode and spacer materials, this phenomenon can be utilized to achieve even higher TMR ratios under tensile strain. In essence, the application of tensile strain can further optimize the performance of the heterojunction and enhance the TMR ratio.
### Conclusion
Using first-principles density functional theory calculations, we have explored the electronic and transport properties of a Co\({}_{2}\)MnSb/HfIrSb/Co\({}_{2}\)MnSb all-Heusler magnetic tunneling junction. We demonstrate that the Mn-Sb and Mn-Mn terminated surfaces of Co\({}_{2}\)MnSb along the (001) direction preserve the half-metallic properties of the bulk. From the surface free energies, we propose that heterojunctions of the half-metallic and ferromagnetic alloy Co\({}_{2}\)MnSb with the direct band gap semiconductor HfIrSb are feasible with both Mn-Sb and Mn-Mn surface terminations. Further, the COHP bonding analysis at the interface suggests that the Mn-Ir bonding (ICOHP: -1.15, -1.46 eV) at the Mn-Mn/Ir interface is significantly weaker than the Sb-Ir bonding (ICOHP: -2.35 eV) at the Mn-Sb/Ir interface.
The results of the tunnel magnetoresistance ratios of the Mn-Mn/Ir and Mn-Sb/Ir interfaces indicate a higher ratio for the latter compared to the former. This is due to the significantly smaller contribution from the minority states in the case of the Mn-Sb/Ir interface. However, the TMR ratio achieved experimentally in a heterojunction may be affected by several factors and is limited by the interface quality, which is governed by the growth conditions.
To study the magnetoelectric property and to understand the effect of spin injection bias on the TMR ratio, a transverse electric field in the range of 0.01 to 0.5 V/Å, in the direction perpendicular to the interfaces, has been included in our calculations. Utilizing the standard two-current model given by Julliere, we have calculated the TMR ratio of these heterojunctions under the external electric field. Significantly high TMR ratios have been obtained for these junctions, which are found to remain unaffected by electric fields of magnitude up to 0.5 V/Å.
Furthermore, we demonstrate that the Co\({}_{2}\)MnSb/HfIrSb junction displays remarkable strain-sensitive transmission, with a three-fold increase in the majority spin transmission and a suppression of the minority spin transmission under a bi-axial tensile strain at the Mn-Mn/Ir interface.
Based on our findings, we predict that a carefully engineered Co\({}_{2}\)MnSb/HfIrSb junction may have enormous potential for a range of spintronic applications, including magnetic sensors, non-volatile memories, and logic circuits; this prediction awaits experimental validation.
Acknowledgements
The authors thank the Director, RRCAT, for facilities and encouragement. The authors thank Haiying He for scientific discussions. The computer divisions of RRCAT, Indore and MTU, USA are thanked for their help in installing and supporting the smooth running of the codes. JB thanks D. Pandey, R. Dutt, and L. Eggart for useful discussions during the work. JB thanks RRCAT, HBNI and MTU for financial support.
|
2302.14310 | FeynGrav 2.0 | We present a new version of FeynGrav. The present version supports Feynman rules for matter with non-vanishing mass and $SU(N)$ Yang-Mills model. We revisit the gauge fixing procedure for gravity and derive interaction rules valid for an arbitrary gauge fixing parameter. We provide a few simple examples of calculations to illustrate package usage. | Boris Latosh | 2023-02-28T04:59:52Z | http://arxiv.org/abs/2302.14310v1 |

# FeynGrav 2.0
###### Abstract
We present a new version of FeynGrav. The present version supports Feynman rules for matter with non-vanishing mass and \(SU(N)\) Yang-Mills model. We revisit the gauge fixing procedure for gravity and derive interaction rules valid for an arbitrary gauge fixing parameter. We provide a few simple examples of calculations to illustrate package usage.
## 1 Introduction
This paper presents the recent development of the package "FeynGrav" [1]. The package provides a tool to operate with Feynman rules for perturbative quantum gravity within FeynCalc [2, 3, 4]. In [1] the author proposed a novel analytic approach to the derivation of Feynman rules. It provides a way to construct the Feynman rules for a wide class of gravity models. It was applied to models without supersymmetry or non-minimal coupling to gravity. Interaction rules for massless matter of spin 0, 1/2, and 1 were derived and their implementation within FeynGrav was discussed.
In this paper, we present a further development of the analytic approach proposed earlier and its implementation in FeynGrav. Firstly, we consider matter of spin 0, 1/2, and 1 with non-vanishing masses and minimal coupling to gravity. We pay particular attention to the case of a massless vector field and revisit the issue of gauge fixing. We demonstrate that the corresponding Faddeev-Popov ghosts interact with gravitational degrees of freedom. In addition, we derive the interaction rules for scalar field potential.
Secondly, we consider the gravitational coupling to \(SU(N)\) Yang-Mills model. We derive the corresponding Feynman rules and show that, similarly to the case of a single massless vector field, the Faddeev-Popov ghosts interact with the gravitational degrees of freedom. This generalization allows for the calculation of scattering amplitudes in gravity coupled to gauge fields and opens new perspectives for phenomenological investigations.
Finally, we revisit the gauge fixing procedure for gravity and introduce more general gauge fixing conditions. The corresponding gauge fixing parameter is made explicit in all calculations. In full analogy with the previous cases, the corresponding Faddeev-Popov ghosts interact with the gravitational degrees of freedom.
All models discussed in this paper are implemented within the new version of FeynGrav. Its usage is illustrated in a few physically relevant examples.
It shall be noted that there are different approaches to the derivation of the Feynman rules for gravity. For instance, in classical papers [5, 6, 7] interaction rules for three and four-graviton vertices were derived directly from the Hilbert action. A similar approach based on the Hilbert action perturbative expansion was constructed in [8]. Widely-known package xAct [9, 10, 11, 12, 13, 14] also provides a tool to operate with perturbative expansion within gravity models, but its applicability is mostly limited to the classical domain. We discuss opportunities to implement it within FeynGrav in the previous paper [1]. Lastly, recently another package providing a tool to operate with Feynman rules for gravity-matter coupling was created [15]. A more detailed discussion of computer algebra application for gravity research lies beyond the scope of this paper and can be found in the following reviews [16, 17].
In this paper, we present a comprehensive study of Feynman's rules for perturbative quantum gravity, covering matter with spin 0, 1/2, and 1, \(SU(N)\) Yang-Mills model, and the gauge fixing procedure. In Section 2, we provide an overview of our approach to derive these Feynman rules, including the notations used throughout the paper. The Feynman rules for matter fields are derived and presented. In Section 3, we extend our analysis to \(SU(N)\) Yang-Mills model coupled to gravity. We revisit the gauge fixing procedure for gravity in Section 4 and discuss the interaction of Faddeev-Popov ghosts with gravitational degrees of freedom. In Section 5, we
introduce the new version of FeynGrav, which implements all the models studied in this paper, and we illustrate its usage through a few physically relevant examples. Finally, we conclude with a discussion of the prospects and further development of FeynGrav in Section 6.
## 2 Perturbative Quantum Gravity
Perturbative quantum gravity associates gravitational phenomena with small metric perturbations propagating about the flat background. In that case the complete spacetime metric \(g_{\mu\nu}\) is given as the following finite expansion:
\[g_{\mu\nu}=\eta_{\mu\nu}+\kappa\,h_{\mu\nu}. \tag{1}\]
Here \(\eta_{\mu\nu}\) is the flat metric, \(h_{\mu\nu}\) are small metric perturbations with the canonical mass dimension, and \(\kappa\) is the gravity coupling related with the Newton's constant \(G_{\rm N}\):
\[\kappa^{2}\stackrel{{\rm def}}{{=}}32\,\pi\,G_{\rm N}. \tag{2}\]
Although (1) is a finite expression, it spawns infinite perturbative expansions for the inverse metric
\[g^{\mu\nu}=\eta^{\mu\nu}-\kappa\,h^{\mu\nu}+\kappa^{2}\,h^{\mu\sigma}{h_{ \sigma}}^{\nu}+\mathcal{O}\left(\kappa^{3}\right); \tag{3}\]
for the volume factor
\[\sqrt{-g}=1+\frac{\kappa}{2}h-\frac{\kappa^{2}}{4}\Bigg{(}h_{\mu\nu}^{2}- \frac{1}{2}h^{2}\Bigg{)}+\mathcal{O}\left(\kappa^{3}\right); \tag{4}\]
for the Christoffel symbols
\[\Gamma_{\mu\nu}^{\alpha}=g^{\alpha\beta}\,\Gamma_{\beta\mu\nu}=\left(\eta^{ \alpha\beta}-\kappa\,h^{\alpha\beta}+\kappa^{2}\,h^{\alpha\sigma}{h_{\sigma }}^{\beta}+\mathcal{O}\left(\kappa^{3}\right)\right)\frac{\kappa}{2}\left[ \partial_{\mu}h_{\nu\beta}+\partial_{\nu}h_{\mu\beta}-\partial_{\beta}h_{\mu \nu}\right]; \tag{5}\]
and, ultimately, for the Hilbert action
\[\begin{split}\mathcal{A}_{\rm H}[g_{\mu\nu}]\stackrel{{ \rm def}}{{=}}&\int d^{4}x\sqrt{-g}\left[-\frac{2}{ \kappa^{2}}R\right]=\mathcal{A}_{\rm H}[\eta]+\frac{\delta\mathcal{A}_{H}}{ \delta g_{\mu\nu}}[\eta]\,\kappa h_{\mu\nu}+\frac{\delta^{2}\mathcal{A}_{H}}{ \delta g_{\mu\nu}\,\delta g_{\alpha\beta}}[\eta]\,\,\kappa^{2}\,h_{\mu\nu}\,h _{\alpha\beta}+\mathcal{O}\left(\kappa^{3}\right)\\ =& h_{\mu\nu}\,\mathcal{D}^{\mu\nu\alpha\beta}\Box\,h _{\alpha\beta}+\kappa\,\left(\mathfrak{G}^{(3)}\right)^{\mu_{1}\nu_{1}\mu_{2} \nu_{2}\mu_{3}\nu_{3}}\,h_{\mu_{1}\nu_{1}}h_{\mu_{2}\nu_{2}}h_{\mu_{3}\nu_{3 }}+\mathcal{O}\left(\kappa^{2}\right).\end{split} \tag{6}\]
In this formula, the Hilbert action evaluated at the flat metric vanishes. The term linear in perturbations also vanishes because the flat background delivers a minimum to the Hilbert action. The term quadratic in perturbations describes the propagation of such perturbations. All other terms of higher orders in perturbations describe their interactions.
Perturbative quantum gravity is a quantum theory of small metric perturbations \(h_{\mu\nu}\) constructed with the functional integral technique. For the sake of briefness, we call quanta of the field \(h_{\mu\nu}\) gravitons. Their quantum behavior is described by the following generating functional:
\[\begin{split}\mathcal{Z}\stackrel{{\rm def}}{{=}}& \int\mathcal{D}[g]\,\exp\left[i\,\mathcal{A}_{\rm H}[g]\right]\\ =&\int\mathcal{D}[h]\,\exp\Bigg{[}i\,h_{\mu\nu}\, \mathcal{D}^{\mu\nu\alpha\beta}\Box\,h_{\alpha\beta}+i\,\kappa\,\left( \mathfrak{G}^{(3)}\right)^{\mu_{1}\nu_{1}\mu_{2}\nu_{2}\mu_{3}\nu_{3}}\,h_{\mu _{1}\nu_{1}}h_{\mu_{2}\nu_{2}}h_{\mu_{3}\nu_{3}}+\mathcal{O}\left(\kappa^{2} \right)\Bigg{]}.\end{split} \tag{7}\]
We shall note that this expression shall not be used directly before the gauge fixing procedure is performed. We discuss it in detail in Section 4.
The perturbative structures of the inverse metric \(g^{\mu\nu}\), the volume factor \(\sqrt{-g}\), and the vierbein \(\mathfrak{c}_{m}{}^{\mu}\) are described by families of \(\mathcal{I}\) and \(\mathcal{C}\) tensors defined in the original paper [1]. These tensors can be generated within a computer algebra system and offer a straightforward way to handle the corresponding perturbative expansions. While their discussion is beyond the scope of this paper, they are covered in great detail in [1].
We introduce the following notations for perturbative expansions. If a quantity \(X\) is expanded in a perturbative series with respect to \(\kappa\,h_{\mu\nu}\), we note the corresponding series as follows:
\[X=\sum_{n=0}^{\infty}\,\kappa^{n}\,(X)^{\rho_{1}\sigma_{1}\cdots\rho_{n} \sigma_{n}}\,h_{\rho_{1}\sigma_{1}}\cdots h_{\rho_{n}\sigma_{n}}. \tag{8}\]
Here \((X)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\) notes an expression that specifies the tensor structure of a given term. To put it otherwise, it shows how indices of metric perturbations shall be contracted. In these notations perturbative expansions for \(g^{\mu\nu}\) and \(\sqrt{-g}\) are written as follows:
\[\begin{split} g^{\mu\nu}=&\sum_{n=0}^{\infty}\, \kappa^{n}\,\left(g^{\mu\nu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n }}\,h_{\rho_{1}\sigma_{1}}\cdots h_{\rho_{n}\sigma_{n}},\\ \sqrt{-g}=&\sum_{n=0}^{\infty}\,\kappa^{n}\,\left( \sqrt{-g}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,h_{\rho_{1} \sigma_{1}}\cdots h_{\rho_{n}\sigma_{n}}.\end{split} \tag{9}\]
For the sake of illustration, we present a few terms from these expressions:
\[\left(g^{\mu\nu}\right) =\eta^{\mu\nu}, \left(g^{\mu\nu}\right)^{\alpha\beta} =\frac{1}{2}\left(\eta^{\mu\alpha}\eta^{\nu\beta}+\eta^{\mu\beta} \eta^{\nu\alpha}\right), \tag{10}\] \[\left(\sqrt{-g}\right) =1, \left(\sqrt{-g}\right)^{\mu\nu} =\frac{1}{2}\eta^{\mu\nu}, \left(\sqrt{-g}\right)^{\mu\nu\alpha\beta} =\frac{1}{8}\!\left(-\eta^{\alpha\nu}\eta^{\beta\mu}-\eta^{\alpha\mu}\eta^{ \beta\nu}+\eta^{\alpha\beta}\eta^{\mu\nu}\right).\]
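The expansions (3) and (4) can be cross-checked numerically. A minimal sketch is given below; it assumes Python with numpy, the mostly-minus flat metric, and a randomly chosen symmetric perturbation, and it compares the exact inverse metric and volume factor with their second-order series.

```python
import numpy as np

# Numerical cross-check of the expansions (3) and (4) for a random symmetric
# perturbation h_{mu nu} and a small coupling kappa.
rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])     # flat metric (mostly-minus)
h = rng.normal(size=(4, 4))
h = (h + h.T) / 2                          # symmetric perturbation
kappa = 1e-3

g = eta + kappa * h                        # full metric, eq. (1)
h_up = eta @ h @ eta                       # h^{mu nu} (eta is its own inverse)

# eq. (3): g^{mu nu} = eta - kappa h^{mu nu} + kappa^2 h^{mu s} h_s^{nu} + ...
g_inv_series = eta - kappa * h_up + kappa**2 * h_up @ eta @ h_up
print(np.allclose(np.linalg.inv(g), g_inv_series, rtol=0.0, atol=10 * kappa**3))

# eq. (4): sqrt(-g) = 1 + kappa/2 h - kappa^2/4 (h_{mn} h^{mn} - h^2/2) + ...
tr_h = np.trace(eta @ h)                   # trace h = eta^{mu nu} h_{mu nu}
hh = np.sum(h * h_up)                      # h_{mu nu} h^{mu nu}
vol_series = 1 + kappa / 2 * tr_h - kappa**2 / 4 * (hh - tr_h**2 / 2)
print(np.allclose(np.sqrt(-np.linalg.det(g)), vol_series, rtol=0.0, atol=10 * kappa**3))
```

Both comparisons agree to the expected \(\mathcal{O}\left(\kappa^{3}\right)\) accuracy.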
All of the interaction rules presented in this paper have been derived using perturbative techniques as described above. It is worth noting that this approach can be extended to supersymmetric models and models with non-minimal gravitational coupling. These cases will be discussed in future works.
Let us briefly review the construction of the Feynman rules for a single scalar field, a single Dirac fermion, and a single vector field. The scalar and Dirac field cases were covered in detail in the original paper, so we will only briefly touch upon them. The construction of Feynman rules for a vector field is more intricate due to the gauge fixing and will be discussed in more depth.
### Single scalar field
A single free scalar field minimally coupled to gravity is described by the following action:
\[\begin{split}\mathcal{A}_{s=0}=&\int d^{4}x\sqrt{-g }\left[\frac{1}{2}\left(\nabla\phi\right)^{2}-\frac{m_{\rm s}^{2}}{2}\phi^{2} \right]=\int d^{4}x\left[\frac{1}{2}\sqrt{-g}\,g^{\mu\nu}\,\partial_{\mu}\phi \,\partial_{\nu}\phi-\frac{m_{\rm s}^{2}}{2}\sqrt{-g}\,\phi^{2}\right].\end{split} \tag{11}\]
Here \(m_{\rm s}\) is the scalar field mass. Its perturbative expansion in the momentum representation reads:
\[\begin{split}\mathcal{A}_{s=0}=&\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\delta\left(p_{1}+p_{2}+\sum k_{i}\right)h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\times\kappa^{n}\left[-\frac{1}{2}\left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}I_{\mu\nu\alpha\beta}(p_{1})^{\alpha}(p_{2})^{\beta}-\frac{m_{\rm s}^{2}}{2}\big{(}\sqrt{-g}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\right]\,\phi(p_{1})\phi(p_{2})\,.\end{split} \tag{12}\]
Here \(k_{i}\) are momenta of gravitons, \(p_{1}\) and \(p_{2}\) are momenta of scalars, and \(I\) tensor contracts indices of the metric and momenta in a symmetric way:
\[I^{\mu\nu\alpha\beta}=\frac{1}{2}\left(\eta^{\mu\alpha}\eta^{\nu\beta}+\eta^{ \mu\beta}\eta^{\nu\alpha}\right). \tag{13}\]
The background contribution of this expression describes the scalar field propagator:
\[\begin{split}\texttt{---}&=i\,\frac{1}{p^{2}-m_{\rm s }^{2}}\,.\end{split} \tag{14}\]
The other parts of this expression define rules for gravitons coupling to the scalar field kinetic energy:
\[\begin{split}\rho_{n}\sigma_{n}&=-i\,\kappa^{n}\, \left[\left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n} \sigma_{n}}I_{\mu\nu\alpha\beta}(p_{1})^{\alpha}(p_{2})^{\beta}+m_{\rm s}^{2} \,\left(\sqrt{-g}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\right]. \end{split} \tag{15}\]
Here and further, all momenta are directed inwards and connected by conservation law. The dotted line on the left part of the diagram notes the presence of \(n\geq 1\) graviton lines. This expression is symmetric with respect to
the scalar field momenta. In the rest of this paper, we present expressions that are also symmetric with respect to momenta.
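For the sake of illustration, and as a cross-check of the conventions adopted here, consider the single-graviton vertex. The \(\mathcal{O}\left(\kappa\right)\) part of \(\sqrt{-g}\,g^{\mu\nu}\) read off from (3) and (4) is \(\kappa\left(\frac{1}{2}h\,\eta^{\mu\nu}-h^{\mu\nu}\right)\), so at \(n=1\) the rule (15) reduces to
\[\frac{i\,\kappa}{2}\left[(p_{1})^{\rho}(p_{2})^{\sigma}+(p_{1})^{\sigma}(p_{2})^{\rho}-\eta^{\rho\sigma}\left(p_{1}\cdot p_{2}+m_{\rm s}^{2}\right)\right],\]
with both scalar momenta directed inwards; this has the familiar structure of the graviton-scalar-scalar vertex.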
The gravitational coupling of a scalar field potential energy is derived similarly. The scalar field potential \(V(\phi)\) shall be expanded in a power series with respect to the scalar field \(\phi\). Each term of this expansion corresponds to a separate scalar field self-interaction coupled to gravity. Therefore, it is sufficient to derive the interaction rule for a single power-law potential. Let us consider the following power-law potential with \(p\geq 3\) being a whole number, and \(\lambda_{p}\) being a coupling with the mass dimension \(4-p\):
\[\mathcal{A}_{s=0,\text{potential}}=\int d^{4}x\sqrt{-g}\left[\frac{\lambda_{p} }{p!}\;\phi^{p}\right]=\int d^{4}x\left[\sqrt{-g}\;\frac{\lambda_{p}}{p!}\; \phi^{p}\right]. \tag{16}\]
In the momentum representation, this action becomes:
\[\begin{split}\mathcal{A}_{s=0,\text{potential}}=& \sum_{n=0}^{\infty}\int\prod_{j=1}^{p}\frac{d^{4}q_{j}}{(2\pi)^{4}} \underset{i=0}{\overset{n}{\prod}}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\, \delta\Big{(}\sum q_{j}+\sum k_{i}\Big{)}\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h _{\rho_{n}\sigma_{n}}(k_{n})\\ &\times\kappa^{n}\,\frac{\lambda_{p}}{p!}\;(\sqrt{-g})^{\rho_{1} \sigma_{1}\cdots\rho_{n}\sigma_{n}}\,\phi(q_{1})\cdots\phi(q_{p})\,.\end{split} \tag{17}\]
The corresponding interaction rule reads:
\[i\,\kappa^{n}\,\lambda_{p}\,\left(\sqrt{-g}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}. \tag{18}\]
This expression can be used for any whole \(p\geq 3\), so it completely describes the gravitational coupling of scalar field potentials.
The healthy scalar field interactions described in this paper are just a small subset of the broader range of interactions described by the Horndeski and Beyond Horndeski models [18, 19, 20, 21, 22, 23, 24]. Feynman rules for these models are beyond the scope of this paper and will be discussed in future publications.
### Single Dirac field
A single Dirac field minimally coupled to gravity is described by the following action:
\[\begin{split}\mathcal{A}_{s=1/2}=&\int d^{4}x\sqrt{-g}\left[\overline{\psi}\left(i\,\widehat{\nabla}\right)\psi-m_{\text{f}}\,\overline{\psi}\,\psi\right]\\ =&\int d^{4}x\left[\sqrt{-g}\,\mathfrak{c}_{m}{}^{\mu}\;\frac{1}{2}\,\left(i\,\overline{\psi}\,\gamma^{m}\,\nabla_{\mu}\psi-i\,\nabla_{\mu}\overline{\psi}\,\gamma^{m}\,\psi\right)-m_{\text{f}}\,\sqrt{-g}\,\overline{\psi}\,\psi\right].\end{split} \tag{19}\]
Here \(m_{\text{f}}\) is the fermion mass, \(\mathfrak{c}_{m}{}^{\mu}\) is the vierbein, and \(\nabla\) is the fermionic covariant derivative. We discuss the construction of spinors in a curved spacetime alongside their perturbative treatment in the previous paper [1] (see also [25, 26]). The following theorem specifies the perturbative structure of this action [1]:
\[\begin{split}\mathcal{A}_{s=1/2}=&\int d^{4}x\left[ \sqrt{-g}\,\mathfrak{c}_{m}{}^{\mu}\;\frac{1}{2}\,\left(i\,\overline{\psi}\, \gamma^{m}\,\partial_{\mu}\psi-i\,\partial_{\mu}\overline{\psi}\,\gamma^{m}\, \psi\right)-m_{\text{f}}\,\sqrt{-g}\,\overline{\psi}\,\psi\right]\\ =&\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4} }\frac{d^{4}p_{2}}{(2\pi)^{4}}\underset{i=0}{\overset{n}{\prod}}\frac{d^{4}k_{ i}}{(2\pi)^{4}}(2\pi)^{4}\delta\left(p_{1}+p_{2}+\sum k_{i}\right)h_{\rho_{1} \sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\times\kappa^{n}\,\,\overline{\psi}(p_{2})\left[\left(\sqrt{-g} \,\mathfrak{c}_{m}{}^{\mu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}} \,\frac{1}{2}\,(p_{1}-p_{2})_{\mu}\gamma^{m}-\left(\sqrt{-g}\right)^{\rho_{1} \sigma_{1}\cdots\rho_{n}\sigma_{n}}m_{\text{f}}\right]\psi(p_{1}).\end{split} \tag{20}\]
The background part of this expansion corresponds to the fermion propagator:
\[\begin{split}=&\;i\;\frac{p_{m}\,\gamma^{m}+m_{ \text{f}}}{p^{2}-m_{\text{f}}^{2}}.\end{split} \tag{21}\]
The other terms describe the following interaction rules:
\[i\,\kappa^{n}\left[\left(\sqrt{-g}\,\mathfrak{c}_{m}{}^{\mu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,\frac{1}{2}\,(p_{1}-p_{2})_{\mu}\gamma^{m}-\left(\sqrt{-g}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}m_{\text{f}}\right]. \tag{22}\]
As noted above, on this diagram all momenta are directed inwards, so \(p_{1}\) notes an in-going momentum of a fermion and \(p_{2}\) notes an in-going momentum of an anti-fermion. Moreover, this expression is applicable for the \(SU(N)\) Yang-Mills model considered below.
### Single vector field
The treatment of a vector field within the quantum field theory (and perturbative quantum gravity) is sensitive to the vector field mass. A massless vector field admits the gauge symmetry, so the gauge fixing shall be performed. If a vector field has a non-vanishing mass, then the gauge symmetry is not present and gauge fixing is not required.
We start with the case of a vector field with a non-vanishing mass, also known as the Proca field. Such a field coupled with gravity is described by the following action:
\[\mathcal{A}_{s=1,m_{\mathrm{v}}}= \int d^{4}x\sqrt{-g}\left[-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{m _{\mathrm{v}}^{2}}{2}A_{\mu}\,A^{\mu}\right]. \tag{23}\]
Here \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the field tensor, \(m_{\mathrm{v}}\) is the vector field mass. The perturbative expansion of this action in the momentum representation reads:
\[\begin{split}\mathcal{A}_{s=1,m_{\mathrm{v}}}=& \sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{ 4}}\prod_{i=0}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\delta\left(p_{1}+p_ {2}+\sum k_{i}\right)h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{ n}}(k_{n})\\ &\times\kappa^{n}\left[\frac{1}{4}\left(\sqrt{-g}\,g^{\mu\alpha}g ^{\nu\beta}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,\left(p_{1} \right)_{\mu_{1}}\!\left(p_{2}\right)_{\mu_{2}}\,\left(F_{\mu\nu}\right)^{\mu _{1}\lambda_{1}}\!\left(F_{\alpha\beta}\right)^{\mu_{2}\lambda_{2}}\\ &\qquad\qquad+\frac{m_{\mathrm{v}}^{2}}{2}\!\left(\sqrt{-g}\,g^{ \lambda_{1}\lambda_{2}}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}} \,\right]A_{\lambda_{1}}\!\left(p_{1}\right)A_{\lambda_{2}}\!\left(p_{2}\right).\end{split} \tag{24}\]
Here we introduced the following notations:
\[F_{\mu\nu}=-i\,p_{\sigma}\left(F_{\mu\nu}\right)^{\sigma\lambda}A_{\lambda}( p),\qquad\qquad\qquad\qquad\left(F_{\mu\nu}\right)^{\sigma\lambda}\stackrel{{ \mathrm{def}}}{{=}}\delta_{\mu}^{\sigma}\,\delta_{\nu}^{\lambda}-\delta_{ \nu}^{\sigma}\,\delta_{\mu}^{\lambda}. \tag{25}\]
This expression spawns the standard Proca propagator:
\[=i\,\frac{-\eta_{\mu\nu}+p_{\mu}p_{\nu}/m_{\rm v}^{2}}{p^{2}-m_{\rm v}^{2}}. \tag{26}\]
The interaction rules describing gravitons coupling to the Proca field kinetic energy are given by the following expression:
(27)
To proceed with the massless case we shall briefly recall the Faddeev-Popov prescription for gauge theories [27, 28, 29, 30, 31]. A quantum vector field is described by the following generating functional:
\[\mathcal{Z}=\int\mathcal{D}[A]\exp\Big{[}i\,\mathcal{A}[A]\Big{]}. \tag{28}\]
Here the integration is performed over all conceivable fields. The normalization factor is omitted for the sake of simplicity. Firstly, one adds a new term to the microscopic action:
\[\mathcal{Z}=\int\mathcal{D}[A]\exp\left[i\,\mathcal{A}[A]\right]\int\mathcal{D}[ \omega]\exp\left[\frac{i}{2}\,\epsilon\,\omega^{2}\right]=\int\mathcal{D}[A] \mathcal{D}[\omega]\exp\left[i\,\mathcal{A}+\frac{i}{2}\epsilon\,\omega^{2} \right]. \tag{29}\]
Here \(\omega\) is an arbitrary scalar, \(\epsilon\) is a free gauge fixing parameter. The new contribution is a Gauss-like integral so its introduction merely changes the (omitted) normalization factor.
Secondly, one splits the integration volume:
\[\int\mathcal{D}[A]=\int\mathcal{D}[\zeta]\int\mathcal{D}[\mathbb{A}]\delta \left(\mathcal{G}-\omega\right)\det\Delta\,. \tag{30}\]
Here \(\mathcal{G}\) is the gauge fixing condition; the new field variable \(\mathbb{A}\), the gauge transformation parameter \(\zeta\), and the field variable \(A\) are related as follows:
\[A_{\mu}=\mathbb{A}_{\mu}+\partial_{\mu}\zeta. \tag{31}\]
The integration over \(\mathbb{A}\) is performed over all conceivable fields, but because of the \(\delta\) function from each class of physically equivalent potentials only a single representative contributes to the integral. Therefore, the integration over \(\mathbb{A}\) accounts not for all conceivable potentials, but for all conceivable configurations of physical fields. The last term \(\det\Delta\) is the Faddeev-Popov determinant which preserves the invariance of the integration measure. The corresponding differential operator \(\Delta\) is defined as follows:
\[\Delta\stackrel{{\rm def}}{{=}}\frac{\delta\mathcal{G}}{\delta \zeta}. \tag{32}\]
Finally, one performs integrations and obtains the following expression for the generating functional:
\[\mathcal{Z} =\int\mathcal{D}[\mathbb{A}]\mathcal{D}[\omega]\mathcal{D}[\zeta]\left(\det\Delta\right)\ \delta\left(\mathcal{G}-\omega\right)\exp\left[i\,\mathcal{A}+\frac{i}{2}\epsilon\,\omega^{2}\right] \tag{33}\] \[=\int\mathcal{D}[\mathbb{A}]\left(\det\Delta\right)\exp\left[i\,\mathcal{A}+\frac{i}{2}\epsilon\,\mathcal{G}^{2}\right]\] \[=\int\mathcal{D}[c]\mathcal{D}[\overline{c}]\mathcal{D}[\mathbb{A}]\exp\left[i\,\overline{c}\,\Delta\,c+i\,\mathcal{A}+\frac{i}{2}\epsilon\,\mathcal{G}^{2}\right].\]
Here \(\overline{c}\), \(c\) are scalar anticommuting Faddeev-Popov ghosts that are introduced to account for the Faddeev-Popov determinant. The integration over the gauge parameter \(\zeta\) is included in the normalization factor and omitted. This prescription produces a generating functional suitable for a consistent treatment of gauge models.
We use the standard Lorentz gauge fixing condition for the sake of simplicity. In a curved spacetime, it becomes:
\[g^{\mu\nu}\,\nabla_{\mu}A_{\nu}=0\leftrightarrow g^{\mu\nu}\,\partial_{\mu}A _{\nu}-g^{\mu\nu}\,\Gamma_{\mu\nu}^{\sigma}A_{\sigma}=0. \tag{34}\]
The gauge invariant part of the action admits the following perturbative expansion in the momentum representation:
\[\mathcal{A}_{s=1,m_{\rm v}=0}= \int d^{4}x\sqrt{-g}\left[-\frac{1}{4}g^{\mu\alpha}g^{\nu\beta}\,F_{\mu\nu}F_{\alpha\beta}\right] \tag{35}\] \[= \sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\underset{i=1}{\overset{n}{\prod}}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\,\delta\big{(}p_{1}+p_{2}+\sum k_{i}\big{)}h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\] \[\times\ \kappa^{n}\ \left[\frac{1}{4}\left(\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,(p_{1})_{\mu_{1}}(p_{2})_{\mu_{2}}\,(F_{\mu\nu})^{\mu_{1}\lambda_{1}}\,(F_{\alpha\beta})^{\mu_{2}\lambda_{2}}\right]A_{\lambda_{1}}(p_{1})A_{\lambda_{2}}(p_{2}).\]
This expression matches the expression for the Proca field with \(m_{\rm v}=0\). The gauge fixing term naturally splits into three terms:
\[\mathcal{A}_{\rm gf}= \int d^{4}x\sqrt{-g}\left[\frac{\epsilon}{2}\nabla_{\lambda_{1}}A ^{\lambda_{1}}\,\nabla_{\lambda_{2}}A^{\lambda_{2}}\right] \tag{36}\] \[= \frac{\epsilon}{2}\!\int d^{4}x\left(\sqrt{-g}\,g^{\mu\nu}g^{ \alpha\beta}g^{\sigma_{1}\lambda_{1}}g^{\sigma_{2}\lambda_{2}}\right)\, \Gamma_{\sigma_{1}\mu\nu}\Gamma_{\sigma_{2}\alpha\beta}\,A_{\lambda_{1}}A_{ \lambda_{2}}.\]
Here we use the standard definition of the Christoffel symbols with only lower indices
\[\Gamma_{\mu\alpha\beta}\stackrel{{\rm def}}{{=}}g_{\mu\nu}\,\Gamma^{ \nu}_{\alpha\beta}=\frac{1}{2}\left(\partial_{\alpha}g_{\beta\mu}+\partial_{ \beta}g_{\alpha\mu}-\partial_{\mu}g_{\alpha\beta}\right). \tag{37}\]
In contrast with \(\Gamma^{\mu}_{\alpha\beta}\) these symbols admit a finite perturbative expansion:
\[\begin{split}\Gamma_{\mu\alpha\beta}=&\frac{\kappa }{2}[\partial_{\alpha}h_{\beta\mu}+\partial_{\beta}h_{\alpha\mu}-\partial_{ \mu}h_{\alpha\beta}]\Leftrightarrow\kappa\left(-i\right)p_{\lambda}\left( \Gamma_{\mu\alpha\beta}\right)^{\lambda\rho\sigma}h_{\rho\sigma}(p)\,,\\ \left(\Gamma_{\mu\alpha\beta}\right)^{\lambda\rho\sigma}=& \frac{1}{2}\big{[}\delta^{\lambda}_{\alpha}I_{\beta\mu}{}^{\rho \sigma}+\delta^{\lambda}_{\beta}I_{\alpha\mu}{}^{\rho\sigma}-\delta^{\lambda} _{\mu}I_{\alpha\beta}{}^{\rho\sigma}\big{]}\,.\end{split} \tag{38}\]
In the momentum representation the gauge fixing term reads:
\[\begin{split}\mathcal{A}_{\rm gf}=&\sum_{n=0}^{ \infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1 }^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\delta\big{(}p_{1}+p_{2}+\sum k_{i} \big{)}\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})A_{ \lambda_{1}}(p_{1})A_{\lambda_{2}}(p_{2})\\ &\times\,\kappa^{n}\,\left(\sqrt{-g}\,g^{\mu_{1}\lambda_{1}}g^{ \mu_{2}\lambda_{2}}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\left[ -\frac{\epsilon}{2}(p_{1})_{\mu_{1}}(p_{2})_{\mu_{2}}\right]\\ +&\sum_{n=1}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4} }\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2 \pi)^{4}\delta\big{(}p_{1}+p_{2}+\sum k_{i}\big{)}\,h_{\rho_{1}\sigma_{1}}(k_{ 1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})A_{\lambda_{1}}(p_{1})A_{\lambda_{2}}(p _{2})\\ &\times\,\kappa^{n}\,\left(\sqrt{-g}\,g^{\mu\nu}g^{\mu_{1}\lambda_ {1}}g^{\mu_{2}\lambda_{2}}\right)^{\rho_{2}\sigma_{2}\cdots\rho_{n}\sigma_{n}} \left[\epsilon\,\left(\Gamma_{\mu_{1}\mu\nu}\right)^{\sigma\rho_{1}\sigma_{1}} \left(k_{1}\right)_{\sigma}(p_{2})_{\mu_{2}}\right]\\ +&\sum_{n=2}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4 }}\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2 \pi)^{4}\delta\big{(}p_{1}+p_{2}+\sum k_{i}\big{)}\,h_{\rho_{1}\sigma_{1}}(k_{ 1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\,A_{\lambda_{1}}(p_{1})A_{\lambda_{2}} (p_{2})\\ &\times\,\kappa^{n}\,\left(\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{ \mu_{1}\lambda_{1}}g^{\mu_{2}\lambda_{2}}\right)^{\rho_{3}\sigma_{3}\cdots\rho _{n}\sigma_{n}}\left[-\frac{\epsilon}{2}(k_{1})_{\tau_{1}}(k_{2})_{\tau_{2}} \left(\Gamma_{\mu_{1}\mu\nu}\right)^{\tau_{1}\rho_{1}\sigma_{1}}\left(\Gamma_{ \mu_{2}\alpha\beta}\right)^{\tau_{2}\rho_{2}\sigma_{2}}\right].\end{split} \tag{39}\]
In full analogy with the previous cases, the background part of this expression corresponds to the following propagator1:
Footnote 1: It matches the expression for the vector propagator given in FeynCalc with \(\epsilon_{\rm FeynCalc}=-1/\epsilon_{\rm FeynGrav}\)
\[\mu\;\nu\qquad=\frac{i}{p^{2}}\left[-\eta_{\mu\nu}+\frac{1+\epsilon}{\epsilon}\,\frac{p_{\mu}p_{\nu}}{p^{2}}\right]. \tag{40}\]
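As a simple consistency check, for \(\epsilon=-1\) the gauge fixing term reduces to the standard Feynman-gauge term \(-\frac{1}{2}\left(\nabla_{\mu}A^{\mu}\right)^{2}\) and the propagator above becomes \(-i\,\eta_{\mu\nu}/p^{2}\). This agrees with the relation \(\epsilon_{\rm FeynCalc}=-1/\epsilon_{\rm FeynGrav}\) quoted in the footnote, since the Feynman gauge corresponds to \(\epsilon_{\rm FeynCalc}=1\).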
The ghost sector of the theory shall be treated as follows. The Faddeev-Popov differential operator \(\Delta\) reduces to the d'Alembert operator in curved spacetime:
\[\Delta=\frac{\delta}{\delta\zeta}\,\nabla_{\mu}\,(A^{\mu}+\nabla^{\mu}\zeta)=g^{ \mu\nu}\nabla_{\mu}\nabla_{\nu}\,. \tag{42}\]
Therefore, the ghost part of the generating functional describes a single massless scalar ghost coupled to gravity:
\[\begin{split}\mathcal{Z}_{\text{ghost}}=&\int\mathcal{D}[c]\mathcal{D}[\overline{c}]\exp\left[i\,\int d^{4}x\sqrt{-g}\,\left(\overline{c}\,\Box\,c\right)\right]\\ =&\int\mathcal{D}[c]\mathcal{D}[\overline{c}]\,\exp\left[-i\,\int d^{4}x\,\sqrt{-g}\,g^{\mu\nu}\,\nabla_{\mu}\overline{c}\,\nabla_{\nu}c\right].\end{split} \tag{43}\]
The corresponding perturbative expansion is similar to previous cases:
\[\begin{split}\mathcal{A}_{\text{ghost}}=&-\int d^{4}x\,\sqrt{-g}\,g^{\mu\nu}\partial_{\mu}\overline{c}\,\partial_{\nu}c\\ =&\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\delta\left(p_{1}+p_{2}+\sum k_{i}\right)\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\times\,\kappa^{n}\,\overline{c}(p_{1})\left[(\sqrt{-g}\,g^{\mu\nu})^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,(p_{1})_{\mu}(p_{2})_{\nu}\right]c(p_{2}).\end{split} \tag{44}\]
This expression results in the following ghost propagator
\[\cdots\cdots\qquad=i\,\frac{-1}{p^{2}}, \tag{45}\]
and in the following interaction rule:
\[i\,\kappa^{n}\,\left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,(p_{1})_{\mu}\,(p_{2})_{\nu}. \tag{46}\]

## 3 \(SU(N)\) Yang-Mills model

In flat spacetime, the \(SU(N)\) Yang-Mills model coupled to a single Dirac fermion is described by the following action:

\[\mathcal{A}=\int d^{4}x\left[\overline{\psi}\left(i\,\gamma^{\mu}\,\mathcal{D}_{\mu}-m\right)\psi-\frac{1}{4}\,F_{\mu\nu}^{a}\,F^{a\,\mu\nu}\right]. \tag{47}\]

Here \(m\) is the fermion mass, \(g_{\rm s}\) is the gauge coupling, \(\mathcal{D}_{\mu}\) is the gauge covariant derivative, and \(F_{\mu\nu}\) is the gauge field tensor:

\[\mathcal{D}_{\mu}\psi=\partial_{\mu}\psi-i\,g_{\rm s}\,A_{\mu}\,\psi, \tag{48}\]

\[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i\,g_{\rm s}\left[A_{\mu},A_{\nu}\right]. \tag{49}\]
The gauge field \(A_{\mu}\) takes value in \(SU(N)\) algebra:
\[A_{\mu}=A_{\mu}^{a}\,T^{a}, \tag{50}\]
where \(T^{a}\) are generators. This gives the following expression of the field tensor components
\[F_{\mu\nu}^{a}=\partial_{\mu}A_{\nu}^{a}-\partial_{\nu}A_{\mu}^{a}+g_{\rm s}\,f ^{abc}\,A_{\mu}^{b}A_{\nu}^{c}\,. \tag{51}\]
Here \(f^{abc}\) are the structure constants of the algebra:
\[[T^{a},T^{b}]=i\,f^{abc}\,T^{c}\,. \tag{52}\]
Generalization of action (47) to the case of curved spacetime is rather simple. One shall use the invariant four-volume element and modify the covariant derivatives to account for the curved geometry. This produces the following action:
\[{\cal A}=\int d^{4}x\,\sqrt{-g}\left[\,\overline{\psi}\left(i\,{\epsilon_{m}}^{\mu}\,\gamma^{m}\,{\cal D}_{\mu}-m\right)\psi-\frac{1}{4}F_{\mu\nu}^{a}\,F^{a\,\mu\nu}\right]. \tag{53}\]
Here \({\epsilon_{m}}^{\mu}\) is a vierbein. The covariant derivative for fermions now reads
\[{\cal D}_{\mu}\psi=\nabla_{\mu}\psi-i\,g_{\rm s}\,A_{\mu}\,\psi, \tag{54}\]
with \(\nabla_{\mu}\) being the part accounting for the spacetime curvature via the spin connection. The field tensor \(F_{\mu\nu}\) shall also account for the spacetime curvature, but because of its structure, it preserves the simple form:
\[\begin{split} F_{\mu\nu}&=\nabla_{\mu}A_{\nu}- \nabla_{\nu}A_{\mu}-i\,g_{\rm s}\,[A_{\mu},A_{\nu}]\\ &=\partial_{\mu}A_{\nu}-\Gamma_{\mu\nu}^{\sigma}A_{\sigma}- \partial_{\nu}A_{\mu}+\Gamma_{\nu\mu}^{\sigma}A_{\sigma}-i\,g_{\rm s}\,[A_{ \mu},A_{\nu}]\\ &=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-i\,g_{\rm s}[A_{ \mu},A_{\nu}]\,.\end{split} \tag{55}\]
Consequently, the \(SU(N)\) Yang-Mills action in a curved spacetime reads:
\[\begin{split}{\cal A}=&\int d^{4}x\sqrt{-g}\bigg{[}\overline{\psi}\left(i\,{\epsilon_{m}}^{\mu}\,\gamma^{m}\,\nabla_{\mu}-m\right)\psi-\frac{1}{4}\left(f_{\mu\nu}^{a}\right)^{2}\\ &+g_{\rm s}\,\overline{\psi}\left({\epsilon_{m}}^{\mu}\gamma^{m}\right)\psi\,A_{\mu}-g^{\mu\nu}g^{\alpha\beta}\,g_{\rm s}\,f^{abc}\partial_{\mu}A_{\alpha}^{a}A_{\nu}^{b}A_{\beta}^{c}-\frac{1}{4}g_{\rm s}^{2}\,f^{amn}\,f^{aij}\,g^{\mu\nu}g^{\alpha\beta}\,A_{\mu}^{m}\,A_{\nu}^{i}\,A_{\alpha}^{n}\,A_{\beta}^{j}\bigg{]}\,.\end{split} \tag{56}\]
Perturbative quantization of kinetic parts of the action is discussed above, so we proceed with the derivation of Feynman's rules for the interaction sector. The perturbative expansion for the term describing the coupling of fermions to a gauge vector is given by the following expression:
\[\begin{split}&\int d^{4}x\sqrt{-g}\,g_{\rm s}\,\overline{\psi} \left({\epsilon_{m}}^{\mu}\gamma^{m}\right)\psi\,A_{\mu}\\ &=\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4 }p_{2}}{(2\pi)^{4}}\frac{d^{4}k}{(2\pi)^{4}}\prod_{i=0}^{n}\frac{d^{4}k_{i}}{( 2\pi)^{4}}(2\pi)^{4}\,\delta\left(p_{1}+p_{2}+k+\sum k_{i}\right)\,h_{\rho_{1} \sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\quad\times\,\kappa^{n}\,\overline{\psi}(p_{2})\left[g_{\rm s}\, \gamma^{m}\,T^{a}\,\left(\sqrt{-g}\,{\epsilon_{m}}^{\mu}\right)^{\rho_{1}\sigma _{1}\cdots\rho_{n}\sigma_{n}}\right]\psi(p_{1})\,A_{\mu}^{a}(k)\,.\end{split} \tag{57}\]
This expression produces the following Feynman rule:
[Equation (58): the graviton-dressed fermion–gluon vertex rule corresponding to expansion (57), given diagrammatically in the original.]
The perturbative expansion for the term cubic in gauge vectors reads:
\[\begin{split}&\int d^{4}x\sqrt{-g}\left(-g_{\rm s}\right)f^{abc} \,g^{\mu\nu}g^{\alpha\beta}\,\partial_{\mu}A_{\alpha}^{a}A_{\nu}^{b}A_{\beta} ^{c}\\ &=\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4 }p_{2}}{(2\pi)^{4}}\frac{d^{4}p_{3}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i} }{(2\pi)^{4}}(2\pi)^{4}\,\delta\left(p_{1}+p_{2}+p_{3}+\sum k_{i}\right)\,h_{ \rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\quad\times\,\kappa^{n}\,\left[\left(-i\,g_{\rm s}\right)f^{abc} \,(p_{1})_{\sigma}\,\left(\sqrt{-g}\,g^{\mu_{1}\mu_{3}}g^{\mu_{2}\sigma} \right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\right]\,A_{\mu_{1}}^{a} (p_{1})\,A_{\mu_{2}}^{b}(p_{2})\,A_{\mu_{3}}^{c}(p_{3})\,.\end{split} \tag{59}\]
This expression produces the following rule:
\[\begin{split}\rho_{n}\sigma_{n}&\qquad\mu_{3},c,p_{3}\\ \rho_{1}\sigma_{1}&\qquad\mu_{2},b,p_{2}\\ =&\;\kappa^{n}\,g_{\text{s}}\,f^{abc}\Big{[}\big{(}p_{1}-p_{2}\big{)}_{\sigma}\,\big{(}\sqrt{-g}\,g^{\mu_{1}\mu_{2}}g^{\mu_{3}\sigma}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\\ &+(p_{3}-p_{1})_{\sigma}\,\big{(}\sqrt{-g}\,g^{\mu_{1}\mu_{3}}g^{\mu_{2}\sigma}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}+(p_{2}-p_{3})_{\sigma}\,\big{(}\sqrt{-g}\,g^{\mu_{2}\mu_{3}}g^{\mu_{1}\sigma}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,\Big{]}.\end{split} \tag{60}\]
Lastly, the term describing the four-vector coupling has the following perturbative expansion:
\[\begin{split}&\int d^{4}x\sqrt{-g}\,\Big{(}-\frac{1}{4}g_{\text{s}}^{2}\Big{)}\,f^{amn}f^{aij}\,g^{\mu\nu}g^{\alpha\beta}\,A_{\mu}^{m}\,A_{\nu}^{i}\,A_{\alpha}^{n}\,A_{\beta}^{j}\\ &=\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\,\frac{d^{4}p_{2}}{(2\pi)^{4}}\,\frac{d^{4}p_{3}}{(2\pi)^{4}}\,\frac{d^{4}p_{4}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}\,(2\pi)^{4}\,\delta\,\Big{(}p_{1}+p_{2}+p_{3}+p_{4}+\sum k_{i}\Big{)}\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\quad\times\Bigg{(}-\frac{1}{4}\Bigg{)}\,g_{\text{s}}^{2}\kappa^{n}f^{amn}f^{aij}\,\big{(}\sqrt{-g}\,g^{\mu_{1}\mu_{3}}g^{\mu_{2}\mu_{4}}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,A_{\mu_{1}}^{m}(p_{1})A_{\mu_{2}}^{n}(p_{2})A_{\mu_{3}}^{i}(p_{3})A_{\mu_{4}}^{j}(p_{4}).\end{split} \tag{61}\]
This results in the following interaction rule:
[Equation (62): the graviton-dressed four-gluon vertex rule corresponding to expansion (61), given diagrammatically in the original.]
Finally, we shall turn to a discussion of the gauge fixing and the Faddeev-Popov ghosts. The Yang-Mills action (56) respects the following gauge transformations:
\[\begin{split}\delta\psi=& i\theta^{a}\,T^{a}\psi,\\ \delta A_{\mu}=& i\theta^{a}\,[T^{a},A_{\mu}]+\frac{1}{g_{\text{s}}}\partial_{\mu}\theta^{a}\,T^{a},\\ \delta A_{\mu}^{a}=&\frac{1}{g_{\text{s}}}\big{[}\partial_{\mu}\theta^{a}-g_{\text{s}}\,f^{abc}\,\theta^{b}\,A_{\mu}^{c}\big{]}\,.\end{split} \tag{63}\]
Here \(\theta^{a}\) are the gauge parameters. In the flat spacetime, one would use the standard Lorentz gauge fixing conditions
\[\partial^{\mu}A_{\mu}^{a}=0. \tag{64}\]
For the curved spacetime case, the standard derivative shall be replaced with the covariant derivative, so the Lorentz gauge fixing conditions read:
\[g^{\mu\nu}\nabla_{\mu}A_{\nu}^{a}=0. \tag{65}\]
We use this gauge fixing condition to introduce the Faddeev-Popov ghosts with the procedure discussed in the previous section. The introduction of this gauge fixing term will bring the kinetic part of the vector field to the same form obtained in the previous section.
The ghost action is defined by the Faddeev-Popov determinant obtained from the gauge fixing condition:
\[\det\left[\frac{\delta}{\delta\theta^{b}}\left\{g^{\mu\nu}\nabla_{\mu}A_{\nu}^{a} \right\}\right]=\det\left[\frac{1}{g_{\mathrm{s}}}g^{\mu\nu}\,\nabla_{\mu} \left(\delta^{ab}\nabla_{\nu}-g_{\mathrm{s}}\,f^{abc}\,A_{\nu}^{c}\right) \right]. \tag{66}\]
It results in the following action:
\[\mathcal{A}_{\mathrm{FP}}=\int d^{4}x\sqrt{-g}\left[-g^{\mu\nu}\,\nabla_{\mu}\overline{c}^{a}\nabla_{\nu}c^{a}+g_{\mathrm{s}}\,g^{\mu\nu}\nabla_{\mu}\overline{c}^{a}f^{abc}c^{b}A_{\nu}^{c}\right]. \tag{67}\]
The kinetic part of the action is similar to the case of a single massless vector field discussed in the previous section. The part of this action describing the interaction between ghosts, vectors, and gravitons admits the following perturbative expansion:
\[\begin{split}&\int d^{4}x\sqrt{-g}\left[g_{\mathrm{s}}\,\partial_{\mu}\overline{c}^{a}\,f^{abc}\,c^{b}\,A^{c\mu}\right]\\ =&\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\frac{d^{4}k}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}(2\pi)^{4}\delta\left(p_{1}+p_{2}+k+\sum k_{i}\right)\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\times\,i\,\kappa^{n}\,g_{\mathrm{s}}\,(p_{1})_{\nu}\,f^{abc}\,\overline{c}^{a}(p_{1})\,c^{b}(p_{2})\,\left(\sqrt{-g}\,g^{\mu\nu}\right)^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,A_{\mu}^{c}(k).\end{split} \tag{68}\]
This expression produces the following rule:
[The graviton-dressed ghost–gluon vertex rule, given diagrammatically in the original, is lost here, together with the opening of the discussion of gravity itself, where the naive gauge fixing condition (71), \(\partial_{\mu}h^{\mu\nu}-\frac{1}{2}\partial^{\nu}h=0\), treats \(h_{\mu\nu}\) as an ordinary gauge field.]
The situation is different for the consistent treatment of general relativity (or any other geometrical theory). Within the geometrical approach, gauge transformations are not introduced arbitrarily, but are related to coordinate frame transformations. This has two immediate implications. Firstly, within a geometrical theory gauge transformations are given by the so-called Lie derivatives:
\[\delta g_{\mu\nu}\stackrel{{\rm def}}{{=}}\mathcal{L}_{\zeta}g_{ \mu\nu}=\nabla_{\mu}\zeta_{\nu}+\nabla_{\nu}\zeta_{\mu}. \tag{72}\]
Here \(\mathcal{L}_{\zeta}\) is the Lie derivative with respect to an arbitrary vector field \(\zeta\) which plays the role of gauge parameters. Secondly, any suitable gauge fixing conditions must be expressed in terms of geometrical quantities. Because of this, gauge fixing conditions (71) are inconsistent with the geometrical approach and they cannot be imposed. Instead, we use the following gauge fixing conditions:
\[\mathcal{G}^{\mu}\stackrel{{\rm def}}{{=}}g^{\alpha\beta}\Gamma^{\mu}_{\alpha\beta}=0. \tag{73}\]
Together with the perturbative expansion (1), gauge fixing condition (73) generates the following infinite series:
\[\mathcal{G}^{\nu}=\frac{\kappa}{2}g^{\mu\nu}g^{\alpha\beta}\left[\partial_{ \alpha}h_{\beta\mu}+\partial_{\beta}h_{\alpha\mu}-\partial_{\mu}h_{\alpha\beta }\right]=\kappa\left[\partial_{\mu}h^{\mu\nu}-\frac{1}{2}\partial^{\nu}h \right]+\mathcal{O}(\kappa^{2}), \tag{74}\]
The leading term of this series reproduces (71). Within the geometric theory, this series cannot be truncated, so the ghost sector is defined by the whole infinite expansion.
The need to use gauge fixing condition (73) instead of (71) marks the difference between geometrical theories of gravity and a gauge theory of the \(h_{\mu\nu}\) tensor. The Faddeev-Popov prescription for the geometrical approach shall be constructed as follows. Firstly, we shall note that the gauge fixing condition \(\mathcal{G}^{\mu}\) defined by (73) is a vector with mass dimension \(+1\). Consequently, the general relativity action with the corresponding gauge fixing term shall be equipped with an additional dimensional parameter:
\[\mathcal{A}_{\rm H+gf}=\int d^{4}x\sqrt{-g}\left[-\frac{2}{\kappa^{2}}R+\frac {\epsilon}{2\,\kappa^{2}}g_{\mu\nu}\,\mathcal{G}^{\mu}\mathcal{G}^{\nu}\right]. \tag{75}\]
Secondly, the corresponding Faddeev-Popov ghosts are also vectors. The structure of their action is defined by the variation of the gauge fixing term (73):
\[\delta G^{\mu}=\mathcal{L}_{\zeta}\left[g^{\alpha\beta}\,\Gamma^{\mu}_{\alpha \beta}\right]=\square\zeta^{\mu}-2\,\Gamma^{\mu}_{\alpha\beta}\,\nabla^{ \alpha}\zeta^{\beta}+R^{\mu}{}_{\nu}\zeta^{\nu} \tag{76}\]
with \(R_{\mu\nu}\) being the Ricci tensor. Consequently, the ghost action reads:
\[\mathcal{A}_{\rm ghost}=\int d^{4}x\sqrt{-g}\left[-g^{\alpha\beta}g^{\mu\nu}\nabla_{\alpha}\overline{c}_{\mu}\nabla_{\beta}c_{\nu}-2\,\Gamma^{\mu}_{\alpha\beta}\,\overline{c}_{\mu}\,\nabla^{\alpha}c^{\beta}+R_{\mu\nu}\,\overline{c}^{\mu}\,c^{\nu}\right].\]
In all other respects, the treatment of the Faddeev-Popov ghosts remains the same.
The structure of Feynman rules for gravity in this gauge is derived via the standard perturbative expansion. The structure of graviton interactions is given by action (75):
\[\begin{split}\mathcal{A}_{\rm H+gf}&=\int d^{4}x \sqrt{-g}\left[-\frac{2}{\kappa^{2}}R+\frac{\epsilon}{2\,\kappa^{2}}g_{\mu \nu}\,g^{\alpha\beta}\,g^{\rho\sigma}\,\Gamma^{\mu}_{\alpha\beta}\,\Gamma^{ \nu}_{\rho\sigma}\right]=\\ &=\int d^{4}x\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{\rho\sigma} \left(-\frac{2}{\kappa^{2}}\right)\left[\Gamma_{\alpha\mu\rho}\Gamma_{\sigma \nu\beta}-\Gamma_{\alpha\mu\nu}\Gamma_{\rho\beta\sigma}-\frac{\epsilon}{4} \Gamma_{\mu\alpha\beta}\Gamma_{\nu\rho\sigma}\right]\\ &=-\frac{1}{2}\int d^{4}x\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{ \rho\sigma}\Bigg{[}\partial_{\mu}h_{\alpha\beta}\partial_{\nu}h_{\rho\sigma}- \partial_{\mu}h_{\alpha\rho}\partial_{\nu}h_{\beta\sigma}+2\,\partial_{\mu}h_ {\alpha\rho}\partial_{\beta}h_{\nu\sigma}-2\,\partial_{\mu}h_{\nu\alpha} \partial_{\beta}h_{\rho\sigma}\\ &\hskip 113.811024pt-\epsilon\,\left(\partial_{\mu}h_{\nu\rho} \partial_{\alpha}h_{\beta\sigma}-\partial_{\mu}h_{\alpha\beta}\partial_{\rho}h _{\sigma\nu}+\frac{1}{4}\,\partial_{\mu}h_{\alpha\beta}\partial_{\nu}h_{\rho \sigma}\right)\Bigg{]}.\end{split} \tag{77}\]
It admits the following perturbative expansion:
\[\begin{split}\mathcal{A}_{\rm H+gf}=&\sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}\,(2\pi)^{4}\,\delta\Big{(}p_{1}+p_{2}+\sum k_{i}\Big{)}\,h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{\rho_{n}\sigma_{n}}(k_{n})\\ &\times(2\,\kappa^{n})\,\big{(}\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{\rho\sigma}\big{)}^{\rho_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,(p_{1})_{\lambda_{1}}(p_{2})_{\lambda_{2}}\,h_{\mu_{1}\nu_{1}}(p_{1})h_{\mu_{2}\nu_{2}}(p_{2})\\ &\times\Bigg{[}(\Gamma_{\alpha\mu\rho})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\sigma\nu\beta})^{\lambda_{2}\mu_{2}\nu_{2}}-(\Gamma_{\alpha\mu\nu})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\rho\beta\sigma})^{\lambda_{2}\mu_{2}\nu_{2}}-\frac{\epsilon}{4}(\Gamma_{\mu\alpha\beta})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\nu\rho\sigma})^{\lambda_{2}\mu_{2}\nu_{2}}\Bigg{]}.\end{split} \tag{78}\]
The complete expression for the graviton vertex is given by the following formula:
\[\mu_{n}\nu_{n},p_{n}\] \[\mu_{2}\nu_{2},p_{2}\] \[\mu_{3}\nu_{3},p_{3}\] \[\mu_{1}\nu_{1},p_{1}\] \[= i\,2\,\kappa^{n-2}\,\left(\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{\rho\sigma}\right)^{\mu_{3}\nu_{3}\cdots\mu_{n}\nu_{n}}\,(p_{1})_{\lambda_{1}}(p_{2})_{\lambda_{2}}\] \[\times\left[\,(\Gamma_{\alpha\mu\rho})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\sigma\nu\beta})^{\lambda_{2}\mu_{2}\nu_{2}}-(\Gamma_{\alpha\mu\nu})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\rho\beta\sigma})^{\lambda_{2}\mu_{2}\nu_{2}}-\frac{\epsilon}{4}(\Gamma_{\mu\alpha\beta})^{\lambda_{1}\mu_{1}\nu_{1}}\,(\Gamma_{\nu\rho\sigma})^{\lambda_{2}\mu_{2}\nu_{2}}\,\right]\] \[+\text{permutations}.\]
Here the summation is performed over all possible permutations of graviton parameters \(\{\mu_{i}\,\nu_{i}\,p_{i}\}\).
The ghost action is treated similarly.
\[\mathcal{A}_{\text{ghost}}= \int d^{4}x\sqrt{-g}\left[-g^{\alpha\beta}g^{\mu\nu}\nabla_{ \alpha}\overline{c}_{\mu}\nabla_{\beta}c_{\nu}-2\,\Gamma^{\mu}_{\alpha\beta} \,\overline{c}_{\mu}\,\nabla^{\alpha}c^{\beta}+R_{\mu\nu}\,\overline{c}^{\mu} \,c^{\nu}\right] \tag{80}\] \[= \int d^{4}x\sqrt{-g}\left[-g^{\mu\nu}g^{\alpha\beta}\,\partial_{ \alpha}\overline{c}_{\mu}\,\partial_{\beta}c_{\nu}\right]\] \[+\int d^{4}x\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}g^{\rho\sigma} \left[\Gamma_{\beta\rho\alpha}\partial_{\sigma}\overline{c}_{\mu}c_{\nu}- \Gamma_{\alpha\rho\beta}\,\overline{c}_{\mu}\partial_{\sigma}c_{\nu}+\partial _{\rho}\Gamma_{\sigma\alpha\beta}\,\overline{c}_{\mu}\,c_{\nu}-\partial_{ \alpha}\Gamma_{\rho\beta\sigma}\overline{c}_{\mu}c_{\nu}\right]\] \[+\int d^{4}x\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}g^{\rho\sigma}g^{ \lambda\tau}\left[\Gamma_{\rho\alpha\lambda}\Gamma_{\sigma\beta\tau}-\Gamma_{ \rho\alpha\beta}\Gamma_{\sigma\lambda\tau}+\Gamma_{\alpha\rho\lambda}\Gamma_{ \beta\sigma\tau}\right]\overline{c}_{\mu}c_{\nu}.\]
It has the following perturbative expansion:
\[\mathcal{A}_{\text{ghost}}= \sum_{n=0}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p_{ 2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}\,(2\pi)^{4}\delta \big{(}p_{1}+p_{2}+\sum k_{i}\big{)}h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{ \rho_{n}\sigma_{n}}(k_{n})\,\overline{c}_{\mu}(p_{1})c_{\nu}(p_{2}) \tag{81}\] \[\times\kappa^{n}\,\left(\sqrt{-g}\,g^{\mu\nu}g^{\alpha\beta}g^{ \rho\sigma}\right)^{\mu_{1}\sigma_{1}\cdots\rho_{n}\sigma_{n}}\,(p_{1})_{ \alpha}(p_{2})_{\beta}\] \[+\sum_{n=1}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4}p _{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}\,(2\pi)^{4}\delta \big{(}p_{1}+p_{2}+\sum k_{i}\big{)}h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{ \rho_{n}\sigma_{n}}(k_{n})\,\overline{c}_{\mu}(p_{1})c_{\nu}(p_{2})\] \[\times\kappa^{n}(-1)\left(\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}g^{ \rho\sigma}\right)^{\rho_{2}\sigma_{2}\cdots\rho_{n}\sigma_{n}}\,(k_{1})_{ \lambda}\left[(p_{1})_{\sigma}\left(\Gamma_{\beta\rho\alpha}\right)^{\lambda \rho_{1}\sigma_{1}}-(p_{2})_{\sigma}\left(\Gamma_{\alpha\rho\beta}\right)^{ \lambda\rho_{1}\sigma_{1}}\right.\] \[\left.\hskip 142.26378pt+(k_{1})_{\rho}\left(\Gamma_{\sigma\alpha \beta}\right)^{\lambda\rho_{1}\sigma_{1}}-(k_{1})_{\alpha}\left(\Gamma_{\rho \beta\sigma}\right)^{\lambda\rho_{1}\sigma_{1}}\right]\] \[+\sum_{n=2}^{\infty}\int\frac{d^{4}p_{1}}{(2\pi)^{4}}\frac{d^{4} p_{2}}{(2\pi)^{4}}\prod_{i=1}^{n}\frac{d^{4}k_{i}}{(2\pi)^{4}}\,(2\pi)^{4}\delta \big{(}p_{1}+p_{2}+\sum k_{i}\big{)}h_{\rho_{1}\sigma_{1}}(k_{1})\cdots h_{ \rho_{n}\sigma_{n}}(k_{n})\,\overline{c}_{\mu}(p_{1})c_{\nu}(p_{2})\] \[\times\kappa^{n}(-1)\left(\sqrt{-g}\,g^{\mu\alpha}g^{\nu\beta}g^{ \rho\sigma}g^{\lambda\tau}\right)^{\rho_{3}\sigma_{3}\cdots\rho_{n}}\,(k_{1})_{ \lambda_{1}}(k_{2})_{\lambda_{2}}\left[\left(\Gamma_{\rho\alpha\lambda}\right)^{ \lambda_{1}\rho_{1}\sigma_{1}}\left(\Gamma_{\sigma\beta\tau}\right)^{\lambda_{2} \rho_{2}\sigma_{2}}\right.\] \[\left.\hskip 142.26378pt-\left(\Gamma_{\rho\alpha\beta}\right)^{ \lambda_{1}\rho_{1}\sigma_{1}}\left(\Gamma_{\sigma\lambda\tau}\right)^{\lambda_{2} \rho_{2}\sigma_{2}}+\left(\Gamma_{\alpha\rho\lambda}\right)^{\lambda_{1}\rho_{1} \sigma_{1}}\left(\Gamma_{\beta\sigma\tau}\right)^{\lambda_{2}\rho_{2}\sigma_{2}} \right].\]
The complete expression describing graviton-ghost vertices reads:
[Equation (82): the complete graviton–ghost vertex rule, given diagrammatically in the original.]
Propagators for ghosts and gravitons are derived by the standard procedure. The ghost propagator is given by the following expression.
\[\mu\ \cdots\cdots\ \nu\ \ =i\,\frac{\eta_{\mu\nu}}{k^{2}}\,. \tag{83}\]
The graviton propagator contains the gauge fixing parameter \(\epsilon\). The propagator corresponds to the part of the microscopic action quadratic in perturbations:
\[\int d^{4}x\sqrt{-g}\left[-\frac{2}{\kappa^{2}}R+\frac{\epsilon}{2\,\kappa^{2} }\,\mathcal{G}_{\mu}\mathcal{G}^{\mu}\right]=\int d^{4}x\left[-\frac{1}{2}h^{ \mu\nu}\mathcal{D}_{\mu\nu\alpha\beta}(\epsilon)\,\Box h^{\alpha\beta}\right]+ \mathcal{O}\left(\kappa^{1}\right). \tag{84}\]
In the momentum representation the operator \(\mathcal{D}\) is given in terms of the Nieuwenhuizen operators [32, 33]
\[\mathcal{D}_{\mu\nu\alpha\beta}(\epsilon)=\frac{3\epsilon-8}{4}P^{0}_{\mu\nu \alpha\beta}+\frac{\epsilon}{2}\,P^{1}_{\mu\nu\alpha\beta}+P^{2}_{\mu\nu \alpha\beta}-\frac{\epsilon}{4}\,\overline{\mathcal{P}}^{0}_{\mu\nu\alpha \beta}-\frac{\epsilon}{4}\,\overline{\mathcal{P}}^{0}_{\mu\nu\alpha\beta}\,. \tag{85}\]
Only \(P^{0}\) and \(P^{2}\) operators are gauge invariant, so the operator is invertible if \(\epsilon\neq 0\). The inverse operator reads:
\[\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(\epsilon)=-\frac{1}{2}P^{0}_{\mu\nu \alpha\beta}+\frac{2}{\epsilon}P^{1}_{\mu\nu\alpha\beta}+P^{2}_{\mu\nu\alpha \beta}-\frac{3\,\epsilon-8}{2\,\epsilon}\,\overline{\mathcal{P}}^{0}_{\mu\nu \alpha\beta}-\frac{1}{2}\,\overline{\mathcal{P}}^{0}_{\mu\nu\alpha\beta}. \tag{86}\]
Therefore, in an arbitrary gauge the graviton propagator is given by the following expression:
[Equation (87): the graviton propagator in an arbitrary gauge, expressed through \(\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(\epsilon)\) and given diagrammatically in the original.]
We will consider the general case within this paper. However, on practical grounds, the simplest choice of the gauge fixing parameter is \(\epsilon=2\). With this value of the gauge fixing parameter the operator \(\mathcal{D}^{-1}\) takes an extremely simple form:
\[\mathcal{D}^{-1}_{\mu\nu\alpha\beta}(2)=\frac{1}{2}\big{[}\eta_{\mu\alpha} \eta_{\nu\beta}+\eta_{\mu\beta}\eta_{\nu\alpha}-\eta_{\mu\nu}\eta_{\alpha\beta }\big{]}\,. \tag{88}\]
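The gauge choice \(\epsilon=2\) also admits a quick numerical cross-check: in \(d=4\) the numerator (88) squares to the identity on symmetric rank-2 tensors. The sketch below is written in Python/NumPy purely as an illustration and is independent of FeynGrav itself (which is a Wolfram Language package); the array names are ours.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric; numerically its own inverse

# Identity on symmetric rank-2 tensors: I_{mnab} = (eta_{ma} eta_{nb} + eta_{mb} eta_{na}) / 2
I_sym = 0.5 * (np.einsum('ma,nb->mnab', eta, eta) + np.einsum('mb,na->mnab', eta, eta))

# Eq. (88): the propagator numerator for the gauge fixing parameter epsilon = 2
D_inv = I_sym - 0.5 * np.einsum('mn,ab->mnab', eta, eta)

# Compose two copies, raising the contracted index pair with the metric
product = np.einsum('mnab,ac,bd,cdrs->mnrs', D_inv, eta, eta, D_inv)

# In d = 4 the epsilon = 2 numerator is idempotent on symmetric tensors
print(np.allclose(product, I_sym))   # True
```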
## 5 FeynGrav v2
FeynGrav, a package for computing Feynman rules for gravity, has been updated with new features and improvements. The latest version includes support for the interaction rules presented in the sections above, as well as additional capabilities to enhance its functionality. The code is publicly available [34].
Firstly, the package structure has been changed. The main file "FeynGrav.wl" contains the code providing tools to operate with Feynman rules for gravity. The package is based on FeynCalc and requires it to run [2, 3, 4]. The package operates with pre-generated libraries which contain data on gravitational interaction. The folder "Rules" contains realizations of both interaction rules and supplementary functions. The folder "Libs" contains files with evaluated expressions for the interaction rules. The folder also contains a script "FeynGravLibrariesGenerator.wl" which generates those libraries. FeynGrav is distributed with libraries for gravitational interaction up to \(\mathcal{O}\left(\kappa^{3}\right)\) order. The previous version of FeynGrav generated expressions for interaction vertices on a user's call which negatively affected the performance.
Secondly, the package is distributed with an example file "FeynGrav_Examples.nb". The file contains the following examples:
* Realization of the Nieuwenhuizen operators [32, 33];
* Calculation of matrix element for on-shell tree-level \(2\to 2\) graviton scattering that agrees with [35].
* Calculation of various contributions to the graviton self-energy at the one-loop level.
* One-loop matter polarization operators induced by gravity.
* One-loop vertex function for a graviton-scalar vertex.
Thirdly, the package contains a few supplementary functions that are often used in quantum gravity calculations. These are the propagator for a scalar field (14), the propagator for the Proca field (massive vector field) (26), and the Nieuwenhuizen operators [32, 33]. The Nieuwenhuizen operators are a generalization of the standard gauge projectors; they are discussed in many other publications, so we will not dwell on them further. It must be noted that the original Nieuwenhuizen operators are defined in \(d=4\), where they have a few special features. Within FeynGrav these operators are given in arbitrary \(d\). This is done for the sake of consistency, as most of the tools provided by FeynCalc are designed to operate with arbitrary \(d\).
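For readers who want to experiment outside the Wolfram ecosystem, the sketch below (Python/NumPy, illustrative only and independent of the FeynGrav implementation) realizes the standard Nieuwenhuizen projectors \(P^{0}\), \(P^{1}\), \(P^{2}\) and \(\overline{P}^{0}\) for arbitrary \(d\) and checks idempotency and completeness on symmetric rank-2 tensors; the off-diagonal transfer operator and any FeynGrav-specific normalizations are omitted.

```python
import numpy as np

d = 4
eta = np.diag([1.0] + [-1.0] * (d - 1))     # mostly-minus Minkowski metric
p = np.array([2.0, 1.0] + [0.0] * (d - 2))  # generic momentum with p^2 != 0
p_low = eta @ p
p2 = p @ eta @ p

# Transverse and longitudinal projectors (lower indices)
omega = np.einsum('m,n->mn', p_low, p_low) / p2
theta = eta - omega

def sym(a, b):
    """(a_{ma} b_{nb} + a_{mb} b_{na}) / 2 as a rank-4 array."""
    return 0.5 * (np.einsum('ma,nb->mnab', a, b) + np.einsum('mb,na->mnab', a, b))

P2 = sym(theta, theta) - np.einsum('mn,ab->mnab', theta, theta) / (d - 1)
P1 = sym(theta, omega) + sym(omega, theta)
P0 = np.einsum('mn,ab->mnab', theta, theta) / (d - 1)
P0_bar = np.einsum('mn,ab->mnab', omega, omega)

def compose(A, B):
    """Contract two rank-4 operators over one index pair (raised with the metric)."""
    return np.einsum('mnab,ac,bd,cdrs->mnrs', A, eta, eta, B)

I_sym = sym(eta, eta)
for P in (P2, P1, P0, P0_bar):
    assert np.allclose(compose(P, P), P)           # each operator is a projector
assert np.allclose(P2 + P1 + P0 + P0_bar, I_sym)   # completeness on symmetric tensors
print("Nieuwenhuizen projector checks passed")
```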
The new version of FeynGrav also includes interaction rules for matter with \(s=0\),\(1\), and \(1/2\) of arbitrary mass, as well as interaction rules for \(SU(N)\) Yang-Mills that are consistent with the realization used within FeynCalc. The complete list of commands for interaction rules is given in Appendix A.
Lastly, the gravitational sector of the new FeynGrav version supports an arbitrary gauge fixing parameter present in (75). The package is initiated with the corresponding parameter being unspecified and entering all expressions as a constant. At any point, the user is free to fix this parameter and proceed with the calculations. As was noted before, from the practical point of view \(\epsilon=2\) gauge is the simplest because the graviton propagator takes a much simpler form.
All other features of FeynGrav remained unchanged from the previous version. They are described in detail in the previous paper [1], so we will not discuss them further. However, we present some calculations that provide a suitable illustration of FeynGrav's applicability.
### Example of polarization operators
To demonstrate the applicability of FeynGrav, we can use it to perform some typical quantum field theory calculations. Let's start with the calculation of various contributions to the graviton self-energy. In the following calculations, we express all loop integrals in terms of the Passarino-Veltman integrals [36]. Since the calculations are performed within FeynCalc, we omit all \(A_{0}(0)\) integrals but preserve \(A_{0}(m^{2})\) integrals.
Graviton polarization operator induced by a single scalar field:
\[\begin{split} i\,\Pi^{s=0,m_{s}}_{\mu\nu\alpha\beta}(p)& =\mu\nu\parbox{142.367913pt}{\includegraphics[width=142.367913pt]{fig/FeynGrav. pdf}}\alpha\beta+\mu\nu\parbox{142.367913pt}{\includegraphics[width=142.367913pt]{fig/FeynGrav. pdf}}\alpha\beta\\ =&\kappa^{2}\ i\,\pi^{2}B_{0}(p^{2},m_{\rm s}^{2},m_{ \rm s}^{2})\left[\frac{1}{12}\left(p^{2}+2\,m_{\rm s}^{2}\right)^{2}P^{0}_{\mu \nu\alpha\beta}+\frac{1}{120}\left(p^{2}-4\,m_{\rm s}^{2}\right)^{2}P^{2}_{ \mu\nu\alpha\beta}\right]\\ &-\kappa^{2}\ i\,\pi^{2}A_{0}(m_{\rm s}^{2})\left[\frac{1}{6} \left(p^{2}+2\,m_{\rm s}^{2}\right)P^{0}_{\mu\nu\alpha\beta}+\frac{1}{60} \left(p^{2}+8\,m_{\rm s}^{2}\right)P^{2}_{\mu\nu\alpha\beta}\right].\end{split} \tag{89}\]
Graviton polarization operator induced by a single Dirac field:
[Equations (90)–(95): the remaining graviton and matter polarization operators, given in the original in terms of Passarino–Veltman integrals; the expressions are not recoverable from this extraction.]
Polarization operator for a Dirac field:
[Equation (96): the gravity-induced polarization operator for a Dirac field, given diagrammatically in the original.]
Polarization operator for a Proca field:
[Equation (97): the gravity-induced polarization operator for a Proca field, given diagrammatically in the original.]
Here the following definitions of gauge projectors are used:
\[\theta_{\mu\nu}(p)=\eta_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}},\qquad\qquad\qquad \qquad\omega_{\mu\nu}(p)=\frac{p_{\mu}p_{\nu}}{p^{2}}. \tag{98}\]
The polarization operator for a massless vector field reads:
\[\begin{split}\Pi^{\mu=1,m=0}_{\mu\nu}=&\quad\mu \quad\raisebox{-14.226378pt}{\includegraphics[scale=0.5]{fig/Polarization_2.pdf}}\,\,\, \nu\quad+\quad\mu\quad\raisebox{-14.226378pt}{\includegraphics[scale=0.5]{fig/Polarization_3.pdf}}\,\,\nu\\ =&-\frac{1}{6}\kappa^{2}\,i\,\pi^{2}\,B_{0}(p^{2},0, 0)\,p^{4}\,\theta_{\mu\nu}(p)-\frac{5}{2}\kappa^{2}\,i\,\pi^{2}\,B_{0}(p^{2}, 0,0)\,p^{4}\,\omega_{\mu\nu}(p).\end{split} \tag{99}\]
Lastly, for the \(SU(N)\) Yang-Mills theory there are polarization operators for gluons:
[Equation (100): the gluon polarization operators, given diagrammatically in the original.]
and polarization operators for quarks:
[Equation (101): the quark polarization operators, given diagrammatically in the original.]
### Example of a vertex operator
Let us briefly consider another example of calculations that can be performed within FeynGrav. For the sake of illustration, we can address a one-loop scalar-graviton vertex function:
[Equation (102): the one-loop scalar–graviton vertex function, given diagrammatically in the original.]
A detailed discussion of this function lies far beyond the scope of this paper and will be presented elsewhere. Here we only consider a very specific limit of this function that was already studied in [37] (see also [38, 39, 40] for a detailed discussion). We consider the case when both scalars are placed on the mass shell \(p_{1}^{2}=p_{2}^{2}=m^{2}\) and the graviton four-momentum only has spatial components, \(k^{2}=-(\vec{k})^{2}\), which are small. This setup allows one to recover the classical limit of the theory.
The example file "FeynGrav_Examples.nb" contains expressions calculating the amplitudes given above. Using FeynCalc tools we separate the Passarino-Veltman integrals and keep the terms relevant in the \(\left|\vec{k}\right|\to 0\) limit:
\[i\,\Gamma_{\mu\nu}(k,p_{1},p_{2})\to i\,\pi^{2}\,\kappa^{3}\,(p_{1}+p_{2})_{ \mu}(p_{1}+p_{2})_{\nu}\left[\frac{101}{96}\ln k^{2}+\frac{\pi^{2}}{32}\frac{m }{k}\right]. \tag{103}\]
This expression is in agreement with the previous studies in [37, 39, 40], where the leading-order contributions are non-analytic functions that correspond to power-law corrections to the Newtonian potential.
### Example of a tree-level scattering amplitude
Lastly, we want to briefly touch upon the implementation of FeynGrav for scattering amplitudes. In full analogy with the previous case, a more detailed discussion of scattering amplitudes lies far beyond the scope of this paper. Because of this, we will only consider a single tree-level scattering amplitude for two scalars of different masses.
\[i\,\mathcal{M}(p_{1},p_{2},p_{3},p_{4},m_{1},m_{2})= \tag{104}\]
It is more convenient to express this amplitude in terms of the Mandelstam variables [41]:
\[s=(p_{1}+p_{2})^{2}\,, t=(p_{1}+p_{3})^{2}\,, u=(p_{1}+p_{4})^{2}. \tag{105}\]
The scattering amplitude reads:
\[i\,\mathcal{M}=-i\,\frac{\kappa^{2}}{4\,t}\,\Big{(}u^{2}+t\,u-(t+2\,u)(m_{1}^{ 2}+m_{2}^{2})+m_{1}^{4}+m_{2}^{4}\Big{)}. \tag{106}\]
In full analogy with the previous case, it is convenient to consider a quasi-static limit:
\[s=(m_{1}+m_{2})^{2}\,, t=-\,(\vec{p})^{2}\to 0\,. \tag{107}\]
This limit recovers the part of the amplitude that is leading in the weak-interaction regime, which reads
\[\mathcal{M}\sim\frac{\kappa^{2}}{2}\,\frac{m_{1}^{2}\,m_{2}^{2}}{p^{2}}\,. \tag{108}\]
In full agreement with [37, 38, 39, 40] the recovered contribution corresponds to the leading-order term in the Newtonian potential.
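The quasi-static limit above is also easy to verify symbolically, independently of FeynGrav; the short SymPy sketch below (illustrative only, with variable names of our choosing) reproduces Eq. (108) from Eq. (106).

```python
import sympy as sp

kappa, t, m1, m2 = sp.symbols('kappa t m1 m2', positive=True)

# Mandelstam relation for two scalars of masses m1, m2: s + t + u = 2 m1^2 + 2 m2^2.
# Quasi-static kinematics of Eq. (107): s = (m1 + m2)^2, t = -(p_vec)^2 -> 0.
s = (m1 + m2)**2
u = 2*m1**2 + 2*m2**2 - s - t

# Tree-level amplitude of Eq. (106), with the overall factor of i stripped off.
M = -kappa**2/(4*t) * (u**2 + t*u - (t + 2*u)*(m1**2 + m2**2) + m1**4 + m2**4)

# Residue of the 1/t pole: lim_{t->0} t*M.
residue = sp.simplify(sp.limit(t*M, t, 0))
print(residue)   # -kappa**2*m1**2*m2**2/2

# Hence M -> kappa^2 m1^2 m2^2 / (2 p^2) for t = -p^2, reproducing Eq. (108).
```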
## 6 Conclusions
In this paper, we present the latest developments of FeynGrav, which offers a simple and efficient way to derive Feynman's rules for gravity. Building on our previous work in [1], where we derived the Feynman rules for gravitational interaction with massless matter, we extend the formalism to cover matter with arbitrary mass. We also revisit the implementation of the Faddeev-Popov prescription within the formalism and derive the corresponding rules for the Faddeev-Popov ghosts present in the theory of a single massless vector field. Additionally, we implement the formalism to the \(SU(N)\) Yang-Mills model and obtain all the required interaction rules. These interaction rules are sufficient for calculating gravitational corrections to standard model processes, which opens up new opportunities to search for relevant gravitational effects within the standard model.
The explicit examples of tree and loop-level calculations performed with FeynGrav demonstrate the usefulness of the presented rules, and the potential for further applications of FeynGrav for scattering amplitudes is promising. The contemporary methods of scattering amplitude calculations are well-developed for on-shell amplitudes [42, 43, 44]. FeynGrav provides a way to calculate off-shell scattering amplitudes, which is an important step toward studying higher-order effects in gravitational interactions.
Future developments of FeynGrav will focus on several directions. First, we plan to implement non-minimal interactions, particularly non-minimal interactions with scalar fields [18, 19, 20]. This will allow us to study their influence on quantum gravity behavior. Secondly, we aim to improve the performance of the package, as quantum gravitational calculations are notoriously complicated due to a large number of terms and Lorentz indices involved. We plan to explore techniques such as parallel computations to increase FeynGrav's performance. Lastly, we intend to extend the formalism to supersymmetric models, which will provide another effective tool to operate with supergravity scattering amplitudes.
## Acknowledgment
The work was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS".
|
2310.00174 | ADMET property prediction through combinations of molecular fingerprints | While investigating methods to predict small molecule potencies, we found
random forests or support vector machines paired with extended-connectivity
fingerprints (ECFP) consistently outperformed recently developed methods. A
detailed investigation into regression algorithms and molecular fingerprints
revealed gradient-boosted decision trees, particularly CatBoost, in conjunction
with a combination of ECFP, Avalon, and ErG fingerprints, as well as 200
molecular properties, to be most effective. Incorporating a graph neural
network fingerprint further enhanced performance. We successfully validated our
model across 22 Therapeutics Data Commons ADMET benchmarks. Our findings
underscore the significance of richer molecular representations for accurate
property prediction. | James H. Notwell, Michael W. Wood | 2023-09-29T22:39:18Z | http://arxiv.org/abs/2310.00174v1 | # ADMET property prediction through combinations of molecular fingerprints
###### Abstract
While investigating methods to predict small molecule potencies, we found random forests or support vector machines paired with extended-connectivity fingerprints (ECFP) consistently outperformed recently developed methods. A detailed investigation into regression algorithms and molecular fingerprints revealed gradient-boosted decision trees, particularly CatBoost, in conjunction with a combination of ECFP, Avalon, and ErG fingerprints, as well as 200 molecular properties, to be most effective. Incorporating a graph neural network fingerprint further enhanced performance. We successfully validated our model across 22 Therapeutics Data Commons ADMET benchmarks. Our findings underscore the significance of richer molecular representations for accurate property prediction.
## 1 Summary
We set out to find a potency predictor for small molecules that inhibit a novel class A GPCR (subsequently referred to as GPRX) where no crystal structure was available. Excited by the developments in applying different neural network architectures to molecular property prediction, we applied deep message passing neural networks (Chemprop [6]), text-based transformers (ChemBERTa [4] and ChemBERTa2 [2]), and graph neural networks (Grover [17]) to more than 300 molecules we had synthesized for this program. Unfortunately, even models that leveraged pretraining on large molecular datasets did not outperform random forests or support vector machines coupled with extended-connectivity fingerprints (ECFP) [16].
This observation led us to investigate different regression algorithms coupled with different molecular fingerprints. We learned several things from this exercise: 1. Gradient-boosted decision trees (GBDT) performed better than other algorithms, and CatBoost [15] performed slightly but consistently better than other GBDT implementations, 2. ECFP and Avalon fingerprints [5] performed better than other molecular fingerprints available through RDKit [12], and modifying the parameters of these fingerprints had little effect on potency prediction error, and 3. Combining the ECFP and Avalon fingerprints with 200 molecular properties, e.g. number of rings, molecular weight, etc., performed better than the ECFP and Avalon fingerprints on their own.
Having now synthesized and assayed hundreds of additional inhibitors of GPRX, we wanted to see if we could get more performance out of our best-performing model through hyperparameter optimization. To ensure the generalization of our model, we used a time-split validation [18] and set aside all molecules profiled after a given date, as well as those being synthesized, to serve as a test set. With the remaining molecules, we performed a scaffold split, reserving 20% of the molecules for model validation. While CatBoost was the most accurate GBDT regressor in our testing, we found LightGBM [10] easier to work with when combined with the Optuna [3] hyperparameter optimization framework, including having its own tuner [14]. We explored a number of cross-validation strategies, including grouping molecules with similar scaffolds or potencies into the same or different folds, as well as randomizing across these properties, but observed consistent overfitting across extensive hyperparameter searching. When evaluating the validation set, we found it difficult to outperform
LightGBM or CatBoost with default parameters, with the sole exception of modifying the random strength in CatBoost.
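A minimal, illustrative sketch of such a search is shown below; the data arrays are placeholders standing in for our featurized molecules and potencies, and the search space shown is not the exact configuration we used.

```python
import lightgbm as lgb
import numpy as np
import optuna
from sklearn.metrics import mean_squared_error

# Placeholder data; in practice X_train/X_valid come from the scaffold split
# of pre-cutoff molecules described above.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 64)), rng.normal(size=200)
X_valid, y_valid = rng.normal(size=(50, 64)), rng.normal(size=50)

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 8, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }
    model = lgb.LGBMRegressor(n_estimators=500, **params)
    model.fit(X_train, y_train)
    return mean_squared_error(y_valid, model.predict(X_valid))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```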
The idea of combining molecular features is not new. Concatenating featurizers is supported in the molfeat package [13] and was used in the top submission [11] for the 1st EUOS/SLAS Joint Challenge to predict compound solubility [20]. Intrigued by our initial results, we leveraged molfeat featurizers to explore all combinations of 1, 2, or 3 featurizers. This search highlighted a third type of fingerprint that performed well when used in conjunction with ECFP and Avalon fingerprints: extended reduced graph approach (ErG) [19]. Interestingly, the neural network fingerprints, regardless of architecture type, performed worse compared to traditional molecular fingerprints. Having performed extensive searches across model space, we applied our model to the test portion of our time split and observed excellent generalization with error similar to the validation set. We wondered if our model's performance would extend to other molecular property prediction tasks.
The Therapeutics Data Commons (TDC) has compiled 22 benchmarks across different ADMET property prediction tasks, such as aqueous solubility or cytochrome P450 enzyme inhibition [9]. To further probe its generalization, we applied our final model, consisting of a CatBoost classifier or regressor coupled with molecular representations that were a combination of ECFP, Avalon, and ErG fingerprints, as well as 200 molecular properties, to these benchmarks. When doing so, our model achieved top-1 performance in 6 of 22 benchmarks and top-3 performance in 16 of 22 benchmarks (Figure 1). To test the limits of combining fingerprints, we added a graph isomorphism network (GIN - supervised masking variant [7]) fingerprint to our molecular representation, and achieved top-1 performance in 11 of 22 benchmarks and top-3 performance in 19 of 22 benchmarks (Figure 1).
than common implementations of ECFP fingerprints (ECFP counts (length 1024) + Avalon counts (length 1024) + ErG (length 315) + molecular properties (length 200) + GIN supervised masking (length 300) = length 2863). We modify a single CatBoost hyperparameter (random strength), although qualitatively similar performance can be achieved with other GBDT implementations, e.g. LightGBM, with default parameters. In comparison with other approaches, such as those from Huang et al. [8], we achieve strong results with a single model.
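A minimal sketch of assembling this kind of combined representation with RDKit and fitting a CatBoost model is given below. It is illustrative only and is not the released code (see the repository linked below): bit-vector fingerprints stand in for the count variants, the descriptor set is whatever the installed RDKit version provides, the random strength value is a placeholder, and the GIN embedding is omitted (it can be concatenated in the same way).

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Avalon import pyAvalonTools
from rdkit.Chem import AllChem, Descriptors, rdReducedGraphs
from catboost import CatBoostRegressor

def featurize(smiles: str) -> np.ndarray:
    """Concatenate ECFP, Avalon, and ErG fingerprints with RDKit descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    ecfp = np.zeros(1024)
    DataStructs.ConvertToNumpyArray(
        AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024), ecfp)
    avalon = np.zeros(1024)
    DataStructs.ConvertToNumpyArray(pyAvalonTools.GetAvalonFP(mol, nBits=1024), avalon)
    erg = rdReducedGraphs.GetErGFingerprint(mol)                 # length-315 array
    props = np.array([fn(mol) for _, fn in Descriptors.descList])
    return np.concatenate([ecfp, avalon, erg, props])

# Toy molecules with made-up targets, standing in for a benchmark dataset.
smiles = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCN(CC)CC"]
X = np.vstack([featurize(s) for s in smiles])
y = np.array([0.1, 0.5, 0.9, 0.3])

model = CatBoostRegressor(iterations=300, random_strength=0.5, verbose=False)
model.fit(X, y)
print(model.predict(X[:2]))
```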
A recent editorial was titled "For Chemists, the AI revolution has yet to happen" [1]. While this article places much of the blame on the availability of training data, a contributing factor is how we describe molecules. A key finding from our work is that richer molecular representations, achieved through combining fingerprints, provide strong predictive power. Looking across the 22 TDC ADMET leaderboards, we see additional evidence for this: the deep message passing neural network (Chemprop) is a generally good performer but does even better when combined with molecular properties (Chemprop-RDKit). This is a promising avenue for future work.
## 2 Software Availability
The software for combining molecular fingerprints can be found at [https://github.com/maplightrx/MapLight-TDC](https://github.com/maplightrx/MapLight-TDC) and is released under the MIT License.
|
2309.14062 | FeCAM: Exploiting the Heterogeneity of Class Distributions in
Exemplar-Free Continual Learning | Exemplar-free class-incremental learning (CIL) poses several challenges since
it prohibits the rehearsal of data from previous tasks and thus suffers from
catastrophic forgetting. Recent approaches to incrementally learning the
classifier by freezing the feature extractor after the first task have gained
much attention. In this paper, we explore prototypical networks for CIL, which
generate new class prototypes using the frozen feature extractor and classify
the features based on the Euclidean distance to the prototypes. In an analysis
of the feature distributions of classes, we show that classification based on
Euclidean metrics is successful for jointly trained features. However, when
learning from non-stationary data, we observe that the Euclidean metric is
suboptimal and that feature distributions are heterogeneous. To address this
challenge, we revisit the anisotropic Mahalanobis distance for CIL. In
addition, we empirically show that modeling the feature covariance relations is
better than previous attempts at sampling features from normal distributions
and training a linear classifier. Unlike existing methods, our approach
generalizes to both many- and few-shot CIL settings, as well as to
domain-incremental settings. Interestingly, without updating the backbone
network, our method obtains state-of-the-art results on several standard
continual learning benchmarks. Code is available at
https://github.com/dipamgoswami/FeCAM. | Dipam Goswami, Yuyang Liu, Bartłomiej Twardowski, Joost van de Weijer | 2023-09-25T11:54:33Z | http://arxiv.org/abs/2309.14062v3 | # FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
###### Abstract
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks and thus suffers from catastrophic forgetting. Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention. In this paper, we explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes. In an analysis of the feature distributions of classes, we show that classification based on Euclidean metrics is successful for jointly trained features. However, when learning from non-stationary data, we observe that the Euclidean metric is suboptimal and that feature distributions are heterogeneous. To address this challenge, we revisit the anisotropic Mahalanobis distance for CIL. In addition, we empirically show that modeling the feature covariance relations is better than previous attempts at sampling features from normal distributions and training a linear classifier. Unlike existing methods, our approach generalizes to both many- and few-shot CIL settings, as well as to domain-incremental settings. Interestingly, without updating the backbone network, our method obtains state-of-the-art results on several standard continual learning benchmarks. Code is available at [https://github.com/dipamgoswami/FeCAM](https://github.com/dipamgoswami/FeCAM).
## 1 Introduction
In Continual Learning (CL), the learner is expected to accumulate knowledge from the ever-changing stream of new tasks data. As a result, the model only has access to the data from the current task, making it susceptible to _catastrophic forgetting_ of previously learned knowledge [45; 36]. This phenomenon has been extensively studied in the context of Class Incremental Learning (CIL) [35; 60; 8; 73], where the objective is to incrementally learn new classes and achieve the highest accuracy for all classes encountered so far in a task-agnostic way without knowing from which tasks the evaluated samples are [55]. While one of the simplest approaches to mitigate forgetting is storing exemplars of each class, it has limitations due to storage and privacy concerns, e.g., in medical images. Hence, the focus has shifted towards more challenging exemplar-free CIL methods [79; 62; 42; 40; 34].
In exemplar-free CIL methods, the challenge is to discriminate between old and new classes without access to old data. While some methods [78; 50; 77; 79] trained the model on new classes favoring plasticity and used knowledge distillation [19] to preserve old class representations, other methods [42; 2; 12; 40; 34] froze the feature extractor after the first task, thus favoring stability and incrementally
learned the classifier. One of the drawbacks is the inability to learn new representations with a frozen feature extractor. Inspired by transfer learning [48], the objective of these classifier-incremental methods is to make best use of the learned representations from the pretrained model and continually adapt the classifier to learn new tasks. Recently, pretrained feature extractors have been used in exemplar-free CIL by prompt-based methods [62], using linear discriminant analysis [40] and with a simple nearest class mean (NCM) classifier [21]. These methods use a transformer model pretrained on large-scale datasets like ImageNet-21k [44] and solely focus on classifier-incremental learning.
This paper investigates methods to enhance the representation of class prototypes in CIL, aiming to improve plasticity within the stability-favoring classifier-incremental setting. A standard practice in few-shot CIL [30; 41; 75; 69; 71] is to obtain the feature embeddings of new class samples and average them to generate class-wise prototypes. The test image features are then classified by computing the Euclidean distance to the mean prototypes. The Euclidean distance is used in the NCM classifier, following [16], which claims that the highly non-linear nature of learned representations eliminates the need to learn the Mahalanobis [64; 10] metric previously used [37]. Our analysis shows that this holds true for classes that are considered during training, however, for new classes, the Euclidean distance is suboptimal. To address this problem, we propose to use the anisotropic Mahalanobis distance. In Fig. 1, we explain how the feature representations vary in CIL settings. Here, the high-stability case in CIL is explored, where the model does not achieve spherical representations for new classes in the feature space, unlike joint training. Thus, it is intuitive to take into account the feature covariances while computing the distance. The covariance relations between the feature dimensions better captures the more complex class structure in the high-dimensional feature space. Additionally, in Fig. 3, we analyze singular values for old and new class features to observe the changes in variances in their feature distributions, suggesting a shift towards more anisotropic representations.
While previous methods [37] proposed learning Mahalanobis metrics, we propose using an optimal Bayes classifier by modeling the covariance relations of the features and employing class prototypes. We term this approach **F**eature **C**ovariance-**A**ware **M**etric (FeCAM). We compute the covariance matrix for each class from the feature embeddings corresponding to training samples and perform correlation normalization to ensure similar variances across all class representations, which is crucial for distance comparisons. We investigate various ways of using covariance relations in continual settings. We posit that utilizing a Bayes classifier enables better learning of optimal decision boundaries compared to previous attempts [66] involving feature sampling from Gaussian distributions and training linear classifiers. The proposed approach is simple to implement and requires no training since we employ a Bayes classifier. The Bayes classifier FeCAM can be used for both many-shot CIL and few-shot CIL, unlike existing methods. Additionally, we achieve superior performance with pretrained models on both class-incremental and domain-incremental benchmarks.
## 2 Related Work
Many-shot class-incremental learning (MSCIL) is the conventional setting, where sufficient training data is available for all classes. A critical aspect in many-shot CIL methods is the semantic drift in
Figure 1: Illustration of feature representations in CIL settings. In Joint Training (a), deep neural networks learn good isotropic spherical representations [16] and thus the Euclidean metric can be used effectively. However, it is challenging to learn isotropic representations of both old and new classes in CIL settings. When the model is too stable in (b), it is unable to learn good spherical representations of new classes and when it is too plastic in (c), it learns spherical representations of new classes but loses the spherical representations of old classes. Thus, it is suboptimal to use the isotropic euclidean distance. We propose FeCAM in (d) which models the feature covariance relations using Mahalanobis metric and learns better non-linear decision boundaries for new classes.
the feature representations [67] while training on new tasks. While recent methods use knowledge distillation [19] or regularization strategies [24; 67] to maintain the representations of old classes, these methods depend on storing images [29; 6; 20; 13; 15; 5; 4; 23], representations [22] or instances [32] from old tasks and become ineffective in practical cases where data privacy is required. Another set of methods [42; 2; 12; 40; 34] proposed freezing the feature extractor after the first task and learning only the classifier on new classes. We follow the same setting and do not violate privacy concerns by storing exemplars.
On the other hand, Few-Shot Class-Incremental Learning (FSCIL) considers that very few (1 or 5) samples per class are available for training [30; 52; 57]. Generally, these settings assume a big first task and freeze the network after training on the initial classes. To address the challenges of FSCIL, various techniques have been proposed. One approach is to use meta-learning to learn how to learn new tasks from few examples [38; 47; 3; 53]. Another approach is to incorporate variational inference to learn a distribution over models that can adapt to new tasks [39]. Most FSCIL methods [1; 7; 30; 52; 41; 75; 69; 71] obtain feature embeddings of a small number of examples and average them to get the class-wise prototypes. These methods use the Euclidean distance to classify the features, assuming equally spread classes in the feature space. We explore using the class prototypes in both MSCIL and FSCIL settings.
Since the emergence of deep neural networks, the Euclidean distance has been used in the NCM classifier following [16], instead of the Mahalanobis distance [37]. The Mahalanobis distance has recently been explored for out-of-distribution detection with generative classifiers [28], in an ensemble using Mahalanobis distances [23], and within the context of cross-domain continual learning [49].
## 3 Proposed Approach
### Motivation
For the classification of hand-crafted features, Mensink _et al._[37] proposed the nearest class mean (NCM) classifier using (squared) Mahalanobis distance \(\mathcal{D}_{M}\) instead of Euclidean distance to assign an image to the class with the closest mean (see also illustration in Fig. 2) :
\[y^{*}=\operatorname*{argmin}_{y=1,\ldots,Y}\mathcal{D}_{M}(x,\mu_{y}),\quad \mathcal{D}_{M}(x,\mu_{y})=(x-\mu_{y})^{T}M(x-\mu_{y}) \tag{1}\]
where \(Y\) is the number of classes, \(x,\mu_{y}\in\mathbb{R}^{D}\), class mean \(\mu_{y}=\frac{1}{|X_{y}|}\sum_{x\in X_{y}}x\), and \(M\) is a positive definite matrix. They learned a low-rank matrix \(M=W^{T}W\) where \(W\in\mathbb{R}^{m\times D}\), with \(m\leq D\).
However, with the shift towards deep feature representations, Guerriero _et al._[16] assert that the learned representations with a deep convolutional network \(\phi:\mathcal{X}\rightarrow\mathbb{R}^{D}\), eliminate the need of learning the Mahalanobis metric \(M\) and the isotropic Euclidean distance \(\mathcal{D}_{e}\) can be used as follows:
\[y^{*}=\operatorname*{argmin}_{y=1,\ldots,Y}\mathcal{D}_{e}(\phi(x),\mu_{y}), \quad\mathcal{D}_{e}(\phi(x),\mu_{y})=(\phi(x)-\mu_{y})^{T}(\phi(x)-\mu_{y}) \tag{2}\]
where \(\phi(x),\mu_{y}\in\mathbb{R}^{D}\), \(\mu_{y}=\frac{1}{|X_{y}|}\sum_{x\in X_{y}}\phi(x)\) is the class prototype or the average feature vector of all samples of class \(y\). Here, \(\phi(x)\) is the feature vector corresponding to image \(x\) and could be the output of penultimate layer of the network. In Euclidean space, \(M=I\), where \(I\) is an identity matrix.
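To make the two classifiers concrete, the following minimal NumPy sketch (our illustration, not the authors' implementation) performs prototype-based classification with a generic metric \(M\): passing no metric recovers the Euclidean NCM of Eq. (2), while a learned or covariance-derived \(M\) gives the Mahalanobis form of Eq. (1). The feature and prototype arrays in the usage example are random placeholders.

```python
import numpy as np

def ncm_predict(features, prototypes, M=None):
    """Nearest Class Mean prediction with a generic metric M.

    features:   (N, D) array of feature vectors phi(x).
    prototypes: (Y, D) array of class means mu_y.
    M:          (D, D) positive definite metric; None -> identity, i.e. the
                Euclidean NCM of Eq. (2); a learned/covariance-based M -> Eq. (1).
    """
    if M is None:
        M = np.eye(prototypes.shape[1])
    diff = features[:, None, :] - prototypes[None, :, :]      # (N, Y, D)
    dist = np.einsum('nyd,de,nye->ny', diff, M, diff)         # d^T M d per (sample, class)
    return dist.argmin(axis=1)

# toy usage with random placeholder features and three class prototypes
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
protos = rng.normal(size=(3, 8))
print(ncm_predict(feats, protos))                             # Euclidean NCM predictions
```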
The success of the NCM classifier for deep learned representation (as observed by [16]) has also been adopted by the incremental learning community. The NCM classifier with Euclidean distance is now commonly used in incremental learning [43; 67; 9; 78; 71; 41; 69; 75]. However, in incremental learning, we do not jointly learn the features of all data, but are learning on a non-static data stream. As a result, the underlying learning dynamics which result in representations on which Euclidean distances perform excellently might no longer be valid. Therefore, we perform a simple comparison with the Euclidean and Mahalanobis distance (see Fig. 4) where we use a network trained on 50% of
Figure 2: Illustration of distances (contour lines indicate points at equal distance from prototype).
classes of CIFAR100 (identified as _old classes_). Interestingly, the _old classes_, on which the feature representation is learned, are indeed very well classified with the Euclidean distance; however, for the _new classes_ this no longer holds, and the Mahalanobis distance obtains far superior results.
As a second experiment, we compare singular values of _old_ and _new classes_ (see Fig. 3(a)). We observe that the singular values of new class features vary more and are in general larger than those of old classes. This indicates that new class distributions are more heterogeneous (and more widely spread) than the old classes, which highlights the importance of a heterogeneous distance measure (like the Mahalanobis distance). This is also confirmed by a t-SNE plot of both old and new classes (see Fig. 3(b,c)), showing that the new classes, which were not considered while training the backbone, are badly modeled by a spherical distribution assumption, as is underlying the Euclidean distance.
Based on these results, two extreme cases of CIL are illustrated in Fig. 1, considering maximum stability where the backbone is frozen, and maximum plasticity where the training is done with fine-tuning without preventing forgetting. In this paper, we revisit the nearest class mean classifier based on a heterogeneous distance measure. We perform this for classifier incremental learning, where the backbone is frozen after the first task. However, we think that conclusions based on the heterogeneous nature of class distributions also have consequences for continual deep learning where the representations are continually updated.
### Bayesian FeCAM Classifier
When modeling the feature distribution of classes with a multivariate normal feature distribution \(\mathcal{N}(\mu_{y},\mathbf{\Sigma}_{y})\), the probability of a sample feature \(x\) belonging to class \(y\) can be expressed as,
\[P(x|C=y)\approx\exp\left(-\frac{1}{2}(x-\mu_{y})^{T}\mathbf{\Sigma}_{y}^{-1}(x-\mu_{y})\right), \tag{3}\]
It is straightforward to see that this is the optimal Bayesian classifier, since:
\[\operatorname*{argmax}_{y}P(Y|X)=\operatorname*{argmax}_{y}\frac{P(X|Y)P(Y)}{ P(X)}=\operatorname*{argmax}_{y}P(X|Y)P(Y)=\operatorname*{argmax}_{y}P(X|Y) \tag{4}\]
where the last equality assumes equal class priors \(P(y_{i})=P(y_{j})\), so that the optimal decision boundary occurs at points where the class likelihoods are equal. Since the logarithm is a monotonically increasing function, \(\operatorname*{argmax}_{y}P(X|Y)=\operatorname*{argmax}_{y}\log P(X|Y)\), and therefore
\[\operatorname*{argmax}_{y}logP(X|Y)=\operatorname*{argmax}_{y}\{-(x-\mu_{y}) ^{T}\mathbf{\Sigma}_{y}^{-1}(x-\mu_{y})\}=\operatorname*{argmin}_{y}\mathcal{D}_{ M}(x,\mu_{y}) \tag{5}\]
where the squared Mahalanobis distance is \(\mathcal{D}_{M}(x,\mu_{y})=(x-\mu_{y})^{T}\mathbf{\Sigma}_{y}^{-1}(x-\mu_{y})\).
In the following, we elaborate on several techniques that can be applied to improve and stabilize Mahalanobis-based distance classification. We apply these techniques, resulting in our FeCAM classifier (an ablation study in section 4.2.4 confirms their importance for the overall performance).
Figure 4: Accuracy Comparison of NCM (Euclidean) and FeCAM (Mahalanobis) using common covariance matrix and a matrix per class on CIFAR100 50-50 (2 task) sequence, for Old, New, and All classes at the end of the learning sequence.
Figure 3: (a) Singular values comparison for old and new classes, (b-c) Visualization of features for old classes and new classes by t-SNE, where the colors of points indicate the corresponding classes.
**Covariance Matrix Approximation.** We obtain the covariance matrix from the feature vectors \(\phi(x)\) corresponding to the samples \(x\) at any task. The covariance matrix can be obtained in different ways. A common covariance matrix \(\mathbf{\Sigma}^{1:t}\) can be incrementally updated as the mean covariance matrix for all seen classes till task \(t\) as follows:
\[\mathbf{\Sigma}^{1:t}=\mathbf{\Sigma}^{1:t-1}\cdot\frac{|Y^{1:t-1}|}{|Y^{1:t}|}+\mathbf{ \Sigma}^{t}\cdot\frac{|Y^{1:t}|-|Y^{1:t-1}|}{|Y^{1:t}|} \tag{6}\]
where \(\mathbf{\Sigma}^{t}\) refers to the common covariance matrix obtained from the features of samples \(x\in X^{t}\) from all classes seen in task \(t\) and \(|.|\) denotes the number of classes seen till that task. This common covariance matrix will represent the feature distribution for all seen classes and requires storing only the common covariance matrix from the last task.
The other alternative is to use a covariance matrix \(\mathbf{\Sigma}_{y}\) for \(y\in 1,..Y\) to represent the feature distribution of each class separately. Here, \(\mathbf{\Sigma}_{y}\) is the covariance matrix obtained from the feature vectors of all the samples \(x\in X_{y}\). This will involve storing a separate matrix for all seen classes.
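As an illustration, the NumPy sketch below (our paraphrase, not the released code) estimates one covariance matrix per class and performs the class-count-weighted running update of the common covariance matrix from Eq. (6).

```python
import numpy as np

def class_covariances(features, labels):
    """One (D, D) covariance matrix Sigma_y per class, from its feature vectors."""
    return {int(y): np.cov(features[labels == y], rowvar=False)
            for y in np.unique(labels)}

def update_common_covariance(common_cov, n_old_classes, task_features, n_new_classes):
    """Class-count-weighted running update of the common covariance matrix, Eq. (6).

    common_cov:    Sigma^{1:t-1} (None before the first task)
    n_old_classes: |Y^{1:t-1}|
    task_features: (N, D) features of all samples of the current task t
    n_new_classes: number of classes introduced in task t
    """
    cov_t = np.cov(task_features, rowvar=False)          # Sigma^t of the current task
    if common_cov is None:
        return cov_t
    n_total = n_old_classes + n_new_classes              # |Y^{1:t}|
    return common_cov * (n_old_classes / n_total) + cov_t * (n_new_classes / n_total)
```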
**Normalization of Covariance Matrices.** The covariance matrix \(\mathbf{\Sigma}_{y}\) obtained for each class will have different levels of scaling and variances along different dimensions. Particularly, due to the notable shift in feature distributions between the old and new classes, the variances are much higher for the new classes. As a result, the Mahalanobis distance of features from different classes will have different scaling factors, and the distances will not be comparable. So, in order to be able to use a covariance matrix per class, we perform a correlation matrix normalization on all the covariance matrices. In order to make the multiple covariance matrices comparable, we make their diagonal elements equal to 1. A normalized covariance matrix can be obtained as:
\[\mathbf{\hat{\Sigma}}_{y}(i,j)=\frac{\mathbf{\Sigma}_{y}(i,j)}{\sigma_{y}(i)\sigma_{y }(j)},\quad\sigma_{y}(i)=\sqrt{\mathbf{\Sigma}_{y}(i,i)},\quad\sigma_{y}(j)=\sqrt{ \mathbf{\Sigma}_{y}(j,j)} \tag{7}\]
where \(\sigma_{y}(i)\) and \(\sigma_{y}(j)\) refer to the standard deviations along the dimensions \(i\) and \(j\), respectively.
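A minimal sketch of the correlation normalization of Eq. (7) is given below; the small `eps` guard against zero variance is our addition.

```python
import numpy as np

def normalize_covariance(cov, eps=1e-12):
    """Correlation normalization, Eq. (7): all diagonal entries become 1, making
    the per-class matrices (and hence the Mahalanobis distances) comparable."""
    std = np.sqrt(np.diag(cov)) + eps
    return cov / np.outer(std, std)
```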
We identify the difficulty of obtaining an invertible covariance matrix in cases where the number of samples is less than the number of dimensions. Therefore, we use a covariance shrinkage method to get a full-rank matrix. We also show that a simple Gaussianization of the features using Tukey's normalization [54] is helpful.
**Covariance Shrinkage.** When the number of samples available for a class is less than the number of feature dimensions, the covariance matrix \(\mathbf{\Sigma}\) is not invertible, and thus it is not possible to use \(\mathbf{\Sigma}\) to compute the Mahalanobis distance. This is a serious problem since most deep learning networks have a large number of feature dimensions [68]. Similar to [26, 56], we perform a covariance shrinkage to obtain a full-rank matrix as follows:
\[\mathbf{\Sigma}_{s}=\mathbf{\Sigma}+\gamma_{1}V_{1}I+\gamma_{2}V_{2}(1-I), \tag{8}\]
where \(V_{1}\) is the average diagonal variance, \(V_{2}\) is the average off-diagonal covariance of \(\mathbf{\Sigma}\) and \(I\) is an identity matrix.
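The shrinkage of Eq. (8) can be written in a few lines; the sketch below assumes \(V_{1}\) and \(V_{2}\) are the plain means of the diagonal and off-diagonal entries, as described above.

```python
import numpy as np

def shrink_covariance(cov, gamma1=1.0, gamma2=1.0):
    """Covariance shrinkage, Eq. (8), to obtain an invertible full-rank matrix."""
    D = cov.shape[0]
    I = np.eye(D)
    V1 = np.trace(cov) / D                     # average diagonal variance
    V2 = cov[~np.eye(D, dtype=bool)].mean()    # average off-diagonal covariance
    return cov + gamma1 * V1 * I + gamma2 * V2 * (1.0 - I)
```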
**Tukey's Ladder of Powers Transformation.** Tukey's Ladder of Powers transformation aims to reduce the skewness of distributions and make them more Gaussian-like. We transform the feature vectors \(\phi(x)\) using Tukey's Ladder of Powers transformation [54]. It can be formulated as:
\[\tilde{\phi(x)}=\left\{\begin{array}{ll}\phi(x)^{\lambda}&\text{if }\lambda\neq 0\\ \log(\phi(x))&\text{if }\lambda=0\end{array}\right. \tag{9}\]
where \(\lambda\) is a hyperparameter to decide the degree of transformation of the distribution. In our experiments, we use \(\lambda=0.5\) following [66].
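Eq. (9) transcribes directly into code; the sketch assumes non-negative features (e.g., after a ReLU backbone), as discussed in the supplementary material.

```python
import numpy as np

def tukey_transform(features, lam=0.5):
    """Tukey's ladder of powers, Eq. (9); lam=0.5 follows the paper."""
    return np.log(features) if lam == 0 else np.power(features, lam)
```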
We obtain the normalized features using Tukey's transformation and then use the transformed features to obtain the covariance matrices. When using multiple matrices, we perform correlation normalization to make them comparable. The final prediction and the squared Mahalanobis distance to the different class prototypes using one covariance matrix per class can be obtained as:
\[y^{*}=\operatorname*{argmin}_{y=1,\dots,Y}\mathcal{D}_{M}(\phi(x),\mu_{y}), \quad\mathcal{D}_{M}(\phi(x),\mu_{y})=(\tilde{\phi(x)}-\tilde{\mu}_{y})^{T}( \hat{\mathbf{\Sigma}}_{y})_{s}^{-1}(\tilde{\phi(x)}-\tilde{\mu}_{y}) \tag{10}\]
where \(\tilde{\phi(x)}\) and \(\tilde{\mu}_{y}\) refer to the Tukey-transformed features and prototypes, respectively, and \(\left(\hat{\mathbf{\Sigma}}_{y}\right)_{s}^{-1}\) denotes the inverse of the covariance matrix, which first undergoes shrinkage followed by normalization. Note that the covariance matrices are computed using the Tukey-transformed features. Similarly, the common covariance matrix \(\mathbf{\Sigma}^{1:t}\) can be used in Eq. (10).
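Putting the pieces together, the sketch below is an unofficial NumPy rendition of Eqs. (7)-(10), not the PyCIL-based implementation: it fits per-class prototypes and shrunk, correlation-normalized covariances on Tukey-transformed features, and classifies by the smallest squared Mahalanobis distance. The data at the end are synthetic and only illustrate the calling convention.

```python
import numpy as np

def tukey(x, lam=0.5):
    return np.power(x, lam) if lam != 0 else np.log(x)

def fecam_fit(features, labels, lam=0.5, gamma1=1.0, gamma2=1.0):
    """Build per-class prototypes and (shrunk + normalized) inverse covariances, Eqs. (7)-(9)."""
    protos, inv_covs = {}, {}
    for y in np.unique(labels):
        f = tukey(features[labels == y], lam)
        protos[y] = f.mean(axis=0)
        cov = np.cov(f, rowvar=False)
        D = cov.shape[0]
        I = np.eye(D)
        V1 = np.trace(cov) / D
        V2 = cov[~np.eye(D, dtype=bool)].mean()
        cov = cov + gamma1 * V1 * I + gamma2 * V2 * (1 - I)     # shrinkage, Eq. (8)
        std = np.sqrt(np.diag(cov))
        cov = cov / np.outer(std, std)                          # correlation normalization, Eq. (7)
        inv_covs[y] = np.linalg.inv(cov)
    return protos, inv_covs

def fecam_predict(features, protos, inv_covs, lam=0.5):
    """Assign each feature to the class with the smallest squared Mahalanobis distance, Eq. (10)."""
    f = tukey(features, lam)
    classes = sorted(protos)
    dists = np.stack([np.einsum('nd,de,ne->n', f - protos[y], inv_covs[y], f - protos[y])
                      for y in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]

# toy usage: two classes with non-negative synthetic features (as after a ReLU backbone)
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(200, 16)))
y = np.repeat([0, 1], 100)
X[y == 1] += 1.0
protos, inv_covs = fecam_fit(X, y)
print((fecam_predict(X, protos, inv_covs) == y).mean())
```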
### On the Suboptimality of Learning Linear Classifier
Previous methods like [66; 26] in few-shot learning assumed Gaussian distributions of classes in feature space and proposed to transfer statistics of old classes to obtain calibrated distributions of new classes, and then to sample examples from the calibrated distributions to train a linear logistic regression classifier. In our setting, we consider a similar baseline for comparison, which assumes Gaussian distributions for features of old classes (by storing the mean and the covariance matrix of the features of the old classes) and samples features from these distributions to learn a linear classifier. We argue that this is not an ideal solution since the optimal decision boundaries need not be linear. The optimal decision boundaries are linear only when the covariances of all classes are equal, as in the Euclidean case. When the covariances of classes are not equal, the optimal decision boundaries are non-linear and form a quadratic surface in the high-dimensional feature space. We show in Fig. 5 that using the optimal Bayesian classifier obtains much better performance compared to sampling features and training a linear classifier, even when sampling many examples per class.
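The following toy 2-D experiment (ours, unrelated to the setup of Fig. 5) illustrates the argument: when the two classes have different covariances, a linear classifier fit on the features is clearly beaten by the Gaussian Bayes rule, whose decision boundary is quadratic. The means and covariances are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# two Gaussian classes with very different covariance shapes
mu  = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
cov = [np.diag([4.0, 0.25]), np.diag([0.25, 4.0])]

def sample(n):
    X = np.vstack([rng.multivariate_normal(mu[c], cov[c], n) for c in (0, 1)])
    return X, np.repeat([0, 1], n)

Xtr, ytr = sample(2000)
Xte, yte = sample(2000)

# (a) linear classifier fit on the features (least squares on +-1 targets)
A = np.hstack([Xtr, np.ones((len(Xtr), 1))])
w = np.linalg.lstsq(A, 2.0 * ytr - 1.0, rcond=None)[0]
pred_lin = (np.hstack([Xte, np.ones((len(Xte), 1))]) @ w > 0).astype(int)

# (b) Gaussian Bayes rule with a covariance per class (quadratic decision boundary)
def bayes_scores(X):
    scores = []
    for c in (0, 1):
        diff = X - mu[c]
        maha = np.einsum('nd,de,ne->n', diff, np.linalg.inv(cov[c]), diff)
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov[c]))))
    return np.stack(scores, axis=1)

pred_bayes = bayes_scores(Xte).argmax(axis=1)
print('linear acc:', (pred_lin == yte).mean(), ' bayes acc:', (pred_bayes == yte).mean())
```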
## 4 Experiments
### Experimental Setup
We evaluated FeCAM with strong baselines on multiple datasets and different scenarios.
**MSCIL datasets and setup.** We conduct experiments on three publicly available datasets: 1) CIFAR100 [25] - consisting of 100 classes, 32\(\times\)32 pixel images with 500 and 100 images per class for training and testing, respectively; 2) TinyImageNet [27] - a subset of ImageNet with 200 classes, 64\(\times\)64 pixel images, and 500 and 50 images per class for training and testing, respectively; 3) ImageNet-Subset [11] - a subset of the ImageNet LSVRC dataset [46] consisting of 100 classes with 1300 and 50 images per class for training and testing, respectively. We divide these datasets into incremental settings, where the number of initial classes in the first task is larger and the remaining classes are evenly distributed among the incremental tasks. We experiment with three different incremental settings for CIFAR100 and ImageNet-Subset: 1) 50 initial classes and 5 incremental learning (IL) tasks of 10 classes; 2) 50 initial classes and 10 IL tasks of 5 classes; 3) 40 initial classes and 20 IL tasks of 3 classes. For TinyImageNet, we use 100 initial classes and distribute the remaining classes into three incremental settings: 1) 5 IL tasks of 20 classes; 2) 10 IL tasks of 10 classes; 3) 20 IL tasks of 5 classes. At test time, task IDs are not available.
**FSCIL datasets and setup.** We conduct experiments on three publicly available datasets: 1) CIFAR100 (described above); 2) miniImageNet [57] - consisting of 100 classes, 84\(\times\)84 pixel images with 500 and 100 images per class for training and testing, respectively; 3) Caltech-UCSD Birds-200-2011 (CUB200) [58] - consisting of 200 classes, 224\(\times\)224 pixel images with 5994 and 5794 images for training and testing, respectively. For CIFAR100 and miniImageNet, we divide the 100 classes into 60 base classes and 40 new classes. The new classes are formulated into 8-step 5-way 5-shot incremental tasks. For CUB200, we divide the 200 classes into 100 base classes and 100 new classes. The new classes are formulated into 10-step 10-way 5-shot incremental tasks.
**Compared methods.** We compare with several exemplar-free CIL methods in the many-shot setting [24; 43; 2; 20; 31; 67; 78; 77; 79; 42] and in the few-shot setting [57; 52; 69; 75; 7; 30; 71; 41]. Results of compared methods marked with \(*\) are reproduced. For the upper bound of CIL, a joint training on all data is presented as a reference.
**Implementation details.** We use PyCIL [72] framework for our experiments. For both MSCIL and FSCIL settings, the main network architecture is ResNet-18 [17] trained on the first task using SGD with an initial learning rate of \(0.1\) and a weight decay of \(0.0001\) for \(200\) epochs. For the shrinkage, we use \(\gamma_{1}=1\) and \(\gamma_{2}=1\) for many-shot CIL and higher values \(\gamma_{1}=100\) and \(\gamma_{2}=100\) for few-shot CIL in our experiments. Following most methods, we store all the class prototypes. Similar to [77], we also store the covariance matrices for all classes seen until the current task. In the experiments with visual transformers, we use ViT-B/16 [14] architecture pretrained on ImageNet
Figure 5: Avg \(\mathcal{A}\)cc comparison of Bayesian and linear classifier on CIFAR100 (T=5) setting.
21k [51]. The extracted features are 512 dimensional when using Resnet-18 and 768 dimensional when using pretrained ViT. More implementation details for all hyperparameters are provided in the supplementary material.
### Experimental Results
#### 4.2.1 Many-shot CIL Results
The results for an exemplar-free MSCIL setup are presented in Table 1. We present the results for a set of different MSCIL methods, joint training of a classifier, and a simple NCM classifier (Eucl-NCM) on a frozen backbone. FeCAM outperformed all others by a large margin in all settings. The FeCAM version with \(\Sigma^{1:t}\), storing a single covariance matrix representing all classes, already gives significantly better results than the current state-of-the-art method, FeTrIL. However, FeCAM with a covariance matrix \(\Sigma_{y}\) per class pushes the average incremental accuracy even higher and presents excellent results. It is worth noticing that Eucl-NCM outperforms many existing CIL methods; only FeTrIL performs better than Eucl-NCM on all datasets. In Fig. 6, we present accuracy curves after each task for the ten-task scenarios. SSRE has a lower starting point due to a different network architecture. The rest of the methods, despite having the same starting point, end up with very different accuracies at the last task. Eucl-NCM still presents more competitive results than SSRE. FeTrIL performs better but still falls short of FeCAM with a common covariance matrix. FeCAM with a covariance matrix per class outperforms all other methods starting from the first incremental task. Here, in comparison to the common covariance matrix, we pay a price in memory and need to store a covariance matrix per class. Despite storing a matrix per class, we have less memory overhead compared to exemplar-based methods and do not violate privacy concerns by storing images.
Additionally, we compare our method against popular exemplar-based CIL methods in Table 3, where the memory buffer is set to 2K exemplars. Our method outperforms all others that do not expand the model significantly (see #P column for the number of parameters after the last task). Only Dynamically Expandable Representation (DER), which grows the model almost six times, can outperform our method.
#### 4.2.2 Experiments with pre-trained models
In Table 2 different settings for exemplar-free MSCIL are presented where we follow experimental settings of Learning-to-Prompt (L2P) method [62]. Here, all methods use a ViT encoder pre-trained on ImageNet-21K. L2P [62] is a strong baseline that does not train the encoder and learns an additional 46K prompt parameters. However, as Janson _et al._[21] presented, a simple NCM classifier can perform better for some datasets in CIL, e.g., Split-ImageNet-R [18], Split-CIFAR100 [25] and for domain-incremental learning on CoRe50 [33].
We use the widely-used benchmark in continual learning, Split-CIFAR-100 which splits the original CIFAR-100 [25] into 10 tasks with 10 classes in each task unlike the other settings in Table 1, which have different task splits. Based on ImageNet-R [18], Split-ImageNet-R was recently proposed
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{CIL Method} & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{TinyImageNet} & \multicolumn{3}{c}{ImageNet-Subset} \\ \cline{2-10} & \(T\)=5 & \(T\)=10 & \(T\)=20 & \(T\)=5 & \(T\)=10 & \(T\)=20 & \(T\)=5 & \(T\)=10 & \(T\)=20 \\ \hline EWC [24] & 24.5 & 21.2 & 15.9 & 18.8 & 15.8 & 12.4 & - & 20.4 & - \\ LwF-MC [43] & 45.9 & 27.4 & 20.1 & 29.1 & 23.1 & 17.4 & - & 31.2 & - \\ DeeSIL [2] & 60.0 & 50.6 & 38.1 & 49.8 & 43.9 & 34.1 & 67.9 & 60.1 & 50.5 \\ MUC [31] & 49.4 & 30.2 & 21.3 & 32.6 & 26.6 & 21.9 & - & 35.1 & - \\ SDC [67] & 56.8 & 57.0 & 58.9 & - & - & - & - & 61.2 & - \\ PASS [78] & 63.5 & 61.8 & 58.1 & 49.6 & 47.3 & 42.1 & 64.4 & 61.8 & 51.3 \\ IL2A [77] & 66.0 & 60.3 & 57.9 & 47.3 & 44.7 & 40.0 & - & - & - \\ SSRE [79] & 65.9 & 65.0 & 61.7 & 50.4 & 48.9 & 48.2 & - & 67.7 & - \\ FeTrIL* [42] & 67.6 & 66.6 & 63.5 & 55.4 & 54.3 & 53.0 & 73.1 & 71.9 & 69.1 \\ \hline Eucl-NCM & 64.8 & 64.6 & 61.5 & 54.1 & 53.8 & 53.6 & 72.2 & 72.0 & 68.4 \\ FeCAM (ours) - \(\mathbf{\Sigma}^{1:t}\) & 68.8 & 68.6 & 67.4 & 56.0 & 55.7 & 55.5 & 75.8 & 75.6 & 73.5 \\ FeCAM (ours) - \(\mathbf{\Sigma}_{y}\) & **70.9** & **70.8** & **69.4** & **59.6** & **59.4** & **59.3** & **78.3** & **78.2** & **75.1** \\ \hline Upper Bound & 79.2 & 79.2 & 79.2 & 66.1 & 66.1 & 66.1 & 81.2 & 81.2 & 81.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average top-1 incremental accuracy in exemplar-free many-shot CIL with different numbers of incremental tasks. Best results - **in bold**, second best - underlined.
by [61] for continual learning which contains 200 classes randomly divided into 10 tasks of 20 classes each. It contains data with different styles like cartoon, graffiti and origami, as well as hard examples from ImageNet with a high intra-class diversity making it more challenging for CIL experiments. We use CoRe50 [33] for domain-incremental settings where the domain of the same class of objects is changing in new tasks. It consists of 50 different types of objects from 11 domains. The first 8 domains are used for learning and the other 3 domains are used for testing. Since it has a single test task, we report the test accuracy after learning on all 8 domains similar to [62, 21]. Results of compared methods excerpted from [21].
We use the proposed FeCAM method with the pre-trained ViT using a covariance matrix per class on CIL settings. In the domain-incremental setting, we maintain a single covariance matrix per class across domains and update the matrix in every new domain by averaging the matrix from the previous domain and from the current one. FeCAM outperformed both L2P and NCM, in all the settings. Notably, FeCAM outperforms L2P by 11.55% and NCM by 4.45% on the CoRe50 dataset.
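For the domain-incremental update described above, a simple reading is a running average of the stored per-class covariance with the covariance estimated from the current domain; the sketch below reflects that reading and is an assumption rather than the exact released code.

```python
import numpy as np

def update_class_covariance(prev_cov, domain_features):
    """Domain-incremental update of one class covariance: average the stored matrix
    with the one estimated from the current domain's features."""
    cov_domain = np.cov(domain_features, rowvar=False)
    if prev_cov is None:            # first domain in which this class is seen
        return cov_domain
    return 0.5 * (prev_cov + cov_domain)
```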
#### 4.2.3 Few-shot CIL Results
In many-shot settings, it is possible to obtain a very informative covariance matrix from the large number of available samples, while it can be difficult to get a good matrix in FSCIL from just 5 samples per class. To stabilize the covariance estimation, we use higher values of \(\gamma_{1}\) and \(\gamma_{2}\). In Fig. 7 we present accuracy curves after each task for FSCIL on the miniImageNet, CIFAR100, and CUB200 datasets. While TOPIC [52] does better than the finetuning methods, it does not perform well in comparison to the recent methods, which have significantly improved the performance. ALICE [41] recently proposed to obtain compact and well-clustered features in the base task, which helps in generalizing to new classes. We follow the experimental settings from ALICE [41]. We take the strong base model after the first task from ALICE and use the proposed FeCAM classifier (with a covariance matrix per class) instead of the NCM classifier in the incremental tasks. We outperform ALICE significantly on all the FSCIL benchmarks.
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{CIL Method} & Split-CIFAR100 & Split-ImageNet-R & CoRe50 \\ \cline{2-4} & Avg \(\mathcal{A}\)cc & Avg \(\mathcal{A}\)cc & Test Acc \\ \hline FT-frozen & 17.7 & 99.5 & - \\ FT & 33.6 & 28.9 & - \\ EWC [24] & 47.0 & 35.0 & 74.8 \\ LwF [29] & 60.7 & 38.5 & 75.5 \\ L2P [62] & 83.8 & 61.6 & 78.3 \\ NCM [21] & 83.7 & 55.7 & 85.4 \\ FeCAM (ours) & **85.7** & **63.7** & **89.9** \\ \hline Joint & 90.9 & 79.1 & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Avg \(\mathcal{A}\)cc or Test \(\mathcal{A}\)cc at the end of the last task on class-incremental Split-Cifar100 [25], Split-ImageNet-R [18] and domain-incremental CoRe50 [33] benchmarks. All methods are initialized with pretrained weights from ViT-B/16 [14] for fair comparison.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{CIL Method} & \multicolumn{4}{c}{CIFAR100 (\(\mathcal{T}\) = 5)} & ImageNet-Subset (\(\mathcal{T}\) = 5) \\ \cline{2-5} & **\#P** & **Ex.** & Avg \(\mathcal{A}\)cc & Last\(\mathcal{A}\)cc & Avg \(\mathcal{A}\)cc & Last\(\mathcal{A}\)cc \\ \hline iCaRL [43] & 11.17 & 65.4 & 56.3 & 62.6 & 53.7 \\ PODNet [15] & 11.17 & \(\mathcal{\prime}\) & - & - & 59.8 & 43.4 \\ GAU [76] & 11.17 & \(\mathcal{\prime}\) & 69.9 & 61.5 & 65.8 & 56.6 \\ BiC [63] & 11.17 & \(\mathcal{\prime}\) & 66.1 & 55.3 & 66.4 & 49.9 \\ FOSTER [19] & 11.17 & \(\mathcal{\prime}\) & 67.9 & 60.2 & 69.9 & 63.1 \\ DER [65] & 67.2 & 67.2 & **73.2** & **66.2** & **72.6** & **71.1** \\ MEMO [74] & 53.14 & \(\mathcal{\prime}\) & - & - & 76.7 & 70.2 \\ FeCAM(ours) & 11.17 & \(\mathcal{\prime}\) & 70.9 & 62.1 & **78.3** & 70.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of our method with recent exemplar-based methods which store 2000 exemplars. For fair comparison, we show the number of parameters (in millions) by #P. Results on Imagenet-Subset excerpted from [73].
Figure 6: Accuracy of each incremental task for: (a) CIFAR100, (b) TinyImageNet, and (c) ImageNet-Subset and multiple MSCIL methods. We annotate the Avg \(\mathcal{A}\)cc. of all sessions between FeCAM and the runner-up method at the end of each curve. Here, prototype refers to NCM with euclidean distance.
#### 4.2.4 Ablation Studies
FeCAM uses multiple components to counteract the effect of CIL on the classifier performance. Table 4 presents the ablation study for MSCIL, where the contribution of each component is reported in terms of average incremental and last-task accuracy. Here, we consider the settings using a covariance matrix per class. We show that Tukey's transformation significantly reduces the skewness of the distributions and improves the accuracy using both Euclidean and Mahalanobis distances. The effect of the covariance shrinkage is more significant for CIFAR100, which has 500 images per class (fewer than the 512 dimensions of the feature space), while it also improves the performance on ImageNet-Subset. Here, we also show the usage of the diagonal matrix, where we use only the diagonal (variance) values from the covariance matrices. We divide the diagonal matrices by the norm of the diagonal to normalize the variances. When using the diagonal matrix, the storage space is reduced from \(D^{2}\) to \(D\). Finally, we show that using the correlation normalization from Eq. (7) gives the best accuracy by tackling the variance shift and making better use of the feature covariances.
**Time complexity.** While previous methods train the classifier [42] or the model [78, 79] for several epochs in the new tasks, we do not perform any such training in new tasks. Among the existing methods, FeTrIL [42] claims to be the fastest. We compare the time taken for the incremental tasks on ImageNet-Subset (T=5) for FeTrIL and the proposed FeCAM method. Using one Nvidia RTX 6000 GPU, FeTrIL takes 44 minutes to complete all the new tasks while FeCAM takes only 6 minutes.
**Feature transformations.** We analyze the t-SNE plot for the feature distributions of old and new classes from CIFAR100 50-50 (2 tasks) setting in different scenarios in Fig. 8. When the model is trained jointly on all classes, the features of all classes are well clustered and separated from each other. In CIL settings with frozen backbone when the model is trained on only the first 50 classes, the features are well-clustered for the old classes while the features for new classes are scattered and not well-separated. When we make the feature distributions more gaussian using Tukey's transformation, we observe that the new class features are comparatively better clustered.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Distance} & \multirow{2}{*}{Cov. Matrix} & \multirow{2}{*}{Tukey Eq. (9)} & \multirow{2}{*}{Shrinkage Eq. (8)} & \multirow{2}{*}{Norm. Eq. (7)} & \multicolumn{2}{c}{CIFAR-100 (T=5)} & \multicolumn{2}{c}{ImageNet-Subset (T=5)} \\ \cline{5-8} & & & & Last Acc & Avg Acc & Last Acc & Avg Acc \\ \hline Euclidean & - & ✗ & - & - & 51.6 & 64.8 & 60.0 & 72.2 \\ Euclidean & - & ✓ & - & - & 54.4 & 66.6 & 66.2 & 73.6 \\ Mahalanobis & Full & ✗ & ✗ & ✗ & 14.6 & 29.7 & 33.5 & 45.1 \\ Mahalanobis & Full & ✓ & ✗ & ✗ & 20.6 & 36.2 & 54.0 & 65.6 \\ Mahalanobis & Full & ✗ & ✓ & ✗ & 44.6 & 59.3 & 39.9 & 56.9 \\ Mahalanobis & Full & ✓ & ✓ & ✗ & 52.1 & 62.8 & 56.5 & 67.3 \\ Mahalanobis & Diagonal & ✓ & ✓ & ✗ & 55.2 & 66.9 & 64.0 & 74.1 \\ Mahalanobis & Full & ✗ & ✓ & ✓ & 55.4 & 65.9 & 58.1 & 68.5 \\ Mahalanobis & Full & ✓ & ✓ & ✓ & **62.1** & **70.9** & **70.9** & **78.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation of the performance indicating the contribution from the different components of our proposed method FeCAM for MSCIL with five tasks on CIFAR-100 and ImageNet-Subset datasets. Note that here we use variance normalization (different from Eq. (7)) when using diagonal matrix.
Figure 7: FSCIL methods accuracy of each incremental task for (a) miniImageNet, (b) CIFAR100 and (c) CUB200. We annotate the performance gap after the last session and Avg \(\mathcal{A}\)cc. of all sessions between OURS and the runner-up method at the end of each curve. Refer to supplementary material for detailed values.
**CIL settings with a small first task.** Usually, each method sticks to one setting. Exemplar-free methods use 50% of the data in the first task, since an equal split across tasks is a much more challenging setting, which is usually tackled by storing exemplars or by expanding the network in new tasks.
When half of the total classes are present in the first task, the feature extractor learns better representations. When we start with fewer classes (20 classes in the first step) and add 20 new classes at every task, we observe the same behavior in Table 5: FeCAM still works and outperforms other methods. However, the average incremental accuracy is not very high in this challenging setting because the representation learned in the first task is not as good as in the big first-task setting.
## 5 Conclusions
In this paper, we revisit the anisotropic Mahalanobis distance for exemplar-free CIL and propose using it to effectively model covariance relations for classification based on prototypes. We analyze the heterogeneity in the feature distributions of the new classes that are not learned by the feature extractor. To address the feature distribution shift, we propose our Bayes classifier method FeCAM, which uses Mahalanobis distance formulation and additionally uses some techniques like correlation normalization, covariance shrinkage, and Tukey's transformation to estimate better covariance matrices for continual classifier learning. We validate FeCAM on both many- and few-shot incremental settings and outperform the state-of-the-art methods by significant margins without the need for any training steps. Additionally, FeCAM evaluated in the class- and domain-incremental benchmarks with pretrained vision transformers yields state-of-the-art results. FeCAM does not store any exemplars and performs better than most exemplar-based methods on many-shot CIL settings. As a future work, FeCAM can be adapted to CIL settings where the feature representations are continually learned.
**Limitations.** The proposed approach needs a strong feature extractor or a large amount of data in the first task to learn good representations, as we do not learn new features but reuse the ones learned on the first task (or from a pretrained network). Therefore, the method is not apt for training from scratch starting with small tasks. We would then need to extend the theory to feature distributions which undergo feature drift during training; in addition to prototype drift [67], covariance changes should also be modeled.
**Acknowledgement.** We acknowledge projects TED2021-132513B-I00 and PID2022-143257NB-I00, financed by MCIN/AEI/10.13039/501100011033 and FSE+ and the Generalitat de Catalunya CERCA Program. Bartlomiej Twardowski acknowledges the grant RYC2021-032765-I.
Figure 8: The t-SNE plot for the features of new and old classes after Joint-Training (a) and after learning only the first 50 classes (b,c). In Joint-Training, the features are well clustered for all classes, however when the feature extractor is trained only on the first 50 classes, the new class representations are spread out. On applying Tukey's transformation, the new class embeddings are better clustered.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{ImageNet-Subset} \\ \cline{2-5} & Last Acc & Avg Acc & Last Acc & Avg Acc \\ \hline Euclidean-NCM & 30.6 & 50.0 & 35.0 & 54.5 \\ FeTrIL & 46.2 & 61.3 & 48.4 & 63.1 \\ FeCAM (ours) & **48.1** & **62.3** & **52.3** & **66.4** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Analysis of performance when the first task has 20 classes only and 20 new classes are added in incremental tasks, on CIFAR-100 and ImageNet-Subset datasets.
**Supplementary Materials for**
**FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning**
**Dipam Goswami\({}^{1,2}\)****Yuyang Liu\({}^{3,4,5}\)****Bartlomiej Twardowski\({}^{1,2,6}\)****Joost van de Weijer\({}^{1,2}\)****\({}^{1}\)Department of Computer Science, Universitat Autonoma de Barcelona
\({}^{2}\)Computer Vision Center, Barcelona \({}^{3}\)University of Chinese Academy of Sciences
\({}^{4}\)State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences
\({}^{5}\)Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences \({}^{6}\)IDEAS-NCBR
{dgoswami, btwardowski, joost}@cvc.uab.es, [email protected]
## 6 Definitions
The Mahalanobis distance is generally used to measure the distance between a data sample \(x\) and a distribution \(\mathcal{D}\). Given the distribution has a mean representation \(\mu\) and an invertible covariance matrix \(\mathbf{\Sigma}\in\mathbb{R}^{D\times D}\), then the squared Mahalanobis distance can be expressed as:
\[\mathcal{D}_{M}(x,\mu)=(x-\mu)^{T}\mathbf{\Sigma}^{-1}(x-\mu) \tag{11}\]
where \(\mathbf{\Sigma}^{-1}\) is the inverse of the covariance matrix.
The covariance matrix is symmetric in nature and can be defined as:
\[\mathbf{\Sigma}(i,j)=\left\{\begin{array}{ll}var(i)&i=j\\ cov(i,j)&i\neq j\end{array}\right. \tag{12}\]
where \(i,j\in 1,...D\), \(var(i)\) denotes the variance of the data along the \(ith\) dimension and \(cov(i,j)\) denotes the covariance between the dimensions \(i\) and \(j\). The diagonals of the matrix represent the variances and the non-diagonal entries are the covariance values.
In Euclidean space, \(\mathbf{\Sigma}=I\), where \(I\) is an identity matrix. Thus, in Euclidean space, we consider identical variance along all dimensions and ignore the positive and negative correlations between the variables.
## 7 Implementation Details
We analyze the effect of the covariance shrinkage hyperparameters \(\gamma_{1}\) and \(\gamma_{2}\) in Fig. 9 for the many-shot setting (T=5) on CIFAR100. Based on the observations, we see that the chosen parameters \(\gamma_{1}=1\) and \(\gamma_{2}=1\) obtain good results. Similarly, we use \(\gamma_{1}=1\) and \(\gamma_{2}=1\) for all many-shot experiments on CIFAR100, TinyImageNet and ImageNet-Subset. We use \(\gamma_{1}=1\) and \(\gamma_{2}=0\) for the experiments on the Split-CIFAR100 and CoRe50 datasets. For Split-ImageNet-R, we use \(\gamma_{1}=10\) and \(\gamma_{2}=10\). For all the few-shot CIL settings, we obtain better results with \(\gamma_{1}=100\) and \(\gamma_{2}=100\).
Since the ResNet-18 feature extractor uses a ReLU activation function, the feature representation values are all non-negative, so the inputs to Tukey's ladder of powers transformation are all valid. However, when using the ViT encoder pre-trained on ImageNet-21K, we also have negative values in the feature representations; hence we do not apply Tukey's transformation to the features in those experiments.
**Evaluation.** Similar to [42; 79; 78], we evaluate the methods in terms of average incremental accuracy. Average incremental accuracy \(A_{inc}\) is the average of the accuracy \(a_{t}\) of all incremental tasks (including the first task) and is a fair metric to compare the performances of different methods across multiple tasks.
\[A_{inc}=\frac{1}{T}\sum_{t=1}^{t=T}a_{t} \tag{13}\]
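For completeness, Eq. (13) in code form; the per-task accuracies in the usage example are made-up numbers, not results from the paper.

```python
def average_incremental_accuracy(task_accuracies):
    """Average incremental accuracy, Eq. (13): mean of the accuracies after each task."""
    return sum(task_accuracies) / len(task_accuracies)

print(average_incremental_accuracy([72.0, 69.5, 67.3, 65.8, 64.1, 62.9]))  # hypothetical values
```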
## 8 Further Analysis
**Storage requirements.** We analyze the storage requirements of FeCAM and compare them with the exemplar-based CIL methods in Table 6 for the ImageNet-Subset (T=5) setting. Due to the symmetric nature of covariance matrices, we can store only half (the lower or upper triangle) of each covariance matrix and thus reduce the storage by half. While most of the exemplar-based methods keep a constant storage requirement of 2000 exemplars, the storage requirement for FeCAM gradually increases across tasks and is still about 206 MB lower after the last task.
**Pre-training with dissimilar classes.** Similar to [23], we perform experiments using the DeiT-S/16 vision transformer pretrained on ImageNet with different pre-training data splits and then evaluate the performance of NCM (with the Euclidean distance) and the proposed FeCAM method on Split-CIFAR100 (10 tasks with 10 classes in each task). In order to make sure that the pretrained classes are not similar to the classes of CIFAR100, [23] manually removed 389 classes from the 1000 classes in ImageNet. We take the publicly available DeiT-S/16 weights pre-trained by [23] on the remaining 611 classes of ImageNet and evaluate NCM and FeCAM as shown in Table 7. As expected, the performance of both methods drops a bit when the pre-training is not done on similar classes. Still, FeCAM outperforms NCM by about 10% in final accuracy. Thus, this experiment further validates the effectiveness of modeling the covariance relations using our FeCAM method in settings where images from the initial task are dissimilar to new-task images.
## 9 Few-Shot CIL results
FeCAM can easily be adapted to available few-shot methods in CIL, since most methods obtain class prototypes from the few-shot data of new classes and then use the Euclidean distance for classification. We show in our paper that starting from the base task model from ALICE and simply using the FeCAM metric for classification significantly improves the performance across all tasks for the standard few-shot CIL benchmarks.
We report the average accuracy after each task for all methods on Cifar100 in Table 8, on CUB200 in Table 9 and on miniImageNet in Table 10.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Method & Task 0 & Task 1 & Task 2 & Task 3 & Task 4 & Task 5 \\ \hline Exemplar-based & 312 MB & 312 MB & 312 MB & 312 MB & 312 MB & 312 MB \\ FeCAM (ours) & 53 MB & 63 MB & 75 MB & 85 MB & 96 MB & 106 MB \\ \hline \hline \end{tabular}
\end{table}
Table 6: Analysis of storage requirements across tasks for FeCAM and the exemplar-based methods (storing 2000 exemplars) for the ImageNet-Subset (T=5) setting.
Figure 9: Impact of covariance shrinkage hyperparameters on many-shot CIFAR100 (T=5) setting using the proposed FeCAM method
For further analysis to demonstrate the applicability of FeCAM, we take the base task model from FACT [71] and use FeCAM in the incremental tasks for the CUB200 dataset. FeCAM improves the performance on all tasks when applied to FACT as shown in Table 9.
One of the main drawbacks of the many-shot continual learning methods is overfitting on few-shot data from new classes and hence these methods are not suited for few-shot settings. FeCAM is a single solution for both many-shot and few-shot settings and thus can be applied in both continual learning settings.
## 10 Pseudo Code
In Algorithm 1, we present the pseudo code for using FeCAM classifier.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{8}{c}{Accuracy in each session (\%)} \\ \cline{2-11} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Avg \(A\) \\ \hline Finetune & 61.31 & 27.22 & 16.37 & 6.08 & 2.54 & 1.56 & 1.93 & 2.6 & 1.4 & 13.45 \\ D-Cosine [57] & 70.37 & 65.45 & 61.41 & 58.00 & 54.81 & 51.89 & 49.10 & 47.27 & 45.63 & 55.99 \\ CEC [69] & 72.00 & 66.83 & 62.97 & 59.43 & 56.70 & 53.73 & 51.19 & 49.24 & 47.63 & 57.75 \\ LIMIT [75] & 72.32 & 68.47 & 64.30 & 60.78 & 57.95 & 55.07 & 52.70 & 50.72 & 49.19 & 59.06 \\ MetaSFCL [7] & 72.04 & 67.94 & 63.77 & 60.29 & 57.58 & 55.16 & 52.90 & 50.79 & 49.19 & 58.85 \\ Data-free Replay [30] & 71.84 & 67.12 & 63.21 & 59.77 & 57.01 & 53.95 & 51.55 & 49.52 & 48.21 & 58.02 \\ FACT [71] & 72.56 & 69.63 & 66.38 & 62.27 & 60.6 & 57.33 & 54.34 & 52.16 & 50.49 & 60.70 \\ ALICE [41] & **81.87** & 70.88 & 67.77 & 64.41 & 62.58 & 60.07 & 57.73 & 56.21 & 55.31 & 64.09 \\ \hline ALICE+FeCAM & **81.87** & **76.06** & **72.24** & **67.92** & **65.49** & **62.69** & **59.98** & **58.54** & **57.16** & **66.88** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Detailed accuracy of each incremental session on miniImageNet dataset. Best among columns in **bold**.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{8}{c}{Accuracy in each session (\%)} \\ \cline{2-7} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Avg \(A\) \\ \hline Finetune & 64.10 & 39.61 & 15.37 & 9.80 & 6.67 & 3.80 & 3.70 & 3.14 & 2.65 & 16.54 \\ D-Cosine [57] & 74.55 & 67.43 & 63.63 & 59.55 & 56.11 & 53.80 & 51.68 & 49.67 & 47.68 & 58.23 \\ CEC [69] & 73.07 & 68.88 & 65.26 & 61.19 & 58.09 & 55.57 & 53.22 & 51.34 & 49.14 & 59.53 \\ LIMIT [75] & 73.81 & 72.09 & 67.87 & 63.89 & 60.70 & 57.77 & 55.67 & 53.52 & 51.23 & 61.84 \\ MetaSFCL [7] & 74.50 & 70.10 & 66.84 & 62.77 & 59.48 & 56.52 & 54.36 & 52.56 & 59.97 & 60.79 \\ Data-free Replay [30] & 74.40 & 70.20 & 66.54 & 62.51 & 59.71 & 56.58 & 54.52 & 52.39 & 50.14 & 60.78 \\ FACT [71] & 74.60 & 72.09 & 67.56 & 63.52 & 61.38 & 58.36 & 56.28 & 54.24 & 52.10 & 62.24 \\ ALICE [41] & **80.03** & 70.38 & 66.6 & 62.72 & 60.28 & 58.06 & 56.38 & 55.35 & 53.56 & 62.65 \\ \hline ALICE+FeCAM & **80.03** & **74.15** & **70.16** & **65.57** & **62.82** & **60.25** & **58.46** & **56.86** & **54.94** & **64.80** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Detailed accuracy of each incremental session on CIFAR100 dataset. Best among columns in **bold**.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{8}{c}{Accuracy in each session (\%)} \\ \cline{2-11} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & Avg \(A\) \\ \hline Finetune & 64.10 & 39.61 & 15.37 & 9.80 & 6.67 & 3.80 & 3.70 & 3.14 & 2.65 & 16.54 \\ D-Cosine [57] & 74.55 & 67.43 & 63.63 & 59.55 & 56.11 & 53.80 & 51.68 & 49.67 & 47.68 & 58.23 \\ CEC [69] & 73.07 & 68.88 & 65.26 & 61.19 & 58.09 & 55.57 & 53.22 & 51.34 & 49.14 & 59.53 \\ LIMIT [75] & 73.81 & 72.09 & 67.87 & 63.89 & 60.70 & 57.77 & 55.67 & 53.52 & 51.23 & 61.84 \\ MetaSFCL [7] & 74.50 & 70.10 & 66.84 & 62.77 & 59.48 & 56.52 & 54.36 & 52.56 & 59.97 & 60.79 \\ Data-free Replay [30] & 74.40 & 70.20 & 66.54 & 62.51 & 59.71 & 56.58 & 54.52 & 52.39 & 50.14 & 60.78 \\ FACT [71] & 74.60 & 72.09 & 67.56 & 63.52 & 61.38 & 58.36 & 56.28 & 54.24 & 52.10 & 62.24 \\ ALICE [41] & **80.03** & 70.38 & 66.6 & 62.72 & 60.28 & 58.06 & 56.38 & 55.35 & 53.56 & 62.65 \\ \hline ALICE+FeCAM & **80.03** & **74.15** & **70.16** & **65.57** & **62.82** & **60.25** & **58.46** & **56.86** & **54.94** & **64.80** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Detailed accuracy of each incremental session on CIFAR100 dataset. Best among columns in **bold**.
```
0: Training data \((D_{1},D_{2},..,D_{T})\), Test data for evaluation \((X_{1}^{e},X_{2}^{e},..,X_{T}^{e})\), Model \(\phi\)
1:for task \(t\in[1,2,..,T]\)do
2:if\(t==1\)then
3: Train \(\phi\) on \(D_{1}=(X_{1},Y_{1})\)\(\triangleright\) Train the feature extractor
4:endif
5:for\(y\in Y_{t}\)do
6:\(\mu_{y}=\frac{1}{|X_{y}|}\sum_{x\in X_{y}}\phi(x)\)\(\triangleright\) Compute the prototypes
7:\(\phi(\tilde{X}_{y})=Tukeys(\phi(X_{y}))\)\(\triangleright\) Tukeys transformation Eq. (9)
8:\(\mathbf{\Sigma}_{y}=Cov(\phi(\tilde{X}_{y}))\)\(\triangleright\) Compute the covariance matrices
9:\((\mathbf{\Sigma}_{y})_{s}=Shrinkage(\mathbf{\Sigma}_{y})\)\(\triangleright\) Apply covariance shrinkage Eq. (8)
10:\((\mathbf{\hat{\Sigma}}_{y})_{s}=Normalization((\mathbf{\Sigma}_{y})_{s})\)\(\triangleright\) Apply correlation normalization Eq. (7)
11:endfor
12:for\(x\in X_{t}^{e}\)do
13:\(y^{*}=\underset{y=1,...,Y_{t}}{\operatorname{argmin}}\mathcal{D}_{M}(\phi(x),\mu_{y})\) where
14:\(\mathcal{D}_{M}(\phi(x),\mu_{y})=(\tilde{\phi(x)}-\tilde{\mu}_{y})^{T}(\mathbf{\hat{\Sigma}}_{y})_{s}^{-1}(\tilde{\phi(x)}-\tilde{\mu}_{y})\)
15:\(\triangleright\) Compute the squared mahalanobis distance to prototypes
16:endfor
17:endfor
```
**Algorithm 1** FeCAM
|
2309.16885 | Accretion disk wind of Hercules X-1 during the Short High state | Hercules X-1 is a nearly edge-on X-ray binary with a warped, precessing
accretion disk, which manifests through a 35-day cycle of alternating High and
Low flux states. This disk precession introduces a changing line of sight
towards the X-ray source, through an ionized accretion disk wind. The sightline
variation allows us to uniquely determine how the wind properties vary with
height above the disk. All the previous wind measurements were made in the
brighter Main High state of Her X-1. Here, we analyze the only Chandra
observation during the fainter `Short' High state, and significantly detect
blueshifted ionized absorption. We find a column density of
$2.0_{-0.6}^{+1.1}\times10^{22}$ cm$^{-2}$, an ionization parameter $\log
(\xi$/erg cm s$^{-1})=3.41_{-0.12}^{+0.15}$ and an outflow velocity of $380 \pm
40$ km/s. The properties of the outflow measured during the Short High state
are in good agreement with those measured at equivalent precession phases
during Main High. We conclude that we are sampling the same wind structure,
seen during both Main and Short High, which is precessing alongside with the
warped accretion disk every 35 days. Finally, the high spectral resolution of
Chandra gratings above 1 keV in this observation enabled us to measure the
abundances of certain elements in the outflow. We find
Mg/O$=1.5_{-0.4}^{+0.5}$, Si/O$=1.5 \pm 0.4$ and S/O$=3.0_{-1.1}^{+1.2}$,
whereas in our previous study of Her X-1 with XMM-Newton, we found an
over-abundance of N, Ne and Fe compared with O. These peculiar abundance ratios
were likely introduced by pollution of the donor by the supernova which created
Her X-1. | P. Kosec, E. Kara, A. C. Fabian, C. Pinto, I. Psaradaki, D. Rogantini, R. Staubert, D. J. Walton | 2023-09-28T22:47:09Z | http://arxiv.org/abs/2309.16885v1 | # Accretion disk wind of Hercules X-1 during the Short High state
###### Abstract
Hercules X-1 is a nearly edge-on X-ray binary with a warped, precessing accretion disk, which manifests through a 35-day cycle of alternating High and Low flux states. This disk precession introduces a changing line of sight towards the X-ray source, through an ionized accretion disk wind. The sightline variation allows us to uniquely determine how the wind properties vary with height above the disk. All the previous wind measurements were made in the brighter Main High state of Her X-1. Here, we analyze the only _Chandra_ observation during the fainter 'Short' High state, and significantly detect blueshifted ionized absorption. We find a column density of \(2.0^{+1.1}_{-0.6}\times 10^{22}\) cm\({}^{-2}\), an ionization parameter \(\log(\xi/\)erg cm s\({}^{-1}\))=\(3.41^{+0.15}_{-0.12}\) and an outflow velocity of \(380\pm 40\) km/s. The properties of the outflow measured during the Short High state are in good agreement with those measured at equivalent precession phases during Main High. We conclude that we are sampling the same wind structure, seen during both Main and Short High, which is precessing alongside with the warped accretion disk every 35 days. Finally, the high spectral resolution of _Chandra_ gratings above 1 keV in this observation enabled us to measure the abundances of certain elements in the outflow. We find Mg/O\(=1.5^{+0.5}_{-0.4}\), Si/O\(=1.5\pm 0.4\) and S/O\(=3.0^{+1.2}_{-1.1}\), whereas in our previous study of Her X-1 with _XMM-Newton_, we found an over-abundance of N, Ne and Fe compared with O. These peculiar abundance ratios were likely introduced by pollution of the donor by the supernova which created Her X-1.
Keywords: Accretion (14)

P. Kosec, E. Kara, A. C. Fabian, C. Pinto, I. Psaradaki, D. Rogantini, R. Staubert, D. J. Walton
## 1 Introduction
Hercules X-1 (hereafter Her X-1, Giacconi et al., 1972) is one of the most complex and fascinating X-ray binaries in the X-ray sky. Fortuitously, its proximity (6.1 kpc, Leahy & Abdallah, 2014) and high intrinsic luminosity (\(\sim 4\times 10^{37}\) erg/s) permit detailed X-ray studies and allowed us to understand some of its mysteries. The compact object is a neutron star with a rotation period of 1.24 s (determined from the presence of X-ray pulsations, Tananbaum et al., 1972; Giacconi et al., 1973) and a surface magnetic field of \(4\times 10^{12}\) G, estimated from the energy of the cyclotron resonance scattering feature (Truemper et al., 1978; Staubert et al., 2019). The neutron star is fed from an intermediate mass donor with an orbital period of 1.7 days (HZ Her, Davidsen et al., 1972; Bahcall & Bahcall, 1972; Middleditch & Nelson, 1976) through a tilted, twisted and precessing accretion disk (Gerend & Boynton, 1976; Ogilvie & Dubus, 2001), oriented almost edge-on towards us, such that eclipses by HZ Her are observed.
The disk shape and precession manifest in the long-term evolution of Her X-1 flux, by introducing a \(\sim 35\) day cycle of High and Low flux states (Katz, 1973). The cycle begins with a turn-on into the Main High state, during which the inner accretion flow (emitting the majority of X-rays) is directly observable from our line of sight. About 10 days later, the precessing inner edge of the accretion disk starts to obscure the X-ray source. This Low State is followed by a second, fainter Short High state (Fabian, 1973), during which the X-ray source is
just barely uncovered by the accretion disk (Scott et al., 2000; Leahy, 2002) and Her X-1 reaches about one third of the maximum Main High state flux. Finally, a second Low State of the 35-day cycle follows, when the X-ray source is again fully covered. For an illustration of these variations, we refer to Fig. 2 of Scott et al. (2000), which shows an average lightcurve of the 35-day Her X-1 cycle, obtained by stacking many cycles observed with the _RXTE_ instrument (see also Fig. 4 in this paper for an example of a single cycle).
The warped disk precession results in our line of sight passing through different paths through the accretion flow, and specifically at different heights above the warped disk. A unique opportunity is thus provided to us - our line of sight samples and allows us to study any ionized material over a range of heights above the disk. Namely, we are able to study the vertical distribution of the properties of the accretion disk wind of Her X-1. A schematic of this situation is shown in Fig. 1.
Accretion disk winds were detected via Doppler shifted emission or absorption lines in X-ray spectra in both black hole (Kotani et al., 2000; Miller et al., 2006; Kubota et al., 2007) and neutron star X-ray binaries (Ueda et al., 2001; Miller et al., 2011) and show typical velocities of 100s km/s (for a recent review, see Neilsen and Degenaar, 2023). They appear to be common especially in high inclination soft state black hole systems (Ponti et al., 2012), suggesting an equatorial wind structure. More recently, outflows were also detected in X-ray binaries outside the X-ray band - in the UV, optical and IR bands, and during both hard and soft states (e.g. Munoz-Darias et al., 2019; Sanchez-Sierras and Munoz-Darias, 2020; Castro Segura et al., 2022; Munoz-Darias and Ponti, 2022).
The origin and the launching mechanisms of these phenomena are widely debated - they could be driven by Compton heating of the outer parts of the accretion disk (Begelman et al., 1983; Tomaru et al., 2018), or via magnetic fields (Miller et al., 2006; Chakravorty et al., 2016; Fukumura et al., 2017), but are unlikely to be driven by radiation pressure alone (Proga and Kallman, 2002). These outflows have the potential to drive away significant fractions of the originally accreting mass (e.g. Lee et al., 2002; Kosec et al., 2020) and thus affect the evolution of the X-ray binary (Verbunt, 1993; Ziolkowski and Zdziarski, 2018). However, it is challenging to accurately constrain the wind energetics as in most systems we only observe a single line of sight towards the X-ray source. Therefore, X-ray absorption line spectroscopy gives us only a very limited view of the 3D wind structure (although see Miller et al., 2015, for an approach using wind re-emission). As Her X-1 uniquely offers a range of sightlines above the accretion disk, it is an ideal source to study disk winds in order to determine their launching mechanisms and measure their energetics.
An accretion disk wind was confirmed in Her X-1 using Main High state _XMM-Newton_ observations by Kosec et al. (2020), but previous UV observations already showed the presence of an ionized outflow within the binary system (Boroson et al., 2001). This outflow was predicted by Schandl and Meyer (1994) and could be the driver of the disk warping (although see Ogilvie and Dubus, 2001, for an alternative radiation-driven warping mechanism). To follow-up on the opportunity to study the vertical structure of a disk wind, we performed a large _XMM-Newton_ and _Chandra_ campaign, densely sampling the wind properties during a single Her X-1 Main High state (Kosec et al., 2023). We found that the wind is weaker and clumpier at greater heights above the disk, and combining the wind measurements with a Her X-1 warped disk model, we produced a 2D map of the wind properties.
Both of the X-ray studies described above were based solely on Main High state observations, where the high flux of Her X-1 results in the highest statistics spectra. However, the fact that the wind is detected in all \(\sim 30\) Main High state observations suggests that the wind is most likely present in Her X-1 at all times. As the outer accretion disk obscures the main X-ray source during the Low States, the detection of an ionized absorber at these periods is challenging. On the other hand, the central source is directly visible again during the Short High state, meaning it should also be possible to see the wind during this state too (Fig. 1).
Additionally, the binary system of Her X-1 and HZ Her is not oriented perfectly edge-on, but has an inclination of \(\sim 85^{\circ}\)(Leahy and Abdallah, 2014), which is also the average inclination of the warped accretion disk. Therefore, our line of sight during at least part of the Short High state could sample lower heights above the warped disk compared with Main High. This may happen because during the Main High, the outer disk quickly moves out of our line of sight right after the Turn-on, whereas during the Short High, different parts of the disk move away slower and remain relatively close to our line of sight for prolonged periods of time. This situation and the position of our line of sight over time with respect to different parts of the warped disk is illustrated in Fig. 2 of Scott et al. (2000) and in Fig. 3 of Leahy (2002). Following the results of the Main High state wind studies, the wind column density is expected to be higher at lower heights above the disk.
Here we study an archival high-spectral resolution _Chandra_ observation of Her X-1 from Nov 2002, which
reveals a number of narrow absorption lines, similar to those detected in Her X-1 spectra taken during the Main High state. The structure of this paper is as follows. Section 2 describes our data preparation and Section 3 the spectral modelling methods. In Section 4 we present the results of this study, and in Section 5 we discuss their implications. Throughout the paper, we assume a distance of 6.1 kpc to Her X-1 (Leahy & Abdallah, 2014).
## 2 Data Reduction and Preparation
_Chandra_ (Weisskopf et al., 2002) observed Her X-1 once during its Short High state, on November 3 2002 (observation ID 4375). This is the first time this observation has been studied in detail and the results published. The observation exposure was 20.4 ks, and the HETG gratings (Canizares et al., 2005) were used to acquire X-ray data in high spectral resolution. The midpoint of the observation was at MJD=52581.414. The data were taken in the continuous clocking (CC) mode and the mean count rate (all instruments summed) was 9.4 ct/s.
The data were downloaded from the TGCAT archive (Huenemoerder et al., 2011) in fully reduced form. We use both Medium Energy Grating (MEG) and High Energy Grating (HEG) instruments, and stack the positive and negative first order spectra for each instrument using the combine_grating_spectra routine in CIAO (Fruscione et al., 2006). We analyzed the data in full spectral resolution without any binning. We also tested our final spectral fit with the optimal binning scheme (Kaastra & Bleeker, 2016), but found no differences in the final results. The HEG and MEG spectra were fitted simultaneously using a cross-calibration constant. The value of this parameter was always very close to 1, indicating \(<3\%\) calibration difference between the two instruments. We used HEG data in the energy range between 1 and 9 keV (1.4 to 12.4 Å), and MEG data in the range between 0.62 keV and 5 keV (2.5 to 20 Å). These limits were imposed by the instrument calibration and the signal-to-noise.
The spectra were fitted in the spex fitting package version 3.06.01 (Kaastra et al., 1996). All reduced spectra were converted from ogip format into spex format using the trafo routine. We used the Cash statistic (Cash, 1979), which is appropriate for un-binned data, to analyze all spectra. The uncertainties are provided at \(1\sigma\) significance.
## 3 Spectral Modelling
We apply the spectral model from Kosec et al. (2022) to describe the Her X-1 continuum emission during Short High. Fortunately, we are able to make some simplifications to the complex original model due to the different instrument used here. For further details about the individual spectral components, we refer the reader to section 4 of Kosec et al. (2022).
A comt model (Titarchuk, 1994) is used to describe the primary accretion column emission from the X-ray pulsar, and a bb model describes the soft (T\(\sim 0.1\) keV) reprocessed emission. As our coverage of the soft X-ray band (\(<1\) keV) is limited, we fix the blackbody temperature to 0.1 keV. We also model Fe line emission in the Fe K band. We use two Gaussian emission lines, a narrow line at 6.4 keV (low ionization Fe), and a medium-width line at 6.7 keV (Fe XXV). A highly broadened (FWHM\(\sim 2\) keV) Fe K line at \(\sim 6.5\) keV, seen during the Main High state of Her X-1 (Kosec et al., 2022), is not required in our Short High state observation. This is likely a signal-to-noise issue - our _Chandra_ HETG spectrum has much lower statistics in the Fe K band than the Main High state _XMM-Newton_ EPIC-pn spectrum.
Figure 1: A schematic of the Her X-1 system, adapted from Fig. 1 in Kosec et al. (2023). In this simplified scenario, the warped precessing disk is fixed, and instead the observer line of sight is moving throughout the various states of the 35-day precession cycle. The disk precession allows our line of sight to pass through different parts of the accretion disk wind, at different heights above the disk. While Kosec et al. (2023) focused on the Main High state, this work focuses on the fainter, Short High state.
On the other hand, we do observe a strong excess at 1 keV (the '1 keV feature' seen in the Main High state, e.g. Fürst et al., 2013), which we model with a highly broadened Gaussian emission line. This excess is likely due to a forest of Fe L lines, possibly mixed with Ne IX-X emission. In addition, we include a broadened emission line at 18.96 Å (0.65 keV) to describe O VIII emission, seen also during the Main High state. The O VIII position is very close to the lower energy limit of our coverage, so we freeze the line wavelength to 18.96 Å. As our band does not cover softer energies than 0.6 keV, we do not need to include any further lines to describe O VII, N VII and N VI emission. Finally, all of the source emission is obscured by Galactic absorption, which is described by the hot model as an almost neutral absorber with a temperature of \(8\times 10^{-6}\) keV (de Plaa et al., 2004; Steenbrugge et al., 2005). We fit for the column density of this absorber, but set a lower limit of \(1\times 10^{20}\) cm\({}^{-2}\) (HI4PI Collaboration et al., 2016).
The absorption lines from the disk wind are described with the pion spectral model (Miller et al., 2015; Mehdipour et al., 2016). pion takes the spectral energy distribution (SED) from the currently loaded continuum spectral model and self-consistently calculates the ionization balance and the line depths produced by an illuminated, photo-ionized slab of plasma. We use the model to determine the disk wind properties such as its column density, ionization parameter, systematic velocity, and its velocity width.
Additionally, pion can be used to fit for the elemental abundances in the outflow. In Kosec et al. (2023) we fitted for the abundance of N, Ne and Fe, while fixing the abundance of O to 1 (one of the elemental abundances must be fixed as all the disk wind absorption lines are from metals). Here we use the best-fitting abundances from that paper, determined from the higher statistics Main High state _XMM-Newton_ spectra: we fix N/O=3.4, Ne/O=2.3 and Fe/O=2.1. However, the broader energy band in which HETG offers very good spectral resolution (and does not suffer from instrumental issues as EPIC pn on _XMM-Newton_) allows us to reliably fit for further elemental abundances: we can determine the abundance of Mg, Si and S. We have previously used 3 _Chandra_ HETG spectra to study Her X-1 in the Main High state (Kosec et al., 2023), but were not able to constrain the Mg, S and Si elemental abundances because the wind was weaker in those 3 _Chandra_ observations in comparison with the Short High state exposure analyzed here. The abundances of these three elements are left as free parameters in our spectral fit.
The final model, in symbolic form, is thus: hot x pion x (comt + bb + 4 ga).
## 4 Results
The high-resolution _Chandra_ HETG spectrum reveals a number of absorption lines located near strong Ly\(\alpha\) elemental transitions of O, Ne, Si and S. The HETG spectrum, focusing only on the strongest elemental transitions, is shown in Fig. 3. These absorption lines suggest the presence of a highly ionized outflow with a low blueshift/velocity (\(<1000\) km/s). We fit the spectrum with the spectral model described in Section 3. This results in a fit with C-stat=8599 for 7888 degrees of freedom (DoF). A continuum-only spectral fit, where the pion component is omitted, results in a fit statistic of C-stat=8767 for 7895 DoF. The very large fit statistic improvement of \(\Delta\)C-stat=168 upon adding the ionized absorber to the baseline spectral fit, even compared to observations during the brighter Main High state (Extended Data Table 1 of Kosec et al., 2023), indicates a very high statistical significance (\(>5\sigma\), Kosec et al., 2018) of the disk wind detection during this Short High state observation of Her X-1. The best-fitting spectral model is shown in Fig. 2 (which contains the full \(0.6-9\) keV broadband fit) and in Fig. 3 (focusing only on the narrow energy bands around the strongest disk wind absorption lines).
The best-fitting disk wind properties are as follows. We determine a column density of \(2.0^{+1.1}_{-0.6}\times 10^{22}\) cm\({}^{-2}\), an ionization parameter log(\(\xi\)/erg cm s\({}^{-1}\)) of \(3.41^{+0.15}_{-0.12}\), a systematic velocity of \(570\pm 40\) km/s and a velocity width of \(240\pm 40\) km/s. The orbital motion-corrected outflow velocity is \(380\pm 40\) km/s, determined by following the steps from Kosec et al. (2020). The measured elemental abundances are Mg/O=\(1.5^{+0.5}_{-0.4}\), Si/O=\(1.5\pm 0.4\) and S/O=\(3.0^{+1.2}_{-1.1}\). These results are summarized alongside all the best-fitting continuum parameters in Table 1.
Figure 2: The 0.6-9 keV Short High state _Chandra_ HETG spectrum of Her X-1 (heavily overbinned), showing the broadband continuum fit with the full spectral model. MEG data are shown in blue, HEG data are in black, and the best-fitting model is in red.
We tested for the multi-phase nature of the disk wind by adding a second pion component to the previous spectral fit. A second ionization component could potentially alter the best-fitting elemental abundances and shift them back towards Solar ratios. However, by fitting this more complex spectral model for a range of pion parameter values, we did not find any significant evidence (\(\Delta\)C-stat\(>9\)) for a second ionization zone. Similar results were found in the Main High State (Kosec et al., 2023).
## 5 Discussion and Conclusions
We study an archival Short High state observation of Her X-1 taken with the _Chandra_ HETG instrument. The gratings allow us to resolve and detect the same narrow absorption lines from an ionized disk wind that are present in the Main High state observations. We model the absorber with the photo-ionization model pion and determine the wind properties and the abundances of certain elements within the outflow, leveraging the broader energy band of HETG in comparison with _XMM-Newton_ RGS.
To compare the properties of the disk wind during the Short High state observation with our previous study during Main High (Kosec et al., 2023), we determine the current precession phase as well as the time elapsed since the Short High state Rise for the archival _Chandra_ observation. In order to estimate these quantities, we use archival RXTE/ASM observations. A lightcurve of the relevant Her X-1 precession cycle is shown in Fig. 4.
The measurements of the Turn-on (into the Main High state) and the Rise (into the Short High state) are based on formal best fits with a double exponential function, which has been found to produce good fits to 35-day cycle lightcurves of Her X-1 (Staubert et al., 2016). Following the method of Staubert et al. (2016), we determined the Main High Turn-on times for the current (MJD\(=52559.3\pm 0.1\)) and the following (MJD\(=52594.6\pm 0.1\)) 35-day cycles. We then estimated the precession phase of the Chandra observation to be \(0.627\pm 0.004\). To find the phase elapsed since the Short High state Rise, we also determined the Short High Rise time from the Her X-1 lightcurve to be MJD\(=52579.0\pm 0.2\). This means that the Short High Rise occurred at a precession phase of \(0.559\pm 0.009\). The mid-point of the _Chandra_ observation hence happened at phase of \(0.068\pm 0.009\) since Her X-1 rose into the Short High state.
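The phase bookkeeping above can be verified directly. The snippet below is an illustrative, standalone Python calculation (the variable and function names are ours, not part of any reduction pipeline); it reproduces the quoted phases from the Turn-on, Rise and observation midpoint times given in the text.
```python
# Hypothetical check of the 35-day precession-phase bookkeeping.
# All MJD values are taken from the text above.
turn_on      = 52559.3    # Main High Turn-on of the current cycle (MJD)
turn_on_next = 52594.6    # Turn-on of the following cycle (MJD)
chandra_mid  = 52581.414  # midpoint of the Chandra observation (MJD)
sh_rise      = 52579.0    # Rise into the Short High state (MJD)

cycle_length = turn_on_next - turn_on  # ~35.3 d for this cycle

def precession_phase(mjd):
    """Phase within the 35-day cycle, counted from the Main High Turn-on."""
    return (mjd - turn_on) / cycle_length

phase_obs  = precession_phase(chandra_mid)
phase_rise = precession_phase(sh_rise)
print(round(phase_obs, 3), round(phase_rise, 3), round(phase_obs - phase_rise, 3))
# -> 0.626 0.558 0.068, consistent with the quoted 0.627 +/- 0.004,
#    0.559 +/- 0.009 and 0.068 +/- 0.009 within their uncertainties.
```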
We note that the Rise into the Short High state in Her X-1 does not occur exactly one half of the 35-day cycle after the Main High Turn-on. This is because the binary system is not aligned exactly edge-on towards us, but with a more moderate inclination of about \(85^{\circ}\) (Leahy and Abdallah, 2014). For this reason, for any comparisons between the Main High and the Short High state wind properties, it is necessary to correctly account for both Main High Turn-ons and Short High Rise times in order to determine the correct equivalent precession phases of both high states. Otherwise, the Short High state properties will be shifted.
Figure 3: Her X-1 spectrum from the Short High state _Chandra_ observation, focusing only on the narrow energy bands of interest around the strongest elemental transitions. The absorption lines near these transitions imply the presence of an ionized disk wind. MEG data are in blue, HEG data are in black and the best-fitting spectral model (including the ionized absorption component) is in red.
In Fig. 5, we compare the best-fitting wind column density and ionization parameter during the Short High with previous measurements made during the Main High state (presented in Kosec et al., 2023). We find that both the column density and the ionization parameter are in good general agreement with wind parameter measurements at equivalent precession phases during the Main High state. This suggests that we are observing the same wind structure which was detected during the Main High state, and that the structure does not vary strongly between the two high states, as it precesses alongside with the warped accretion disk.
During the Main High, we found that the wind column density decreases with increasing precession phase, i.e. with increasing height above the warped accretion disk (Kosec et al., 2023). In other words, the wind is weakening at greater heights above the disk. It is plausible that the same relationship is followed during the Short High state. This will be probed by our upcoming _XMM-Newton_ and _Chandra_ observation campaign (to occur in 2023).
We can also estimate the location of the outflow during the Short High state observation and compare it with Main High measurements. Following the approach of Kosec et al. (2023), we can determine the maximum distance of the outflow from the ionizing source as:
\[R_{\rm max}=\frac{L_{\rm ion}}{N_{\rm H}\xi} \tag{1}\]
where \(L_{\rm ion}\) is the ionizing (13.6 eV - 13.6 keV) luminosity. The estimated observed \(L_{\rm ion}\) during the _Chandra_ observation is \((6.2\pm 0.4)\times 10^{36}\) erg/s. Therefore, the maximum distance of the outflow from the neutron star is \(1.2^{+0.5}_{-0.7}\times 10^{11}\) cm. We compare this value with the maximum distances of the wind from the ionizing source during the Main High state (Fig. 3 of Kosec et al., 2023). We find that the maximum distance of the outflow from the X-ray source is roughly comparable with the maximum distances derived for wind measurements taken around phase \(\mathbf{0.04-0.05}\) during the Main High state.
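As a quick sanity check, Eq. (1) evaluated with the best-fitting values quoted above reproduces the stated maximum distance. The snippet below is an illustrative, standalone calculation (names are ours, not from any analysis code).
```python
# Illustrative evaluation of Eq. (1) with the quoted best-fitting values.
L_ion  = 6.2e36   # ionizing (13.6 eV - 13.6 keV) luminosity [erg/s]
N_H    = 2.0e22   # wind column density [cm^-2]
log_xi = 3.41     # ionization parameter log(xi / erg cm s^-1)

R_max = L_ion / (N_H * 10**log_xi)   # maximum distance of the outflow [cm]
print(f"R_max ~ {R_max:.1e} cm")     # ~1.2e11 cm, matching the value quoted above
```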
In addition to the column density and the ionization parameter, we also measure the wind outflow velocity and velocity width. These parameters are again consistent with those measured during Main High. The outflow velocity was observed to vary between 200 and 1000 km/s during the Main High (compared with \(380\pm 40\) km/s during Short High), while the velocity width was in most observations measured to be between 70 and 500 km/s (versus \(240\pm 40\) km/s during the Short High observation).
Thanks to the _Chandra_ HETG broader bandpass (observed in high-resolution), we are additionally able to measure elemental abundances of certain elements inaccessible to _XMM-Newton_. Those elements are Mg, Si and S. While technically included in the RGS band, the Mg XII line is not as well detected and resolved in the RGS data as with HETG data, preventing a direct abundance measurement. We find that both Mg and Si are about 50% super-abundant compared to O, while the S/O ratio is as high as 3.
Figure 4: _RXTE/ASM_ lightcurve (\(2-10\) keV) of Her X-1 super-orbital cycle number 323, during which the archival _Chandra_ observation of the Short High state occurred. The cycle counting is according to Staubert et al. (1983), following Davison & Fabian (1977). The midpoint of the _Chandra_ exposure is at MJD=52581.4 and is shown with the vertical blue dashed line.
Combining these measurements with previous _XMM-Newton_ results (Kosec et al., 2023) on other elemental abundances (N/O\(=3.4^{+0.6}_{-0.8}\), Ne/O\(=2.3^{+0.4}_{-0.5}\), Fe/O\(=2.1\pm 0.3\)), we find that all the elements are super-abundant compared with O. This is seen at low statistical significance for Mg and Si, but is significant for the remaining 4 elements. These peculiar abundances are presumed to have originated from pollution of the donor by the supernova which created Her X-1 (Jimenez-Garate et al., 2002, 2005). An alternative explanation may be that O is depleted rather than other elements being over-abundant. However, it is not likely that O would be strongly under-abundant since the CNO cycle (presumably operating in the Her X-1 progenitor) will create a strong over-abundance of N and a strong under-abundance of C, without heavily depleting the O abundance (Przybilla et al., 2010). The UV spectra of Her X-1 indeed show over-abundance of N and depletion of C (Boroson et al., 1997). On the other hand, how the heavier elements such as S and Fe became super-abundant by a factor of 2-3 is unclear. We note that Allen et al. (2018) found evidence for non-Solar elemental ratios of certain heavier elements including Mg and Ca in the neutron star X-ray binary GX 13+1.
\begin{table}
\begin{tabular}{c c c} \hline \hline Component & Parameter & Best-fitting value \\ \hline Disk & column density & \(2.0^{+1.1}_{-0.6}\times 10^{22}\) cm\({}^{-2}\) \\ wind & \(\log(\xi/\)erg cm s\({}^{-1}\)) & \(3.41^{+0.15}_{-0.12}\) \\ & outflow velocity & \(380\pm 40\) km/s \\ & velocity width & \(240\pm 40\) km/s \\ & Mg/O & \(1.5^{+0.5}_{-0.4}\) \\ & Si/O & \(1.5\pm 0.4\) \\ & S/O & \(3.0^{+1.2}_{-1.1}\) \\ & \(\Delta\)C-stat & 168 \\ \hline Primary & seed temp. & \(0.33\pm 0.02\) keV \\ Comptonization & electron temp. & \(3.8^{+0.4}_{-0.3}\) keV \\ & optical depth & \(9.4\pm 0.3\) \\ & luminosity & \((6.5\pm 0.4)\times 10^{36}\) erg/s \\ \hline Soft blackbody & temperature & 0.1 keV (fixed) \\ & luminosity & \(1.62^{+0.09}_{-0.10}\times 10^{36}\) erg/s \\ \hline
1 keV & energy & \(0.91\pm 0.02\) keV \\ feature & FWHM & \(0.46\pm 0.02\) keV \\ & luminosity & \((2.1\pm 0.2)\times 10^{35}\) erg/s \\ \hline Fe I & energy & \(6.397\pm 0.004\) keV \\ & FWHM & \(0.037^{+0.015}_{-0.012}\) keV \\ & luminosity & \(1.9^{+0.4}_{-0.3}\times 10^{34}\) erg/s \\ \hline Fe XXV & energy & \(6.61^{+0.05}_{-0.04}\) keV \\ & FWHM & \(0.60^{+0.13}_{-0.09}\) keV \\ & luminosity & \(7.4^{+1.4}_{-1.2}\times 10^{34}\) erg/s \\ \hline O VIII & energy & 0.654 keV (fixed) \\ & FWHM & \(0.034^{+0.014}_{-0.010}\) keV \\ & luminosity & \(8^{+4}_{-3}\times 10^{33}\) erg/s \\ \hline \end{tabular}
\end{table}
Table 1: Best-fitting properties of the continuum and the disk wind in the Short High spectrum of Her X-1. The continuum component luminosities are calculated from observed, absorption-corrected X-ray fluxes assuming a distance of 6.1 kpc.
Figure 5: Wind column density (top panel) and ionization parameter (bottom panel) versus the equivalent disk precession phase. Main High state observations (taken from Kosec et al., 2023) are shown in black (_XMM-Newton_) and blue (_Chandra_), the only _Chandra_ observation during the Short High is in red. The X-axis value of the Short High state data point is the precession phase elapsed since the Rise into the Short High state.
Similarly, Kallman et al. (2009) found an over-abundance of Fe-group elements in the black hole X-ray binary GRO J1655-40 and suggested that these findings are consistent with enrichment by a core-collapse supernova. In any case, further spectroscopic observations (of Her X-1 and other X-ray binaries) are necessary to decrease the currently significant uncertainties on the measured abundances to confirm these findings.
Finally, we also measure the properties of the various prominent emission lines of Her X-1 during Short High. The position of the narrow Fe line is fully consistent with 6.4 keV, i.e. with low ionization Fe (Fe I up to Fe XIII), while the position of the medium-width Fe line is consistent at 1\(\sigma\) with Fe XXV. These line energies as well as the line widths are consistent with the lines detected in Her X-1 during Main High (Kosec et al., 2022). The same is found for the width of the O VIII line, and for both the energy and the width of the 1 keV broad residual.
The upcoming _XRISM_ observatory (XRISM Science Team, 2020), expected to be launched in 2023, will revolutionize the X-ray spectroscopic studies of X-ray binaries thanks to its _Resolve_ calorimeter with a resolution of just 5-7 eV (e.g. Tomaru et al., 2020). To investigate how _XRISM_ will be able to improve our measurements of the disk wind in Her X-1, we simulated a brief 10 ks observation of the Short High state using the best-fitting spectral parameters from the _Chandra_ observation used in this work. We used publicly available _XRISM_ simulation files1 assuming the goal 5 eV spectral resolution. The simulated data are shown in Fig. 6, focusing just on the Fe K energy band where _XRISM_ will achieve the best results.
Footnote 1: [https://heasarc.gsfc.nasa.gov/docs/xrism/proposals/](https://heasarc.gsfc.nasa.gov/docs/xrism/proposals/)
The _XRISM_ statistics are much higher than those of _Chandra_ HETG across the full energy range, in agreement with its superior effective area over _Chandra_. The simulated spectrum has a count rate of about 45 ct/s. At soft energies (e.g. the Ne X transition at \(\sim 1\) keV), the spectral resolution of _XRISM_ is lower than that of _Chandra_ HETG. The resolutions of the instruments are comparable at medium energies (e.g. around the S XVI transition at 2.6 keV). However, _XRISM_ shines at high energies, particularly in the Fe K band, where it easily resolves individual wind absorption lines, their widths and their shapes. We particularly note that the broadening of the Fe XXV and XXVI absorption lines shown in Fig. 6 is not instrumental but due to the velocity width of the disk wind measured from the _Chandra_ spectral fit.
We performed a spectral fit of this 10 ks _XRISM_ mock simulation. The disk wind is detected over the baseline continuum with a fit improvement of \(\Delta\)C-stat=507, indicating a much stronger detection than in the longer 20 ks archival _Chandra_ dataset (where \(\Delta\)C-stat=168 was measured). This indicates that we will be able to detect the wind during Short High and measure its properties in even shorter _XRISM_ snapshots, with exposures possibly as low as 2-3 ks. Such low exposure requirement for measurements will allow us to search for any short-term wind variability.
The wind parameters are also significantly better constrained in the 10 ks _XRISM_ exposure in comparison with _Chandra_ data: the column density will be determined with a precision of \(2\times 10^{21}\) cm\({}^{-2}\) (10% precision), the ionization parameter \(\log(\xi/\)erg cm s\({}^{-1}\)) will be measured with a precision of 0.04, and the outflow velocity and velocity width with a precision of 20 km/s. The uncertainties on the abundance of Mg and Si will shrink by roughly a third, while the uncertainties on the S abundance will decrease by a factor of 2. Clearly, even very brief _XRISM_ snapshots will allow detailed measurements of the accretion disk wind in Her X-1 and in comparably bright X-ray binaries.
In conclusion, the _Chandra_ observation of Her X-1 during the Short High state reveals the same outflow detected during the Main High. The best-fitting wind properties during Short High are broadly consistent with those determined at an equivalent precession phase during the Main High, after taking into account the delayed Short High state Rise time. By combining the results from both the Main High and the Short High, we will be able to probe the vertical structure of the disk wind of Her X-1 in great detail over a broad range of heights. This will allow us to understand, for the first time, the 3D structure and energetics of such an outflow, inaccessible in other systems with fixed sightlines without full 3D modelling of the wind re-emission. Future observations with _XRISM_ (to be launched in 2023) will allow _precision_ measurements of the Her X-1 wind properties, particularly thanks to its superior resolution in the Fe K band. Her X-1 may thus be the Rosetta stone of X-ray binary accretion disk winds, which will allow us to fully determine their physics, energetics and 3D geometry.
This work is based on data, obtained by the Chandra X-ray Observatory, available at DOI: cdc.155. Support for this work was provided by the National Aeronautics and Space Administration through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for Support of the Chandra X-Ray Center and Science Instruments. PK and EK acknowledge support from NASA grants 80NSSC21K0872 and DD0-21125X.
|
2305.19863 | Multi-Channel Operation for the Release 2 of ETSI Cooperative
Intelligent Transport Systems | Vehicles and road infrastructure are starting to be equipped with
vehicle-to-everything (V2X) communication solutions to increase road safety and
provide new services to drivers and passengers. In Europe, the deployment is
based on a set of Release 1 standards developed by ETSI to support basic use
cases for cooperative intelligent transport systems (C-ITS). For them, the
capacity of a single 10 MHz channel in the ITS band at 5.9 GHz is considered
sufficient. At the same time, the ITS stakeholders are working towards several
advanced use cases, which imply a significant increment of data traffic and the
need for multiple channels. To address this issue, ETSI has recently
standardized a new multi-channel operation (MCO) concept for flexible,
efficient, and future-proof use of multiple channels. This new concept is
defined in a set of new specifications that represent the foundation for the
future releases of C-ITS standards. The present paper provides a comprehensive
review of the new set of specifications, describing the main entities extending
the C-ITS architecture at the different layers of the protocol stack. In
addition, the paper provides representative examples that describe how these
MCO standards will be used in the future and discusses some of the main open
issues arising. The review and analysis of this paper facilitate the
understanding and motivation of the new set of Release 2 ETSI specifications
for MCO and the identification of new research opportunities. | Alessandro Bazzi, Miguel Sepulcre, Quentin Delooz, Andreas Festag, Jonas Vogt, Horst Wieker, Friedbert Berens, Paul Spaanderman | 2023-05-31T13:55:52Z | http://arxiv.org/abs/2305.19863v1 | # Multi-Channel Operation for the Release 2 of ETSI Cooperative Intelligent Transport Systems
###### Abstract
Vehicles and road infrastructure are starting to be equipped with vehicle-to-everything (V2X) communication solutions to increase road safety and provide new services to drivers and passengers. In Europe, the deployment is based on a set of Release 1 standards developed by ETSI to support basic use cases for cooperative intelligent transport systems (C-ITS). For them, the capacity of a single 10 MHz channel in the ITS band at 5.9 GHz is considered sufficient. At the same time, the ITS stakeholders are working towards several advanced use cases, which imply a significant increment of data traffic and the need for multiple channels. To address this issue, ETSI has recently standardized a new multi-channel operation (MCO) concept for flexible, efficient, and future-proof use of multiple channels. This new concept is defined in a set of new specifications that represent the foundation for the future releases of C-ITS standards. The present paper provides a comprehensive review of the new set of specifications, describing the main entities extending the C-ITS architecture at the different layers of the protocol stack, In addition, the paper provides representative examples that describe how these MCO standards will be used in the future and discusses some of the main open issues arising. The review and analysis of this paper facilitate the understanding and motivation of the new set of Release 2 ETSI specifications for MCO and the identification of new research opportunities.
Cooperative Intelligent Transport Systems (C-ITS); Vehicle to everything (V2X); Multi-Channel Operation (MCO); Cooperative, connected and automated mobility (CCAM)
## I Introduction
Cooperative, connected and automated mobility (CCAM) will require the use of wireless communications to contribute to the "Vision Zero" of the EU, which targets no road deaths by 2050. In the past years, various organizations including IEEE, ETSI, SAE, ISO, and 3GPP have developed different standards to enable direct data exchange among vehicles, other road users, and the infrastructure. In Europe, the effort has resulted in a set of ETSI specifications implementing the Release 1 of cooperative intelligent transport systems (C-ITS) as listed in ETSI TR 101 607.1
Footnote 1: ETSI standards are available free of charge at [https://www.etsi.org](https://www.etsi.org).
Release 1 covers so-called "Day-1" applications [1, 2], based essentially on the exchange of cooperative awareness messages (CAMs), sent repetitively by each vehicle to inform about their status and movements, and decentralized environmental notification messages (DENMs), sent on an event basis to warn about safety-critical situations. Due to the limited amount of shared data, a single 10 MHz radio channel was regarded as sufficient, and Release 1 standards were not designed to support the simultaneous use of multiple channels.
The emergence of new applications2 that go beyond "Day-1" motivate the creation of the ETSI Release 2 set of C-ITS standards. With Release 2, road users will share information about the surrounding environment, using collective perception (ETSI TS 103 324), will create platoons of vehicles (ETSI TR 103 299) or coordinate their maneuvers (ETS ITS 103 561). Vulnerable road users (i.e., bicycles, scooters, etc.) will also generate messages to inform about their presence (ETSI TS 103 300-3). The messages generated and the estimated number of channels needed are summarized in Table I, from which it is clear that a single channel is not sufficient.
Footnote 2: In C-ITS, messages are generated by applications or by entities called services, which are implemented at the facilities layer. To improve readability, in this paper we use the term _application_ to include both sources of messages.
Release 2 will require several channels, possibly using more than one transceiver and more than one radio access technology [3, 4]. Given the necessity to define rules for the use of multiple channels, ETSI has recently approved a set of specifications about multi-channel operation (MCO). These standards, presented in Table II, define how the various entities inside the C-ITS station collect information and make decisions to use multiple channels. To enable efficient management of multiple channels, the new set of specifications adds to the C-ITS station architecture a new core entity acting at the facilities layer. This entity collects information about the implemented applications with their requirements and the available radio access technologies. It is designed to control and negotiate various settings to optimize channel utilization and ensure compliance with application requirements. Additional entities at the networking & transport and the access layers, and the corresponding internal communication flows, are defined to allow software components to be developed |
2302.14338 | Turning a CLIP Model into a Scene Text Detector | The recent large-scale Contrastive Language-Image Pretraining (CLIP) model
has shown great potential in various downstream tasks via leveraging the
pretrained vision and language knowledge. Scene text, which contains rich
textual and visual information, has an inherent connection with a model like
CLIP. Recently, pretraining approaches based on vision language models have
made effective progress in the field of text detection. In contrast to these
works, this paper proposes a new method, termed TCM, focusing on Turning the
CLIP Model directly for text detection without pretraining process. We
demonstrate the advantages of the proposed TCM as follows: (1) The underlying
principle of our framework can be applied to improve existing scene text
detector. (2) It facilitates the few-shot training capability of existing
methods, e.g., by using 10% of labeled data, we significantly improve the
performance of the baseline method with an average of 22% in terms of the
F-measure on 4 benchmarks. (3) By turning the CLIP model into existing scene
text detection methods, we further achieve promising domain adaptation ability.
The code will be publicly released at https://github.com/wenwenyu/TCM. | Wenwen Yu, Yuliang Liu, Wei Hua, Deqiang Jiang, Bo Ren, Xiang Bai | 2023-02-28T06:06:12Z | http://arxiv.org/abs/2302.14338v3 | # Turning a CLIP Model into a Scene Text Detector
###### Abstract
The recent large-scale Contrastive Language-Image Pretraining (CLIP) model has shown great potential in various downstream tasks via leveraging the pretrained vision and language knowledge. Scene text, which contains rich textual and visual information, has an inherent connection with a model like CLIP. Recently, pretraining approaches based on vision language models have made effective progress in the field of text detection. In contrast to these works, this paper proposes a new method, termed TCM, focusing on Turning the CLIP Model directly for text detection without pretraining process. We demonstrate the advantages of the proposed TCM as follows: (1) The underlying principle of our framework can be applied to improve existing scene text detector. (2) It facilitates the few-shot training capability of existing methods, _e.g_., by using 10% of labeled data, we significantly improve the performance of the baseline method with an average of 22% in terms of the F-measure on 4 benchmarks. (3) By turning the CLIP model into existing scene text detection methods, we further achieve promising domain adaptation ability. The code will be publicly released at [https://github.com/wenwenyu/TCM](https://github.com/wenwenyu/TCM).
## 1 Introduction
Scene text detection is a long-standing research topic aiming to localize the bounding box or polygon of each text instance in natural images, as it has a wide range of practical application scenarios, such as office automation, instant translation, automatic driving, and online education. With the rapid development of fully-supervised deep learning technologies, scene text detection has achieved remarkable progress. Although supervised approaches have made remarkable progress in the field of text detection, they require extensive and elaborate annotations, _e.g_., character-level, word-level, and text-line level bounding boxes, especially polygonal boxes for arbitrarily-shaped scene text. Therefore, it is very important to investigate text detection methods under a small amount of labeled data, _i.e_., few-shot training.
Recently, through leveraging the pretrained vision and language knowledge, the large-scale Contrastive Language-Image Pretraining (CLIP) model [27] has demonstrated its significance in various downstream tasks, _e.g_., image classification [56], object detection [5], and semantic segmentation [13, 28, 45].
Compared to general object detection, scene text in natural images usually presents with both visual and rich character information, which has a natural connection with the CLIP model. Therefore, how to make full use of cross-modal information from visual, semantic, and text knowledge to improve the performance of text detection models has received increasing attention in recent studies. For example, Song _et al_. [31], inspired by CLIP, adopts fine-grained cross-modality interaction to align unimodal embeddings for learning better representations of the backbone via carefully designed pretraining tasks. Xue _et al_. [48] presents a weakly supervised pretraining method to jointly learn and align visual and partial textual information for learning effective visual text representations for scene text detection.
Figure 1: Comparisons of different paradigms of using text knowledge for scene text detection.
Wan _et al._[37] proposes self-attention based text knowledge mining to enhance the backbone via an image-level text recognition pretraining task.
Different from these works, as shown in Figure 1, this paper focuses on turning the CLIP model for text detection without a pretraining process. However, it is not trivial to incorporate the CLIP model into a scene text detector. The key is to find a proper method to exploit the visual and semantic prior information conditioned on each image. In this paper, we develop a new method for scene text detection, termed TCM, short for **T**urning a **CL**IP **M**odel into a scene text detector, which can be easily plugged into existing scene text detection frameworks. We design a cross-modal interaction mechanism through visual prompt learning, implemented by cross-attention, to recover the locality feature from the image encoder of CLIP and capture fine-grained information that responds to the coarse text region for the subsequent matching between text instances and language. Besides, to steer the pretrained knowledge from the text encoder conditioned independently on different input images, we employ a predefined language prompt, a learnable prompt, and a language prompt generator implemented with simple linear layers that exploits global image information. In addition, we design an instance-language matching method to align the image embedding and text embedding, which encourages the image encoder to explicitly refine text regions from cross-modal visual-language priors. Compared to previous pretraining approaches, our method can be directly finetuned for the text detection task without a pretraining process, as elaborated in Fig. 1. In this way, the text detector can absorb the rich visual or semantic information of text from CLIP. We summarize the advantages of our method as follows:
* We construct a new text detection framework, termed as TCM, which can be easily plugged to enhance the existing detectors.
* Our framework can enable effective few-shot training capability. This advantage is more obvious when using fewer training samples compared to the baseline detectors. Specifically, by using 10% of labeled data, we improve the performance of the baseline detector by an average of 22% in terms of the F-measure on 4 benchmarks.
* TCM introduces promising domain adaptation ability, _i.e._, when the training data is out-of-distribution with respect to the testing data, the performance can be significantly improved. This phenomenon is further demonstrated by a NightTime-ArT text dataset1, which we collected from the ArT dataset. Footnote 1: NightTime-ArT Download Link
* Without pretraining process using specific pretext tasks, TCM can still leverage the prior knowledge from the CLIP model, outperforming previous scene text pretraining methods [31, 48, 37].
## 2 Related works
Unimodal Scene Text Detection.Unimodal scene text detection refers to methods that directly use only the bounding-box annotations [21]. It can be roughly divided into two categories: segmentation-based methods and regression-based methods. The segmentation-based methods usually conduct pixel-level [14, 17, 18, 35, 39, 43, 47], segment-level [23, 29, 32, 34, 54, 46, 51], or contour-level [41, 38] segmentation, and then group segments into text instances via post-processing. The regression-based methods [8, 9, 10, 15, 58, 55, 40, 53, 55] regard text as a whole object and regress the bounding boxes of the text instances directly.
Cross-modal Assisted Scene Text Detection.Unlike unimodal based scene text detection, cross-modal assisted scene text detection aims to make full use of cross-modal information including visual, semantic, and text knowledge to boost the performance. Wan _et al._[37] utilized an image-level text recognition pretraining task to enhance the backbone via the proposed self-attention based text knowledge mining mechanism. Song _et al._[31], inspired by CLIP, designed three fine-grained cross-modality interaction pretraining tasks to align unimodal embeddings for learning better representations of the backbone. Xue _et al._[48] jointly learned and aligned visual and partial text instance information for learning effective visual text representations via the proposed weakly supervised pretraining method. Long _et al._[22] proposed an end-to-end model to perform unified scene text detection and visual layout analysis simultaneously. The above methods explicitly leverage text or visual information to assist text detection. Instead, our method focuses on improving detection performance by turning a CLIP model into a scene text detector via leveraging pretrained text knowledge.
## 3 Methodology
We begin by illustrating the CLIP model which we used for fetching the prior knowledge. Next, we introduce the technical details of TCM as well as the rationale behind it. An overview of our approach is shown in Fig. 2.
### Contrastive Language-Image Pretraining
CLIP [27], which collects 400 million image-text pairs without human annotation for model pretraining, has well demonstrated the potential of learning transferable knowledge and open-set visual concepts. Previous study [4] shows
that different neurons in the CLIP model can capture the corresponding concept literally, symbolically, and conceptually. As shown in Fig. 4, the CLIP model is an inherently text-friendly model which can effectively abstract the mapping space between image and text [26]. During training, CLIP learns a joint embedding space for the two modalities via a contrastive loss. Given a batch of image-text pairs, for each image, CLIP maximizes the cosine similarity with the matched text while minimizing that with all other unmatched text. For each text, the loss is computed in the same way over the images. In this way, CLIP can be used for zero-shot image recognition [56]. However, to exploit the relevant information from such a model, there are two prerequisites: 1) A proper method to effectively request the prior knowledge from CLIP is needed. 2) The original model can only measure the similarity between an integrated image and a single word or sentence, whereas for scene text detection there are usually many text instances per image, which are all required to be recalled equivalently.
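For concreteness, a minimal sketch of the symmetric contrastive objective described above is given below. It assumes a batch of already L2-normalized image and text embeddings; the function name, tensor shapes and temperature handling are illustrative simplifications rather than CLIP's actual implementation.
```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image-text pairs.

    img_emb, txt_emb: (B, C) tensors, assumed to be L2-normalized.
    """
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) pairwise similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i = F.cross_entropy(logits, targets)              # image -> matched text
    loss_t = F.cross_entropy(logits.t(), targets)          # text  -> matched image
    return 0.5 * (loss_i + loss_t)
```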
### Turning a CLIP into a Text Detector
To turn the CLIP model into a scene text detector, we propose TCM, as shown in Fig. 2 and Fig. 3. TCM is a pluggable module that can be directly applied to enhance existing scene text detectors. It extracts the image and text embeddings from the image encoder and text encoder of the CLIP model, respectively. We then design a cross-modal interaction mechanism through visual prompt learning to recover the locality feature from the image encoder of CLIP, which can capture fine-grained information to respond to the coarse text region for the subsequent matching between text instance and language. To better steer the pretrained knowledge, we introduce a language prompt generator to generate a conditional cue for each image and design a visual prompt generator that learns image prompts for adapting the frozen CLIP text encoder to the text detection task. TCM is directly applicable to a broad range of text detection methods with only minor modifications. In addition, we design an instance-language matching method to align the image embedding and text embedding, which encourages the image encoder to explicitly refine text regions from cross-modal visual-language priors.
Image Encoder.We use the pretrained ResNet50 [7] of CLIP as the image encoder, which produces an embedding vector for every input pixel. Given the input image \(\mathbf{I}^{\prime}\in\mathbb{R}^{H\times W\times 3}\), image encoder outputs image embedding \(\mathbf{I}\in\mathbb{R}^{\tilde{H}\times\tilde{W}\times C}\), where \(\tilde{H}=\frac{H}{s}\), \(\tilde{W}=\frac{W}{s}\), and \(C\) is the image embedding dimension (\(C\) is set to 1024) and \(s\) is the downsampling ratio (s is empirically set to 32), which can be expressed as:
\[\mathbf{I}=\mathrm{ImageEncoder}(\mathbf{I}^{\prime})\,. \tag{1}\]
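A minimal sketch of how such a dense image embedding can be obtained is shown below; it assumes a CLIP-style ResNet50 backbone exposing a `forward_features` method that returns the spatial grid before global attention pooling (that method name is an assumption of this sketch, not part of the released CLIP code).
```python
import torch
import torch.nn as nn

class DenseImageEncoder(nn.Module):
    """Keeps the spatial grid of a CLIP-style ResNet50 so that the output I
    has shape (B, H/s, W/s, C) with s=32 and C=1024, as described above."""

    def __init__(self, backbone, embed_dim=1024):
        super().__init__()
        self.backbone = backbone      # assumed to expose forward_features()
        self.embed_dim = embed_dim

    def forward(self, images):                            # images: (B, 3, H, W)
        feats = self.backbone.forward_features(images)    # (B, C, H/32, W/32)
        return feats.permute(0, 2, 3, 1).contiguous()     # (B, H/32, W/32, C)
```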
Text Encoder.The text encoder takes as input the prompts of the \(K\) classes and embeds them into a continuous vector space \(\mathbb{R}^{C}\), producing text embeddings \(\mathbf{T}=\{\mathbf{t}_{1},\dots,\mathbf{t}_{K}\}\in\mathbb{R}^{K\times C}\), where \(\mathbf{t}_{i}\in\mathbb{R}^{C}\). Specifically, we leverage the frozen pretrained text encoder of CLIP throughout, as it provides a language knowledge prior for text detection. \(K\) is set to 1 because there is only one text class in the text detection task. Different from the original model that uses templates like "a photo of a [CLS].", we predefine the discrete language prompt as "_Text_". Then, a part of the text encoder input \(\mathbf{t}^{\prime}_{in}\) is defined as follows:
\[\mathbf{t}^{\prime}_{in}=\mathrm{WordEmbedding}(\mathrm{Text})\in\mathbb{R}^{D}\,, \tag{2}\]
where \(\mathrm{WordEmbedding}(\cdot)\) denotes word embedding for predefined prompt "Text" class. \(D\) is the word embedding dimension and set to 512.
Inspired by CoOp [56, 57], we also add a learnable prompt \(\{\mathbf{c}_{1},\dots,\mathbf{c}_{n}\}\) to learn robust transferability of the text embedding, facilitating zero-shot transfer of the CLIP model, where \(n\) is the number of learnable prompt vectors, which is set to 4 by default, and \(\mathbf{c}_{i}\in\mathbb{R}^{D}\). Thus, the input \(\mathbf{t}_{in}\) of the text encoder is as follows:
\[\mathbf{t}_{in}=[\mathbf{c}_{1},\dots,\mathbf{c}_{n},\mathbf{t}^{\prime}_{in}]\in\mathbb{R}^{ (n+1)\times D}\,. \tag{3}\]
The text encoder takes \(\mathbf{t}_{in}\) as input and generates the text embedding \(\mathbf{T}=\{\mathbf{t}_{1}\}\in\mathbb{R}^{C}\), and \(\mathbf{T}\) is denoted by \(\mathbf{t}_{out}\in\mathbb{R}^{C}\) for simplicity:
\[\mathbf{t}_{out}=\mathrm{TextEncoder}(\mathbf{t}_{in})\in\mathbb{R}^{C}\,. \tag{4}\]
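The prompt construction of Eqs. (2)-(3) amounts to prepending the \(n\) learnable context vectors to the word embedding of the single class token "Text". A hedged sketch is given below; the module and argument names are ours, and the frozen CLIP word-embedding layer is passed in as an assumption.
```python
import torch
import torch.nn as nn

class PromptedTextInput(nn.Module):
    """Builds t_in = [c_1, ..., c_n, WordEmbedding("Text")] as in Eqs. (2)-(3)."""

    def __init__(self, word_embedding, text_token_id, n_ctx=4, dim=512):
        super().__init__()
        self.word_embedding = word_embedding                      # frozen CLIP embedding layer
        self.register_buffer("text_token", torch.tensor([text_token_id]))
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)   # learnable prompt c_1..c_n

    def forward(self):
        t_prime = self.word_embedding(self.text_token)            # (1, D), Eq. (2)
        return torch.cat([self.ctx, t_prime], dim=0)              # (n+1, D), Eq. (3)
```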
Figure 3: The details of the TCM. The image encoder and text encoder are directly from the CLIP model. Det. Head short for detection head.
Figure 2: The overall framework of our approach.
Language Prompt Generator.Although the predefined prompt and learnable prompt are effective for steering the CLIP model, it may suffer from limited few-shot or generalization ability to open-ended scenarios where the testing text instance is out-of-distribution from the training images. To this end, we present a language prompt generator to generate a feature vector, termed as conditional cue (\(\mathbf{cc}\)). For each image, the \(\mathbf{cc}\) is then combined with the input of the text encoder \(\mathbf{t}_{in}\), formulated as follows:
\[\hat{\mathbf{t}}_{in}=\mathbf{cc}+\mathbf{t}_{in}\in\mathbb{R}^{(n+1)\times D}\,, \tag{5}\]
where \(\hat{\mathbf{t}}_{in}\) is the new prompt input of the text encoder conditioned on the input image, and we replace \(\mathbf{t}_{in}\) with \(\hat{\mathbf{t}}_{in}\) in Eq. 4.
In practice, the language prompt generator is built with a two-layer feed-forward network, which is applied to generate conditional cue (\(\mathbf{cc}\)) from the globality image embedding \(\mathbf{I}\). This consists of two layer normalization followed by linear transformations, with a ReLU activation in between, which is formulated as follows:
\[\mathbf{cc}=\mathrm{LN}(\sigma(\mathrm{LN}(\bar{\mathbf{I}})\mathbf{W}_{1}+\mathbf{b}_{1})) \mathbf{W}_{2}+\mathbf{b}_{2}\in\mathbb{R}^{D}\,, \tag{6}\]
where \(\bar{\mathbf{I}}\in\mathbb{R}^{C}\) is the global image-level feature generated from the image embedding \(\mathbf{I}\) by the same global attention pooling layer as in CLIP, \(\mathbf{W}_{1}\in\mathbb{R}^{C\times C}\), \(\mathbf{W}_{2}\in\mathbb{R}^{C\times D}\), \(\mathbf{b}_{1}\in\mathbb{R}^{C}\), \(\mathbf{b}_{2}\in\mathbb{R}^{D}\), and we broadcast \(\mathbf{cc}\) with \(\mathbf{t}_{in}\) to get \(\hat{\mathbf{t}}_{in}\) in Eq. 5.
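The generator of Eqs. (5)-(6) is thus a small feed-forward network applied to the pooled image feature, whose output is broadcast-added to the prompt input. The sketch below follows the layer sizes stated above (C=1024, D=512); everything else (class and variable names) is illustrative.
```python
import torch
import torch.nn as nn

class LanguagePromptGenerator(nn.Module):
    """cc = LN(ReLU(LN(I_bar) W1 + b1)) W2 + b2 (Eq. 6), broadcast onto t_in (Eq. 5)."""

    def __init__(self, img_dim=1024, prompt_dim=512):
        super().__init__()
        self.ln1 = nn.LayerNorm(img_dim)
        self.fc1 = nn.Linear(img_dim, img_dim)     # W1, b1
        self.ln2 = nn.LayerNorm(img_dim)
        self.fc2 = nn.Linear(img_dim, prompt_dim)  # W2, b2

    def forward(self, img_global, t_in):
        # img_global: (B, C) pooled image embedding; t_in: (n+1, D) prompt input
        cc = self.fc2(self.ln2(torch.relu(self.fc1(self.ln1(img_global)))))  # (B, D)
        return t_in.unsqueeze(0) + cc.unsqueeze(1)   # (B, n+1, D), conditional prompt
```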
Visual Prompt Generator.We design a visual prompt generator to adaptively propagate fine-grained semantic information from textual features to visual features. Formally, we use the cross-attention mechanism in Transformer [36] to model the interactions between image embedding (\(\mathbf{Q}\)) and text embedding (\(\mathbf{K}\), \(\mathbf{V}\)). The visual prompt \(\tilde{\mathbf{I}}\) is then learned for transferring the information prior from image-level to text instance-level, which is defined as:
\[\tilde{\mathbf{I}}=\mathrm{TDec}(Q=\mathbf{I},K=\mathbf{t}_{out},V=\mathbf{t}_{out})\in \mathbb{R}^{\tilde{H}\times\tilde{W}\times C}\,, \tag{7}\]
where \(\mathrm{TDec}\) denotes the Transformer Decoder.
Based on the conditional visual prompt, the original image embedding \(\mathbf{I}\) is combined with \(\tilde{\mathbf{I}}\) to produce the prompted text-aware locality embedding \(\hat{\mathbf{I}}\), used for instance-language matching (Eq. 9) and the downstream detection head:
\[\hat{\mathbf{I}}=\mathbf{I}+\tilde{\mathbf{I}}. \tag{8}\]
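Putting Eqs. (7)-(8) together, the visual prompt generator is a stack of cross-attention (transformer-decoder) layers that queries the flattened image tokens against the text embedding and adds the result back onto the image embedding. The sketch below is illustrative: it keeps everything at the image width C instead of the 256-dimensional decoder width quoted later in the implementation details, and the class name is ours.
```python
import torch
import torch.nn as nn

class VisualPromptGenerator(nn.Module):
    """Cross-attention generator for the visual prompt (Eq. 7) and residual add (Eq. 8)."""

    def __init__(self, dim=1024, n_heads=4, n_layers=3):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, img_emb, txt_emb):
        # img_emb: (B, H*W, C) flattened spatial tokens (queries)
        # txt_emb: (B, K, C) text embeddings (keys/values)
        prompt = self.decoder(tgt=img_emb, memory=txt_emb)   # visual prompt, Eq. (7)
        return img_emb + prompt                              # prompted embedding, Eq. (8)
```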
Instance-language Matching.Given the output of the text encoder and image encoder, we perform text instance-language matching alignment between the text-aware locality image embedding \(\hat{\mathbf{I}}\) and the text embedding \(\mathbf{t}_{out}\) by a dot product followed by a sigmoid activation to obtain a binary score map. The mixture of the generated conditional fine-grained embedding \(\tilde{\mathbf{I}}\) and the visual embedding \(\mathbf{I}\) allows text instances present in the visual features to be better matched with the pretrained language knowledge. The matching mechanism is formulated as follows:
\[\mathbf{P}=\mathrm{sigmoid}(\hat{\mathbf{I}}\mathbf{t}_{out}^{T}/\tau)\in\mathbb{R}^{ \tilde{H}\times\tilde{W}\times 1}, \tag{9}\]
where \(\mathbf{t}_{out}\) is the single text embedding, since there is only one text class in the text detection scenario, and \(\mathbf{P}\) is the binary text segmentation map. The segmentation maps are supervised using the ground-truths as an auxiliary loss and concatenated with the prompted embedding \(\tilde{\mathbf{I}}\) for the downstream text detection head to explicitly incorporate language priors for detection. During training, we minimize a binary cross-entropy loss between the segmentation map \(\mathbf{P}\) and the ground-truth, which is defined as follows:
\[\mathcal{L}_{aux}=-\sum_{i}^{\tilde{H}}\sum_{j}^{\tilde{W}}\left[y_{ij}\log(P_{ij})+(1-y_{ij})\log(1-P_{ij})\right]\,, \tag{10}\]
where \(y_{ij}\) and \(P_{ij}\) are the label and predicted probability of pixel \((i,j)\) belonging to the text instances, respectively.
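A compact sketch of Eqs. (9)-(10) is given below; the shapes, the temperature value, and the function name are illustrative assumptions, and in practice the ground-truth mask would be downsampled to the \(\tilde{H}\times\tilde{W}\) grid.
```python
import torch
import torch.nn.functional as F

def instance_language_matching(img_emb, txt_emb, gt_mask, tau=0.07):
    """Binary text segmentation map (Eq. 9) and auxiliary BCE loss (Eq. 10).

    img_emb: (B, H, W, C) prompted text-aware image embedding
    txt_emb: (C,)          single 'Text' embedding t_out
    gt_mask: (B, H, W)     binary ground-truth text mask on the same grid
    """
    logits = torch.einsum("bhwc,c->bhw", img_emb, txt_emb) / tau
    seg_map = torch.sigmoid(logits)                                # P in Eq. (9)
    aux_loss = F.binary_cross_entropy(seg_map, gt_mask.float())    # Eq. (10)
    return seg_map, aux_loss
```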
Optimization.The loss function \(\mathcal{L}_{total}\) is the sum of detection loss \(\mathcal{L}_{det}\) and auxiliary loss \(\mathcal{L}_{aux}\), formulated as follows:
\[\mathcal{L}_{total}=\mathcal{L}_{det}+\lambda\mathcal{L}_{aux}\,, \tag{11}\]
where \(\lambda\) is a trade-off hyper-parameter and is set to 1 in this paper. \(\mathcal{L}_{det}\) depends on the downstream text detection method, which may belong to either the segmentation-based or the regression-based category. At inference time, we use the output of the detection head as the final result.
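The overall objective is then simply the sum of the two terms; a trivial helper (with our own naming) makes the weighting explicit.
```python
def total_loss(det_loss, aux_loss, lam=1.0):
    """Eq. (11): detection-head loss plus the weighted auxiliary matching loss."""
    return det_loss + lam * aux_loss
```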
## 4 Experiments
We conduct four sets of experiments to validate TCM. Our first set of experiments examines how TCM can be incorporated into existing text detectors to achieve consistent performance improvements. Next, we demonstrate the few-shot training capability and generalization ability by incorporating the TCM method. In the third set of experiments, we compare our method with previous pretraining methods. Finally, we provide thorough experiments to evaluate the sensitivity w.r.t. the proposed designs.
Datasets.Our experiments are conducted on a number of commonly known scene text detection benchmarks including ICDAR2013 (IC13) [12], ICDAR2015 (IC15) [11], MSRA-TD500 (TD) [50], CTW1500 (CTW) [20], TotalText (TT) [3], ArT [2], MLT17 [25], and MLT19 [24]. More details of the datasets are given in the appendix.
Figure 4: The neurons in the CLIP model can directly respond to the text. The source images are from [4].
Evaluation Metric.We use intersection over union (IoU) to determine whether the model correctly detects the region of text, and we calculate precision (P), recall (R), and F-measure (F) for comparison following common practice [12]. For fair comparisons, text regions labeled with either "do not care" or "###" will be ignored in all datasets during training and testing.
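For reference, the counting behind these metrics can be sketched as follows; the IoU matching itself (typically at a 0.5 threshold, with "do not care" regions excluded) is assumed to be done upstream, and the function name is ours.
```python
def detection_prf(num_matched, num_pred, num_gt):
    """Precision, recall and F-measure from matched/predicted/ground-truth counts."""
    precision = num_matched / num_pred if num_pred else 0.0
    recall = num_matched / num_gt if num_gt else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f
```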
Implementation Details.For text detection tasks, we experiment with the popular text detection methods including DBNet (DB) [18]2, PAN [39]3, and FCENet (FCE) [60]4 to evaluate TCM. For consistent settings with these methods, we train the detector using both SynthText and the real datasets. Specifically, the backbone is instantiated with the pretrained image encoder ResNet50 [7] of CLIP unless otherwise specified. The visual prompt generator has 3 transformer decoder layers with 4 heads; the transformer width is 256; and the feed-forward hidden dimension is set to 1024. We use the corresponding detection head of DBNet, PAN, and FCENet to predict the final results. To test the few-shot learning ability of the model, we directly train on each benchmark with different proportions of the training data without pretraining and test on the corresponding test data. To test the generalization ability, we use the model trained on the corresponding source datasets and evaluate it on a target dataset that has a dissimilar distribution. We consider two kinds of adaptation, synthtext-to-real and real-to-real, to validate the domain adaptation of TCM. The ablation studies are conducted w.r.t. the predefined prompt, the learnable prompt, the language prompt generator, the visual prompt generator, and the different settings. DBNet is used as the baseline for TCM.
Footnote 2: [https://github.com/MhLiao/DB](https://github.com/MhLiao/DB)
Footnote 3: [https://github.com/whai362/pan_pp.pytorch](https://github.com/whai362/pan_pp.pytorch)
Footnote 4: [https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/fcenet](https://github.com/open-mmlab/mmocr/tree/main/configs/textdet/fcenet)
### Cooperation with Existing Methods
We report the text detection results of our TCM combined with three text detection methods on IC15, TD, and CTW in Table 1. Our method is +0.9%, +1.7%, and +1.9% higher than the original FCENet, PAN, and DBNet, respectively, in terms of F-measure on IC15. Similar consistent improvements are observed on TD and CTW. Note that the inference speed of our method is 18, 8.4, and 10 FPS on the IC15, TD, and CTW datasets, respectively, with PAN, FCENet, and DBNet, maintaining the high efficiency of the detector.
We visualize our method in Fig. 7. It shows that the fine-grained features \(\tilde{\mathbf{I}}\) containing text information are recovered from the global image embedding \(\mathbf{I}\), demonstrating that TCM can identify text regions and provide these prior cues for downstream text detection.
### Few-shot Training Ability
To further verify the few-shot training ability of our method, we directly train our model on real datasets using various training data ratios without pretraining, and evaluate it on the corresponding 4 benchmarks. As shown in Fig. 5, our method remains robust with limited data and outperforms the three baseline methods including DB, PAN and EAST [58]. The results show that TCM can capture the inherent characteristics of text by leveraging the pretrained vision and language knowledge of the zero-shot trained CLIP model.
\begin{table}
\begin{tabular}{l|l|c c|c c|c c|c} \hline \hline
 & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{IC15} & \multicolumn{2}{c|}{TD} & \multicolumn{2}{c|}{CTW} & \multirow{2}{*}{FPS} \\ \cline{3-8}
 & & F & \(\Delta\) & F & \(\Delta\) & F & \(\Delta\) & \\ \hline
\multirow{2}{*}{Reg.} & FCENet [60] & 86.2 & - & 85.4\({}^{\dagger}\) & - & 85.5 & - & 11.5 \\
 & TCM-FCENet & **87.1** & **+0.9** & **86.9** & **+1.5** & **85.9** & **+0.4** & 8.4 \\ \hline
\multirow{4}{*}{Seg.} & PAN [39] & 82.9 & - & 84.1 & - & 83.7 & - & 36 \\
 & TCM-PAN & **84.6** & **+1.7** & **85.3** & **+1.2** & **84.3** & **+0.6** & 18 \\
 & DBNet [18] & 87.3 & - & 84.9 & - & 83.4 & - & 14.5 \\
 & TCM-DBNet & **89.2** & **+1.9** & **88.8** & **+3.9** & **84.9** & **+1.5** & 10 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Text detection results of cooperating with existing methods on IC15, TD, and CTW. \({}^{\dagger}\) indicates the results from [52]. Reg. and Seg. short for regression and segmentation methods, respectively. FPS are reported with ResNet50 backbone on a single V100.
Figure 5: Few-shot training ability with varying training data ratio. “F” represents F-measure.
### Generalization Ability
We conduct two types of experiments, synthtext-to-real adaptation and real-to-real adaptation, as shown in Table 2 and Table 3, respectively. From the tables, we can see that plugging TCM into DBNet significantly improves the performance, by an average of 8.2% in terms of F-measure across the four settings covering synthtext-to-real and real-to-real, which further demonstrates the effectiveness of our method for domain adaptation.
### Comparison with Pretraining Methods
Pretraining methods based on specifically designed pretext tasks have made effective progress in the field of text detection. In contrast to these efforts, TCM turns the CLIP model directly into a scene text detector without a pretraining process. The comparison results are shown in Table 4, from which we can see that, without pretext tasks for pretraining, DB+TCM consistently outperforms previous methods including DB+STKM [37], DB+VLPT [31], and DB+oCLIP [48]. Especially on IC15, our method outperforms the previous state-of-the-art pretraining method by a large margin, with 89.4% versus 86.5% in terms of F-measure.
### Ablation Studies
**Pretrained CLIP Backbone.** First, we conduct experiments in which we only replace the original backbone of DBNet with the pretrained ResNet50 image encoder of CLIP, to quantify the performance variance of the backbones. As shown in Table 5, naively using the pretrained CLIP backbone is insufficient for leveraging the visual-language knowledge of CLIP. Therefore, a proper method is necessary to excavate the knowledge of the CLIP model.
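A minimal sketch of this backbone swap using the open-source OpenAI `clip` package is shown below; the detector wiring is omitted, and for detection one would tap the intermediate feature maps of the encoder rather than its pooled output.

```python
import clip  # OpenAI CLIP: https://github.com/openai/CLIP
import torch

# Load the pretrained CLIP ResNet-50 and reuse its image encoder as the backbone.
clip_model, _ = clip.load("RN50", device="cpu")
backbone = clip_model.visual.float()  # CLIP's modified ResNet-50 image encoder

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feat = backbone(x)  # pooled global image embedding, shape (1, 1024)
print(feat.shape)
# A detector would instead hook the multi-scale feature maps before attention pooling.
```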
**Ablation Study for the Predefined Prompt.** When using the predefined prompt, as illustrated in the second row of Table 6, the performance is slightly improved on all four datasets (IC15, TD, TT, and CTW), by 0.05%, 0.2%, 0.04%, and 0.1% over the baseline method, respectively.
**Ablation Study for the Learnable Prompt.** Besides, results combining the learnable prompt with the predefined prompt on the four datasets are provided in the third row of Table 6. We notice that a consistent improvement can be achieved by adding the learnable prompt. We also show the influence of using different numbers of learnable prompts in rows 4 to 6 of Table 6. We observe that as the number of learnable prompts increases, the performance increases gradually on all datasets. Compared to the value 4, the value 32 obtains obvious improvements on CTW, TD, and TT. We conjecture that this is because the
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Methods & Pretext task & IC15 & TT & TD & CTW \\
\hline
SegLink [29] & \(\times\) & - & - & 77.0 & - \\
PSENet-1s [14] & \(\times\) & 85.7 & 80.9 & - & 82.2 \\
LOMO [53] & \(\times\) & 87.2 & 81.6 & - & 78.4 \\
MOST [8] & \(\times\) & 88.2 & - & 86.4 & - \\
Tang _et al._ [33] & \(\times\) & 89.1 & - & 88.1 & - \\
\hline
DB+ST\({}^{\dagger}\) & \(\times\) & 85.4 & 84.7 & 84.9 & - \\
DB+STKM\({}^{\dagger}\) [37] & \(\checkmark\) & 86.1 & 85.5 & 85.9 & - \\
DB+VLPT\({}^{\dagger}\) [31] & \(\checkmark\) & 86.5 & 86.3 & 88.5 & - \\
DB+oCLIP* [48] & \(\checkmark\) & - & - & - & 84.4 \\
\hline
DB+TCM (Ours) & \(\times\) & **89.4** & 85.9 & **88.8** & **85.1** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Comparison with existing scene text pretraining techniques on DBNet (DB). \({}^{\dagger}\) indicates the results from [31]. ST and VLP denote SynthText pretraining and visual-language pretraining methods, respectively. * stands for our reimplementation results. F-measure (%) is reported.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & \(\text{ST}\rightarrow\text{IC13}\) & \(\text{ST}\rightarrow\text{IC15}\) \\ \hline EAST\({}^{\dagger}\)[58] & 67.1 & 60.5 \\ PAN [39] & - & 54.8 \\ CCN [44] & - & 65.1 \\ ST3D [16] & 73.8 & 67.6 \\ DBNet [18] & 71.7 & 64.0 \\ \hline TCM-DBNet & **79.6** & **76.7** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Synthtext-to-real adaptation. \({}^{\dagger}\) indicates the results from [42]. ST indicates SynthText. F-measure (%) is reported.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & \(\text{IC13}\rightarrow\text{IC15}\) & \(\text{IC13}\rightarrow\text{TD}\) \\ \hline EAST\({}^{\dagger}\)[58] & 53.3 & 46.8 \\ GD(AD) [52] & 64.4 & 58.5 \\ GD(10-AD) [52] & 69.4 & 62.1 \\ CycleGAN [59] & 57.2 & - \\ ST-GAN [19] & 57.6 & - \\ CycleGAN+ST-GAN & 60.8 & - \\ TST [42] & 52.4 & - \\ DBNet [18] & 63.9 & 53.8 \\ \hline TCM-DBNet & **71.9** & **65.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Real-to-real adaptation. \({}^{\dagger}\) indicates that the results are from [52]. Note that the proposed method outperforms other methods. F-measure (%) is reported.
larger number of learnable prompts can better steer the pretrained text encoder knowledge, which is useful for text detection. In the following experiments, the default number of learnable prompts is set to 4 for simplicity.
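The learnable prompt can be pictured as a small set of trainable context embeddings prepended to the embedded predefined prompt before it enters the frozen CLIP text encoder, similar in spirit to prompt-tuning approaches such as CoOp; the shapes and names below are illustrative assumptions, not the released code.

```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """n_ctx trainable context vectors prepended to the predefined prompt tokens."""

    def __init__(self, n_ctx=4, dim=512):
        super().__init__()
        self.ctx = nn.Parameter(torch.empty(n_ctx, dim))
        nn.init.normal_(self.ctx, std=0.02)

    def forward(self, predefined_embed):
        # predefined_embed: (L, dim) token embeddings of the predefined prompt, e.g. "Text".
        return torch.cat([self.ctx, predefined_embed], dim=0)  # (n_ctx + L, dim)

prompt = LearnablePrompt(n_ctx=4, dim=512)
print(prompt(torch.randn(3, 512)).shape)  # torch.Size([7, 512])
```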
**Ablation Study for the Language Prompt Generator.** Furthermore, we evaluate the performance of the proposed language prompt generator, shown in the \(7_{th}\) row of Table 6. With the help of the language prompt generator, TCM achieves further improvements on all four datasets, especially on ICDAR2015, indicating that the conditional cue generated by the language prompt generator for each image ensures better generalization over different types of datasets.
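One plausible way to realize such an image-conditioned cue is a small two-layer network that maps the global image embedding to an offset added to the learnable context vectors, making the text prompt conditional on each input image. The sketch below is our guess at a minimal form, with assumed dimensions, and is not the authors' exact design.

```python
import torch
import torch.nn as nn

class LanguagePromptGenerator(nn.Module):
    """Map the global image embedding to a per-image conditional cue added to the context."""

    def __init__(self, img_dim=1024, txt_dim=512, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, txt_dim))

    def forward(self, image_embed, ctx):
        # image_embed: (B, img_dim); ctx: (n_ctx, txt_dim) learnable context vectors
        cc = self.net(image_embed)                 # (B, txt_dim) conditional cue
        return ctx.unsqueeze(0) + cc.unsqueeze(1)  # (B, n_ctx, txt_dim)

lg = LanguagePromptGenerator()
print(lg(torch.randn(2, 1024), torch.randn(4, 512)).shape)  # torch.Size([2, 4, 512])
```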
**Ablation Study for the Visual Prompt Generator.** Finally, combining the proposed visual prompt generator with the other components above, the improvement in F-measure over the baseline holds on all four datasets, with larger margins of 1.7% and 2.0% on IC15 and TD, respectively. The reason for this obvious complementary effect is that the visual prompt generator propagates fine-grained visual semantic information from textual features to visual features. Besides, the prompted locality image embedding generated by the visual prompt generator guides the model to obtain more accurate text instance-level visual representations, which boosts the ability of instance-language matching and generates a precise segmentation score map that is useful for the downstream detection head.
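The instance-language matching mentioned here can be illustrated as a cosine-similarity map between the text embedding and the prompted pixel-level image embeddings, squashed into a score map that is fed to the detection head; the shapes and temperature below are assumptions for the sketch, not the exact formulation.

```python
import torch
import torch.nn.functional as F

def matching_score_map(pixel_embed, text_embed, tau=0.07):
    """pixel_embed: (B, C, H, W); text_embed: (B, C). Returns (B, 1, H, W) scores in (0, 1)."""
    pixel = F.normalize(pixel_embed, dim=1)
    text = F.normalize(text_embed, dim=1)[:, :, None, None]
    sim = (pixel * text).sum(dim=1, keepdim=True) / tau  # cosine similarity / temperature
    return sim.sigmoid()

score = matching_score_map(torch.randn(2, 256, 40, 40), torch.randn(2, 256))
print(score.shape)  # torch.Size([2, 1, 40, 40])
```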
**Ablation Study for the VG and LG on Generalization Performance.** As described in Table 7, removing the VG and LG elements from TCM dramatically deteriorates the generalization performance, which further indicates the effectiveness of the VG and LG.
**Ablation Study for Image Encoder and Text Encoder.** We investigate how the quality of the frozen text encoder and image encoder affects performance by adjusting the corresponding learning rate (LR) factors. The experimental results of TCM-DBNet on the TD500 dataset are shown in Table 8. The results show that using a lower learning rate for both encoders and fixing the text encoder is the optimal setting for training the whole model. Note that we observe performance degradation when directly using a \(1.0\times\) learning rate for both encoders, which suggests the frozen text encoder stabilizes the training process. The core components of the architecture, including the language prompt generator and the visual prompt generator, are designed to better steer the knowledge of the pretrained CLIP. Appropriate design of the network architecture and use of the pretrained CLIP are complementary.
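In optimizer terms, these learning-rate factors correspond to per-parameter-group settings, with the text encoder frozen outright; the sketch below uses dummy modules and an assumed factor of 0.1 purely for illustration.

```python
import torch
import torch.nn as nn

# Dummy stand-ins for the frozen CLIP text encoder, the CLIP image encoder,
# and the detection head (illustrative only, not the real modules).
model = nn.ModuleDict({
    "text_encoder": nn.Linear(512, 512),
    "image_encoder": nn.Linear(1024, 256),
    "det_head": nn.Linear(256, 1),
})

base_lr = 1e-3      # assumed base learning rate
img_factor = 0.1    # assumed lower LR factor for the image encoder

for p in model["text_encoder"].parameters():  # text encoder kept frozen
    p.requires_grad_(False)

optimizer = torch.optim.AdamW([
    {"params": model["image_encoder"].parameters(), "lr": base_lr * img_factor},
    {"params": model["det_head"].parameters(), "lr": base_lr},
])
print([g["lr"] for g in optimizer.param_groups])  # [0.0001, 0.001]
```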
**Ablation Study for Different Amounts of Data.** To further explore whether TCM can learn additional knowledge that is hard to obtain by simply increasing the data, we train the model on a large-scale joint set of public data including IC13, IC15, TD, CTW, TT, and MLT17, with a total of 13,784 images, and test it on NightTime-ArT data (326 images) carefully collected from ArT. The nighttime examples from ArT are shown in Fig. 6. Results are shown in Table 9. The results show that even with the addition of large amounts of training data, existing methods remain limited on the nighttime data, which is clearly out of distribution with respect to the training set. However, TCM still performs robustly in this case, indicating its strong generalization ability.
**Ablation Study for the Parameters Comparison.** For a fair comparison, we increase the parameters of DBNet by replacing the backbone with larger ResNets and then conduct experiments on the TD500 dataset. Trainable parameters and FLOPs are calculated with an input size of 1280\(\times\)800. Results are shown in Table 10. The results show that TCM-DBNet achieves better performance than DBNet with a smaller model size and lower computational overhead, demonstrating its effectiveness for scene text detection.
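Parameter counts such as those in Table 10 can be reproduced with a one-line sum, and FLOPs at the 1280\(\times\)800 input are typically measured with a profiler such as fvcore; the model below is a placeholder torchvision ResNet-50, not the full detector.

```python
import torch
from torchvision.models import resnet50
from fvcore.nn import FlopCountAnalysis  # optional dependency for FLOP counting

model = resnet50()  # placeholder backbone only, not the full TCM-DBNet detector
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"params: {n_params / 1e6:.1f} M")  # ~25.6 M for a plain ResNet-50

x = torch.randn(1, 3, 800, 1280)  # the 1280x800 input size used in the comparison
flops = FlopCountAnalysis(model, x).total()
print(f"FLOPs: {flops / 1e9:.1f} G")
```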
**Ablation Study for the Auxiliary Loss.** We further compare the results with and without the auxiliary loss on the TD500 dataset, as shown in Table 11. Using the auxiliary loss achieves higher performance. The results indicate that the auxiliary loss is beneficial for training the model by imposing constraints on the instance-language matching score map. In addition, the improvement suggests that it helps the image encoder of the pretrained CLIP to perceive locality text regions effectively.
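A hedged sketch of such an auxiliary objective: a binary cross-entropy between the instance-language matching score map and a downsampled binary text-region mask, added to the detection loss with a weighting coefficient (the weight value is our assumption, not the paper's setting).

```python
import torch
import torch.nn.functional as F

def auxiliary_loss(score_map, text_mask, weight=1.0):
    """score_map: (B,1,H,W) matching scores in (0,1); text_mask: (B,1,H,W) binary GT mask."""
    return weight * F.binary_cross_entropy(score_map, text_mask)

def total_loss(det_loss, score_map, text_mask, aux_weight=1.0):
    # detection loss from the downstream head plus the matching-map constraint
    return det_loss + auxiliary_loss(score_map, text_mask, aux_weight)

score = torch.rand(2, 1, 40, 40)
mask = (torch.rand(2, 1, 40, 40) > 0.8).float()
print(total_loss(torch.tensor(0.5), score, mask))
```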
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{PP} & \multirow{2}{*}{LP} & \multirow{2}{*}{LG} & \multirow{2}{*}{VG} & IC15 & TD & TT & CTW \\ \cline{6-9} & & & & & F & F & F & F \\ \hline BSL & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 87.7 & 86.8 & 84.7 & 83.4 \\ \hline BSL+ & ✓ & \(\times\) & \(\times\) & \(\times\) & 87.75 & 87.0 & 84.74 & 83.5 \\ BSL+ & ✓ & 4 & \(\times\) & \(\times\) & 88.0 & 87.1 & 84.8 & 83.6 \\ \hline BSL+ & \(\times\) & 4 & \(\times\) & \(\times\) & 87.8 & 87.7 & 85.1 & 83.9 \\ BSL+ & \(\times\) & 18 & \(\times\) & \(\times\) & 88.1 & 87.8 & 85.3 & 83.9 \\ BSL+ & \(\times\) & 32 & \(\times\) & \(\times\) & 88.4 & 88.2 & 85.4 & 84.5 \\ \hline BSL+ & ✓ & 4 & ✓ & \(\times\) & 88.6 & 88.4 & 85.5 & 84.6 \\ TCM & ✓ & 4 & ✓ & ✓ & 89.2 & 88.8 & 85.6 & 84.9 \\ TCM & ✓ & 32 & ✓ & ✓ & **89.4** & **88.8** & **85.9** & **85.1** \\ \(\Delta\) & & & & & +1.7 & +2.0 & +1.2 & +1.7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study of our proposed components on IC15, TD, TT and CTW. “BSL”, “PP”, “LP”, “LG”, and “VG” represent the baseline method DBNet, the predefined prompt, the learnable prompt, the language prompt generator, and the visual prompt generator, respectively. F (%) represents F-measure. \(\Delta\) represents the improvement over the baseline.
Figure 6: The examples of our constructed NightTime-ArT.
## 5 Discussion of Failure Cases
There are some insightful failure cases, as shown in Figure 8. The instance-language matching score map generates a false positive region that closely resembles the characteristics of text, as shown in the region of the red circle in Fig. 8, which is then treated as noise. Therefore, it is necessary for the downstream text detection head to further refine this initial score map instead of directly using the instance-language matching score map as the final result. We leave alleviating such false positives of instance-language matching to future work.
## 6 Conclusion
This paper proposes TCM, which can directly excavate prior knowledge from the CLIP model into a scene text detector without a pretraining process. This new text detection paradigm reveals the importance of using visual-language priors to draw information from a zero-shot, off-the-shelf model, thereby guiding the text detector to adapt to small-scale data, divergent data distributions, and complicated scenes, without relying on carefully designed pretraining tasks. Experiments comprehensively demonstrate the effectiveness of our method. It is worth mentioning that we also construct a NightTime-ArT dataset to further demonstrate that TCM can steer useful prior knowledge from the CLIP model. As the CLIP model is an inherently text-friendly framework, extending TCM to scene text spotting is also a promising direction for future work.
**Acknowledgements** This work was supported by the National Natural Science Foundation of China (No.62225603, No.6220073278, No.62206103), and the National Key Research and Development Program (No.2022YFC3301703, No.2022YFC2305102).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Backbone & Params & FLOPs & F (\%) \\ \hline DBNet & R50 & 26 (M) & 98 (G) & 84.9 \\ DBNet & R101 & 46 (M) & 139 (G) & 85.9 \\ DBNet & R152 & 62 (M) & 180 (G) & 87.3 \\ \hline TCM-DBNet & R50 & 50 (M) & 156 (G) & **88.7** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Ablation study of the parameters comparison with DBNet.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & IC13 \(\rightarrow\) IC15 & IC13 \(\rightarrow\) TD & IC15 \(\rightarrow\) MLT17(en) & TT \(\rightarrow\) ArT(-) & ST \(\rightarrow\) IC13 & ST \(\rightarrow\) IC15 \\ \hline TCM & 71.9 & 65.1 & 85.1 & 68.9 & 79.5 & 76.7 \\ \hline w/o VG & 68.4 (-3.5) & 59.4 (-5.7) & 81.8 (-3.3) & 59.1 (-9.8) & 76.3 (-3.2) & 72.6 (-4.1) \\ w/o LG & 66.1 (-5.8) & 56.8 (-8.3) & 79.7 (-5.4) & 57.8 (-11.1) & 74.5 (-5.0) & 68.2 (-8.5) \\ w/o VG \& LG & 64.8 (-7.1) & 55.7 (-9.4) & 78.4 (-6.7) & 54.2 (-14.7) & 71.7 (-7.8) & 63.9 (-12.8) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Ablation study of the effect of LG and VG on generalization performance. F-measure (%) is reported.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Training Data & Testing Data & F (\%) \\ \hline FCENet & Joint data & NightTime-ArT & 55.2 \\ DBNet & Joint data & NightTime-ArT & 52.8 \\ \hline TCM-DBNet & Joint data & NightTime-ArT & **70.2** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Text detection results on the NightTime-ArT data when training on the joint data. F-measure (%) is reported.
Table 11: Ablation study of the auxiliary Loss.
Figure 8: Failure cases. The red circle marks a false positive region.
Figure 7: Visualization results of our method. For each pair, the left is the image embedding \(\mathbf{I}\), and the right is the generated visual prompt \(\mathbf{\tilde{I}}\). Best viewed on screen. More results can be found in the appendix.